Using Model Checking for Analyzing Distributed Power Control Problems
DEFF Research Database (Denmark)
Brihaye, Thomas; Jungers, Marc; Lasaulce, Samson
2010-01-01
Model checking (MC) is a formal verification technique that has enjoyed, and continues to enjoy, resounding success in the computer science community. Realizing that the distributed power control (PC) problem can be modeled by a timed game between a given transmitter and its environment, the authors wanted to know whether this approach can be applied to distributed PC. It turns out that it can be applied successfully and allows one to analyze realistic scenarios, including the case of discrete transmit powers and games with incomplete information. The proposed methodology is as follows. We state some objectives a transmitter-receiver pair would like to reach. The network is modeled by a game where transmitters are considered as timed automata interacting with each other. The objectives are then translated into timed alternating-time temporal logic formulae, and MC is exploited to know whether the desired properties are verified and to determine a winning strategy.
Using Model Checking for Analyzing Distributed Power Control Problems
Directory of Open Access Journals (Sweden)
Thomas Brihaye
2010-01-01
Full Text Available Model checking (MC) is a formal verification technique that has enjoyed, and continues to enjoy, resounding success in the computer science community. Realizing that the distributed power control (PC) problem can be modeled by a timed game between a given transmitter and its environment, the authors wanted to know whether this approach can be applied to distributed PC. It turns out that it can be applied successfully and allows one to analyze realistic scenarios, including the case of discrete transmit powers and games with incomplete information. The proposed methodology is as follows. We state some objectives a transmitter-receiver pair would like to reach. The network is modeled by a game where transmitters are considered as timed automata interacting with each other. The objectives are then translated into timed alternating-time temporal logic formulae, and MC is exploited to know whether the desired properties are verified and to determine a winning strategy.
Where is positional uncertainty a problem for species distribution modelling?
Naimi, N.; Hamm, N.A.S.; Groen, T.A.; Skidmore, A.K.; Toxopeus, A.G.
2014-01-01
Species data held in museums and herbaria, survey data and opportunistically observed data are a substantial information resource. A key challenge in using these data is the uncertainty about where an observation is located. This is important when the data are used for species distribution modelling.
The production-distribution problem with order acceptance and package delivery: models and algorithm
Directory of Open Access Journals (Sweden)
Khalili Majid
2016-01-01
Full Text Available Production planning and distribution are among the most important decisions in the supply chain. Classically, in this problem, it is assumed that all orders have to be produced and delivered separately; in practice, however, an order may be rejected if the cost it brings to the supply chain exceeds its revenue. Moreover, orders can be delivered in batches to reduce the related costs. This paper considers the production planning and distribution problem with order acceptance and package delivery to maximize profit. At first, a new mathematical model based on mixed integer linear programming is developed. Using commercial optimization software, the model can optimally solve small and even medium-sized instances. For large instances, a solution method based on imperialist competitive algorithms is also proposed. Using numerical experiments, the proposed model and algorithm are evaluated.
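The order-acceptance idea in the abstract above can be illustrated with a toy greedy sketch (not the paper's MILP or imperialist competitive algorithm; the cost structure and data below are invented for illustration): accept a customer's batch only when its total margin covers the shared batch delivery cost.

```python
from collections import defaultdict

def plan_orders(orders, batch_delivery_cost):
    """Greedy order-acceptance sketch (hypothetical cost model).

    orders: list of (customer, revenue, production_cost) tuples.
    Orders accepted for the same customer share one batch delivery cost.
    Returns (accepted orders, total profit).
    """
    by_customer = defaultdict(list)
    for o in orders:
        by_customer[o[0]].append(o)

    accepted, profit = [], 0.0
    for customer, group in by_customer.items():
        # Keep only orders with positive margin, best first.
        group = sorted((o for o in group if o[1] > o[2]),
                       key=lambda o: o[1] - o[2], reverse=True)
        margin = sum(o[1] - o[2] for o in group)
        # Accept the batch only if margins cover the shared delivery cost.
        if group and margin > batch_delivery_cost:
            accepted.extend(group)
            profit += margin - batch_delivery_cost
    return accepted, profit

orders = [("A", 100, 60), ("A", 50, 45), ("B", 30, 28)]
accepted, profit = plan_orders(orders, batch_delivery_cost=10)
print(profit)  # 35.0 (A's margins 40 + 5 cover the delivery cost 10; B rejected)
```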
A Data Flow Model to Solve the Data Distribution Changing Problem in Machine Learning
Directory of Open Access Journals (Sweden)
Shang Bo-Wen
2016-01-01
Full Text Available Continuous prediction is widely used in communities ranging from social to business applications, and machine learning is an important method for this problem. When we use machine learning for prediction, we fit the model on the training set and estimate the distribution of the data in the test set. But when we use machine learning for continuous prediction, we receive new data as time goes by and use them to predict future data, and a problem arises: as the size of the data set increases over time, the distribution changes and the training set accumulates garbage data. We should remove the garbage data, as they reduce the accuracy of the prediction. The main contribution of this article is using the new data to detect the timeliness of historical data and to remove the garbage data. We build a data flow model to describe how data flow among the test set, training set, validation set and garbage set, and improve the accuracy of prediction. As the data set changes, the best machine learning model changes as well. We design a hybrid voting algorithm that fits the data set better: it uses seven machine learning models to predict the same problem and uses the validation set to put different weights on the models, giving better models more weight. Experimental results show that, when the distribution of the data set changes over time, our data flow model can remove most of the garbage data and obtain a better result than the traditional method that adds all data to the data set, and our hybrid voting algorithm yields a better prediction than the average accuracy of the individual models.
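The hybrid voting scheme described above can be sketched as follows (a simplified illustration with three toy models instead of the paper's seven; the predictions and labels are invented): each model is weighted by its validation-set accuracy, and class predictions are combined by weighted vote.

```python
from collections import Counter

def weighted_vote(predictions, weights):
    """Combine class predictions from several models by weighted voting."""
    tally = Counter()
    for pred, w in zip(predictions, weights):
        tally[pred] += w
    return tally.most_common(1)[0][0]

def validation_weights(models_val_preds, y_val):
    """Weight each model by its accuracy on the validation set."""
    return [sum(p == y for p, y in zip(preds, y_val)) / len(y_val)
            for preds in models_val_preds]

# Three toy models' predictions on a validation set of 4 points.
val_preds = [[1, 0, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0]]
y_val = [1, 0, 1, 0]
w = validation_weights(val_preds, y_val)   # [0.75, 0.5, 0.5]
print(weighted_vote([1, 1, 0], w))         # 1 (class 1 gets 1.25, class 0 gets 0.5)
```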
DEFF Research Database (Denmark)
Chemi, Tatiana
2016-01-01
a perspective that is relevant to higher education. The focus here is on how artists solve problems in distributed paths, and on the elements of creative collaboration. Creative problem-solving will be looked at as an ongoing dialogue that artists engage in with themselves, with others, and with recipients... What can educators in higher education learn from the ways creative groups solve problems? How can artists contribute to inspiring higher education?
Particle size-shape distributions: the general spheroid problem. I. Mathematical model.
Orive, L M
1976-08-01
The development of stereological methods for the study of dilute phases of particles, voids or organelles embedded in a matrix, from measurements made on plane or linear intercepts through the aggregate, has attracted a great deal of effort. With almost no exception, the problem of describing the particulate phase is reduced to that of identifying the statistical distribution--histogram in practice--of a relevant size parameter, under the prior assumption that the particles are modelled by geometrical objects of a constant shape (e.g. spheres). Therefore, particles exhibiting a random variation about a given type of shape as well as a random variation in size escape previous analyses. Such is the case of unequiaxed particles modelled by triaxial ellipsoids of variable size and eccentricity parameters. It has been conjectured (Moran, 1972) that this problem is indeterminate in its general form (i.e. the elliptical sections do not furnish sufficient information to permit a complete description of the ellipsoids). A proof of this conjecture is given in the Appendix. When the ellipsoids are biaxial (spheroids) and of the same type (prolate or oblate), the problem is identifiable. Previous attempts to solve it assume statistical independence between size and shape. A complete, theoretical solution of the spheroid problem--with the independence condition relaxed--is presented. A number of exact relationships--some of them of a striking simplicity--linking particle properties (e.g. mean caliper length, mean axial ratio, correlation coefficient between principal diameters, etc.) on the one hand, with the major and minor dimensions of the ellipses of section on the other, emerge, and natural, consistent estimators of the mentioned properties are made easily accessible for practical computation. Finally, the scope and limitations of the mathematical model are discussed.
Cayuela, L.; Golicher, J.D.; Newton, A.C.; Kolb, M.; Alburquerque, de F.S.; Arets, E.J.M.M.; Alkemade, J.R.M.; Pérez, A.M.
2009-01-01
In this paper we aim to investigate the problems and potentialities of species distribution modeling (SDM) as a tool for conservation planning and policy development and implementation in tropical regions. We reviewed 123 studies published between 1995 and 2007 in five of the leading journals in
Fazayeli, Saeed; Eydi, Alireza; Kamalabadi, Isa Nakhai
2017-07-01
Nowadays, organizations have to compete with different competitors at regional, national and international levels, so they have to improve their competitive capabilities to survive. Undertaking activities on a global scale requires a proper distribution system that can take advantage of different transportation modes. Accordingly, the present paper addresses a location-routing problem on a multimodal transportation network. The introduced problem follows four objectives simultaneously, which form the main contribution of the paper: determining multimodal routes between the supplier and distribution centers, locating mode-changing facilities, locating distribution centers, and determining product delivery tours from the distribution centers to retailers. An integer linear programming model is presented for the problem, and a genetic algorithm with a new chromosome structure is proposed to solve it. The proposed chromosome structure consists of two different parts, for the multimodal transportation and location-routing parts of the model. Based on published data in the literature, two numerical cases of different sizes were generated and solved. Different cost scenarios were also designed to better analyze model and algorithm performance. Results show that the algorithm can effectively solve large-size problems within a reasonable time, whereas GAMS software failed to reach an optimal solution even within much longer times.
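A two-part chromosome of the kind described above might be decoded along these lines (a hypothetical encoding invented for illustration; the paper's actual operators and data are not given in the abstract): the first part selects a transport mode per leg, and the second orders retailers into capacity-feasible delivery tours.

```python
def decode(chromosome, n_legs, modes, retailers, vehicle_cap):
    """Decode a hypothetical two-part chromosome.

    Part 1: one mode gene per transport leg (multimodal routing).
    Part 2: a permutation of retailer indices, split into delivery
    tours whenever vehicle capacity would be exceeded.
    """
    mode_genes, tour_genes = chromosome[:n_legs], chromosome[n_legs:]
    legs = [modes[g % len(modes)] for g in mode_genes]
    tours, tour, load = [], [], 0
    for g in tour_genes:
        retailer, demand = retailers[g]
        if load + demand > vehicle_cap:   # vehicle full: start a new tour
            tours.append(tour)
            tour, load = [], 0
        tour.append(retailer)
        load += demand
    if tour:
        tours.append(tour)
    return legs, tours

modes = ["road", "rail", "sea"]
retailers = [("r1", 4), ("r2", 3), ("r3", 5), ("r4", 2)]
chromosome = [0, 2, 1] + [2, 0, 1, 3]   # 3 leg genes + a retailer permutation
legs, tours = decode(chromosome, 3, modes, retailers, vehicle_cap=8)
print(legs)   # ['road', 'sea', 'rail']
print(tours)  # [['r3'], ['r1', 'r2'], ['r4']]
```

In a full genetic algorithm, crossover and mutation would operate on the two parts separately so that offspring remain decodable.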
Distributed Systems: The Hard Problems
CERN. Geneva
2015-01-01
**Nicholas Bellerophon** works as a client services engineer at Basho Technologies, helping customers set up and run distributed systems at scale in the wild. He has also worked in massively multiplayer games, and recently completed a live scalable simulation engine. He is an avid TED-watcher with interests in many areas of the arts, science, and engineering, including of course high-energy physics.
Problem solving environment for distributed interactive applications
Rycerz, K.; Bubak, M.; Sloot, P.; Getov, V.; Gorlatch, S.; Bubak, M.; Priol, T.
2008-01-01
Interactive Problem Solving Environments (PSEs) offer an integrated approach for constructing and running complex systems, such as distributed simulation systems. To achieve efficient execution of High Level Architecture (HLA)-based distributed interactive simulations on the Grid, we introduce a PSE
Problems in Cybersemiotic Modelling
DEFF Research Database (Denmark)
Brier, Søren
2014-01-01
on the basis of the evolutionary semiotics paradigm of C.S. Peirce. Semiotics underlines realism more, but is also relational in its whole project. In Cybersemiotics the autopoietic model is integrated in the Peircean framework, which is of a far greater scope than autopoiesis. Thus in Cybersemiotics we have... Uexküll's cybernetic-behavioral model, which has the problem of being placed in a Platonic, static worldview. The Umwelt of an animal is a construction limited by its functional realism of survival. It is connected to the species. 2. Thure von Uexküll and Søren Brier both realized that Maturana and Varela...
The capacitated distribution and waste disposal problem
Bloemhof-Ruwaard, Jacqueline; Salomon, Marc; Wassenhove, Luk
1996-01-01
We study the problem of the simultaneous design of a distribution network with plants and waste disposal units, and the coordination of product flows and waste flows within this network. The objective is to minimize the sum of fixed costs for opening plants and waste disposal units, and variable costs related to product and waste flows. The problem is complicated by (i) capacity constraints on plants and waste disposal units, (ii) service requirements (i.e. production must cover t...
A practical solution for a newspaper distribution problem
Mantel, Ronald; Fontein, M.
1993-01-01
In this paper the problem of distributing newspapers is treated. After a general introduction on this topic, a mathematical model for a hierarchical distribution system is given explicitly and a heuristic consisting of several solution techniques is described. Furthermore, some results of the
Hierarchical species distribution models
Hefley, Trevor J.; Hooten, Mevin B.
2016-01-01
Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.
The capacitated distribution and waste disposal problem
J.M. Bloemhof-Ruwaard (Jacqueline); M. Salomon (Marc); L.N. van Wassenhove (Luk)
1996-01-01
We study the problem of the simultaneous design of a distribution network with plants and waste disposal units, and the coordination of product flows and waste flows within this network. The objective is to minimize the sum of fixed costs for opening plants and waste disposal units, and variable costs related to product and waste flows.
Problems in Cybersemiotic Modelling
DEFF Research Database (Denmark)
Brier, Søren
2014-01-01
...'s constructivist biology came closer to a modern version of Jacob von Uexküll's. Maturana's model is a relational model. Cognition and communication aim to conserve a viable relation between living system and environment. It is as such not an objective modeling. 3. This model is reinterpreted in biosemiotics... Going from an empirical to an informational paradigm of cognition and communication does not really help us to analyze how living systems manage to make a meaningful interpretation of the environment that is useful for their survival and procreation. Other models are needed. 1. There is von Uexküll's cybernetic-behavioral model, which has the problem of being placed in a Platonic, static worldview. The Umwelt of an animal is a construction limited by its functional realism of survival. It is connected to the species. 2. Thure von Uexküll and Søren Brier both realized that Maturana and Varela...
Integrating packing and distribution problems and optimization through mathematical programming
Directory of Open Access Journals (Sweden)
Fabio Miguel
2016-06-01
Full Text Available This paper analyzes the integration of two combinatorial problems that frequently arise in production and distribution systems. One is the Bin Packing Problem (BPP), which involves finding an ordering of objects of different volumes to be packed into the minimal number of containers of the same or different sizes. An optimal solution to this NP-hard problem can be approximated by means of meta-heuristic methods. On the other hand, we consider the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW), which is a variant of the Travelling Salesman Problem (again an NP-hard problem) with extra constraints. Here we model these two problems in a single framework and use an evolutionary meta-heuristic to solve them jointly. Furthermore, we use data from a real-world company as a test-bed for the method introduced here.
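The BPP side of the integration above is commonly approximated by simple heuristics; a minimal sketch of the classical first-fit decreasing heuristic (a standard approximation, not the evolutionary meta-heuristic used in the paper; the item volumes are invented) is:

```python
def first_fit_decreasing(volumes, capacity):
    """First-fit decreasing heuristic for the Bin Packing Problem.

    Sort items largest-first, place each into the first bin with room,
    opening a new bin when none fits.
    """
    bins = []
    for v in sorted(volumes, reverse=True):
        for b in bins:
            if sum(b) + v <= capacity:
                b.append(v)
                break
        else:
            bins.append([v])   # no existing bin fits: open a new one
    return bins

packed = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
print(len(packed))  # 2 bins: [8, 2] and [4, 4, 1, 1]
```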
Simulating quantum correlations as a distributed sampling problem
International Nuclear Information System (INIS)
Degorre, Julien; Laplante, Sophie; Roland, Jeremie
2005-01-01
It is known that quantum correlations exhibited by a maximally entangled qubit pair can be simulated with the help of shared randomness, supplemented with additional resources, such as communication, postselection or nonlocal boxes. For instance, in the case of projective measurements, it is possible to solve this problem with protocols using one bit of communication or making one use of a nonlocal box. We show that this problem reduces to a distributed sampling problem. We give a new method to obtain samples from a biased distribution, starting with shared random variables following a uniform distribution, and use it to build distributed sampling protocols. This approach allows us to derive, in a simpler and unified way, many existing protocols for projective measurements, and extend them to positive operator-valued measurements. Moreover, this approach naturally leads to a local hidden variable model for Werner states.
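The core trick above, obtaining samples from a biased distribution starting from shared uniform randomness, can be sketched with inverse-transform sampling (here an exponential distribution stands in for whatever biased target the protocol requires; the specific distribution is an assumption for illustration):

```python
import math
import random

def inverse_exponential(u, lam=1.0):
    """Inverse CDF of the exponential distribution (a stand-in for a
    generic biased target distribution)."""
    return -math.log(1.0 - u) / lam

rng = random.Random(0)
shared = [rng.random() for _ in range(5)]   # shared uniform random variables

# Both parties apply the same deterministic transform to the shared
# uniforms, so they obtain identical samples from the biased target
# distribution without any communication.
alice = [inverse_exponential(u) for u in shared]
bob = [inverse_exponential(u) for u in shared]
print(alice == bob)  # True
```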
Bounding species distribution models
Directory of Open Access Journals (Sweden)
Thomas J. STOHLGREN, Catherine S. JARNEVICH, Wayne E. ESAIAS, Jeffrey T. MORISETTE
2011-10-01
Full Text Available Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for “clamping” model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642–647, 2011].
Bounding Species Distribution Models
Stohlgren, Thomas J.; Jarnevich, Cahterine S.; Morisette, Jeffrey T.; Esaias, Wayne E.
2011-01-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].
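The bounding ("clamping") step described above can be sketched directly (a minimal illustration with invented predictor values, not the CART/Maxent pipeline itself): each environmental predictor of an extrapolation point is clipped to the minimum/maximum range seen in the training data.

```python
def clamp_predictors(new_points, train_points):
    """Clamp each environmental predictor of new (extrapolation) points
    to the min/max range observed in the training data."""
    n_vars = len(train_points[0])
    lo = [min(p[j] for p in train_points) for j in range(n_vars)]
    hi = [max(p[j] for p in train_points) for j in range(n_vars)]
    return [[min(max(x, lo[j]), hi[j]) for j, x in enumerate(p)]
            for p in new_points]

# Hypothetical predictors: temperature (deg C) and rainfall (mm).
train = [[10.0, 200.0], [15.0, 400.0], [12.0, 350.0]]
new = [[8.0, 500.0], [14.0, 300.0]]
print(clamp_predictors(new, train))  # [[10.0, 400.0], [14.0, 300.0]]
```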
Sen, Sedat
2018-01-01
Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…
Algorithms and ordering heuristics for distributed constraint satisfaction problems
Wahbi , Mohamed
2013-01-01
DisCSP (Distributed Constraint Satisfaction Problem) is a general framework for solving distributed problems arising in Distributed Artificial Intelligence. A wide variety of problems in artificial intelligence are solved using the constraint satisfaction problem paradigm. However, there are several applications in multi-agent coordination that are of a distributed nature. In this type of application, the knowledge about the problem, that is, variables and constraints, may be logically or geographically distributed among physically distributed agents. This distribution is mainly due to p
PCB contaminated distribution transformers: problems and solutions
Energy Technology Data Exchange (ETDEWEB)
Dinelli, G.; Quattroni, G. [ENEL SpA, DSR, CRR (Italy); Como, G.; Quintavalle, R. [ENEL SpA, DDI (Italy); Nigris, M. de; Pigini, A. [CESI (Italy)
1997-12-31
As required by European and Italian legislation, a systematic screening of the distribution transformers in service was made by ENEL. The screening covered about 300,000 medium-to-low-voltage transformers, with rated power from 50 to 630 kVA, and showed that a certain number of transformers were contaminated to various degrees with PCBs. An economic evaluation was made to compare the different solutions to the problem: continuation of service without any intervention, decontamination, direct disposal, or disposal after treatment. The influence of the PCB content and of the size of the transformer on the solution to be adopted was analyzed. Technical evaluations of the various decontamination procedures were made. In particular, the different commercial methods for the dehalogenation of the dielectric oil and for the decontamination of the solid parts of the transformers were investigated: from the washing of the transformer with fresh oil to the use of solvents, both in open and closed configuration in liquid or steam form, to the use of vacuum technologies. Functional tests were carried out on the transformers before and after decontamination, to identify any possible impact of the decontamination on the transformer's functional and electrical characteristics. Finally, a simulation of service conditions was made to reveal any process of PCB release after treatment and during service. On the basis of the activity performed, the lines for solving the PCB problem for distribution transformers in Italy were drawn, by means of the setting up of technical specifications for verifying the adequacy of dehalogenation and/or decontamination methods and of the qualification of the decontamination service suppliers. (author)
B. Kaynar; S.I. Birbil (Ilker); J.B.G. Frenk (Hans)
2007-01-01
In this paper, portfolio problems with linear loss functions and multivariate elliptically distributed returns are studied. We consider two risk measures, Value-at-Risk and Conditional-Value-at-Risk, and two types of decision makers, risk neutral and risk averse. For Value-at-Risk, we show
Species Distribution Modelling
DEFF Research Database (Denmark)
Gomes, Vitor H. F.; Ijff, Stephanie D.; Raes, Niels
2018-01-01
Species distribution models (SDMs) are widely used in ecology and conservation. Presence-only SDMs such as MaxEnt frequently use natural history collections (NHCs) as occurrence data, given their huge numbers and accessibility. NHCs are often spatially biased which may generate inaccuracies in SD...
The Military Theater Distribution Network Design Problem
2015-03-26
manufacturing, power distribution, resource management, financial planning, and many others. The study of network flow models in effect addresses three questions... vary in levels of economic, social, and political stability. As a nation formed by colonial powers in the 18th century, Nigeria's borders are at times...
Optimizing Distribution Problems using WinQSB Software
Directory of Open Access Journals (Sweden)
Daniel Mihai Amariei
2015-07-01
Full Text Available In the present paper we present a distribution problem solved using the Network Modeling Module of the WinQSB software: 5 athletes must each be assigned to the optimal event, as a function of their recorded times, so as to obtain the maximum output from the athletes. We also analyzed the case in which 2 athletes are injured, matching the remaining 3 athletes with 5 different athletic events to obtain the maximum matching, computed using the Hungarian algorithm.
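The assignment step above can be sketched as follows (a brute-force search over permutations rather than the Hungarian algorithm proper, which is what one would use for larger instances; the times are invented, and a 3x3 case stands in for the paper's athletes and events):

```python
from itertools import permutations

def best_assignment(times):
    """Assign each athlete (row) to one event (column) minimizing total
    time. Brute force over permutations is fine for tiny n; the
    Hungarian algorithm solves the same problem in O(n^3)."""
    n = len(times)
    best = min(permutations(range(n)),
               key=lambda p: sum(times[i][p[i]] for i in range(n)))
    return list(best), sum(times[i][best[i]] for i in range(n))

# Hypothetical times (seconds) of 3 athletes over 3 events.
times = [[12.1, 11.8, 13.0],
         [11.9, 12.5, 12.7],
         [12.4, 12.0, 12.2]]
assignment, total = best_assignment(times)
print(assignment, round(total, 1))  # [1, 0, 2] 35.9
```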
Modelling of a collage problem
Directory of Open Access Journals (Sweden)
Abdelaziz Ait Moussa
2006-09-01
Full Text Available In this paper we study the behavior of elastic adherents connected with an adhesive. We use the $\Gamma$-convergence method to approximate the problem modelling the assemblage, with density energies assumed to be quasiconvex. In particular, for the adhesive problem, we assume a periodic density energy and some growth conditions with respect to the spherical and deviatoric components of the gradient. We obtain a problem depending on small parameters linked to the thickness and the stiffness of the adhesive.
Distribution-Preserving Stratified Sampling for Learning Problems.
Cervellera, Cristiano; Maccio, Danilo
2017-06-09
The need for extracting a small sample from a large amount of real data, possibly streaming, arises routinely in learning problems, e.g., for storage, to cope with computational limitations, obtain good training/test/validation sets, and select minibatches for stochastic gradient neural network training. Unless we have reasons to select the samples in an active way dictated by the specific task and/or model at hand, it is important that the distribution of the selected points is as similar as possible to the original data. This is obvious for unsupervised learning problems, where the goal is to gain insights on the distribution of the data, but it is also relevant for supervised problems, where the theory explains how the training set distribution influences the generalization error. In this paper, we analyze the technique of stratified sampling from the point of view of distances between probabilities. This allows us to introduce an algorithm, based on recursive binary partition of the input space, aimed at obtaining samples that are distributed as much as possible as the original data. A theoretical analysis is proposed, proving the (greedy) optimality of the procedure together with explicit error bounds. An adaptive version of the algorithm is also introduced to cope with streaming data. Simulation tests on various data sets and different learning tasks are also provided.
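The recursive-partition idea above can be sketched in one dimension (a simplified illustration only: median bisection plays the role of the paper's recursive binary partition, with proportional allocation per leaf cell; the data are synthetic):

```python
import random

def stratified_sample(points, k, depth=3, seed=0):
    """Sketch of distribution-preserving stratified sampling: recursively
    bisect the (1-D) input space at the median, then draw from each leaf
    cell a number of points proportional to its share of the data."""
    rng = random.Random(seed)

    def split(cell, d):
        if d == 0 or len(cell) < 2:
            return [cell]
        cell = sorted(cell)
        mid = len(cell) // 2
        return split(cell[:mid], d - 1) + split(cell[mid:], d - 1)

    sample = []
    for cell in split(list(points), depth):
        n = round(k * len(cell) / len(points))   # proportional allocation
        sample.extend(rng.sample(cell, min(n, len(cell))))
    return sample

rng = random.Random(1)
data = [rng.gauss(0, 1) for _ in range(1000)]
s = stratified_sample(data, k=80)
print(len(s))  # 80 (8 leaf cells of 125 points each, 10 drawn from each)
```

Because every leaf cell contributes in proportion to its mass, the sample's empirical distribution tracks the original data more closely than plain uniform subsampling would on skewed data.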
Mathematical problems in meteorological modelling
Csomós, Petra; Faragó, István; Horányi, András; Szépszó, Gabriella
2016-01-01
This book deals with mathematical problems arising in the context of meteorological modelling. It gathers and presents some of the most interesting and important issues from the interaction of mathematics and meteorology. It is unique in that it features contributions on topics like data assimilation, ensemble prediction, numerical methods, and transport modelling, from both mathematical and meteorological perspectives. The derivation and solution of all kinds of numerical prediction models require the application of results from various mathematical fields. The present volume is divided into three parts, moving from mathematical and numerical problems through air quality modelling, to advanced applications in data assimilation and probabilistic forecasting. The book arose from the workshop “Mathematical Problems in Meteorological Modelling” held in Budapest in May 2014 and organized by the ECMI Special Interest Group on Numerical Weather Prediction. Its main objective is to highlight the beauty of the de...
Achievements and Problems of Conceptual Modelling
Thalheim, Bernhard
Database and information systems technology has changed substantially. Nowadays, content management systems, (information-intensive) web services, collaborating systems, internet databases, OLAP databases etc. have become buzzwords. At the same time, object-relational technology has gained the maturity for being widely applied. Conceptual modelling has not (yet) covered all these novel topics. For more than two decades it has concentrated on the specification of structures. Meanwhile, functionality, interactivity and distribution must be included in conceptual modelling of information systems. Also, some of the open problems already discussed in 1987 [15, 16] still remain open. At the same time, novel models such as object-relational models or XML-based models have been developed. They did not overcome all the problems but have sharpened and extended the variety of open problems. The open problems presented are given for classical areas of database research, i.e., structuring and functionality. The entire area of distribution and interaction is currently an area of very intensive research.
A model for routing problem in quay management problem
Zirour, Mourad; Oughalime, Ahmed; Liong, Choong-Yeun; Ismail, Wan Rosmanira; Omar, Khairuddin
2014-06-01
The Quadratic Assignment Problem (QAP), like the Vehicle Routing Problem, is one of the optimization problems that has interested many researchers in recent decades. The Quay Management Problem is a specific problem that can be presented as a QAP involving a double assignment of customers and products to loading positions using lifting trucks. This study focuses on the routing problem that arises while delivering the customers' demands. In this problem, lifting trucks route around the storage sections to collect the products and then deliver them to the customers, who are assigned to specific loading positions. The objective of minimizing the residence time for each customer is sought. This paper presents the problem and the proposed model.
Numerical models for differential problems
Quarteroni, Alfio
2017-01-01
In this text, we introduce the basic concepts for the numerical modelling of partial differential equations. We consider the classical elliptic, parabolic and hyperbolic linear equations, but also the diffusion, transport, and Navier-Stokes equations, as well as equations representing conservation laws, saddle-point problems and optimal control problems. Furthermore, we provide numerous physical examples which underline such equations. We then analyze numerical solution methods based on finite elements, finite differences, finite volumes, spectral methods and domain decomposition methods, and reduced basis methods. In particular, we discuss the algorithmic and computer implementation aspects and provide a number of easy-to-use programs. The text does not require any previous advanced mathematical knowledge of partial differential equations: the absolutely essential concepts are reported in a preliminary chapter. It is therefore suitable for students of bachelor and master courses in scientific disciplines, an...
Distributed Interior-point Method for Loosely Coupled Problems
DEFF Research Database (Denmark)
Pakazad, Sina Khoshfetrat; Hansson, Anders; Andersen, Martin Skovgaard
2014-01-01
In this paper, we put forth distributed algorithms for solving loosely coupled unconstrained and constrained optimization problems. Such problems are usually solved using algorithms that are based on a combination of decomposition and first-order methods. These algorithms are commonly very slow and require many iterations to converge. In order to alleviate this issue, we propose algorithms that combine the Newton and interior-point methods with proximal splitting methods for solving such problems. In particular, the algorithm for solving unconstrained loosely coupled problems is based on Newton's method and utilizes proximal splitting to distribute the computations for calculating the Newton step at each iteration. A combination of this algorithm and the interior-point method is then used to introduce a distributed algorithm for solving constrained loosely coupled problems. We also provide...
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
Energy Technology Data Exchange (ETDEWEB)
Hamadameen, Abdulqader Othman [Optimization, Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia); Zainuddin, Zaitul Marlizawati [Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia)
2014-06-19
This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined through fuzzy assertions by ambiguous experts. The problem formulation is presented together with two solution strategies: a fuzzy transformation via ranking functions, and a stochastic transformation in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
Chang, Yung-Chia; Li, Vincent C.; Chiang, Chia-Ju
2014-04-01
Make-to-order or direct-order business models that require close interaction between production and distribution activities have been adopted by many enterprises in order to be competitive in demanding markets. This article considers an integrated production and distribution scheduling problem in which jobs are first processed by one of the unrelated parallel machines and then distributed to corresponding customers by capacitated vehicles without intermediate inventory. The objective is to find a joint production and distribution schedule so that the weighted sum of total weighted job delivery time and the total distribution cost is minimized. This article presents a mathematical model for describing the problem and designs an algorithm using ant colony optimization. Computational experiments illustrate that the algorithm developed is capable of generating near-optimal solutions. The computational results also demonstrate the value of integrating production and distribution in the model for the studied problem.
Distribution system modeling and analysis
Kersting, William H
2001-01-01
For decades, distribution engineers did not have the sophisticated tools developed for analyzing transmission systems; often they had only their instincts. Things have changed, and we now have computer programs that allow engineers to simulate, analyze, and optimize distribution systems. Powerful as these programs are, however, without a real understanding of the operating characteristics of a distribution system, engineers using the programs can easily make serious errors in their designs and operating procedures. Distribution System Modeling and Analysis helps prevent those errors. It gives readers a basic understanding of the modeling and operating characteristics of the major components of a distribution system. One by one, the author develops and analyzes each component as a stand-alone element, then puts them all together to analyze a distribution system comprising the various shunt and series devices for power-flow and short-circuit studies. He includes the derivation of all models and includes many num...
Robust and distributed genetic algorithm for ordering problems
Energy Technology Data Exchange (ETDEWEB)
Kumar, A.; Srivastava, A.; Singru, A. [Univ. of Louisville, KY (United States); Ghosh, R.K. [Indian Institute of Technology, Kanpur (India)
1996-12-31
This paper presents a distributed Genetic Algorithm implementation for obtaining good-quality, consistent results for different ordering problems. Most importantly, the solution found by the proposed Distributed GA is not only of high quality but also robust, and does not require fine-tuning of the probabilities of crossover and mutation. In addition, implementation of the Distributed GA is simple and does not require the use of any specialized, expensive hardware. Fault tolerance has also been provided by dynamic reconfiguration of the distributed system in the event of a process or machine failure. The effectiveness of using a simple crossover scheme with the Distributed GA is demonstrated by solving three variations of the Traveling Salesman Problem (TSP).
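The single-population core of a GA for ordering problems can be sketched as follows: permutation chromosomes, order crossover (OX), swap mutation, and elitism. This is a minimal illustration of the class of algorithm the record describes, not the paper's distributed, fault-tolerant implementation; population sizes and rates are illustrative.

```python
import random

def tour_length(tour, dist):
    # total length of a closed tour under a distance matrix
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(p1, p2):
    # OX: copy a random slice from parent 1, fill remaining cities in parent 2's order
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    fill = [c for c in p2 if c not in p1[i:j]]
    k = 0
    for idx in range(n):
        if child[idx] is None:
            child[idx] = fill[k]
            k += 1
    return child

def ga_tsp(dist, pop_size=60, gens=200, pm=0.2):
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda t: tour_length(t, dist))
        next_pop = pop[:10]                      # elitism: keep the 10 best
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:30], 2)  # truncation selection
            child = order_crossover(p1, p2)
            if random.random() < pm:             # swap mutation
                a, b = random.sample(range(n), 2)
                child[a], child[b] = child[b], child[a]
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=lambda t: tour_length(t, dist))
```

A distributed variant would run several such populations on separate machines and periodically migrate good individuals between them.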
Hydronic distribution system computer model
Energy Technology Data Exchange (ETDEWEB)
Andrews, J.W.; Strasser, J.J.
1994-10-01
A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley Laboratory (LBL). This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.
Modeling a hydroform springback problem
Energy Technology Data Exchange (ETDEWEB)
Korzekwa, D.A.; Guerra, F.M.
1986-01-01
A shallow stretch-draw sheet forming operation performed on a hydroform machine was modeled with the finite element codes NIKE2D and ADINA78. The forming process produces a thin spherical shell segment with a flange. The final shape of the part differs substantially from the die because of springback, yet the part must be very accurate to meet the specified tolerance. This difficult problem was chosen specifically to determine the limits of accuracy of currently available codes. NIKE2D produced reasonably good results on a macroscopic scale. However, the strain predictions were not quantitatively correct, and the shape predictions were not accurate enough to predict whether the part would satisfy the very restrictive tolerance. The ADINA results were similar. The experimental results strongly suggest that the friction conditions at the flange are not being modeled accurately, which results in the inaccurate strain predictions. The springback predictions were qualitatively correct, indicating that improvements in the predicted strains should give much better shape predictions.
SAMICS marketing and distribution model
1978-01-01
SAMICS (Solar Array Manufacturing Industry Costing Standards) was formulated as a computer simulation model. Given a proper description of the manufacturing technology as input, the model computes the manufacturing price of solar arrays for a broad range of production levels. This report presents a model for computing the associated marketing and distribution costs, the end point of the model being the loading dock of the final manufacturer.
B. Kaynar; S.I. Birbil (Ilker); J.B.G. Frenk (Hans)
2007-01-01
We discuss a class of risk measures for portfolio optimization with linear loss functions, where the random returns of financial instruments have a multivariate elliptical distribution. Under this setting we pay special attention to two risk measures, Value-at-Risk and
A distribution-free newsvendor problem with nonlinear holding cost
Pal, Brojeswar; Sankar Sana, Shib; Chaudhuri, Kripasindhu
2015-05-01
In this paper, we analyse a single-period newsvendor model to determine the optimal order quantity when customer balking occurs. Balking arises when customers are reluctant to buy a product for various reasons, for example because its quality has deteriorated, or because it no longer looks fresh once inventory falls below a threshold level. The model is investigated by assuming that the holding cost function depends on the order quantity and that the inventory level at which balking occurs depends on the holding cost. The model allows partial backlogging and permits part of the backlogged shortages to turn into lost sales. We develop the model without assuming any specific distributional form of demand, using only the mean and the variance of the demand distribution. Finally, we illustrate the model with numerical examples and compare our distribution-free model with a model based on a specific demand distribution.
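The classical starting point for such distribution-free models is Scarf's (1958) closed-form order quantity, which hedges against the worst demand distribution with a given mean and variance. The sketch below implements only that basic rule, without the balking and nonlinear holding-cost extensions of the paper; the numeric example is illustrative.

```python
import math

def scarf_order_quantity(mu, sigma, cu, co):
    """Scarf's max-min order quantity for the distribution-free newsvendor.

    mu, sigma : mean and standard deviation of demand (all that is assumed known)
    cu, co    : unit underage and overage costs
    """
    return mu + (sigma / 2.0) * (math.sqrt(cu / co) - math.sqrt(co / cu))

# Example: mean demand 100, std. dev. 30, underage cost 4, overage cost 1
q = scarf_order_quantity(100, 30, 4.0, 1.0)
```

Note how the rule orders above the mean when underage is the costlier error (cu > co) and below it otherwise, with the adjustment scaled by the demand uncertainty sigma.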
Vehicle Routing Problem Using Genetic Algorithm with Multi Compartment on Vegetable Distribution
Kurnia, Hari; Gustri Wahyuni, Elyza; Cergas Pembrani, Elang; Gardini, Syifa Tri; Kurnia Aditya, Silfa
2018-03-01
A problem often faced by industries that manage and distribute vegetables is how to distribute them so that their quality is properly maintained. The problems encountered include selecting optimal routes with short travel times, as in the Traveling Salesman Problem (TSP). These problems can be modeled as a Vehicle Routing Problem (VRP) solved with a genetic algorithm using rank-based selection, order-based crossover, and order-based mutation on the selected chromosomes. This study is limited to 20 market points, 2 warehouses (multi-compartment), and 5 vehicles. For one distribution run, a vehicle can deliver to at most 4 market points from one particular warehouse, and can carry at most 100 kg.
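Before any routing is optimized, the study's side constraints (at most 4 stops and 100 kg per vehicle) determine whether an assignment of markets to vehicles is feasible at all. The sketch below is a simple greedy feasibility check under those limits; it is not the paper's GA, and the demand figures are illustrative.

```python
def greedy_assign(demands, capacity=100.0, max_stops=4, vehicles=5):
    """Greedily pack market demands (kg) into vehicle routes honoring the
    100 kg capacity and 4-stop limit from the study. Returns routes as
    lists of market indices, or raises if the fleet is insufficient."""
    routes = [[] for _ in range(vehicles)]
    loads = [0.0] * vehicles
    # place the largest demands first; this tends to pack better
    for m in sorted(range(len(demands)), key=lambda i: -demands[i]):
        for v in range(vehicles):
            if len(routes[v]) < max_stops and loads[v] + demands[m] <= capacity:
                routes[v].append(m)
                loads[v] += demands[m]
                break
        else:
            raise ValueError("no feasible vehicle for market %d" % m)
    return routes

# 20 markets at 20 kg each fit exactly: 5 vehicles x 4 stops x 80 kg
routes = greedy_assign([20.0] * 20)
```

A GA would then search over such feasible assignments and the visiting order within each route.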
Organizational problems of Water Distribution in Khorezm, Uzbekistan
Wegerich, K.
2004-01-01
The paper addresses problems of water resource management on the district and provincial level in the Khorezm province, Uzbekistan. The district water organizations are responsible for equitable water distribution to the agricultural users. These organizations do not have the necessary logistical
Impacts of Transportation Cost on Distribution-Free Newsboy Problems
Directory of Open Access Journals (Sweden)
Ming-Hung Shu
2014-01-01
Full Text Available A distribution-free newsboy problem (DFNP) lets a vendor decide a product's stock quantity in a single-period inventory system so as to maximize its worst-case expected profit under fierce and diverse market circumstances. The impact of transportation cost on the optimal inventory quantity has recently attracted attention, but its influence on the DFNP has not been fully investigated. Borrowing an economic theory from the transportation discipline, this paper tackles the DFNP with the transportation cost formulated as a function of shipping quantity and modeled in nonlinear regression form from UPS's on-site shipping-rate data. An optimal order quantity is computed on the basis of Newton's method to reduce the computational complexity. In comparative studies, the lower bounds of the maximal expected profit of the proposed methodology surpass those of existing work. Finally, we extend the analysis to several practical inventory cases including fixed ordering cost, random yield, and a multiproduct condition.
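The abstract's computational idea can be sketched generically: take Scarf's worst-case profit bound, subtract a nonlinear shipping cost t(q) = a·q^b, and find the stationary point by Newton iteration on the derivative. The paper fits its shipping-cost regression to UPS data; every constant below is hypothetical and chosen only so the objective is concave (b > 1).

```python
import math

# Hypothetical parameters (the paper fits the shipping-cost function to
# UPS rate data; the values here are illustrative only).
p, c = 10.0, 5.0          # unit selling price and purchase cost
mu, sigma = 100.0, 30.0   # mean and std. dev. of demand
a, b = 0.5, 1.2           # shipping cost t(q) = a * q**b

def dprofit(q):
    """Derivative of Scarf's worst-case profit bound minus shipping cost:
    f(q) = (p - c) q - (p/2) (sqrt(sigma^2 + (q - mu)^2) + (q - mu)) - a q**b."""
    s = math.sqrt(sigma ** 2 + (q - mu) ** 2)
    return (p - c) - (p / 2.0) * ((q - mu) / s + 1.0) - a * b * q ** (b - 1)

def d2profit(q):
    s = math.sqrt(sigma ** 2 + (q - mu) ** 2)
    return -(p / 2.0) * sigma ** 2 / s ** 3 - a * b * (b - 1) * q ** (b - 2)

def newton_order_quantity(q0=mu, tol=1e-10, max_iter=50):
    # Newton's method on f'(q) = 0; f is concave, so f'' < 0 throughout
    q = q0
    for _ in range(max_iter):
        step = dprofit(q) / d2profit(q)
        q -= step
        if abs(step) < tol:
            break
    return q
```

With these constants the shipping cost pulls the optimal quantity below the no-transportation Scarf solution, which is the qualitative effect the paper studies.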
The Distribution-Free Newsboy Problem with Multiple Discounts and Upgrades
Directory of Open Access Journals (Sweden)
Ilkyeong Moon
2016-01-01
Full Text Available Most papers on the newsboy problem assume that excess inventory is either sold after discount or discarded. In the real world, overstocks are handled with multiple discounts, upgrades, or a combination of these measures. For example, a seller may offer a series of progressively increasing discounts for units that remain on the shelf, or the seller may use incrementally applied innovations aimed at stimulating greater product sophistication. Moreover, the normal distribution does not provide better protection than other distributions with the same mean and variance. In this paper, we find the differences between normal distribution approaches and distribution-free approaches in four scenarios with mean and variance of demand as the only available data to decision-makers. First, we solve the newsboy problem by considering multiple discounts. Second, we formulate and solve the newsboy problem by considering multiple upgrades. Third, we formulate and solve a mixed newsboy problem characterized with multiple discounts and upgrades. Finally, we extend the model to solve a multiproduct newsboy problem with a storage or a budget constraint and develop an algorithm to find the solutions of the models. Concavity of the models is proved analytically. Extensive computational experiments are presented to verify the robustness of the distribution-free approach. The results show that the distribution-free approach is robust.
Modeling a four-layer location-routing problem
Directory of Open Access Journals (Sweden)
Mohsen Hamidi
2012-01-01
Full Text Available Distribution is an indispensable component of logistics and supply chain management. Location-Routing Problem (LRP is an NP-hard problem that simultaneously takes into consideration location, allocation, and vehicle routing decisions to design an optimal distribution network. Multi-layer and multi-product LRP is even more complex as it deals with the decisions at multiple layers of a distribution network where multiple products are transported within and between layers of the network. This paper focuses on modeling a complicated four-layer and multi-product LRP which has not been tackled yet. The distribution network consists of plants, central depots, regional depots, and customers. In this study, the structure, assumptions, and limitations of the distribution network are defined and the mathematical optimization programming model that can be used to obtain the optimal solution is developed. Presented by a mixed-integer programming model, the LRP considers the location problem at two layers, the allocation problem at three layers, the vehicle routing problem at three layers, and a transshipment problem. The mathematical model locates central and regional depots, allocates customers to plants, central depots, and regional depots, constructs tours from each plant or open depot to customers, and constructs transshipment paths from plants to depots and from depots to other depots. Considering realistic assumptions and limitations such as producing multiple products, limited production capacity, limited depot and vehicle capacity, and limited traveling distances enables the user to capture the real world situations.
Stochastic inverse problems: Models and metrics
International Nuclear Information System (INIS)
Sabbagh, Elias H.; Sabbagh, Harold A.; Murphy, R. Kim; Aldrin, John C.; Annis, Charles; Knopp, Jeremy S.
2015-01-01
In past work, we introduced model-based inverse methods and applied them to problems in which the anomaly could be reasonably modeled by simple canonical shapes, such as rectangular solids. In these cases the parameters to be inverted would be length, width and height, as well as the occasional probe lift-off or rotation. We are now developing a formulation that allows more flexibility in modeling complex flaws. The idea consists of expanding the flaw in a sequence of basis functions, and then solving for the expansion coefficients of this sequence, which are modeled as independent random variables, uniformly distributed over their range of values. There are a number of applications of such modeling: (1) connected cracks and multiple half-moons, which we have noted in a POD set; ideally we would like to distinguish connected cracks from one long shallow crack; (2) cracks of irregular profile and shape which have appeared in cold work holes during bolt-hole eddy-current inspection, where one side of the crack is much deeper than the other; (3) L- or C-shaped crack profiles at the surface, examples of which have been seen in bolt-hole cracks. By formulating problems in a stochastic sense, we are able to leverage the stochastic global optimization algorithms in NLSE, which is resident in VIC-3D®, to answer questions of global minimization and to compute confidence bounds using the sensitivity coefficients that we get from NLSE. We will also address the issue of surrogate functions which are used during the inversion process, and how they contribute to the quality of the estimation of the bounds.
Stochastic inverse problems: Models and metrics
Energy Technology Data Exchange (ETDEWEB)
Sabbagh, Elias H.; Sabbagh, Harold A.; Murphy, R. Kim [Victor Technologies, LLC, Bloomington, IN 47407-7706 (United States); Aldrin, John C. [Computational Tools, Gurnee, IL 60031 (United States); Annis, Charles [Statistical Engineering, Palm Beach Gardens, FL 33418 (United States); Knopp, Jeremy S. [Air Force Research Laboratory (AFRL/RXCA), Wright Patterson AFB, OH 45433-7817 (United States)
2015-03-31
In past work, we introduced model-based inverse methods and applied them to problems in which the anomaly could be reasonably modeled by simple canonical shapes, such as rectangular solids. In these cases the parameters to be inverted would be length, width and height, as well as the occasional probe lift-off or rotation. We are now developing a formulation that allows more flexibility in modeling complex flaws. The idea consists of expanding the flaw in a sequence of basis functions, and then solving for the expansion coefficients of this sequence, which are modeled as independent random variables, uniformly distributed over their range of values. There are a number of applications of such modeling: (1) connected cracks and multiple half-moons, which we have noted in a POD set; ideally we would like to distinguish connected cracks from one long shallow crack; (2) cracks of irregular profile and shape which have appeared in cold work holes during bolt-hole eddy-current inspection, where one side of the crack is much deeper than the other; (3) L- or C-shaped crack profiles at the surface, examples of which have been seen in bolt-hole cracks. By formulating problems in a stochastic sense, we are able to leverage the stochastic global optimization algorithms in NLSE, which is resident in VIC-3D®, to answer questions of global minimization and to compute confidence bounds using the sensitivity coefficients that we get from NLSE. We will also address the issue of surrogate functions which are used during the inversion process, and how they contribute to the quality of the estimation of the bounds.
Modeling error distributions of growth curve models through Bayesian methods.
Zhang, Zhiyong
2016-06-01
Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
PROBLEM IDENTIFICATION OF FOREIGN TOURIST DISTRIBUTION IN INDONESIA
Directory of Open Access Journals (Sweden)
Supriono
2017-07-01
Full Text Available Indonesia should be able to distribute the visits of foreign tourists evenly, so that visits are not focused on only certain places. It is expected that all the tourism objects in Indonesia can attract and be visited by foreign tourists in similar numbers in every tourist destination. In the first year, this study aimed to identify the motivation of foreign tourists visiting Indonesia and to identify the problems of distribution of foreign tourists in Indonesia. The study sites were in DKI Jakarta, Batam, and Bali. In the second year, a distribution channel strategy will be developed in order to create competitiveness of tourism. This study was conducted using qualitative research methods with descriptive analysis. The data were collected using in-depth interviews with tourism stakeholders (the Government, international travelers, and tourism bureaus/travel agencies. The research results show that the motivation of foreign tourists visiting Indonesia was related to business or purely vacation. Additionally, the problems of foreign tourist distribution in Indonesia emerged from several aspects, including limited entrance points for foreign tourists to Indonesia, lack of connectivity between airports in Indonesia and international flights, lack of inter-regional cooperation between tourism actors, lack of infrastructure, and the ignorance of foreign tourists of many tourist destinations in Indonesia due to less effective and efficient promotion activities.
Brane world model and hierarchy problem
International Nuclear Information System (INIS)
Alba, V.
2007-01-01
This paper describes the Kaluza-Klein model and shows how the hierarchy problem can be solved within the Randall-Sundrum model, which motivated the author's study of this part of theoretical physics.
Broadband model of the distribution network
DEFF Research Database (Denmark)
Jensen, Martin Høgdahl
of the four-wire cable, but above and below the natural frequency there is good agreement between simulation and measurements. The problem with the natural frequency is not specifically related to the four-wire cable model, but is a general problem related to the distributed nature of transmission lines...... measurement and simulation, once the Phase model is used. No explanation is found on why the new material properties cause error in the Phase model. At the Kyndby 10 kV test site a non-linear load is inserted on the secondary side of a normal distribution transformer and the phase voltage and current...... is measured. The measurements are performed with and without the four-wire cable inserted between the transformer and load. The 10 kV test site is modelled in EMTDC with standard components. Similarly, the non-linear load is modelled as a six-pulse diode bridge loaded with a resistor on the DC......
Crack problem in superconducting cylinder with exponential distribution of critical-current density
Zhao, Yufeng; Xu, Chi; Shi, Liang
2018-04-01
The general problem of a center crack in a long cylindrical superconductor with inhomogeneous critical-current distribution is studied based on the extended Bean model for zero-field-cooling (ZFC) and field-cooling (FC) magnetization processes, in which the inhomogeneity parameter η is introduced to characterize the critical-current density distribution in an inhomogeneous superconductor. The effect of the parameter η on both the magnetic field distribution and the variations of the normalized stress intensity factors is also obtained based on the plane strain approach and J-integral theory. The numerical results indicate that the exponential distribution of critical-current density leads to a larger trapped field inside the inhomogeneous superconductor and causes the center of the cylinder to fracture more easily. In addition, comparing the shapes of the magnetization loops for homogeneous and inhomogeneous critical-current distributions shows that the nonlinear field distribution is unique to the Bean model.
Water Distribution and Removal Model
Energy Technology Data Exchange (ETDEWEB)
Y. Deng; N. Chipman; E.L. Hardin
2005-08-26
The design of the Yucca Mountain high level radioactive waste repository depends on the performance of the engineered barrier system (EBS). To support the total system performance assessment (TSPA), the Engineered Barrier System Degradation, Flow, and Transport Process Model Report (EBS PMR) is developed to describe the thermal, mechanical, chemical, hydrological, biological, and radionuclide transport processes within the emplacement drifts, which includes the following major analysis/model reports (AMRs): (1) EBS Water Distribution and Removal (WD&R) Model; (2) EBS Physical and Chemical Environment (P&CE) Model; (3) EBS Radionuclide Transport (EBS RNT) Model; and (4) EBS Multiscale Thermohydrologic (TH) Model. Technical information, including data, analyses, models, software, and supporting documents will be provided to defend the applicability of these models for their intended purpose of evaluating the postclosure performance of the Yucca Mountain repository system. The WD&R AMR is important to the site recommendation. Water distribution and removal represents one component of the overall EBS. Under some conditions, liquid water will seep into emplacement drifts through fractures in the host rock and move generally downward, potentially contacting waste packages. After waste packages are breached by corrosion, some of this seepage water will contact the waste, dissolve or suspend radionuclides, and ultimately carry radionuclides through the EBS to the near-field host rock. Lateral diversion of liquid water within the drift will occur at the inner drift surface, and more significantly from the operation of engineered structures such as drip shields and the outer surface of waste packages. If most of the seepage flux can be diverted laterally and removed from the drifts before contacting the wastes, the release of radionuclides from the EBS can be controlled, resulting in a proportional reduction in dose release at the accessible environment. The purposes
Water Distribution and Removal Model
International Nuclear Information System (INIS)
Y. Deng; N. Chipman; E.L. Hardin
2005-01-01
The design of the Yucca Mountain high level radioactive waste repository depends on the performance of the engineered barrier system (EBS). To support the total system performance assessment (TSPA), the Engineered Barrier System Degradation, Flow, and Transport Process Model Report (EBS PMR) is developed to describe the thermal, mechanical, chemical, hydrological, biological, and radionuclide transport processes within the emplacement drifts, which includes the following major analysis/model reports (AMRs): (1) EBS Water Distribution and Removal (WD and R) Model; (2) EBS Physical and Chemical Environment (P and CE) Model; (3) EBS Radionuclide Transport (EBS RNT) Model; and (4) EBS Multiscale Thermohydrologic (TH) Model. Technical information, including data, analyses, models, software, and supporting documents will be provided to defend the applicability of these models for their intended purpose of evaluating the postclosure performance of the Yucca Mountain repository system. The WD and R AMR is important to the site recommendation. Water distribution and removal represents one component of the overall EBS. Under some conditions, liquid water will seep into emplacement drifts through fractures in the host rock and move generally downward, potentially contacting waste packages. After waste packages are breached by corrosion, some of this seepage water will contact the waste, dissolve or suspend radionuclides, and ultimately carry radionuclides through the EBS to the near-field host rock. Lateral diversion of liquid water within the drift will occur at the inner drift surface, and more significantly from the operation of engineered structures such as drip shields and the outer surface of waste packages. If most of the seepage flux can be diverted laterally and removed from the drifts before contacting the wastes, the release of radionuclides from the EBS can be controlled, resulting in a proportional reduction in dose release at the accessible environment
Topology optimization of mass distribution problems in Stokes flow
DEFF Research Database (Denmark)
Gersborg-Hansen, Allan; Berggren, Martin; Dammann, Bernd
We consider topology optimization of mass distribution problems in 2D and 3D Stokes flow with the aim of designing devices that meet target outflow rates. For the purpose of validation, the designs have been post-processed using the image processing tools available in FEMLAB. In turn, this has enabled an evaluation of the design with a body-fitted mesh in standard analysis software relevant to engineering practice, prior to design manufacturing. This work investigates the proper choice of a maximum penalization value during the optimization process that ensures that the target outflow rates......
Optimization of the imported air express cargo distribution problem
Directory of Open Access Journals (Sweden)
Hwang, T.L.
2013-03-01
Full Text Available This study examines the delivery network of imported air express cargo as an integrated multi-depot vehicle routing problem, which attempts to decide which service centers should be used and how much freight should be unloaded at each service center. The role of an exchange point, which allows the delivery vans and shuttles to exchange imported and exported goods, is also addressed. Test results demonstrate the feasibility of the four models, which are therefore highly promising for a diverse array of applications, such as home delivery and reverse logistics.
Electric power scheduling: A distributed problem-solving approach
Mellor, Pamela A.; Dolce, James L.; Krupp, Joseph C.
1990-01-01
Space Station Freedom's power system, along with the spacecraft's other subsystems, needs to carefully conserve its resources and yet strive to maximize overall Station productivity. Due to Freedom's distributed design, each subsystem must work cooperatively within the Station community. There is a need for a scheduling tool which will preserve this distributed structure, allow each subsystem the latitude to satisfy its own constraints, and preserve individual value systems while maintaining Station-wide integrity. The value-driven free-market economic model is such a tool.
Ising model for distribution networks
Hooyberghs, H.; Van Lombeek, S.; Giuraniuc, C.; Van Schaeybroeck, B.; Indekeu, J. O.
2012-01-01
An elementary Ising spin model is proposed for demonstrating cascading failures (breakdowns, blackouts, collapses, avalanches, etc.) that can occur in realistic networks for distribution and delivery by suppliers to consumers. A ferromagnetic Hamiltonian with quenched random fields results from policies that maximize the gap between demand and delivery. Such policies can arise in a competitive market where firms artificially create new demand, or in a solidarity environment where too high a demand cannot reasonably be met. Network failure in the context of a policy of solidarity is possible when an initially active state becomes metastable and decays to a stable inactive state. We explore the characteristics of the demand and delivery, as well as the topological properties, which make the distribution network susceptible to failure. An effective temperature is defined, which governs the strength of the activity fluctuations which can induce a collapse. Numerical results, obtained by Monte Carlo simulations of the model on (mainly) scale-free networks, are supplemented with analytic mean-field approximations to the geometrical random field fluctuations and the thermal spin fluctuations. The role of hubs versus poorly connected nodes in initiating the breakdown of network activity is illustrated and related to model parameters.
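The building block of such simulations is Metropolis dynamics on Ising spins with quenched random fields. The sketch below runs it on a simple ring rather than the scale-free networks of the paper, and all parameters (field scale, temperature, sweep count) are illustrative; "active" nodes correspond to spins +1.

```python
import math, random

def metropolis_ising(n=100, h_scale=0.5, T=1.0, sweeps=200, seed=1):
    """Metropolis dynamics for a ring of Ising spins s_i = +/-1 with
    quenched random fields h_i, under H = -sum_<ij> s_i s_j - sum_i h_i s_i.
    Returns the final magnetization per spin (the network 'activity')."""
    rng = random.Random(seed)
    h = [rng.uniform(-h_scale, h_scale) for _ in range(n)]  # quenched fields
    s = [1] * n                       # start in the fully "active" state
    for _ in range(sweeps):
        for i in range(n):
            nbr = s[(i - 1) % n] + s[(i + 1) % n]
            dE = 2 * s[i] * (nbr + h[i])   # energy cost of flipping spin i
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i] = -s[i]               # accept the flip; otherwise reject
    return sum(s) / n

m = metropolis_ising()
```

On a heterogeneous network one would replace the ring neighborhood with each node's actual neighbor list, which is where the hub-versus-leaf effects discussed in the abstract enter.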
General problems of modeling for accelerators
International Nuclear Information System (INIS)
Luccio, A.
1991-01-01
In this presentation the author discusses only problems of modeling for circular accelerators, basing the examples on the AGS Booster Synchrotron presently being commissioned at BNL. A model is a platonic representation of an accelerator. With algorithms, implemented through computer codes, the model is brought to life. At the start of a new accelerator project, the model and the real machine take shape somewhat apart; they get closer and closer as the project goes on. Ideally, the modeler is satisfied only when the model and the machine cannot be distinguished. Accelerator modeling for real-time control has specific problems: if one wants fast responses, algorithms may be implemented in hardware or by parallel computation, perhaps by neural networks. Algorithms and modeling are not only for accelerator control. They also serve: accelerator parameter measurement; hardware problem debugging, perhaps with some help from artificial intelligence; and operator training, much like a flight simulator.
Directory of Open Access Journals (Sweden)
Joan C. Durrance
2006-01-01
Introduction. This article results from a qualitative study of (1) information behaviour in community problem-solving framed as a distributed information use environment and (2) approaches used by a best-practice library to anticipate information needs associated with community problem-solving. Method. Several approaches to data collection were used: focus groups, interviews, observation of community and library meetings, and analysis of supporting documents. We focused first on the information behaviour of community groups. Finding that the library supported these activities, we sought to understand its approach. Analysis. Data were coded thematically both for information behaviour concepts and for themes germane to problem-solving activity. A grounded theory approach was taken to capture aspects of the library staff's practice. Themes evolved from the data; supporting documentation (reports, articles and library communication) was also coded. Results. The study showed (1) how information use environment components (people, setting, problems, problem resolutions) combine in this distributed information use environment to determine specific information needs and uses; and (2) how the library contributed to the viability of this distributed information use environment. Conclusion. Community problem-solving, here explicated as a distributed IUE, is likely to be seen in multiple communities. The library model presented demonstrates that by reshaping its information practice within the framework of an information use environment, a library can anticipate community information needs as they are generated and where they are most relevant.
Bezier Curve Modeling for Neutrosophic Data Problem
Directory of Open Access Journals (Sweden)
Ferhat Tas
2017-02-01
The neutrosophic set concept is defined with membership, non-membership and indeterminacy degrees, and provides a way to represent and solve problems in various fields. In this paper, a geometric model is introduced for the neutrosophic data problem for the first time. This model is based on neutrosophic sets and neutrosophic relations. Neutrosophic control points are defined accordingly, and neutrosophic Bezier curves result from them.
The Aalborg Model and The Problem
DEFF Research Database (Denmark)
Qvist, Palle
Knowing the definition of a problem has important implications for the possibility of identifying and formulating the problem, the starting point of the learning process in the Aalborg Model. For certification it has been suggested that: A problem grows out of students' wondering within...... different disciplines and professional environments. This article goes through the definitions of a problem formulated by researchers at Aalborg University during the lifetime of the university and raises the question for each of them: does the definition lead to the creation of a feeling or experience......
Directional Overcurrent Relays Coordination Problems in Distributed Generation Systems
Directory of Open Access Journals (Sweden)
Jakub Ehrenberger
2017-09-01
This paper proposes a new approach to distributed generation system protection coordination based on directional overcurrent protections with inverse-time characteristics. The key question of protection coordination is the determination of correct values for all inverse-time characteristic coefficients. The coefficients must be chosen considering both sufficiently short tripping times and sufficiently long selectivity times. In the proposed approach, not only some, but all the required types of short-circuit contributions are taken into account. In radial systems, if the pickup currents are correctly chosen, protection coordination for maximum contributions is enough to ensure selectivity times for all the required short-circuit types. In distributed generation systems, due to different contributions flowing through the primary and selective protections, coordination for maximum contributions is not enough; all the short-circuit types must be taken into account, and protection coordination becomes a complex problem. A possible solution, based on an appropriately designed optimization, is proposed in the paper: by repeating a simple optimization considering only one short-circuit type, protection coordination considering all the required short-circuit types is achieved. To show the importance of considering all the types of short-circuit contributions, setting optimizations with one (the highest) and with all the types of short-circuit contributions have been performed. Finally, selectivity time values are explored throughout the entire protected section, and both settings are compared.
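The coordination constraint described above can be illustrated with an IEC-style standard-inverse characteristic, t = TDS * k / ((I/Is)^alpha - 1). The pickup currents, time-dial settings, fault currents, and coordination time interval below are invented for illustration; they are not the paper's optimized settings.

```python
def trip_time(i_fault, i_pickup, tds, k=0.14, alpha=0.02):
    """Inverse-time overcurrent tripping time (IEC-style standard-inverse
    curve): t = tds * k / ((I/Is)^alpha - 1). Defined only for I > Is."""
    ratio = i_fault / i_pickup
    if ratio <= 1.0:
        return float("inf")  # below pickup: the relay never trips
    return tds * k / (ratio ** alpha - 1.0)

# Selectivity check for a primary/backup relay pair: the backup must lag
# the primary by at least a coordination time interval (CTI), here 0.3 s,
# for every considered short-circuit contribution.
CTI = 0.3
fault_currents = [800.0, 1500.0, 4000.0]      # one value per short-circuit type
primary = dict(i_pickup=200.0, tds=0.05)
backup = dict(i_pickup=300.0, tds=0.2)

for i_sc in fault_currents:
    t_p = trip_time(i_sc, **primary)
    t_b = trip_time(i_sc, **backup)
    ok = t_b - t_p >= CTI
    print(f"I={i_sc:6.0f} A  primary={t_p:.3f} s  backup={t_b:.3f} s  selective={ok}")
```

Checking this inequality over every short-circuit type, not just the maximum contribution, is exactly the complication the paper addresses in meshed systems with distributed generation.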
Improved Testing of Distributed Lag Model in Presence of ...
African Journals Online (AJOL)
The finite distributed lag models (DLM) are often used in econometrics and statistics. Applying ordinary least squares (OLS) directly to the DLM for estimation may have serious problems. To overcome these problems, some alternative estimation procedures are available in the literature. One popular method to ...
Real-time modeling of heat distributions
Energy Technology Data Exchange (ETDEWEB)
Hamann, Hendrik F.; Li, Hongfei; Yarlanki, Srinivas
2018-01-02
Techniques for real-time modeling of temperature distributions based on streaming sensor data are provided. In one aspect, a method for creating a three-dimensional temperature distribution model for a room having a floor and a ceiling is provided. The method includes the following steps. A ceiling temperature distribution in the room is determined. A floor temperature distribution in the room is determined. An interpolation between the ceiling temperature distribution and the floor temperature distribution is used to obtain the three-dimensional temperature distribution model for the room.
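The interpolation step of the claimed method might be sketched as follows for a single (x, y) column of the room; the sensor readings and room height are made up for illustration, and a linear weighting is assumed.

```python
def interpolate_temperature(t_ceiling, t_floor, z, height):
    """Linearly interpolate between a floor reading and a ceiling reading
    at height z (z = 0 at the floor, z = height at the ceiling)."""
    w = z / height
    return (1.0 - w) * t_floor + w * t_ceiling

# Sensor readings at the floor and ceiling; model values at intermediate
# heights give one column of the 3-D temperature distribution.
t_floor, t_ceiling, height = 18.0, 26.0, 3.0
profile = [interpolate_temperature(t_ceiling, t_floor, z, height)
           for z in (0.0, 1.5, 3.0)]
print(profile)  # [18.0, 22.0, 26.0]
```

Repeating this per (x, y) location, as new sensor readings stream in, yields the three-dimensional model the patent describes.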
Distributional equity problems at the proposed Yucca Mountain facility
International Nuclear Information System (INIS)
Kasperson, R.E.; Abdollahzadeh, S.
1988-07-01
This paper addresses one quite specific part of this broad range of issues -- the distribution of impacts to the state of Nevada and to the nation likely to be associated with the proposed Yucca Mountain repository. As such, it is one of four needed analyses of the overall equity problems and needs to be read in conjunction with our proposed overall framework for equity studies. The objective of this report is to consider how an analysis might be made of the distribution of projected outcomes between the state and nation. At the same time, it needs to be clear that no attempt will be made to actually implement the analysis that is proposed. What follows is a conceptual statement that identifies the analytical issues and proposes an approach for overcoming them. Significantly, it must also be noted that this report will not address procedural equity issues between the state and nation, for this is the subject of a separate analysis. 14 refs., 8 figs., 3 tabs
Multi-choice stochastic transportation problem involving general form of distributions.
Quddoos, Abdul; Ull Hasan, Md Gulzar; Khalid, Mohammad Masood
2014-01-01
Many authors have presented studies of the multi-choice stochastic transportation problem (MCSTP) where availability and demand parameters follow a particular probability distribution (such as exponential, Weibull, Cauchy, or extreme value). In this paper an MCSTP is considered where availability and demand parameters follow a general form of distribution, and a generalized equivalent deterministic model (GMCSTP) of the MCSTP is obtained. It is also shown that all previous models obtained by different authors can be deduced with the help of GMCSTP. MCSTPs with Pareto, power function, or Burr-XII distributions are also considered and equivalent deterministic models are obtained. To illustrate the proposed model two numerical examples are presented and solved using the LINGO 13.0 software package.
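For a single supply constraint, the deterministic-equivalent idea can be sketched for two of the named distributions. The rates, scales, and reliability level below are illustrative, and the formulas assume the textbook exponential and Pareto (Type I) survival functions rather than the paper's general form.

```python
import math

def exp_supply_bound(rate, p):
    """Deterministic equivalent of P(sum_j x_ij <= a_i) >= p when the
    availability a_i is exponential with the given rate:
    P(a_i >= s) = exp(-rate * s) >= p  <=>  s <= -ln(p) / rate."""
    return -math.log(p) / rate

def pareto_supply_bound(scale, shape, p):
    """Same idea for a Pareto (Type I) availability with survival function
    (scale / s)**shape for s >= scale:  s <= scale / p**(1/shape)."""
    return scale / p ** (1.0 / shape)

# With 95% reliability, how much can each source commit?
print(round(exp_supply_bound(rate=0.01, p=0.95), 2))
print(round(pareto_supply_bound(scale=50.0, shape=2.0, p=0.95), 2))
```

The chance constraint thus collapses to an ordinary linear capacity bound, which is what makes the deterministic equivalent solvable by standard transportation-problem software.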
Modeling visual problem solving as analogical reasoning.
Lovett, Andrew; Forbus, Kenneth
2017-01-01
We present a computational model of visual problem solving, designed to solve problems from the Raven's Progressive Matrices intelligence test. The model builds on the claim that analogical reasoning lies at the heart of visual problem solving, and intelligence more broadly. Images are compared via structure mapping, aligning the common relational structure in 2 images to identify commonalities and differences. These commonalities or differences can themselves be reified and used as the input for future comparisons. When images fail to align, the model dynamically rerepresents them to facilitate the comparison. In our analysis, we find that the model matches adult human performance on the Standard Progressive Matrices test, and that problems which are difficult for the model are also difficult for people. Furthermore, we show that model operations involving abstraction and rerepresentation are particularly difficult for people, suggesting that these operations may be critical for performing visual problem solving, and reasoning more generally, at the highest level.
Problem Solving Model for Science Learning
Alberida, H.; Lufri; Festiyed; Barlian, E.
2018-04-01
This research aims to develop a problem-solving model for science learning in junior high school. The learning model was developed using the ADDIE model. The analysis phase includes curriculum analysis, analysis of students at SMP Kota Padang, analysis of SMP science teachers, learning analysis, and a literature review. The design phase includes planning the science-learning problem-solving model, which consists of syntax, reaction principles, a social system, a support system, and instructional and supporting impacts. The problem-solving model was implemented in science learning to improve students' science process skills. The development stage consists of three steps: a) designing a prototype, b) performing a formative evaluation and c) revising the prototype. The implementation stage was done through a limited trial, conducted on 24 and 26 August 2015 in Class VII 2 of SMPN 12 Padang. The evaluation phase was conducted in the form of experiments at SMPN 1 Padang, SMPN 12 Padang and SMP National Padang. Based on this development research, the syntax of the problem-solving model for science learning at junior high school consists of introduction, observation, initial problems, data collection, data organization, data analysis/generalization, and communicating.
Problems In Indoor Mapping and Modelling
Zlatanova, S.; Sithole, G.; Nakagawa, M.; Zhu, Q.
2013-11-01
Research in support of indoor mapping and modelling (IMM) has been active for over thirty years. This research has come in the form of As-Built surveys, data structuring, visualisation techniques, navigation models and so forth. Much of this research is founded on advancements in photogrammetry, computer vision and image analysis, computer graphics, robotics, laser scanning and many others. While IMM used to be the privy of engineers, planners, consultants, contractors, and designers, this is no longer the case, as commercial enterprises and individuals are also beginning to apply indoor models in their business processes and applications. There are three main reasons for this. Firstly, the last two decades have seen greater use of spatial information by enterprises and the public. Secondly, IMM has been complemented by advancements in mobile computing and internet communications, making it easier than ever to access and interact with spatial information. Thirdly, indoor modelling has been advanced geometrically and semantically, opening doors for developing user-oriented, context-aware applications. This reshaping of the public's attitude and expectations with regards to spatial information has realised new applications and spurred demand for indoor models and the tools to use them. This paper examines the present state of IMM and considers the research areas that deserve attention in the future. In particular the paper considers problems in IMM that are relevant to commercial enterprises and the general public, the groups this paper expects will emerge as the greatest users of IMM. The subject of indoor modelling and mapping is discussed here in terms of Acquisitions and Sensors, Data Structures and Modelling, Visualisation, Applications, Legal Issues and Standards. Problems are discussed in terms of those that exist and those that are emerging. Existing problems are those that are currently being researched. Emerging problems are those problems or demands that are only now beginning to appear.
Directory of Open Access Journals (Sweden)
Mi Gan
2018-01-01
The rapid growth of logistics distribution highlights several problems in China, including the imperfect infrastructure of the logistics distribution network, the serious shortage of distribution capacity at each individual enterprise, and the high cost of distribution. While the development of the sharing economy makes it possible to integrate the logistics resources of the whole society, big data technology can capture customers' logistics demand accurately by analyzing their distribution preferences, which contributes to the integration and optimization of those resources. This paper proposes an intensive distribution logistics network that takes the sharing economy into account, assuming that all social logistics suppliers form a strategic alliance and that idle individual logistics resources are also used to meet distribution needs. Customer shopping behavior is analyzed with big data technology to determine each customer's logistics preference, divided into high speed, low cost, and low pollution; constructing the corresponding objective function for each preference yields the intensive distribution logistics network model, which is solved with a heuristic algorithm. Furthermore, this paper analyzes the mechanism of interest distribution among the participants in the distribution network and puts forward an improved interval Shapley value method considering both satisfaction and contribution, with a case study verifying the feasibility and effectiveness of the model. The results showed that, compared with the traditional Shapley method, the distribution coefficients calculated by the improved model are fairer, improve stakeholder satisfaction, and promote the sustainable development of the alliance.
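The interest-distribution step can be illustrated with the classical Shapley value, the baseline the paper improves on. The three-provider coalition values below are hypothetical.

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values by averaging each player's marginal
    contribution over all orderings (fine for small coalitions)."""
    shapley = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = []
        for p in order:
            before = value(frozenset(coalition))
            coalition.append(p)
            shapley[p] += value(frozenset(coalition)) - before
    return {p: v / len(orders) for p, v in shapley.items()}

# Hypothetical alliance of three logistics providers: cost savings (in
# arbitrary units) achieved by each coalition when pooling distribution.
savings = {
    frozenset(): 0, frozenset("A"): 10, frozenset("B"): 20, frozenset("C"): 30,
    frozenset("AB"): 50, frozenset("AC"): 60, frozenset("BC"): 70,
    frozenset("ABC"): 100,
}
result = shapley_values("ABC", lambda s: savings[s])
print({p: round(v, 2) for p, v in result.items()})
```

The paper's interval variant additionally weighs satisfaction and contribution when splitting the alliance's gain; the plain version above only averages marginal contributions.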
Selecting model complexity in learning problems
Energy Technology Data Exchange (ETDEWEB)
Buescher, K.L. [Los Alamos National Lab., NM (United States)]; Kumar, P.R. [Illinois Univ., Urbana, IL (United States). Coordinated Science Lab.]
1993-10-01
To learn (or generalize) from noisy data, one must resist the temptation to pick a model for the underlying process that overfits the data. Many existing techniques solve this problem at the expense of requiring the evaluation of an absolute, a priori measure of each model's complexity. We present a method that does not. Instead, it uses a natural, relative measure of each model's complexity. This method first creates a pool of "simple" candidate models using part of the data and then selects from among these by using the rest of the data.
A Hybrid Autonomic Computing-Based Approach to Distributed Constraint Satisfaction Problems
Directory of Open Access Journals (Sweden)
Abhishek Bhatia
2015-03-01
Distributed constraint satisfaction problems (DisCSPs) are among the problems widely studied using agent-based simulation. Fernandez et al. formulated the sensor and mobile tracking problem as a DisCSP, known as SensorDCSP. In this paper, we adopt a customized ERE (environment, reactive rules and entities) algorithm for SensorDCSP, which is otherwise a computationally intractable problem. An amalgamation of the autonomy-oriented computing (AOC)-based algorithm (ERE) and a genetic algorithm (GA) provides an early solution of the modeled DisCSP. Incorporation of the GA into ERE facilitates auto-tuning of the simulation parameters, thereby leading to an early solution of constraint satisfaction. This study further contributes a model, built in the NetLogo simulation environment, to infer the efficacy of the proposed approach.
Integrating autonomous Problem Resolution Models with Remedy
Marquina, M A; Padilla, J; Ramos, R
2000-01-01
This paper briefly defines the concept of Problem Resolution Model and shows possible approaches to the issues which may arise when integrating various PRMs to present a consistent view to the end user, despite the peculiarities of each physical implementation. Integration refers to various autonomous PRMs having to interact as problems pass from one to another in the resolution flow. This process should be transparent to the user, and internally there must be a way to track in which stage ...
One-dimensional computational modeling on nuclear reactor problems
International Nuclear Information System (INIS)
Alves Filho, Hermes; Baptista, Josue Costa; Trindade, Luiz Fernando Santos; Heringer, Juan Diego dos Santos
2013-01-01
In this article, we present a computational model which gives a dynamic view of some applications of nuclear engineering, specifically power distribution and effective multiplication factor (keff) calculations. We work with one-dimensional problems of deterministic neutron transport theory, with the linearized Boltzmann equation in the discrete ordinates (SN) formulation, independent of time, with isotropic scattering, and we built software (a simulator) for modeling the computational problems used in typical calculations. The program used in the implementation of the simulator was Matlab, version 7.0. (author)
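As a rough companion to the abstract, here is a minimal keff calculation for a related but simpler setting: one-group diffusion (not the paper's SN transport) on a bare 1D slab with zero-flux boundaries, solved by power iteration. Cross sections and mesh are illustrative, not from any benchmark.

```python
# One-group, one-dimensional diffusion model of a bare slab reactor:
#   -D * phi'' + sig_a * phi = (1/keff) * nu_sig_f * phi,  phi(0)=phi(L)=0
# discretized by finite differences and solved by power iteration.
L, N = 100.0, 100                  # slab width (cm), number of mesh cells
h = L / N
D, sig_a, nu_sig_f = 1.0, 0.07, 0.075   # illustrative cross sections
n = N - 1                          # interior flux unknowns

# Tridiagonal loss operator for -D*phi'' + sig_a*phi
lower = [-D / h**2] * n
diag = [2.0 * D / h**2 + sig_a] * n
upper = [-D / h**2] * n

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    b, d = b[:], d[:]
    for i in range(1, len(b)):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * len(b)
    x[-1] = d[-1] / b[-1]
    for i in range(len(b) - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

phi = [1.0] * n
keff = 1.0
for _ in range(500):
    fission = [nu_sig_f * p for p in phi]
    phi = thomas(lower, diag, upper, [s / keff for s in fission])
    keff *= sum(nu_sig_f * p for p in phi) / sum(fission)
    peak = max(phi)
    phi = [p / peak for p in phi]   # renormalize the flux shape
print(round(keff, 4))
```

For this bare slab the result agrees with the analytic one-group value nu_sig_f / (sig_a + D * B^2) with buckling B = pi / L, which is a quick sanity check for any such simulator.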
Comparative Distributions of Hazard Modeling Analysis
Directory of Open Access Journals (Sweden)
Rana Abdul Wajid
2006-07-01
In this paper we present a comparison among the distributions used in hazard analysis. Simulation techniques have been used to study the behavior of hazard distribution models. The fundamentals of hazard issues are discussed using failure criteria. We illustrate the flexibility of the hazard modeling distribution in approximating different distributions.
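For concreteness, the hazard functions of two commonly compared lifetime distributions can be sketched as follows; the parameters are arbitrary, chosen only to show constant, increasing, and decreasing hazards.

```python
def exponential_hazard(t, rate):
    # Constant hazard: h(t) = rate (memoryless failures).
    return rate

def weibull_hazard(t, shape, scale):
    # h(t) = (shape/scale) * (t/scale)**(shape - 1):
    # increasing for shape > 1, decreasing for shape < 1.
    return (shape / scale) * (t / scale) ** (shape - 1.0)

for t in (0.5, 1.0, 2.0, 4.0):
    print(f"t={t:>3}: exp h={exponential_hazard(t, 0.5):.3f}  "
          f"weibull(k=2) h={weibull_hazard(t, 2.0, 2.0):.3f}  "
          f"weibull(k=0.5) h={weibull_hazard(t, 0.5, 2.0):.3f}")
```

The Weibull family's shape parameter is what gives it the flexibility the abstract refers to: it reduces to the exponential at shape = 1 and covers wear-out and infant-mortality regimes on either side.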
Finding Multiple Optimal Solutions to Optimal Load Distribution Problem in Hydropower Plant
Directory of Open Access Journals (Sweden)
Xinhao Jiang
2012-05-01
Optimal load distribution (OLD) among the generator units of a hydropower plant is a vital task for hydropower generation scheduling and management. Traditional optimization methods for solving this problem focus on finding a single optimal solution. However, many practical constraints on hydropower plant operation are very difficult, if not impossible, to model, and the optimal solution found by those models might be of limited practical use. This motivates us to find multiple optimal solutions to the OLD problem, which can provide more flexible choices for decision-making. Based on a special dynamic programming model, we use a modified shortest path algorithm to produce multiple solutions to the problem. It is shown that multiple optimal solutions exist for the case study of China’s Geheyan hydropower plant, and they are valuable for assessing the stability of generator units, showing the potential of reducing occurrence times of units across vibration areas.
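The idea of keeping every optimal predecessor in the dynamic program, so that all optimal load allocations can later be enumerated, can be sketched on a toy three-unit plant. The cost tables below are invented and unrelated to the Geheyan data.

```python
# Stage i = generator unit i; state = load assigned so far. cost[i][x] is
# the (hypothetical) cost of unit i producing x units of load.
cost = [
    {0: 0.0, 1: 1.0, 2: 2.5, 3: 4.5},   # unit 1
    {0: 0.0, 1: 1.5, 2: 2.0, 3: 4.0},   # unit 2
    {0: 0.0, 1: 1.0, 2: 3.0, 3: 5.0},   # unit 3
]
TOTAL = 3

# Forward DP keeping *every* optimal predecessor, not just one.
best = [{0: (0.0, [])}]          # stage -> {cum_load: (cost, parents)}
for unit_cost in cost:
    nxt = {}
    for cum, (c, _) in best[-1].items():
        for x, cx in unit_cost.items():
            if cum + x > TOTAL:
                continue
            cand = c + cx
            if cum + x not in nxt or cand < nxt[cum + x][0] - 1e-9:
                nxt[cum + x] = (cand, [(cum, x)])
            elif abs(cand - nxt[cum + x][0]) <= 1e-9:
                nxt[cum + x][1].append((cum, x))   # tie: extra optimal parent
    best.append(nxt)

def enumerate_plans(stage, cum):
    """Walk all optimal predecessor links back to stage 0."""
    if stage == 0:
        return [[]]
    plans = []
    for prev_cum, x in best[stage][cum][1]:
        for p in enumerate_plans(stage - 1, prev_cum):
            plans.append(p + [x])
    return plans

optimal_cost, _ = best[-1][TOTAL]
plans = enumerate_plans(len(cost), TOTAL)
print(optimal_cost, sorted(plans))
```

In this toy instance two distinct allocations attain the same minimum cost, which is exactly the situation the paper exploits: an operator can pick the optimum that keeps units away from vibration areas.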
Biosocial models of adolescent problem behaviors.
Udry, J R
1990-01-01
This paper develops a biosocial model of adolescent age-graded norm violations ("problem behaviors"), combining a traditional social control model with a biological model using steroid hormones. Subjects were 101 white boys drawn from the 8th-, 9th-, and 10th-grade rosters of selected public schools, and ranging in age from 13 to 16. Subjects completed self-administered questionnaires and provided blood samples which were assayed for the behaviorally relevant hormones. Boys' problem behavior shows strong hormone effects. Social and biological variables have both additive and indirect effects. Using a biosocial model leads to conclusions which are different from those which would have been drawn from the sociological model alone.
Implementing Problem Resolution Models in Remedy
Marquina, M A; Ramos, R
2000-01-01
This paper defines the concept of Problem Resolution Model (PRM) and describes the current implementation made by the User Support unit at CERN. One of the main challenges of User Support services in any High Energy Physics institute/organization is to address the solving of the computing-related problems faced by their researchers. The User Support group at CERN is the IT unit in charge of modeling the operations of the Help Desk, and acts as a second-level support to some of the support lines whose problems are received at the Help Desk. The motivation behind the use of a PRM is to provide well-defined procedures and methods to react in an efficient way to a request for solving a problem, providing advice, information etc. A PRM is materialized as a workflow which has a set of defined states in which a problem can be. Problems move from one state to another according to actions as decided by the person who is handling them. A PRM can be implemented by a computer application, generally referred to as Problem Report...
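A PRM workflow of the kind described might be sketched as a small state machine; the state names and transitions below are illustrative, not CERN's actual Remedy schema.

```python
# A minimal sketch of a problem-resolution workflow: a problem moves
# through a fixed set of states, and only the listed transitions are legal.
TRANSITIONS = {
    "new": {"assigned"},
    "assigned": {"in_progress", "new"},
    "in_progress": {"solved", "assigned"},
    "solved": {"closed", "in_progress"},   # reopen if the fix fails
    "closed": set(),
}

class ProblemReport:
    def __init__(self, description):
        self.description = description
        self.state = "new"
        self.history = ["new"]

    def move_to(self, state):
        if state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {state}")
        self.state = state
        self.history.append(state)

ticket = ProblemReport("mail client cannot connect")
for step in ("assigned", "in_progress", "solved", "closed"):
    ticket.move_to(step)
print(ticket.history)
```

Encoding the legal transitions in data rather than code is what lets a workflow tool like Remedy enforce the PRM while leaving the state set configurable.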
Directory of Open Access Journals (Sweden)
Kenan Karagül
2014-07-01
In this study, the Fleet Size and Mix Vehicle Routing Problem is considered in order to optimize the distribution of tourists traveling between the airport and the hotels over the shortest distance at minimum cost. The initial solution space for the related methods is formed as a combination of the Savings algorithm, the Sweep algorithm, and random permutation alignment. Two well-known solution methods, standard genetic algorithms and random search, are then used to improve the initial solutions. Computational power and heuristic algorithms are used instead of human experience and intuition to solve the problem of distributing tourists arriving at Antalya airport to hotels in the Alanya region. For this case study, daily data on tourist distributions performed by an agency operating in the Alanya region are considered. These distributions are modeled as a Vehicle Routing Problem to calculate solutions for various applications. Comparisons with the decisions of a human expert show that the proposed methods produce better solutions than human experience and insight, and the random search method produces a solution more quickly. In conclusion, owing to the distribution plans offered by the obtained solutions, agencies may reduce costs by achieving savings of up to 35%.
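The Savings (Clarke-Wright) construction named above can be sketched as follows. The depot and hotel coordinates, passenger demands, and vehicle capacity are invented for illustration, and only end-to-end route merges are attempted.

```python
import math

# Hypothetical data: depot (airport) plus hotels, with passenger counts.
depot = (0.0, 0.0)
hotels = {1: (2.0, 9.0), 2: (3.0, 8.0), 3: (8.0, 2.0), 4: (9.0, 3.0), 5: (5.0, 5.0)}
demand = {1: 10, 2: 14, 3: 8, 4: 12, 5: 6}
CAPACITY = 30

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

d0 = {i: dist(depot, p) for i, p in hotels.items()}

# Clarke-Wright savings: s(i, j) = d(0, i) + d(0, j) - d(i, j)
savings = sorted(
    ((d0[i] + d0[j] - dist(hotels[i], hotels[j]), i, j)
     for i in hotels for j in hotels if i < j),
    reverse=True)

routes = []                      # start with one route per hotel
route_of = {}
for i in hotels:
    r = {"stops": [i], "load": demand[i]}
    routes.append(r)
    route_of[i] = r

for s, i, j in savings:
    ri, rj = route_of[i], route_of[j]
    if ri is rj or s <= 0 or ri["load"] + rj["load"] > CAPACITY:
        continue
    # Merge only when i ends its route and j starts its route, or vice versa.
    if ri["stops"][-1] == i and rj["stops"][0] == j:
        merged = ri["stops"] + rj["stops"]
    elif rj["stops"][-1] == j and ri["stops"][0] == i:
        merged = rj["stops"] + ri["stops"]
    else:
        continue
    ri["stops"], ri["load"] = merged, ri["load"] + rj["load"]
    for stop in rj["stops"]:
        route_of[stop] = ri
    rj["stops"] = []

final = [r["stops"] for r in routes if r["stops"]]
print(final)
```

Solutions built this way (alongside Sweep and random permutations) would form the initial population that the genetic algorithm and random search then improve.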
Electric Power Distribution System Model Simplification Using Segment Substitution
Energy Technology Data Exchange (ETDEWEB)
Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; Reed, Gregory F.
2018-05-01
Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
A Fuzzy Goal Programming for a Multi-Depot Distribution Problem
Nunkaew, Wuttinan; Phruksaphanrat, Busaba
2010-10-01
A fuzzy goal programming model for solving a Multi-Depot Distribution Problem (MDDP) is proposed in this research. The proposed model is applied in the first step of the Assignment First-Routing Second (AFRS) approach. In practice, a basic transportation model is first chosen to solve this kind of problem in the assignment step; after that, the Vehicle Routing Problem (VRP) model is used to compute the delivery cost in the routing step. However, the basic transportation model considers only the depot-to-customer relationship. The customer-to-customer relationship should also be considered, since it exists in the routing step. Both relationships are handled using Preemptive Fuzzy Goal Programming (P-FGP). The first fuzzy goal is set on the total transportation cost and the second fuzzy goal on a satisfactory level of the overall independence value. A case study is used to describe the effectiveness of the proposed model. Results from the proposed model are compared with the basic transportation model that had previously been used in this company. The proposed model can reduce the actual delivery cost in the routing step owing to the better result in the assignment step. Defining fuzzy goals by membership functions is more realistic than using crisp values. Furthermore, the flexibility to adjust goals and an acceptable satisfactory level for the decision maker can be increased, and the optimal solution can still be obtained.
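The membership functions mentioned above can be sketched with a linear membership for the cost goal and a preemptive (lexicographic) comparison of the two goals; the aspiration levels and candidate plans below are hypothetical, not the paper's case-study data.

```python
def linear_membership(value, ideal, worst):
    """Linear membership for a minimization goal: 1 at or below the ideal
    (aspiration) level, 0 at or above the worst acceptable level."""
    if value <= ideal:
        return 1.0
    if value >= worst:
        return 0.0
    return (worst - value) / (worst - ideal)

# Hypothetical candidate assignments: (transportation cost, independence value).
candidates = {
    "plan A": (120.0, 0.8),
    "plan B": (100.0, 0.5),
    "plan C": (105.0, 0.75),
}

def satisfaction(cost, independence):
    # Preemptive order: the cost goal dominates; independence breaks ties.
    mu_cost = linear_membership(cost, ideal=100.0, worst=140.0)
    mu_ind = independence          # assumed already scaled to [0, 1]
    return (mu_cost, mu_ind)

best = max(candidates, key=lambda k: satisfaction(*candidates[k]))
print(best, satisfaction(*candidates[best]))
```

Comparing the membership tuples lexicographically is what makes the goal programming preemptive: the second goal only matters among plans that satisfy the first goal equally well.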
Mathematical model in economic environmental problems
Energy Technology Data Exchange (ETDEWEB)
Nahorski, Z. [Polish Academy of Sciences, Systems Research Inst. (Poland); Ravn, H.F. [Risoe National Lab. (Denmark)
1996-12-31
The report contains a review of basic models and mathematical tools used in economic regulation problems. It starts with a presentation of basic models of capital accumulation, resource depletion, pollution accumulation, and population growth, as well as the construction of utility functions. Then the one-state-variable model is discussed in detail. The basic mathematical methods used consist of the application of the maximum principle and phase-plane analysis of the differential equations obtained as the necessary conditions of optimality. A summary of basic results connected with these methods is given in appendices. (au) 13 ills.; 17 refs.
Environmental problems indicator under environmental modeling toward sustainable development
P. Sutthichaimethee; W. Tanoamchard; P. Sawangwong; P Pachana; N. Witit-Anun
2015-01-01
This research aims to apply a model to the study and analysis of environmental and natural resource costs created in supply chains of goods and services produced in Thailand, and propose indicators for environmental problem management, caused by goods and services production, based on concepts of sustainable production and consumer behavior. The research showed that the highest environmental cost in terms of Natural Resource Materials was from pipelines and gas distribution, while the lowest ...
Duan, Peibo; Zhang, Changsheng; Mao, Guoqiang; Zhang, Bin
2017-09-22
User association has emerged as a distributed resource allocation problem in heterogeneous networks (HetNets). Although an approximate solution is obtainable using approaches like combinatorial optimization and game-theory-based schemes, these techniques can easily be trapped in local optima. Furthermore, the lack of exploration of the relation between the quality of the solution and the parameters of the HetNet [e.g., the number of users and base stations (BSs)] impairs the practicability of deploying these approaches in a real-world environment. To address these issues, this paper investigates how to model the problem as a distributed constraint optimization problem (DCOP) from the point of view of multiagent systems. More specifically, we develop two models, named each connection as variable (ECAV) and each BS and user as variable (EBUAV). We then propose a DCOP solver which not only sets up the model in a distributed way but also enables us to efficiently obtain the solution by means of a complete DCOP algorithm based on distributed message-passing. Both theoretical analysis and simulation show that solutions of different quality can be obtained in terms of an introduced parameter η, which is closely related to the parameters of the HetNet. There is a 6% improvement in throughput by the DCOP solver compared with its counterparts when η=3. In particular, it demonstrates up to an 18% increase in the ability of BSs to serve more users when the number of users is above 200 while the available resource blocks (RBs) are limited. In addition, the distribution of RBs allocated to users by BSs improves with the variation of the volume of RBs at the macro BS.
Distributions with given marginals and statistical modelling
Fortiana, Josep; Rodriguez-Lallena, José
2002-01-01
This book contains a selection of the papers presented at the meeting `Distributions with given marginals and statistical modelling', held in Barcelona (Spain), July 17-20, 2000. In 24 chapters, this book covers topics such as the theory of copulas and quasi-copulas, the theory and compatibility of distributions, models for survival distributions and other well-known distributions, time series, categorical models, definition and estimation of measures of dependence, monotonicity and stochastic ordering, shape and separability of distributions, hidden truncation models, diagonal families, orthogonal expansions, tests of independence, and goodness of fit assessment. These topics share the use and properties of distributions with given marginals, this being the fourth specialised text on this theme. The innovative aspect of the book is the inclusion of statistical aspects such as modelling, Bayesian statistics, estimation, and tests.
Problem-Solving Methods for the Prospective Development of Urban Power Distribution Network
Directory of Open Access Journals (Sweden)
A. P. Karpenko
2014-01-01
This article succeeds A. P. Karpenko and A. I. Kuzmina's earlier publication titled "A mathematical model of urban distribution electro-network considering its future development" (electronic scientific and technical magazine "Science and education" No. 5, 2014). The article offers a model of an urban power distribution network as a set of transformer and distribution substations and cable lines. All elements of the network and new consumers are described by vectors of parameters associated with them. The problem of urban power distribution network design, taking into account the prospective development of the city, is presented as a problem of discrete programming. It consists in deciding on the optimal option for connecting new consumers to the power supply network, on the number and sites of new substations to build, and on the option for including them in the power supply network. Two methods, namely a reduction method to a set of nested global minimization tasks and a decomposition method, are offered to solve the problem. In the reduction method, the problem of prospective development of the power supply network breaks into three subtasks of smaller dimension: a subtask to define the number and sites of new transformer and distribution substations, a subtask to define the option for connecting new consumers to the power supply network, and a subtask to include new substations in the power supply network. The vector of the varied parameters is broken into three subvectors corresponding to the subtasks. Each subtask is solved over the region of admissible values of its subvector of varied parameters, with the components of the subvectors obtained when solving the higher subtasks held fixed. In the decomposition method, the task is presented as a set of three subtasks, similar to those of the reduction method, plus a coordination problem. The coordination problem specifies the sequence in which the subtasks are solved and defines the moment of calculation termination. Coordination is realized by
Integrating autonomous Problem Resolution Models with Remedy
Marquina, M A; Ramos, R
2000-01-01
This paper briefly defines the concept of a Problem Resolution Model (PRM) and shows possible approaches to the issues which may arise when integrating various PRMs to present a consistent view to the end user, despite the peculiarities of each physical implementation. Integration refers to various autonomous PRMs having to interact as problems pass from one to another in the resolution flow. This process should be transparent to the user, and internally there must be a way to track which stage of the resolution process any problem is in. This means addressing two different issues. On one side, PRMs which are to be integrated need to comply with certain interface standards; these standards must ensure that problems exchanged between them can always be traced. On the other side, problems owned by different PRMs should be presented to the end user under a homogeneous view. This means having a uniform criterion for automatic notification messages, a single reference point (www) where users can query the status of proble...
A problem of optimal control and observation for distributed homogeneous multi-agent system
Kruglikov, Sergey V.
2017-12-01
The paper considers the implementation of an algorithm for controlling a distributed complex of several mobile multi-robots. The concept of a unified information space of the controlling system is applied. The presented information and mathematical models of participants and obstacles, as real agents, and of goals and scenarios, as virtual agents, form the algorithmic and software basis for a computer decision support system. The controlling scheme assumes indirect management of the robotic team on the basis of an optimal control and observation problem predicting intellectual behavior in a dynamic, hostile environment. The basic benchmark problem is the transportation of a compound cargo by a group of participants under a distributed control scheme in terrain with multiple obstacles.
Distance distribution in configuration-model networks
Nitzan, Mor; Katzav, Eytan; Kühn, Reimer; Biham, Ofer
2016-06-01
We present analytical results for the distribution of shortest path lengths between random pairs of nodes in configuration model networks. The results, which are based on recursion equations, are shown to be in good agreement with numerical simulations for networks with degenerate, binomial, and power-law degree distributions. The mean, mode, and variance of the distribution of shortest path lengths are also evaluated. These results provide expressions for central measures and dispersion measures of the distribution of shortest path lengths in terms of moments of the degree distribution, illuminating the connection between the two distributions.
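The empirical counterpart of this distribution is easy to simulate: build a configuration-model network by matching half-edge "stubs" at random, then BFS between random node pairs. The sketch below is a stdlib-only illustration, not the authors' recursion-equation method; the 3-regular degree sequence and sample sizes are arbitrary choices.

```python
import random
from collections import Counter, deque

def configuration_model(degrees, rng):
    """Pair half-edge 'stubs' uniformly at random; self-loops and
    parallel edges are simply discarded (a common simplification)."""
    stubs = [node for node, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    adj = {node: set() for node in range(len(degrees))}
    for u, v in zip(stubs[::2], stubs[1::2]):
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def shortest_path_histogram(adj, n_pairs, rng):
    """Empirical distribution of BFS distances between random node pairs."""
    nodes = list(adj)
    hist = Counter()
    for _ in range(n_pairs):
        src, dst = rng.sample(nodes, 2)
        dist = {src: 0}
        queue = deque([src])
        while queue and dst not in dist:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        if dst in dist:                 # pair may be disconnected
            hist[dist[dst]] += 1
    return hist

rng = random.Random(42)
adj = configuration_model([3] * 1000, rng)   # degenerate (3-regular) degrees
hist = shortest_path_histogram(adj, 500, rng)
mean_len = sum(l * c for l, c in hist.items()) / sum(hist.values())
```

For a 3-regular network of 1000 nodes the mean distance lands roughly near log2(1000) ≈ 10, consistent with the logarithmic scaling that the moment-based expressions in such analyses predict.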
Photovoltaic subsystem marketing and distribution model
Energy Technology Data Exchange (ETDEWEB)
None
1982-04-01
The purpose of the marketing and distribution model is to estimate the costs of selling and transporting photovoltaic solar energy products from the factory to the factory customer. The model adjusts for inflation and regional differences in marketing and distribution costs. The model consists of three major components: the marketing submodel, the distribution submodel, and the financial submodel. What the model can and cannot do, and what data it requires, are explained. An example for a power conditioning unit demonstrates the application of the model.
H-infinity Tracking Problems for a Distributed Parameter System
DEFF Research Database (Denmark)
Larsen, Mikael
1997-01-01
The thesis considers the problem of finding a finite-dimensional controller for an infinite-dimensional system (a tunnel pasteurizer), combined with a robustness analysis.
Towards Solving the Problem of Transmission and Distribution of ...
African Journals Online (AJOL)
Data on transmission and distribution losses in some African countries on conventional cables show that Nigeria experiences the highest transmission and distribution losses (6.81 × 10^15 kWh) among African countries and, certainly, among the highest in the world, with a per capita consumption of 0.03 kWh/person. The total ...
Problems in Modelling Charge Output Accelerometers
Directory of Open Access Journals (Sweden)
Tomczyk Krzysztof
2016-12-01
Full Text Available The paper presents major issues associated with the problem of modelling charge output accelerometers. The presented solutions are based on the weighted least squares (WLS) method, using a transformation of the complex frequency response of the sensors. The main assumptions of the WLS method and a mathematical model of charge output accelerometers are presented in the first two sections of this paper. In the next sections the application of the WLS method to the estimation of the accelerometer model parameters is discussed and the associated uncertainties are determined. Finally, the results of modelling a PCB357B73 charge output accelerometer are analysed in the last section of this paper. All calculations were executed using the MathCad software program. The main stages of these calculations are presented in Appendices A−E.
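As background for the estimation step described above, the WLS principle can be shown on the simplest possible case: a weighted straight-line fit solved in closed form from the 2×2 normal equations. This is a generic sketch with invented data, not the paper's accelerometer model (which works on a transformed complex frequency response).

```python
def wls_line_fit(x, y, w):
    """Weighted least squares fit of y ≈ a + b*x.
    Minimizes sum_i w_i * (y_i - a - b*x_i)^2 via the normal equations."""
    sw   = sum(w)
    swx  = sum(wi * xi for wi, xi in zip(w, x))
    swy  = sum(wi * yi for wi, yi in zip(w, y))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = sw * swxx - swx * swx
    a = (swxx * swy - swx * swxy) / det
    b = (sw * swxy - swx * swy) / det
    return a, b

# Exact data on the line y = 2 + 0.5*x; unequal weights stand in for
# unequal measurement uncertainty at each point.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [2.0, 2.5, 3.0, 3.5, 4.0]
w = [1.0, 0.5, 2.0, 1.0, 0.25]
a, b = wls_line_fit(x, y, w)
```

With noise-free data the fit recovers a = 2 and b = 0.5 regardless of the weights; with noisy data the weights downweight the less reliable points.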
New light on an old problem: Reflections on barriers and enablers of distributed energy
International Nuclear Information System (INIS)
Szatow, Anthony; Quezada, George; Lilley, Bill
2012-01-01
This viewpoint article, New light on an Old Problem, aims to stimulate thought and discussion on pathways to rapid emission reduction trajectories. It considers briefly the history of the Australian energy system and recent attempts to support emerging, distributed energy supply systems, before exploring the importance of new energy supply models and how they may emerge organically, ahead of further policy and regulatory shifts in Australia. The article is shaped by extensive primary research, literature review and engagement with policy makers, industry and community organisations, energy market institutions, colleagues and others over a period of four years. It outlines how new business models may reduce emissions ahead of policy and regulation, and the importance of keeping an open mind when considering ‘barriers’ to distributed energy. We hope this article will spark interest and dialogue with colleagues who may be experiencing and grappling with similar challenges. - Research highlights: ► We discuss documented barriers to distributed energy. ► We draw on socio-technical system literature and our research experience to outline a possible solution to distributed energy barriers. ► We describe a hypothetical energy service business model, led by the property sector, as a catalyst for energy market change. ► We outline reasons for our confidence in this property sector led energy services model.
Electric power scheduling - A distributed problem-solving approach
Mellor, Pamela A.; Dolce, James L.; Krupp, Joseph C.
1990-01-01
Space Station Freedom's power system, along with the spacecraft's other subsystems, needs to carefully conserve its resources and yet strive to maximize overall Station productivity. Due to Freedom's distributed design, each subsystem must work cooperatively within the Station community. There is a need for a scheduling tool which will preserve this distributed structure, allow each subsystem the latitude to satisfy its own constraints, and preserve individual value systems while maintaining Station-wide integrity.
Optimization model for the design of distributed wastewater treatment networks
Directory of Open Access Journals (Sweden)
Ibrić Nidret
2012-01-01
Full Text Available In this paper we address the synthesis problem of distributed wastewater networks using a mathematical programming approach based on superstructure optimization. We present a generalized superstructure and optimization model for the design of distributed wastewater treatment networks. The superstructure includes splitters, treatment units and mixers, with all feasible interconnections including water recirculation. Based on the superstructure, the optimization model is presented as a nonlinear programming (NLP) problem, where the objective function can be defined to minimize either the total amount of wastewater treated in treatment operations or the total treatment cost. The NLP model is extended to a mixed-integer nonlinear programming (MINLP) problem, where binary variables are used for the selection of the wastewater treatment technologies. The bounds for all flowrates and concentrations in the wastewater network are specified as general equations. The proposed models are solved using the global optimization solvers BARON and LINDOGlobal. The application of the proposed models is illustrated on two wastewater network problems of different complexity: the first is formulated as an NLP and the second as an MINLP. For the second, parametric and structural optimization are performed at the same time, so that optimal flowrates and concentrations as well as optimal technologies for wastewater treatment are selected. Using the proposed model, both problems are solved to global optimality.
An Improved Distribution Policy with a Maintenance Aspect for an Urban Logistic Problem
Directory of Open Access Journals (Sweden)
Nadia Ndhaief
2017-07-01
Full Text Available In this paper, we present an improved distribution plan supporting an urban distribution center (UDC) in solving the last-mile problem of urban freight. This is motivated by the need of UDCs to satisfy daily demand in time, under a high service level, in their allocated urban areas. These demands cannot always be satisfied by a single UDC, because the delivery rate can be less than the daily demand and/or affected by random failures or maintenance actions of vehicles. Our work focuses on a UDC which needs to satisfy demands over a finite horizon. To that end, we consider a distribution policy built on two sequential plans: a distribution plan correlated with a maintenance plan, using a subcontracting strategy with several potential urban distribution centers (UDCs), each performing preventive maintenance to ensure deliveries for its allocated urban area. The choice of subcontractor depends on distance, environmental and availability criteria. We define a mathematical model for finding the best distribution and maintenance plans under this subcontracting strategy. Moreover, we allow demand to be delayed to later periods at an expensive penalty. Finally, we present a numerical example illustrating the benefits of our approach.
Distributed modeling for road authorities
Luiten, G.T.; Bõhms, H.M.; Nederveen, S. van; Bektas, E.
2013-01-01
A great challenge for road authorities is to improve the effectiveness and efficiency of their core processes by improving data exchange and sharing using new technologies such as building information modeling (BIM). BIM has already been successfully implemented in other sectors, such as
New trends in species distribution modelling
Zimmermann, Niklaus E.; Edwards, Thomas C.; Graham, Catherine H.; Pearman, Peter B.; Svenning, Jens-Christian
2010-01-01
Species distribution modelling has its origin in the late 1970s when computing capacity was limited. Early work in the field concentrated mostly on the development of methods to model effectively the shape of a species' response to environmental gradients (Austin 1987, Austin et al. 1990). The methodology and its framework were summarized in reviews 10–15 yr ago (Franklin 1995, Guisan and Zimmermann 2000), and these syntheses are still widely used as reference landmarks in the current distribution modelling literature. However, enormous advancements have occurred over the last decade, with hundreds – if not thousands – of publications on species distribution model (SDM) methodologies and their application to a broad set of conservation, ecological and evolutionary questions. With this special issue, originating from the third of a set of specialized SDM workshops (2008 Riederalp) entitled 'The Utility of Species Distribution Models as Tools for Conservation Ecology', we reflect on current trends and the progress achieved over the last decade.
DEFF Research Database (Denmark)
Reich, Juri; Kinra, Aseem; Kotzab, Herbert
We look at the global distribution network design problem and the requirements for solving it. This problem typically involves conflicting goals and a multitude of interdependent input factors, described by qualitative and quantitative information. Our literature review shows that current models do...
Estimation of distribution overlap of urn models.
Hampton, Jerrad; Lladser, Manuel E
2012-01-01
A classical problem in statistics is estimating the expected coverage of a sample, which has had applications in gene expression, microbial ecology, optimization, and even numismatics. Here we consider a related extension of this problem to random samples of two discrete distributions. Specifically, we estimate what we call the dissimilarity probability of a sample, i.e., the probability of a draw from one distribution not being observed in n draws from another distribution. We show our estimator of dissimilarity to be a U-statistic and a uniformly minimum variance unbiased estimator of dissimilarity over the largest appropriate range of n. Furthermore, despite the non-Markovian nature of our estimator when applied sequentially over n, we show it converges uniformly in probability to the dissimilarity parameter, and we present criteria for when it is approximately normally distributed and admits a consistent jackknife estimator of its variance. As proof of concept, we analyze V35 16S rRNA data to discern between various microbial environments. Other potential applications concern any situation where the dissimilarity of two discrete distributions may be of interest. For instance, in SELEX experiments, each urn could represent a random RNA pool and each draw a possible solution to a particular binding site problem over that pool. The dissimilarity of these pools is then related to the probability of finding binding site solutions in one pool that are absent in the other.
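The dissimilarity probability itself has a simple closed form for known discrete distributions p and q: sum_i p_i (1 - q_i)^n. The sketch below checks that identity by Monte Carlo; it is an illustration of the quantity being estimated, not the authors' unbiased U-statistic estimator, and the two toy distributions are invented.

```python
import random

def dissimilarity_exact(p, q, n):
    """P(one draw from p is not seen among n i.i.d. draws from q)."""
    return sum(p[c] * (1.0 - q.get(c, 0.0)) ** n for c in p)

def dissimilarity_mc(p, q, n, trials, rng):
    """Monte Carlo estimate of the same probability."""
    cats_p, wts_p = zip(*p.items())
    cats_q, wts_q = zip(*q.items())
    misses = 0
    for _ in range(trials):
        x = rng.choices(cats_p, weights=wts_p)[0]
        seen = set(rng.choices(cats_q, weights=wts_q, k=n))
        if x not in seen:
            misses += 1
    return misses / trials

rng = random.Random(7)
p = {"A": 0.5, "B": 0.3, "C": 0.2}
q = {"A": 0.6, "B": 0.4}          # q never produces "C"
exact = dissimilarity_exact(p, q, 3)       # 0.5*0.4^3 + 0.3*0.6^3 + 0.2
estimate = dissimilarity_mc(p, q, 3, 20000, rng)
```

Category "C" contributes its full mass 0.2 to the dissimilarity because it can never appear in a sample from q, which is exactly the kind of asymmetry the dissimilarity probability captures.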
Economic Models and Algorithms for Distributed Systems
Neumann, Dirk; Altmann, Jorn; Rana, Omer F
2009-01-01
Distributed computing models for sharing resources such as Grids, Peer-to-Peer systems, or voluntary computing are becoming increasingly popular. This book intends to discover fresh avenues of research and amendments to existing technologies, aiming at the successful deployment of commercial distributed systems
Distributed simulation a model driven engineering approach
Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent
2016-01-01
Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.
Distributionally Robust Joint Chance Constrained Problem under Moment Uncertainty
Directory of Open Access Journals (Sweden)
Ke-wei Ding
2014-01-01
Full Text Available We discuss and develop the convex approximation for robust joint chance constraints under uncertainty of first- and second-order moments. Robust chance constraints are approximated by worst-case CVaR constraints, which can be reformulated as a semidefinite program. The chance-constrained problem can then be presented as a semidefinite program. We also find that the approximation of robust joint chance constraints has an equivalent individual quadratic approximation form.
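For readers unfamiliar with the CVaR building block used above: the empirical CVaR at level alpha is the mean of the worst (1 - alpha) fraction of outcomes. The sketch below computes it from samples; the paper's actual contribution, the worst-case (moment-robust) CVaR and its semidefinite reformulation, is not reproduced here.

```python
def empirical_cvar(losses, alpha):
    """Mean of the worst (1 - alpha) fraction of losses (empirical CVaR)."""
    ordered = sorted(losses, reverse=True)       # largest losses first
    k = max(1, round((1.0 - alpha) * len(ordered)))
    return sum(ordered[:k]) / k

losses = list(range(1, 101))               # losses 1..100
tail_mean = empirical_cvar(losses, 0.90)   # mean of the worst 10 losses
```

A CVaR constraint "CVaR_alpha(loss) <= t" is a convex, conservative surrogate for the chance constraint "P(loss > t) <= 1 - alpha", which is what makes it attractive for the reformulations discussed in the paper.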
Ebola Virus Infection Modelling and Identifiability Problems
Directory of Open Access Journals (Sweden)
Van-Kinh eNguyen
2015-04-01
Full Text Available The recent outbreaks of Ebola virus (EBOV) infections have underlined the impact of the virus as a major threat to human health. Due to the high biosafety classification of EBOV (level 4), basic research is very limited. Therefore, the development of new avenues of thinking to advance quantitative comprehension of the virus and its interaction with the host cells is urgently needed to tackle this lethal disease. Mathematical modelling of the EBOV dynamics can be instrumental to interpret Ebola infection kinetics on quantitative grounds. To the best of our knowledge, a mathematical modelling approach to unravel the interaction between EBOV and the host cells is still missing. In this paper, a mathematical model based on differential equations is used to represent the basic interactions between EBOV and wild-type Vero cells in vitro. Parameter sets that represent infectivity of pathogens are estimated for EBOV infection and compared with influenza virus infection kinetics. The average infecting time of wild-type Vero cells is slower for EBOV than for influenza infection. Simulation results suggest that the slow infecting time of EBOV could be compensated by its efficient replication. This study reveals several identifiability problems and what kind of experiments are necessary to advance the quantification of EBOV infection. A first mathematical approach to EBOV dynamics and the estimation of standard parameters in viral infection kinetics is the key contribution of this work, paving the way for future modelling work on EBOV infection.
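The class of differential-equation models referred to, in its simplest target-cell-limited form (dT/dt = -βTV, dI/dt = βTV - δI, dV/dt = pI - cV), can be integrated with a basic forward-Euler step. All parameter values below are arbitrary illustrations, not the EBOV estimates of the paper.

```python
def simulate_infection(beta, delta, p, c, T0, V0, t_end, dt):
    """Forward-Euler integration of the target-cell-limited model:
    dT/dt = -beta*T*V   (target cells become infected)
    dI/dt =  beta*T*V - delta*I   (infected cells die at rate delta)
    dV/dt =  p*I - c*V            (virions produced and cleared)."""
    T, I, V = T0, 0.0, V0
    traj = [(0.0, T, I, V)]
    t = 0.0
    while t < t_end:
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T += dT * dt
        I += dI * dt
        V += dV * dt
        t += dt
        traj.append((t, T, I, V))
    return traj

traj = simulate_infection(beta=2e-5, delta=1.0, p=10.0, c=3.0,
                          T0=1e5, V0=10.0, t_end=10.0, dt=1e-3)
final_T = traj[-1][1]
peak_V = max(v for _, _, _, v in traj)
```

With these illustrative parameters the basic reproductive number p·β·T0/(c·δ) exceeds one, so the infection takes off: the viral load rises by orders of magnitude and the target-cell pool is largely depleted. Parameter estimation as in the paper would fit β, δ, p and c to measured viral titres.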
Distributional, differential and integral problems: Equivalence and existence results
Czech Academy of Sciences Publication Activity Database
Monteiro, Giselle Antunes; Satco, B. R.
2017-01-01
Roč. 2017, č. 7 (2017), s. 1-26 ISSN 1417-3875 Institutional support: RVO:67985840 Keywords : derivative with respect to functions * distribution * Kurzweil-Stieltjes integral Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.926, year: 2016 http://www.math.u-szeged.hu/ejqtde/periodica.html?periodica=1&paramtipus_ertek=publication&param_ertek=4753
Directional Overcurrent Relays Coordination Problems in Distributed Generation Systems
Jakub Ehrenberger; Jan Švec
2017-01-01
This paper proposes a new approach to distributed generation system protection coordination based on directional overcurrent protections with inverse-time characteristics. The key question of protection coordination is the determination of correct values for all inverse-time characteristic coefficients. The coefficients must be chosen so that tripping times are sufficiently short and selectivity times sufficiently long. In the paper a new approach to protection coordinat...
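A widely used family of such inverse-time characteristics is the IEC 60255 standard-inverse curve, t = TMS · 0.14 / ((I/Is)^0.02 - 1), where TMS is the time multiplier setting and Is the pickup current. The sketch below checks selectivity between a downstream relay and its upstream backup; the relay settings and fault current are invented for illustration and are not taken from the paper.

```python
def iec_standard_inverse(tms, current, pickup):
    """IEC 60255 standard-inverse tripping time:
    t = TMS * 0.14 / ((I/Is)^0.02 - 1), defined for I > Is."""
    ratio = current / pickup
    if ratio <= 1.0:
        return float('inf')   # below pickup: relay never trips
    return tms * 0.14 / (ratio ** 0.02 - 1.0)

def coordinated(t_down, t_up, margin=0.3):
    """Selectivity check: the upstream backup must trip at least
    `margin` seconds after the downstream relay for the same fault."""
    return t_up - t_down >= margin

fault = 2000.0                                      # common fault current, A
t_down = iec_standard_inverse(0.10, fault, 400.0)   # primary (downstream)
t_up   = iec_standard_inverse(0.20, fault, 800.0)   # backup (upstream)
ok = coordinated(t_down, t_up, margin=0.3)
```

Coordination studies repeat this check over the whole range of expected fault currents, since inverse-time curves of two relays can cross; that is exactly why choosing all coefficients jointly, as the paper discusses, is non-trivial.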
Mathematical problems in modeling artificial heart
Directory of Open Access Journals (Sweden)
Ahmed N. U.
1995-01-01
Full Text Available In this paper we discuss some problems arising in mathematical modeling of artificial hearts. The hydrodynamics of blood flow in an artificial heart chamber is governed by the Navier-Stokes equation, coupled with an equation of hyperbolic type subject to moving boundary conditions. The flow is induced by the motion of a diaphragm (membrane) inside the heart chamber, attached to a part of the boundary and driven by a compressor (pusher plate). On one side of the diaphragm is the blood and on the other side is the compressor fluid. For a complete mathematical model it is necessary to write the equation of motion of the diaphragm and all the dynamic couplings that exist between its position, velocity and the blood flow in the heart chamber. This gives rise to a system of coupled nonlinear partial differential equations, the Navier-Stokes equation being of parabolic type and the equation for the membrane being of hyperbolic type. The system is completed by introducing all the necessary static and dynamic boundary conditions. The ultimate objective is to control the flow pattern so as to minimize hemolysis (damage to red blood cells) by optimal choice of geometry, and by optimal control of the membrane for a given geometry. Other clinical problems, such as the compatibility of the materials used in the construction of the heart chamber and the membrane, are not considered in this paper. The dynamics of the valve is also not considered here, though it too is an important element in the overall design of an artificial heart. We hope to model the valve dynamics in a later paper.
Coastal erosion problem, modelling and protection
Yılmaz, Nihal; Balas, Lale; İnan, Asu
2015-09-01
Göksu Delta, located in the south of Silifke County of Mersin on the coastal plain formed by Göksu River, is one of the Specially Protected Areas in Turkey. Along the coastal area of the Delta, coastline changes at significant rates are observed, concentrating especially at four regions; headland of İncekum, coast of Paradeniz Lagoon, river mouth of Göksu and coast of Altınkum. The coast of Paradeniz Lagoon is suffering significantly from erosion and the consequent coastal retreating problem. Therefore, the narrow barrier beach which separates Paradeniz Lagoon from the Mediterranean Sea is getting narrower, creating a risk of uniting with the sea, thus causing the disappearance of the Lagoon. The aim of this study was to understand the coastal transport processes along the coastal area of Göksu Delta to determine the coastal sediment transport rates, and accordingly, to propose solutions to prevent the loss of coastal lands in the Delta. To this end, field measurements of currents and sediment grain sizes were carried out, and wind climate, wave climate, circulation patterns and longshore sediment transport rates were numerically modeled by HYDROTAM-3D, which is a three dimensional hydrodynamic transport model. Finally, considering its special importance as an environmentally protected region, some coastal structures of gabions were proposed as solutions against the coastal erosion problems of the Delta. The effects of proposed structures on future coastline changes were also modeled, and the coastlines predicted for the year 2017 are presented and discussed in the paper.
Modelling refrigerant distribution in minichannel evaporators
DEFF Research Database (Denmark)
Brix, Wiebke
This thesis is concerned with numerical modelling of flow distribution in a minichannel evaporator for air-conditioning. The study investigates the impact of non-uniform airflow and non-uniform distribution of the liquid and vapour phases in the inlet manifold on the refrigerant mass flow...... in the numerical experiments using the test case evaporator. The results show that the reduction in cooling capacity due to non-uniform airflow and non-uniform liquid and vapour distribution is generally larger when using R134a than when using CO2 as refrigerant. Comparing the capacity reductions with reductions...... of the liquid and vapour in the inlet manifold. Combining non-uniform airflow and non-uniform liquid and vapour distribution shows that a non-uniform airflow distribution to some degree can be compensated by a suitable liquid and vapour distribution. Controlling the superheat out of the individual channels...
The inverse gravimetric problem in gravity modelling
Sanso, F.; Tscherning, C. C.
1989-01-01
One of the main purposes of geodesy is to determine the gravity field of the Earth in the space outside its physical surface. This purpose can be pursued without any particular knowledge of the internal density, even if the exact shape of the physical surface of the Earth is not known, though this seems to entangle the two domains, as it did in the old Stokes theory before the appearance of Molodensky's approach. Nevertheless, even when large, dense and homogeneous data sets are available, it was always recognized that subtracting from the gravity field the effect of the outer layer of the masses (topographic effect) yields a much smoother field. This is obviously more important when only a sparse data set is available, so that any smoothing of the gravity field helps in interpolating between the data without raising the modeling error. This approach is generally followed because it has become very cheap in terms of computing time since the appearance of spectral techniques. The mathematical description of the Inverse Gravimetric Problem (IGP) is dominated mainly by two principles, which in loose terms can be formulated as follows: the knowledge of the external gravity field determines mainly the lateral variations of the density; and the deeper the density anomaly giving rise to a gravity anomaly, the more improperly posed is the problem of recovering the former from the latter. The statistical relation between rho and n (and its inverse) is also investigated in its general form, proving that degree cross-covariances have to be introduced to describe the behavior of rho. The problem of the simultaneous estimate of a spherical anomalous potential and of the external, topographic masses is addressed, criticizing the choice of the mixed collocation approach.
Heuristic for solving capacitor allocation problems in electric energy radial distribution networks
Directory of Open Access Journals (Sweden)
Maria A. Biagio
2012-04-01
Full Text Available The goal of the capacitor allocation problem in radial distribution networks is to minimize technical losses, with consequent positive impacts in economic and environmental areas. The main objective is to define the size and location of the capacitors while considering load variations over a given horizon. The mathematical formulation of this planning problem is an integer nonlinear programming model that demands great computational effort to solve. To solve this problem, this paper proposes a methodology composed of heuristics and Tabu Search procedures. The methodology explores characteristics of the network's reactive loads to identify regions where local and intensive search procedures should be performed. A description of the proposed methodology and an analysis of computational results, based on several test systems including actual systems, are presented. The solutions reached are as good as or better than those indicated by well-referenced methodologies. The proposed technique is simple to use and does not require calibrating an excessive number of parameters, making it an attractive alternative for companies involved in the planning of radial distribution networks.
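A minimal Tabu Search skeleton for a placement problem of this flavour is sketched below. The cost function is a toy stand-in (sum of squared uncompensated reactive loads) rather than a real power-flow loss model, and all sizes, loads and tuning parameters are invented for illustration; the paper's actual methodology layers region identification and intensification on top of such a search.

```python
import random

def tabu_search(cost, n_sites, k, iters, tenure, rng):
    """Tabu search over placements of k capacitors on n_sites buses.
    Moves swap one chosen site for one unchosen site; the reverse of a
    recent move is tabu for `tenure` iterations unless it improves the
    best solution found so far (aspiration criterion)."""
    current = set(rng.sample(range(n_sites), k))
    best, best_cost = set(current), cost(current)
    tabu = {}                                   # move -> iteration it expires
    for it in range(iters):
        moves = []
        for out_site in current:
            for in_site in set(range(n_sites)) - current:
                neigh = (current - {out_site}) | {in_site}
                c = cost(neigh)
                if tabu.get((out_site, in_site), -1) <= it or c < best_cost:
                    moves.append((c, out_site, in_site, neigh))
        if not moves:
            continue
        c, out_site, in_site, neigh = min(moves, key=lambda m: m[0])
        current = neigh
        tabu[(in_site, out_site)] = it + tenure  # forbid undoing the swap
        if c < best_cost:
            best, best_cost = set(current), c
    return best, best_cost

# Toy instance: the optimum is to compensate the k largest reactive loads.
loads = [5.0, 1.0, 9.0, 2.0, 7.0, 3.0]
def toy_cost(placed):
    return sum(l * l for i, l in enumerate(loads) if i not in placed)

rng = random.Random(0)
best, best_cost = tabu_search(toy_cost, n_sites=len(loads), k=2,
                              iters=30, tenure=3, rng=rng)
```

On this toy instance the search settles on buses 2 and 4 (loads 9 and 7). The tabu list is what lets the method climb out of local optima that a pure greedy swap heuristic would get stuck in.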
A hierarchical distributed control model for coordinating intelligent systems
Adler, Richard M.
1991-01-01
A hierarchical distributed control (HDC) model for coordinating cooperative problem-solving among intelligent systems is described. The model was implemented using SOCIAL, an innovative object-oriented tool for integrating heterogeneous, distributed software systems. SOCIAL embeds applications in 'wrapper' objects called Agents, which supply predefined capabilities for distributed communication, control, data specification, and translation. The HDC model is realized in SOCIAL as a 'Manager'Agent that coordinates interactions among application Agents. The HDC Manager: indexes the capabilities of application Agents; routes request messages to suitable server Agents; and stores results in a commonly accessible 'Bulletin-Board'. This centralized control model is illustrated in a fault diagnosis application for launch operations support of the Space Shuttle fleet at NASA, Kennedy Space Center.
Shared Problem Models and Crew Decision Making
Orasanu, Judith; Statler, Irving C. (Technical Monitor)
1994-01-01
The importance of crew decision making to aviation safety has been well established through NTSB accident analyses: Crew judgment and decision making have been cited as causes or contributing factors in over half of all accidents in commercial air transport, general aviation, and military aviation. Yet the bulk of research on decision making has not proven helpful in improving the quality of decisions in the cockpit. One reason is that traditional analytic decision models are inappropriate to the dynamic complex nature of cockpit decision making and do not accurately describe what expert human decision makers do when they make decisions. A new model of dynamic naturalistic decision making is offered that may prove more useful for training or aiding cockpit decision making. Based on analyses of crew performance in full-mission simulation and National Transportation Safety Board accident reports, features that define effective decision strategies in abnormal or emergency situations have been identified. These include accurate situation assessment (including time and risk assessment), appreciation of the complexity of the problem, sensitivity to constraints on the decision, timeliness of the response, and use of adequate information. More effective crews also manage their workload to provide themselves with time and resources to make good decisions. In brief, good decisions are appropriate to the demands of the situation and reflect the crew's metacognitive skill. Effective crew decision making and overall performance are mediated by crew communication. Communication contributes to performance because it assures that all crew members have essential information, but it also regulates and coordinates crew actions and is the medium of collective thinking in response to a problem. This presentation will examine the relation between communication that serves to build performance. Implications of these findings for crew training will be discussed.
Practical Solutions for Harmonics Problems Produced in the Distribution Networks
Directory of Open Access Journals (Sweden)
A. F. Zobaa
2006-03-01
Full Text Available Harmonic distortion on the power system is a modern concern due to technological advances in silicon technology, which present an increased non-linear loading of the power system. The effects of harmonics are well known: customers could experience major production losses due to loss of supply, for example; on the other hand, harmonic load currents cause the utility to supply a higher real energy input than the actual real power needed to maintain a plant's production at a certain level. The utility carries the extra transmission losses due to the harmonic currents. Different solutions are reviewed as concepts for solving certain types of power quality problems. Both theoretical considerations and a case study are presented.
Connecting micro dynamics and population distributions in system dynamics models.
Fallah-Fini, Saeideh; Rahmandad, Hazhir; Chen, Hsin-Jen; Xue, Hong; Wang, Youfa
2013-01-01
Researchers use system dynamics models to capture the mean behavior of groups of indistinguishable population elements (e.g., people) aggregated in stock variables. Yet, many modeling problems require capturing the heterogeneity across elements with respect to some attribute(s) (e.g., body weight). This paper presents a new method to connect the micro-level dynamics associated with elements in a population with the macro-level population distribution along an attribute of interest without the need to explicitly model every element. We apply the proposed method to model the distribution of Body Mass Index and its changes over time in a sample population of American women obtained from the U.S. National Health and Nutrition Examination Survey. Comparing the results with those obtained from an individual-based model that captures the same phenomena shows that our proposed method delivers accurate results with less computation than the individual-based model.
Connecting micro dynamics and population distributions in system dynamics models
Rahmandad, Hazhir; Chen, Hsin-Jen; Xue, Hong; Wang, Youfa
2014-01-01
Researchers use system dynamics models to capture the mean behavior of groups of indistinguishable population elements (e.g., people) aggregated in stock variables. Yet, many modeling problems require capturing the heterogeneity across elements with respect to some attribute(s) (e.g., body weight). This paper presents a new method to connect the micro-level dynamics associated with elements in a population with the macro-level population distribution along an attribute of interest without the need to explicitly model every element. We apply the proposed method to model the distribution of Body Mass Index and its changes over time in a sample population of American women obtained from the U.S. National Health and Nutrition Examination Survey. Comparing the results with those obtained from an individual-based model that captures the same phenomena shows that our proposed method delivers accurate results with less computation than the individual-based model. PMID:25620842
Modeling Word Burstiness Using the Dirichlet Distribution
DEFF Research Database (Denmark)
Madsen, Rasmus Elsborg; Kauchak, David; Elkan, Charles
2005-01-01
Multinomial distributions are often used to model text documents. However, they do not capture well the phenomenon that words in a document tend to appear in bursts: if a word appears once, it is more likely to appear again. In this paper, we propose the Dirichlet compound multinomial model (DCM)...
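A DCM draw can be generated sequentially as a Pólya urn: each observed word raises its own probability of recurring, which is exactly the burstiness effect described. A small sketch under assumed symmetric parameters (all names and values are illustrative):

```python
import random

def dcm_document(alphas, length, rng):
    """Sample a document from the Dirichlet compound multinomial.

    Equivalent Polya-urn scheme: word w is drawn with probability
    proportional to alpha_w plus the number of times w has already
    appeared, so each occurrence makes recurrence more likely.
    """
    counts = [0] * len(alphas)
    doc = []
    for _ in range(length):
        weights = [a + c for a, c in zip(alphas, counts)]
        w = rng.choices(range(len(alphas)), weights=weights)[0]
        counts[w] += 1
        doc.append(w)
    return doc

rng = random.Random(0)
doc = dcm_document([0.1] * 50, 30, rng)  # small alphas -> bursty repeats
```

With small alpha values the urn effect dominates and a few word types account for most tokens, unlike a plain multinomial with the same expected frequencies.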
Mathematical Models for Room Air Distribution
DEFF Research Database (Denmark)
Nielsen, Peter V.
1982-01-01
A number of different models on the air distribution in rooms are introduced. This includes the throw model, a model on penetration length of a cold wall jet and a model for maximum velocity in the dimensioning of an air distribution system in highly loaded rooms and shows that the amount of heat...... removed from the room at constant penetration length is proportional to the cube of the velocities in the occupied zone. It is also shown that a large number of diffusers increases the amount of heat which may be removed without affecting the thermal conditions. Control strategies for dual duct and single...... duct systems are given and the paper is concluded by mentioning a computer-based prediction method which gives the velocity and temperature distribution in the whole room....
Mathematical Models for Room Air Distribution - Addendum
DEFF Research Database (Denmark)
Nielsen, Peter V.
1982-01-01
A number of different models on the air distribution in rooms are introduced. This includes the throw model, a model on penetration length of a cold wall jet and a model for maximum velocity in the dimensioning of an air distribution system in highly loaded rooms and shows that the amount of heat...... removed from the room at constant penetration length is proportional to the cube of the velocities in the occupied zone. It is also shown that a large number of diffusers increases the amount of heat which may be removed without affecting the thermal conditions. Control strategies for dual duct and single...... duct systems are given and the paper is concluded by mentioning a computer-based prediction method which gives the velocity and temperature distribution in the whole room....
Convergence diagnostics for Eigenvalue problems with linear regression model
International Nuclear Information System (INIS)
Shi, Bo; Petrovic, Bojan
2011-01-01
Although the Monte Carlo method has been extensively used for criticality/eigenvalue problems, a reliable, robust, and efficient convergence diagnostics method is still desired. Most methods are based on integral parameters (multiplication factor, entropy) and either condense the local distribution information into a single value (e.g., entropy) or disregard it entirely. We propose to employ the detailed cycle-by-cycle local flux evolution, obtained using a mesh tally mechanism, to assess source and flux convergence. By applying a linear regression model to each individual mesh in a mesh tally for convergence diagnostics, a global convergence criterion can be obtained. We exemplify this method on two problems and obtain promising diagnostics results. (author)
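The per-mesh regression idea can be sketched as follows: fit a linear trend to the recent cycle-by-cycle flux tally of each mesh cell and call the cell converged when the slope is negligible; the global criterion then requires every cell to pass. The window and threshold below are illustrative, not the paper's:

```python
import numpy as np

def mesh_converged(flux_history, window=20, tol=1e-3):
    """Convergence check for one mesh cell: fit a straight line to the
    last `window` cycle tallies and test whether the slope is near zero."""
    y = np.asarray(flux_history[-window:], dtype=float)
    x = np.arange(len(y), dtype=float)
    slope = np.polyfit(x, y, 1)[0]
    return abs(slope) < tol

def globally_converged(all_meshes):
    """Global criterion: every mesh cell must show a flat trend."""
    return all(mesh_converged(m) for m in all_meshes)

rising = list(np.linspace(0.0, 1.0, 100))   # tally still drifting
flat = [1.0] * 100                          # tally stabilised
```

A cell whose tally is still drifting fails the slope test, so a single unconverged region blocks the global verdict — the local information that entropy-style scalars can wash out.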
Explicit Problem Modeling: An Intervention Strategy.
Sims, David; Jones, Sue
1981-01-01
Suggests that organizational development consultants should use more explicit representations of the problems they work on with client teams. Offers an approach to overcome difficulties and provides a strategy for intervening in the processes of problem definition in teams. (Author)
Estimating Predictive Variance for Statistical Gas Distribution Modelling
International Nuclear Information System (INIS)
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-01-01
Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
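The likelihood-based evaluation the authors advocate can be sketched with a Gaussian predictive model: score held-out concentration readings by their average negative log-likelihood under the predicted mean and variance, so an overconfident variance estimate is penalised. A generic sketch, not the paper's algorithm:

```python
import math

def avg_nll(observations, means, variances):
    """Average Gaussian negative log-likelihood of held-out readings
    under a predictive mean/variance map; lower is better."""
    total = 0.0
    for x, mu, var in zip(observations, means, variances):
        total += 0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)
    return total / len(observations)

obs = [0.0, 2.0]                       # readings spread around 1.0
honest = avg_nll(obs, [1.0, 1.0], [1.0, 1.0])
overconfident = avg_nll(obs, [1.0, 1.0], [0.1, 0.1])  # variance too small
```

The model whose predicted variance matches the actual spread of the fluctuating readings scores a lower (better) NLL, which is what makes the score usable for ground-truth-free model comparison.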
Benchmark problems for repository siting models
International Nuclear Information System (INIS)
Ross, B.; Mercer, J.W.; Thomas, S.D.; Lester, B.H.
1982-12-01
This report describes benchmark problems to test computer codes used in siting nuclear waste repositories. Analytical solutions, field problems, and hypothetical problems are included. Problems are included for the following types of codes: ground-water flow in saturated porous media, heat transport in saturated media, ground-water flow in saturated fractured media, heat and solute transport in saturated porous media, solute transport in saturated porous media, solute transport in saturated fractured media, and solute transport in unsaturated porous media
The process model of problem solving difficulty
Pala, O.; Rouwette, E.A.J.A.; Vennix, J.A.M.
2002-01-01
Groups and organizations, or in general multi-actor decision-making groups, frequently come across complex problems in which neither the problem definition nor the interrelations of parts that make up the problem are well defined. In these kinds of situations, members of a decision-making group
Modeling a Distribution of Mortgage Credit Losses
Czech Academy of Sciences Publication Activity Database
Gapko, Petr; Šmíd, Martin
2010-01-01
Roč. 23, č. 23 (2010), s. 1-23 R&D Projects: GA ČR GA402/09/0965; GA ČR GD402/09/H045 Grant - others:Univerzita Karlova - GAUK(CZ) 46108 Institutional research plan: CEZ:AV0Z10750506 Keywords : Credit Risk * Mortgage * Delinquency Rate * Generalized Hyperbolic Distribution * Normal Distribution Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2010/E/gapko-modeling a distribution of mortgage credit losses-ies wp.pdf
Applications of species distribution modeling to paleobiology
DEFF Research Database (Denmark)
Svenning, Jens-Christian; Fløjgaard, Camilla; Marske, Katharine Ann
2011-01-01
Species distribution modeling (SDM: statistical and/or mechanistic approaches to the assessment of range determinants and prediction of species occurrence) offers new possibilities for estimating and studying past organism distributions. SDM complements fossil and genetic evidence by providing (i...... the role of Pleistocene glacial refugia in biogeography and evolution, especially in Europe, but also in many other regions. SDM-based approaches are also beginning to contribute to a suite of other research questions, such as historical constraints on current distributions and diversity patterns, the end...
Modeling a Distribution of Mortgage Credit Losses
Czech Academy of Sciences Publication Activity Database
Gapko, Petr; Šmíd, Martin
2012-01-01
Roč. 60, č. 10 (2012), s. 1005-1023 ISSN 0013-3035 R&D Projects: GA ČR GD402/09/H045; GA ČR(CZ) GBP402/12/G097 Grant - others:Univerzita Karlova(CZ) 46108 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords : credit risk * mortgage * delinquency rate * generalized hyperbolic distribution * normal distribution Subject RIV: AH - Economics Impact factor: 0.194, year: 2012 http://library.utia.cas.cz/separaty/2013/E/smid-modeling a distribution of mortgage credit losses.pdf
Population control, distribution, and manpower problems in Thailand.
Debevalya, N
1981-12-01
Describes changing reproductive patterns underway in Thailand, evident in the data of national sample surveys conducted between 1969 and 1975. Fertility rates and differentials are described, and Thai population policy and family planning activities are reviewed. Thailand is in the midst of a transition to lower rates of fertility. Data are presented for 1947, 1960 and 1970 which show trends in regional population distribution. Migration, both international and internal, is discussed and the overall geographic stability of the Thai population is noted. Population redistribution policies of the 4th Plan of National Economic and Social Development (1977-81), aimed at relieving the pressures on Bangkok, are listed. An analysis of economic activity of the population based on 1960 and 1970 census data is presented. Data depicting trends in growth of the economically active population, occupational structure, work status, and educational level of employed persons are presented and discussed. Unemployment, various categories of underemployment, including seasonal employment and under-utilization of educated and trained manpower are discussed. Experimental field studies by the National Statistical Office on labor utilization framework are described. The framework classifies the labor force into persons whose labor and skills are inadequately utilized and those whose labor and skills are adequately utilized. In addition, several categories of inadequate utilization are established, for example, unemployment and mismatch of occupation and education. The results of 5 pilot studies carried out during 1973-1975 are discussed. This framework was found to be more meaningful for examining manpower utilization in Thailand than the employed/unemployed approach.
Advanced Distribution Network Modelling with Distributed Energy Resources
O'Connell, Alison
The addition of new distributed energy resources, such as electric vehicles, photovoltaics, and storage, to low voltage distribution networks means that these networks will undergo major changes in the future. Traditionally, distribution systems would have been a passive part of the wider power system, delivering electricity to the customer and not needing much control or management. However, the introduction of these new technologies may cause unforeseen issues for distribution networks, due to the fact that they were not considered when the networks were originally designed. This thesis examines different types of technologies that may begin to emerge on distribution systems, as well as the resulting challenges that they may impose. Three-phase models of distribution networks are developed and subsequently utilised as test cases. Various management strategies are devised for the purposes of controlling distributed resources from a distribution network perspective. The aim of the management strategies is to mitigate those issues that distributed resources may cause, while also keeping customers' preferences in mind. A rolling optimisation formulation is proposed as an operational tool which can manage distributed resources, while also accounting for the uncertainties that these resources may present. Network sensitivities for a particular feeder are extracted from a three-phase load flow methodology and incorporated into an optimisation. Electric vehicles are the focus of the work, although the method could be applied to other types of resources. The aim is to minimise the cost of electric vehicle charging over a 24-hour time horizon by controlling the charge rates and timings of the vehicles. The results demonstrate the advantage that controlled EV charging can have over an uncontrolled case, as well as the benefits provided by the rolling formulation and updated inputs in terms of cost and energy delivered to customers. Building upon the rolling optimisation, a
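The flavour of the controlled-charging result can be conveyed with a much simpler stand-in: pack a vehicle's energy demand into the cheapest hours of a 24-hour price profile, subject to a charger rate limit. This is a greedy sketch with made-up numbers; the thesis itself uses a rolling, three-phase, network-constrained optimisation:

```python
def cheapest_charge_schedule(prices, energy_needed, max_rate):
    """Fill an EV's energy demand into the cheapest hours, subject to a
    per-hour charge-rate limit. Returns kWh charged in each hour."""
    schedule = [0.0] * len(prices)
    for hour in sorted(range(len(prices)), key=lambda h: prices[h]):
        if energy_needed <= 0:
            break
        schedule[hour] = min(max_rate, energy_needed)
        energy_needed -= schedule[hour]
    return schedule

# Flat daytime price with cheap overnight hours 18-23 ($/kWh, invented):
prices = [0.30] * 18 + [0.10] * 6
plan = cheapest_charge_schedule(prices, energy_needed=20.0, max_rate=4.0)
```

All charging lands in the cheap hours, illustrating why controlled charging beats the uncontrolled case on cost; the network sensitivities and uncertainty updates of the rolling formulation are what this sketch leaves out.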
Amallynda, I.; Santosa, B.
2017-11-01
This paper proposes a new generalization of the distributed parallel machine and assembly scheduling problem (DPMASP) with eligibility constraints, referred to as the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints. Within this generalization, we assume a set of non-identical factories or production lines, each with a set of unrelated parallel machines of different speeds feeding a single assembly machine in series. A set of different products is manufactured through an assembly program of a set of components (jobs) according to the requested demand, and each product requires several kinds of jobs of different sizes. We also consider the multi-objective problem (MOP) of simultaneously minimizing mean flow time and the number of tardy products. The problem is known to be NP-hard and is important in practice, as the two criteria reflect the customer's demand and the manufacturer's perspective. Since this is a realistic and complex problem with a wide range of possible solutions, we propose four simple heuristics and two metaheuristics to solve it. The parameters of the proposed metaheuristic algorithms are discussed and calibrated by means of the Taguchi technique. All proposed algorithms are tested in Matlab. Our computational experiments indicate that the proposed problem and the four proposed algorithms can be implemented and used to solve moderately-sized instances, giving efficient solutions that are close to optimal in most cases.
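The flavour of the simple constructive heuristics can be shown with greedy list scheduling on unrelated parallel machines, assigning each job to the machine that completes it earliest. This is our simplified sketch; the eligibility constraints and the assembly stage of the MDPMASP are omitted:

```python
def greedy_schedule(job_times, machine_speeds):
    """Longest-job-first greedy list scheduling: each job goes to the
    machine that finishes it earliest (processing time divided by the
    machine's speed). Returns makespan and mean flow time."""
    finish = [0.0] * len(machine_speeds)
    completion = []
    for p in sorted(job_times, reverse=True):
        end, m = min(
            (finish[i] + p / s, i) for i, s in enumerate(machine_speeds)
        )
        finish[m] = end
        completion.append(end)
    return max(finish), sum(completion) / len(completion)

# Four jobs on two machines, the second twice as fast (invented data):
makespan, mean_flow = greedy_schedule([4, 3, 2, 2], [1.0, 2.0])
```

Such cheap constructive rules provide starting solutions that the metaheuristics then improve toward the two objectives simultaneously.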
Inverse modeling for heat conduction problem in human abdominal phantom.
Huang, Ming; Chen, Wenxi
2011-01-01
Noninvasive methods for deep body temperature measurement are based, in principle, on heat equilibrium between the thermal sensor and the target location. However, the measurement position cannot be definitely determined. In this study, a 2-dimensional mathematical model was built upon some assumptions about the physiological condition of a human abdomen phantom. We evaluated the feasibility of estimating the internal organ temperature distribution from the readings of temperature sensors arranged on the skin surface. This is a typical inverse heat conduction problem (IHCP), and is usually mathematically ill-posed. By integrating physical and physiological a priori information, we invoked the quasi-linear (QL) method to reconstruct the internal temperature distribution. The solutions were improved by increasing the accuracy of the sensors and adjusting their arrangement on the outer surface, eventually converging accurately. This study suggests that the QL method can reconstruct the internal temperature distribution in this phantom and might be worth further study in an anatomically based model.
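The ill-posedness and its stabilisation can be sketched in linear-algebra terms: with a linearised forward conduction operator A mapping internal temperatures x to surface readings y, a regularised solve keeps the inversion stable. Tikhonov regularisation here is our stand-in for the paper's quasi-linear machinery, and the operator is random, purely for illustration:

```python
import numpy as np

def regularized_inverse(A, y, lam=1e-2):
    """Tikhonov-regularised solve of the linearised IHCP y = A @ x:
    x = (A^T A + lam I)^{-1} A^T y. The penalty lam damps the noise
    amplification that makes the bare inversion ill-posed."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rng = np.random.default_rng(1)
A = rng.random((8, 4))          # assumed forward operator: 8 sensors, 4 cells
x_true = np.array([37.0, 36.5, 36.0, 35.0])   # internal temperatures (deg C)
x_est = regularized_inverse(A, A @ x_true, lam=1e-8)
```

With noiseless data and a tiny penalty the profile is recovered almost exactly; with noisy sensors, lam trades fidelity against stability, which mirrors the paper's finding that sensor accuracy and placement govern reconstruction quality.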
Distributed model predictive control made easy
Negenborn, Rudy
2014-01-01
The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems. This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...
Geometric Algebra Model of Distributed Representations
Patyk, Agnieszka
Formalism based on GA is an alternative to distributed representation models developed so far: Smolensky's tensor product, Holographic Reduced Representations (HRR), and Binary Spatter Code (BSC). Convolutions are replaced by geometric products interpretable in terms of geometry, which seems to be the most natural language for visualization of higher concepts. This paper recalls the main ideas behind the GA model and investigates recognition test results using both inner product and a clipped version of matrix representation. The influence of accidental blade equality on recognition is also studied. Finally, the efficiency of the GA model is compared to that of previously developed models.
National Research Council Canada - National Science Library
Crino, John
2002-01-01
.... This dissertation applies and extends some of Colletti's (1999) seminal work in group theory and metaheuristics in order to solve the theater distribution vehicle routing and scheduling problem (TDVRSP...
Dynamical Model of Fission Fragment Angular Distribution
Drozdov, V. A.; Eremenko, D. O.; Fotina, O. V.; Platonov, S. Yu.; Yuminov, O. A.; Giardina, G.; Taccone, A.
2002-01-01
A dynamical model of fission fragment angular distributions is suggested. The model allows one to calculate fission fragment angular distributions, prescission light particle multiplicities, evaporation residue cross sections, etc. for the cases of decay of hot and rotating heavy nuclei. The experimental data on angular anisotropies of fission fragments and prescission neutron multiplicities are analyzed for the 16O + 208Pb, 232Th, 248Cm and 238U reactions at energies of the incident 16O ions ranging from 90 to 160 MeV. This analysis allows us to extract both the nuclear friction coefficient value and the relaxation time for the tilting mode. It is also demonstrated that the angular distributions are sensitive to the deformation dependence of the nuclear friction.
Geometric Algebra Model of Distributed Representations
Patyk, Agnieszka
2010-01-01
Formalism based on GA is an alternative to distributed representation models developed so far --- Smolensky's tensor product, Holographic Reduced Representations (HRR) and Binary Spatter Code (BSC). Convolutions are replaced by geometric products, interpretable in terms of geometry which seems to be the most natural language for visualization of higher concepts. This paper recalls the main ideas behind the GA model and investigates recognition test results using both inner product and a clipp...
Comparison of sparse point distribution models
DEFF Research Database (Denmark)
Erbou, Søren Gylling Hemmingsen; Vester-Christensen, Martin; Larsen, Rasmus
2010-01-01
This paper compares several methods for obtaining sparse and compact point distribution models suited for data sets containing many variables. These are evaluated on a database consisting of 3D surfaces of a section of the pelvic bone obtained from CT scans of 33 porcine carcasses. The superior m...
Modeling utilization distributions in space and time
Keating, K.A.; Cherry, S.
2009-01-01
W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed. © 2009 by the Ecological Society of America.
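The wrapped Cauchy kernel the authors devise for circular covariates has a closed-form density; below is a sketch of that kernel and a circular KDE built from it (parameter values are illustrative, and the full product kernel over x, y, and t is omitted):

```python
import math

def wrapped_cauchy(theta, mu, rho):
    """Wrapped Cauchy density on the circle: the kernel used for
    circular temporal covariates such as day of year mapped to
    [0, 2*pi). rho in [0, 1) controls concentration around mu."""
    return (1 - rho**2) / (
        2 * math.pi * (1 + rho**2 - 2 * rho * math.cos(theta - mu))
    )

def circular_kde(theta, samples, rho):
    """Kernel density estimate for circular data."""
    return sum(wrapped_cauchy(theta, s, rho) for s in samples) / len(samples)

# The density integrates to one over the circle (checked numerically):
grid = [2 * math.pi * k / 10000 for k in range(10000)]
mass = sum(wrapped_cauchy(t, 1.0, 0.7) for t in grid) * (2 * math.pi / 10000)
```

Unlike an ordinary Gaussian kernel, this density wraps smoothly across the year boundary, so late-December and early-January observations reinforce each other in the estimated UD.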
Finessing atlas data for species distribution models
Niamir, A.; Skidmore, A.K.; Toxopeus, A.G.; Munoz, A.R.; Real, R.
2011-01-01
Aim The spatial resolution of species atlases and therefore resulting model predictions are often too coarse for local applications. Collecting distribution data at a finer resolution for large numbers of species requires a comprehensive sampling effort, making it impractical and expensive. This
Hierarchical Model Predictive Control for Resource Distribution
DEFF Research Database (Denmark)
Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob
2010-01-01
This paper deals with hierarchichal model predictive control (MPC) of distributed systems. A three level hierachical approach is proposed, consisting of a high level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous...
Distributionally Robust Return-Risk Optimization Models and Their Applications
Directory of Open Access Journals (Sweden)
Li Yang
2014-01-01
Full Text Available Based on the risk control of conditional value-at-risk, distributionally robust return-risk optimization models with box constraints on the random vector are proposed. They describe uncertainty in both the distribution form and the moments (mean and covariance matrix) of the random vector. Such models are difficult to solve directly. Using conic duality theory and the minimax theorem, the models are reformulated as semidefinite programming problems, which can be solved by interior point algorithms in polynomial time. An important theoretical basis is therefore provided for applications of the models. Moreover, an application of the models to a practical example of portfolio selection is considered, and the example is evaluated using a historical data set of four stocks. Numerical results show that the proposed methods are robust and the resulting investment strategy is safe.
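For reference, the risk measure being controlled: conditional value-at-risk at level alpha is the expected loss in the worst (1 − alpha) tail. The robust models bound its worst case over every distribution in the ambiguity set; the empirical version alone is only a few lines (a sketch of the risk measure, not of the paper's SDP reformulation):

```python
def cvar(losses, alpha=0.95):
    """Empirical conditional value-at-risk: the mean of the worst
    (1 - alpha) fraction of the loss sample."""
    tail = max(1, round(len(losses) * (1 - alpha)))
    worst = sorted(losses, reverse=True)[:tail]
    return sum(worst) / len(worst)

sample_losses = list(range(1, 101))        # toy loss sample 1..100
risk = cvar(sample_losses, alpha=0.95)     # mean of the 5 largest losses
```

Because CVaR is an expectation over a tail, it is coherent and convex in the portfolio weights, which is what lets the minimax over the moment-based ambiguity set be dualised into a semidefinite program.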
EFFECT OF PROBLEM BASED LEARNING AND MODEL CRITICAL THINKING ABILITY TO PROBLEM SOLVING SKILLS
Directory of Open Access Journals (Sweden)
Unita S. Zuliani Nasution
2016-12-01
Full Text Available The purposes of this research were to analyze the difference in physics problem-solving ability between the problem based learning model and the direct instruction model, the difference in physics problem-solving ability between students with above-average and below-average critical thinking ability, and the interaction of the problem based learning model with critical thinking ability on students' physics problem-solving ability. This was a quasi-experimental study using critical thinking ability tests and physics problem-solving ability tests as instruments. The results showed that students' physics problem-solving ability under the problem based learning model was better than under the direct instruction model; students with above-average critical thinking ability achieved better results than students with below-average critical thinking ability; and there was an interaction between the problem based learning model and critical thinking ability in improving students' physics problem-solving ability.
Efficient mixed integer programming models for family scheduling problems
Directory of Open Access Journals (Sweden)
Meng-Ye Lin
Full Text Available This paper proposes several mixed integer programming models, which incorporate optimal sequence properties, to solve single machine family scheduling problems. The objectives are total weighted completion time and maximum lateness, respectively. Experimental results indicate remarkable improvements in computational efficiency when optimal sequence properties are included in the models. For the total weighted completion time problems, the best model solves all problems of up to 30 jobs within 5 s, all 50-job problems within 4 min, and about 1/3 of the 75- to 100-job problems within 1 h. For the maximum lateness problems, the best model solves almost all problems of up to 30 jobs within 11 min and around half of the 50- to 100-job problems within 1 h. Keywords: Family scheduling, Sequence independent setup, Total weighted completion time, Maximum lateness
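The objective being modelled can be made concrete by evaluating the total weighted completion time of a given sequence under sequence-independent family setups (evaluation only; the paper's MIP models choose the sequence). The data below are invented for illustration:

```python
def total_weighted_completion(sequence, proc, weight, family, setup):
    """Total weighted completion time of a single-machine schedule with
    sequence-independent family setups: a setup is incurred whenever
    the machine switches to a different family."""
    t, prev_fam, total = 0.0, None, 0.0
    for j in sequence:
        if family[j] != prev_fam:
            t += setup[family[j]]
            prev_fam = family[j]
        t += proc[j]
        total += weight[j] * t
    return total

# Jobs 0 and 1 in family 0, job 2 in family 1; one setup per switch:
cost = total_weighted_completion(
    sequence=[0, 1, 2],
    proc=[2.0, 3.0, 1.0],
    weight=[1.0, 1.0, 2.0],
    family=[0, 0, 1],
    setup=[1.0, 1.0],
)
```

Batching same-family jobs avoids repeated setups, which is the structural property the optimal-sequence results exploit to prune the MIP search space.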
Reusable Component Model Development Approach for Parallel and Distributed Simulation
Zhu, Feng; Yao, Yiping; Chen, Huilong; Yao, Feng
2014-01-01
Model reuse is a key issue to be resolved in parallel and distributed simulation at present. However, component models built by different domain experts usually have diversiform interfaces, couple tightly, and bind closely with simulation platforms. As a result, they are difficult to reuse across different simulation platforms and applications. To address the problem, this paper first proposes a reusable component model framework. Based on this framework, our reusable model development approach is then elaborated, which contains two phases: (1) domain experts create simulation computational modules observing three principles to achieve their independence; (2) the model developer encapsulates these simulation computational modules with six standard service interfaces to improve their reusability. The case study of a radar model indicates that a model developed using our approach has good reusability and is easy to use in different simulation platforms and applications. PMID:24729751
Analysis of Jingdong Mall Logistics Distribution Model
Shao, Kang; Cheng, Feng
In recent years, the development of electronic commerce in China has accelerated. The role of logistics has been highlighted, and more and more electronic commerce enterprises are beginning to realize the importance of logistics to the success or failure of the enterprise. In this paper, the authors take Jingdong Mall as an example, performing a SWOT analysis of the current situation of its self-built logistics system, identifying the problems existing in the current Jingdong Mall logistics distribution, and giving appropriate recommendations.
PROGRAMMING OF METHODS FOR THE NEEDS OF LOGISTICS DISTRIBUTION SOLVING PROBLEMS
Directory of Open Access Journals (Sweden)
Andrea Štangová
2014-06-01
Full Text Available Logistics has become one of the dominant factors affecting successful management, competitiveness, and the mentality of the global economy. Distribution logistics materializes the connection between production and the consumer market. It uses different methodologies and methods of multicriterial evaluation and allocation. This thesis addresses the problem of the costs of securing the distribution of a product. It was therefore relevant to design a software product that would be helpful in solving problems related to distribution logistics. Elodis, an electronic distribution logistics program, was designed on the basis of a theoretical analysis of distribution logistics and an analysis of the software products market. The program uses multicriterial evaluation methods to determine the appropriate type, and mathematical and geometrical methods to determine an appropriate allocation, of the distribution center, warehouse, and company.
Structuring Problem Analysis for Embedded Systems Modelling
Marincic, J.; Mader, Angelika H.; Wieringa, Roelf J.; Lucas, Yan
Our interest is embedded systems validation as part of the model-driven approach. To design a model, the modeller needs to obtain knowledge about the system and decide what is relevant to model and how. A part of the modelling activities is inherently informal - it cannot be formalised in such a way
A Reference Model for Distribution Grid Control in the 21st Century
Energy Technology Data Exchange (ETDEWEB)
Taft, Jeffrey D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); De Martini, Paul [California Inst. of Technology (CalTech), Pasadena, CA (United States); Kristov, Lorenzo [California Independent System Operator, Folsom, CA (United States)
2015-07-01
Intensive changes in the structure of the grid due to the penetration of new technologies, coupled with changing societal needs, are outpacing the capabilities of traditional grid control systems. The gap is widening at an accelerating rate, with the biggest impacts occurring at the distribution level due to the widespread adoption of diverse distribution-connected energy resources (DER). This paper outlines the emerging distribution grid control environment, defines the new distribution control problem, and provides a distribution control reference model. The reference model offers a schematic representation of the problem domain to inform development of system architecture and control solutions for the high-DER electric system.
A void distribution model-flashing flow
International Nuclear Information System (INIS)
Riznic, J.; Ishii, M.; Afgan, N.
1987-01-01
A new model for flashing flow based on wall nucleation is proposed here, and the model predictions are compared with some experimental data. In order to calculate the bubble number density, the bubble number transport equation with a distributed source from the wall nucleation sites was used. It was thus possible to avoid the usual assumption of a constant bubble number density. Comparison of the model with the data shows that the model based on the nucleation site density correlation appears acceptable for describing vapor generation in flashing flow. For the limited data examined, the comparisons show rather satisfactory agreement without using a floating parameter to adjust the model. This result indicates that, at least for the experimental conditions considered here, mechanistic prediction of the flashing phenomenon is possible with the present wall-nucleation-based model.
Problem of uniqueness in the renewal process generated by the uniform distribution
Directory of Open Access Journals (Sweden)
D. Ugrin-parac
1992-01-01
Full Text Available The renewal process generated by the uniform distribution, when interpreted as a transformation of the uniform distribution into a discrete distribution, gives rise to the question of uniqueness of the inverse image. The paper deals with a particular problem from the described domain, that arose in the construction of a complex stochastic test intended to evaluate pseudo-random number generators. The connection of the treated problem with the question of a unique integral representation of Gamma-function is also mentioned.
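The discrete distribution this renewal process generates has a classical closed form: if N is the number of Uniform(0,1) summands needed for the partial sum to exceed 1, then P(N = n) = (n − 1)/n! and E[N] = e. A sketch checking both by simulation (function names are ours; the paper's uniqueness question concerns inverting this transformation, not computing it):

```python
import random

def renewals_to_exceed_one(rng):
    """Number of i.i.d. Uniform(0,1) summands needed to exceed 1:
    the discrete distribution the renewal process induces."""
    total, n = 0.0, 0
    while total <= 1.0:
        total += rng.random()
        n += 1
    return n

def pmf(n):
    """Exact probability P(N = n) = (n - 1)/n!, for n >= 2."""
    fact = 1
    for k in range(2, n + 1):
        fact *= k
    return (n - 1) / fact

rng = random.Random(42)
trials = 200_000
mean_n = sum(renewals_to_exceed_one(rng) for _ in range(trials)) / trials
# mean_n is close to e = 2.71828...
```

The pmf telescopes, sum over n >= 2 of (1/(n−1)! − 1/n!) = 1, and the mean sums to e, which is why this distribution is a natural building block for stochastic tests of pseudo-random number generators.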
Distributed parallel computing in stochastic modeling of groundwater systems.
Dong, Yanhui; Li, Guomin; Xu, Haizhen
2013-03-01
Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
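The batch-processing pattern this record describes (independent stochastic realizations farmed out to a worker pool) can be sketched generically. The sketch below is a minimal illustration only: it uses Python's standard thread pool and a hypothetical toy response function in place of the record's Java Parallel Processing Framework and MODFLOW-type simulations.

```python
from concurrent.futures import ThreadPoolExecutor
import random
import statistics

def run_realization(seed):
    # Hypothetical stand-in for one stochastic model run (a real system
    # would launch a MODFLOW-type simulation here).
    rng = random.Random(seed)
    conductivity = rng.lognormvariate(0.0, 1.0)  # random input field parameter
    return 100.0 / conductivity                  # toy travel-time response

def monte_carlo(n_realizations, workers=8):
    # Batch the independent realizations over a worker pool; each seed
    # identifies one realization, so runs are reproducible and order-free.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_realization, range(n_realizations)))

results = monte_carlo(500)
print(len(results), round(statistics.median(results), 1))
```

Because the realizations share nothing, the same structure maps directly onto a multi-node cluster; only the executor changes.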
Constructing a coherent problem model to facilitate algebra problem solving in a chemistry context
Hiong Ngu, Bing; Seeshing Yeung, Alexander; Phan, Huy P.
2015-04-01
An experiment using a sample of 11th graders compared text editing and worked examples approaches in learning to solve dilution and molarity algebra word problems in a chemistry context. Text editing requires students to assess the structure of a word problem by specifying whether the problem text contains sufficient, missing, or irrelevant information for reaching a solution. Worked examples direct students to follow steps toward the solution; their emphasis is on computation rather than on the formation of a coherent problem model. Text editing yielded higher scores in a transfer test (which shared the same solution procedure as the acquisition problems but differed in context), but not in a similar test (which resembled the acquisition problems in terms of both solution procedure and context). Results provide some theoretical support and practical implications for using text editing to develop a coherent problem model to facilitate problem-solving skills in chemistry.
The Higgs transverse momentum distribution in gluon fusion as multiscale problem
International Nuclear Information System (INIS)
Bagnaschi, E.; Vicini, A.
2015-05-01
We consider Higgs production in gluon fusion and in particular the prediction of the Higgs transverse momentum distribution. We discuss the ambiguities affecting the matching procedure between fixed order matrix elements and the resummation to all orders of the terms enhanced by log(p_T^H/m_H) factors. Following a recent proposal (Grazzini et al., hep-ph/1306.4581), we argue that the gluon fusion process, computed considering two active quark flavors, is a multiscale problem from the point of view of the resummation of the collinear singular terms. We perform an analysis at parton level of the collinear behavior of the real emission amplitudes and we derive an upper limit to the range of transverse momenta where the collinear approximation is valid. This scale is then used as the value of the resummation scale in the analytic resummation framework or as the value of the h parameter in the POWHEG-BOX code. Finally, we provide a phenomenological analysis in the Standard Model, in the Two Higgs Doublet Model and in the Minimal Supersymmetric Standard Model. In the two latter cases, we provide an ansatz for the central value of the matching parameters not only for a Standard Model-like Higgs boson, but also for heavy scalars and in scenarios where the bottom quark may play the dominant role.
On the formulation and numerical simulation of distributed-order fractional optimal control problems
Zaky, M. A.; Machado, J. A. Tenreiro
2017-11-01
In a fractional optimal control problem, the integer order derivative is replaced by a fractional order derivative. The fractional derivative embeds implicitly the time delays in an optimal control process. The order of the fractional derivative can be distributed over the unit interval, to capture delays of distinct sources. The purpose of this paper is twofold. Firstly, we derive the generalized necessary conditions for optimal control problems with dynamics described by ordinary distributed-order fractional differential equations (DFDEs). Secondly, we propose an efficient numerical scheme for solving an unconstrained convex distributed optimal control problem governed by the DFDE. We convert the problem under consideration into an optimal control problem governed by a system of DFDEs, using the pseudo-spectral method and the Jacobi-Gauss-Lobatto (J-G-L) integration formula. Next, we present the numerical solutions for a class of optimal control problems of systems governed by DFDEs. The convergence of the proposed method is graphically analyzed showing that the proposed scheme is a good tool for the simulation of distributed control problems governed by DFDEs.
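The distributed-order derivative this record builds on can be illustrated numerically. The sketch below is an assumption-laden stand-in: it uses a Grünwald-Letnikov discretization of each fixed-order derivative and a simple midpoint quadrature over the order variable, not the paper's Jacobi-Gauss-Lobatto spectral scheme.

```python
import math

def gl_weights(alpha, n):
    # (-1)^k * binom(alpha, k) via the standard recurrence
    g = [1.0]
    for k in range(1, n + 1):
        g.append(g[-1] * (1.0 - (alpha + 1.0) / k))
    return g

def gl_derivative(f, t, alpha, h=1e-3):
    # Grünwald-Letnikov approximation of the order-alpha derivative at t
    # (coincides with the Caputo derivative here since f(0) = 0)
    n = int(round(t / h))
    g = gl_weights(alpha, n)
    return sum(g[k] * f(t - k * h) for k in range(n + 1)) / h ** alpha

def distributed_order_derivative(f, t, w, nodes=8, h=1e-3):
    # Midpoint quadrature in the order variable:
    #   int_0^1 w(a) D^a f(t) da  ≈  sum_i w(a_i) D^{a_i} f(t) * da
    da = 1.0 / nodes
    return sum(w((i + 0.5) * da) * gl_derivative(f, t, (i + 0.5) * da, h) * da
               for i in range(nodes))

f = lambda t: t  # analytically, D^a t = t**(1-a) / Gamma(2-a)
print(round(gl_derivative(f, 1.0, 0.5), 2))  # ≈ 1/Gamma(1.5) ≈ 1.13
```

The analytic check on the last line (a known closed form for the fractional derivative of f(t) = t) gives a quick sanity test of the discretization before it is embedded in any control computation.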
A Multiple Period Problem in Distributed Energy Management Systems Considering CO2 Emissions
Muroda, Yuki; Miyamoto, Toshiyuki; Mori, Kazuyuki; Kitamura, Shoichi; Yamamoto, Takaya
Consider a special district (group) composed of multiple companies (agents), where each agent responds to an energy demand and has a CO2 emission allowance imposed. A distributed energy management system (DEMS) optimizes the energy consumption of a group through energy trading in the group. In this paper, we extended the energy distribution decision and optimal planning problem in DEMSs from a single-period problem to a multiple-period one. The extension enabled us to consider more realistic constraints such as demand patterns, the start-up cost, and minimum running/outage times of equipment. First, we extended the market-oriented programming (MOP) method for deciding energy distribution to the multiple-period problem. The bidding strategy of each agent is formulated as a 0-1 mixed non-linear programming problem. Second, we proposed decomposing the problem into a set of single-period problems in order to solve it faster. To decompose the problem, we proposed a CO2 emission allowance distribution method, called the EP method. Computational experiments confirmed that the proposed method produces solutions whose group costs are close to lower-bound group costs. In addition, we verified that the EP method reduces the computational time without losing solution quality.
Distributed Bayesian Networks for User Modeling
DEFF Research Database (Denmark)
Tedesco, Roberto; Dolog, Peter; Nejdl, Wolfgang
2006-01-01
The World Wide Web is a popular platform for providing eLearning applications to a wide spectrum of users. However – as users differ in their preferences, background, requirements, and goals – applications should provide personalization mechanisms. In the Web context, user models used...... by such adaptive applications are often partial fragments of an overall user model. The fragments have then to be collected and merged into a global user profile. In this paper we investigate and present algorithms able to cope with distributed, fragmented user models – based on Bayesian Networks – in the context...... of Web-based eLearning platforms. The scenario we are tackling assumes learners who use several systems over time, which are able to create partial Bayesian Networks for user models based on the local system context. In particular, we focus on how to merge these partial user models. Our merge mechanism...
Language and modeling word problems in mathematics among bilinguals.
Bernardo, Allan B I
2005-09-01
The study was conducted to determine whether the language of math word problems would affect how Filipino-English bilingual problem solvers would model the structure of these word problems. Modeling the problem structure was studied using the problem-completion paradigm, which involves presenting problems without the question. The paradigm assumes that problem solvers can infer the appropriate question of a word problem if they correctly grasp its problem structure. Arithmetic word problems in Filipino and English were given to bilingual students, some of whom had Filipino as a first language and others who had English as a first language. The problem-completion data and solution data showed similar results. The language of the problem had no effect on problem-structure modeling. The results were discussed in relation to a more circumscribed view about the role of language in word problem solving among bilinguals. In particular, the results of the present study showed that linguistic factors do not affect the more mathematically abstract components of word problem solving, although they may affect the other components such as those related to reading comprehension and understanding.
Fuzzy Approximate Model for Distributed Thermal Solar Collectors Control
Elmetennani, Shahrazed
2014-07-01
This paper deals with the problem of controlling concentrated solar collectors, where the objective is to make the outlet temperature of the collector track a desired reference. The performance of the novel approximate model based on fuzzy theory, introduced by the authors in [1], is evaluated in comparison with other methods in the literature. The proposed approximation is a low-order state representation derived from the physical distributed model. It reproduces the temperature transfer dynamics through the collectors accurately and simplifies the control design. Simulation results show interesting performance of the proposed controller.
Modelling refrigerant distribution in microchannel evaporators
DEFF Research Database (Denmark)
Brix, Wiebke; Kærn, Martin Ryhl; Elmegaard, Brian
2009-01-01
The effects of refrigerant maldistribution in parallel evaporator channels on the heat exchanger performance are investigated numerically. For this purpose a 1D steady state model of refrigerant R134a evaporating in a microchannel tube is built and validated against other evaporator models. A study of the refrigerant distribution is carried out for two channels in parallel and for two different cases. In the first case maldistribution of the inlet quality into the channels is considered, and in the second case a non-uniform airflow on the secondary side is considered. In both cases the total mixed superheat out of the evaporator is kept constant. It is shown that the cooling capacity of the evaporator is reduced significantly, both in the case of unevenly distributed inlet quality and for the case of non-uniform airflow on the outside of the channels.
Equivalence Problem Solvability in Biparametric Gateway Program Models
Directory of Open Access Journals (Sweden)
A. E. Molchanov
2014-01-01
Algebraic program models with procedures are designed to analyze program semantic properties on their models called program schemes. Procedural liberisation problem and equivalence problem are stated for program models with procedures in which both defining parameters are chosen independently. Program models with procedures built over a given program model without procedures are investigated. Algorithms for both stated tasks are proposed for models where an additional restriction is applied: the intersection emptiness problem is solvable in the program model without procedures. Polynomial estimates for the complexity of the algorithms are shown. Some topics for further investigation are proposed.
The Effect of Problem Solving and Problem Posing Models and Innate Ability to Students Achievement
Directory of Open Access Journals (Sweden)
Ratna Kartika Irawati
2015-04-01
The Effect of Problem Solving and Problem Posing Models and Innate Ability on Student Learning Outcomes. Abstract: Understanding chemistry concepts, which are abstract in nature, requires higher order thinking skills. Yet, chemistry instruction has not fostered students' higher order thinking skills. Using the Problem Solving and Problem Posing learning models while taking students' innate ability into account is expected to resolve this issue. This study aims to determine which learning model is effective in improving the learning outcomes of students with different levels of innate ability. The study used a quasi-experimental design. The research data consist of a class test comprising 14 multiple choice questions and 5 essay questions, analyzed with a two-way ANOVA. The results showed that Problem Posing is more effective than Problem Solving in improving student outcomes; that after the Problem Solving and Problem Posing models were applied, students with a high level of innate ability achieved better learning outcomes than students with a low level of innate ability; and that both models are more suitable for students with a high level of innate ability. Key Words: problem solving, problem posing, higher order thinking skills, innate ability, learning outcomes
Texture modelling by discrete distribution mixtures
Czech Academy of Sciences Publication Activity Database
Grim, Jiří; Haindl, Michal
2003-01-01
Roč. 41, 3-4 (2003), s. 603-615 ISSN 0167-9473 R&D Projects: GA ČR GA102/00/0030; GA AV ČR KSK1019101 Institutional research plan: CEZ:AV0Z1075907 Keywords : discrete distribution mixtures * EM algorithm * texture modelling Subject RIV: JC - Computer Hardware ; Software Impact factor: 0.711, year: 2003
Nourifar, Raheleh; Mahdavi, Iraj; Mahdavi-Amiri, Nezam; Paydar, Mohammad Mahdi
2017-09-01
Decentralized supply chain management is found to be significantly relevant in today's competitive markets. Production and distribution planning is posed as an important optimization problem in supply chain networks. Here, we propose a multi-period decentralized supply chain network model under uncertainty. The imprecision related to uncertain parameters, such as the demand and price of the final product, is captured with stochastic and fuzzy numbers. We provide a mathematical formulation of the problem as a bi-level mixed integer linear programming model. Owing to the problem's complexity, a solution structure is developed that incorporates a novel heuristic algorithm based on the Kth-best algorithm, a fuzzy approach, and a chance constraint approach. Ultimately, a numerical example is constructed and worked through to demonstrate the applicability of the optimization model. A sensitivity analysis is also made.
Modeling the Structure and Complexity of Engineering Routine Design Problems
Jauregui Becker, Juan Manuel; Wits, Wessel Willems; van Houten, Frederikus J.A.M.
2011-01-01
This paper proposes a model to structure routine design problems as well as a model of its design complexity. The idea is that having a proper model of the structure of such problems enables understanding its complexity, and likewise, a proper understanding of its complexity enables the development
Ranking multivariate GARCH models by problem dimension
M. Caporin (Massimiliano); M.J. McAleer (Michael)
2010-01-01
textabstractIn the last 15 years, several Multivariate GARCH (MGARCH) models have appeared in the literature. The two most widely known and used are the Scalar BEKK model of Engle and Kroner (1995) and Ding and Engle (2001), and the DCC model of Engle (2002). Some recent research has begun to
International Nuclear Information System (INIS)
Guilani, Pedram Pourkarim; Azimi, Parham; Niaki, S.T.A.; Niaki, Seyed Armin Akhavan
2016-01-01
The redundancy allocation problem (RAP) is a useful method to enhance system reliability. In most works involving RAP, failure rates of the system components are assumed to follow either exponential or k-Erlang distributions. In real-world problems, however, many systems have components with increasing failure rates. This indicates that as time passes, the failure rates of the system components increase in comparison to their initial failure rates. In this paper, the redundancy allocation problem of a series–parallel system with components having an increasing failure rate based on the Weibull distribution is investigated. An optimization method via simulation is proposed for modeling and a genetic algorithm is developed to solve the problem. - Highlights: • The redundancy allocation problem of a series–parallel system is addressed. • Components possess an increasing failure rate based on the Weibull distribution. • An optimization method via simulation is proposed for modeling. • A genetic algorithm is developed to solve the problem.
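The reliability structure the record describes can be written down compactly: a series system of parallel subsystems, each component following a Weibull law with shape > 1 (increasing failure rate). The sketch below uses hypothetical parameters and a tiny exhaustive search in place of the paper's genetic algorithm, purely to illustrate the objective being optimized.

```python
import math
from itertools import product

def weibull_reliability(t, beta, eta):
    # Component survival probability; beta > 1 gives an increasing failure rate.
    return math.exp(-((t / eta) ** beta))

def system_reliability(t, allocation, params):
    # Series of parallel subsystems: every subsystem must survive, and a
    # subsystem with n redundant components fails only if all n fail.
    r = 1.0
    for n, (beta, eta) in zip(allocation, params):
        rc = weibull_reliability(t, beta, eta)
        r *= 1.0 - (1.0 - rc) ** n
    return r

# Hypothetical 3-subsystem instance (shape, scale) and unit costs.
params = [(1.5, 120.0), (2.0, 150.0), (1.2, 100.0)]
costs = [3.0, 4.0, 2.0]
budget = 25.0

# Exhaustive search over small allocations stands in for the GA.
best = max(
    (alloc for alloc in product(range(1, 5), repeat=3)
     if sum(n * c for n, c in zip(alloc, costs)) <= budget),
    key=lambda alloc: system_reliability(100.0, alloc, params),
)
print(best, round(system_reliability(100.0, best, params), 4))
```

A GA (or the paper's simulation-based evaluation) becomes necessary once the allocation space is too large to enumerate; the objective evaluated per candidate is the same.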
Modelling of skin exposure from distributed sources
DEFF Research Database (Denmark)
Fogh, C.L.; Andersson, Kasper Grann
2000-01-01
A simple model of indoor air pollution concentrations was used together with experimental results on deposition velocities to skin to calculate the skin dose from an outdoor plume of contaminants. The primary pathway was considered to be direct deposition to the skin from a homogeneously distributed air source. The model has been used to show that skin deposition was a significant dose contributor, for example when compared to the inhalation dose. (C) 2000 British Occupational Hygiene Society. Published by Elsevier Science Ltd. All rights reserved.
The predictive performance and stability of six species distribution models.
Duan, Ren-Yan; Kong, Xiao-Quan; Huang, Min-Yi; Fan, Wei-Yi; Wang, Zhi-Gao
2014-01-01
Predicting species' potential geographical ranges with species distribution models (SDMs) is central to understanding their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution areas using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared the predictive performance by testing the consistency between observations and simulated distributions, and assessed the stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials. These four SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of the SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.
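The two evaluation metrics the record relies on, AUC and Cohen's Kappa, are simple enough to compute directly. A minimal sketch with illustrative toy scores and a toy confusion matrix (none of these numbers come from the study):

```python
def auc(scores_pos, scores_neg):
    # AUC as the probability that a random presence site outscores
    # a random (pseudo-)absence site, ties counting one half.
    wins = sum((p > a) + 0.5 * (p == a) for p in scores_pos for a in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def kappa(tp, fp, fn, tn):
    # Cohen's kappa: observed agreement corrected for chance agreement.
    n = tp + fp + fn + tn
    po = (tp + tn) / n
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return (po - pe) / (1.0 - pe)

print(round(auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.2]), 3))  # → 0.889
print(round(kappa(40, 10, 5, 45), 3))                   # → 0.7
```

Repeating such computations over the study's 100 trials per model, and then taking the standard deviation and confidence interval of the resulting AUC/Kappa samples, yields exactly the stability measures described above.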
Nonparametric Estimation of Distributions in Random Effects Models
Hart, Jeffrey D.
2011-01-01
We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.
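The core idea of minimum distance estimation can be shown on a toy case. The sketch below is an assumption-heavy illustration, not the article's algorithm: it fits only a location parameter of a normal family by grid search over the Cramér-von Mises distance between the empirical CDF and candidate CDFs.

```python
from statistics import NormalDist
import random

def cvm_distance(data, cdf):
    # Cramér-von Mises criterion between the empirical CDF and a candidate CDF.
    xs = sorted(data)
    n = len(xs)
    return (sum((cdf(x) - (2 * i + 1) / (2 * n)) ** 2
                for i, x in enumerate(xs))
            + 1.0 / (12 * n))

def minimum_distance_mean(data, grid):
    # Pick the location whose N(mu, 1) CDF is closest to the empirical CDF.
    return min(grid, key=lambda mu: cvm_distance(data, NormalDist(mu, 1.0).cdf))

rng = random.Random(0)
data = [rng.gauss(2.0, 1.0) for _ in range(400)]
grid = [i / 100 for i in range(100, 301)]  # candidate means 1.00 .. 3.00
print(minimum_distance_mean(data, grid))   # lands near the true mean 2.0
```

In the article's setting the object being fitted is the common distribution of random effects across many small datasets rather than a single parametric location, but the distance-minimization principle is the same.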
A distributed snow-evolution modeling system (SnowModel)
Glen E. Liston; Kelly. Elder
2006-01-01
SnowModel is a spatially distributed snow-evolution modeling system designed for application in landscapes, climates, and conditions where snow occurs. It is an aggregation of four submodels: MicroMet defines meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowPack simulates snow depth and water-equivalent evolution, and SnowTran-3D...
Directory of Open Access Journals (Sweden)
Umit Sami Sakalli
2017-01-01
Production-Distribution Problem (PDP) in Supply Chain Management (SCM) is an important tactical decision. One of the challenges in this decision is the size and complexity of the supply chain system (SCS). On the other hand, a tactical operation is a mid-term plan for 6–12 months and therefore includes different types of uncertainties, which is the second challenge. In the literature, the uncertain parameters have been modeled as stochastic or fuzzy. However, there are few studies in the literature that handle stochastic and fuzzy uncertainties simultaneously in PDP. In this paper, modeling and solution approaches for a PDP that contains stochastic and fuzzy uncertainties simultaneously are investigated for an SCS that includes multiple suppliers, multiple products, multiple plants, multiple warehouses, multiple retailers, multiple transport paths, and multiple time periods, which, to the best of the author's knowledge, has not been handled in the literature. The PDP contains deterministic, fuzzy, fuzzy random, and random fuzzy parameters; to the best of the author's knowledge, there is no study in the literature that considers all of them simultaneously in PDP. An analytic solution approach has been developed using possibilistic programming and chance-constrained programming approaches. The proposed modeling and solution approaches are implemented in a numerical example. The solution has shown that the proposed approaches successfully handle the uncertainties and produce robust solutions for PDP.
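One building block of the chance-constrained approach mentioned above has a well-known closed form that is worth spelling out. For a normally distributed uncertain parameter, a probabilistic constraint reduces to a deterministic inequality via the normal quantile. The numbers below are purely illustrative, and this covers only the stochastic side, not the possibilistic (fuzzy) side of the paper.

```python
from statistics import NormalDist

def chance_constraint_rhs(mu, sigma, alpha):
    # Deterministic equivalent of the chance constraint
    #   P(demand <= capacity) >= alpha,   demand ~ N(mu, sigma),
    # which reduces to: capacity >= mu + sigma * z_alpha.
    return mu + sigma * NormalDist().inv_cdf(alpha)

# Hypothetical demand N(1000, 80) and a 95% service level:
print(round(chance_constraint_rhs(1000.0, 80.0, 0.95), 1))  # → 1131.6
```

Once every probabilistic constraint is rewritten this way, the planning model becomes an ordinary deterministic program that standard solvers can handle.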
Li, Zejing
2012-01-01
This dissertation is mainly devoted to the research of two problems - the continuous-time portfolio optimization in different Wishart models and the effects of discrete rebalancing on portfolio wealth distribution and optimal portfolio strategy.
Directory of Open Access Journals (Sweden)
Hao Zhang
2017-01-01
The problem of locating distribution centers for delivering fresh food as a part of electronic commerce is a strategic decision problem for enterprises. This paper establishes a model for locating distribution centers that considers the uncertainty of customer demands for fresh goods in terms of time sensitivity and freshness. Based on the methodology of robust optimization in dealing with uncertain problems, this paper optimizes the location model over discrete demand probabilistic scenarios. An improved fruit fly optimization algorithm is proposed to solve the distribution center location problem. An example is given to show that the proposed model and algorithm are robust and can effectively handle the complications caused by uncertain demand. The model proposed in this paper proves valuable both theoretically and practically in the selection of locations of distribution centers.
BAYESIAN MODELS FOR SPECIES DISTRIBUTION MODELLING WITH ONLY-PRESENCE RECORDS
Directory of Open Access Journals (Sweden)
Bartolo de Jesús Villar-Hernández
2015-08-01
One of the central issues in ecology is the study of the geographical distribution of species of flora and fauna through Species Distribution Models (SDM). Recently, scientific interest has focused on presence-only records. Two recent approaches have been proposed for this problem: a model based on the maximum likelihood method (Maxlike) and an inhomogeneous Poisson process model (IPP). In this paper we discuss two Bayesian approaches, called MaxBayes and IPPBayes, based on the Maxlike and IPP models, respectively. To illustrate these proposals, we implemented two study examples: (1) both models were implemented on a simulated dataset, and (2) we modeled the potential distribution of the genus Dalea in the Tehuacán-Cuicatlán biosphere reserve with both models, and the results were compared with those of Maxent. The results show that both models, MaxBayes and IPPBayes, are viable alternatives when species distributions are modeled with presence-only records. For the simulated dataset, MaxBayes achieved prevalence estimation even when the number of records was small. In the real dataset example, both models predict potential distributions similar to that of Maxent.
Ibrahima, Fayadhoi; Meyer, Daniel; Tchelepi, Hamdi
2016-04-01
Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are crucial to explore possible scenarios and assess risks in subsurface problems. In particular, nonlinear two-phase flows in porous media are essential, yet challenging, in reservoir simulation and hydrology. Adding highly heterogeneous and uncertain input, such as the permeability and porosity fields, transforms the estimation of the flow response into a tough stochastic problem for which computationally expensive Monte Carlo (MC) simulations remain the preferred option. We propose an alternative approach to evaluate the probability distribution of the (water) saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. We give a computationally efficient and numerically accurate method to estimate the one-point probability density (PDF) and cumulative distribution functions (CDF) of the (water) saturation. The distribution method draws inspiration from a Lagrangian approach of the stochastic transport problem and expresses the saturation PDF and CDF essentially in terms of a deterministic mapping and the distribution and statistics of scalar random fields. In a large class of applications these random fields can be estimated at low computational costs (few MC runs), thus making the distribution method attractive. Even though the method relies on a key assumption of fixed streamlines, we show that it performs well for high input variances, which is the case of interest. Once the saturation distribution is determined, any one-point statistics thereof can be obtained, especially the saturation average and standard deviation. Moreover, the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be efficiently derived from the distribution method. These statistics can then be used for risk assessment, as well as data assimilation and uncertainty reduction.
Directory of Open Access Journals (Sweden)
S. M. J. Mirzapour Al-e-Hashem
2011-01-01
A multi-objective two-stage stochastic programming model is proposed to deal with a multi-period multi-product multi-site production-distribution planning problem for a mid-term planning horizon. The presented model involves the majority of supply chain cost parameters, such as transportation cost, inventory holding cost, shortage cost, and production cost. Moreover, aspects such as lead time, outsourcing, employment, dismissal, worker productivity and training are considered. Due to the uncertain nature of the supply chain, it is assumed that cost parameters and demand fluctuations are random variables and follow a pre-defined probability distribution. To develop a robust stochastic model, an additional objective function is added to the traditional production-distribution planning problem. So, our multi-objective model includes (i) the minimization of the expected total cost of the supply chain, (ii) the minimization of the variance of the total cost of the supply chain, and (iii) the maximization of worker productivity through training courses that could be held during the planning horizon. The proposed model is solved by applying a hybrid algorithm that combines the Monte Carlo sampling method, a modified ε-constraint method, and the L-shaped method. Finally, a numerical example is solved to demonstrate the validity of the model as well as the efficiency of the hybrid algorithm.
Problem Solving, Modeling, and Local Conceptual Development.
Lesh, Richard; Harel, Guershon
2003-01-01
Describes similarities and differences between modeling cycles and stages of development. Includes examples of relevant constructs underlying children's developing ways of thinking about fractions, ratios, rates, proportions, or other mathematical ideas. Concludes that modeling cycles appear to be local or situated versions of the general stages…
Krohling, Renato A; Coelho, Leandro dos Santos
2006-12-01
In this correspondence, an approach based on coevolutionary particle swarm optimization to solve constrained optimization problems formulated as min-max problems is presented. In standard or canonical particle swarm optimization (PSO), a uniform probability distribution is used to generate random numbers for the accelerating coefficients of the local and global terms. We propose a Gaussian probability distribution to generate the accelerating coefficients of PSO. Two populations of PSO using Gaussian distribution are used on the optimization algorithm that is tested on a suite of well-known benchmark constrained optimization problems. Results have been compared with the canonical PSO (constriction factor) and with a coevolutionary genetic algorithm. Simulation results show the suitability of the proposed algorithm in terms of effectiveness and robustness.
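The variation this record describes, drawing PSO's acceleration coefficients from a Gaussian rather than a uniform distribution, is easy to show in isolation. The sketch below is a simplified illustration on an unconstrained toy function: it omits the coevolutionary min-max machinery and the constraint handling, and the velocity clamp is an assumption of this sketch, not taken from the paper.

```python
import random

def pso_gaussian(f, dim=2, n=20, iters=200, seed=1):
    # Canonical PSO, except the two acceleration coefficients are drawn
    # from |N(0, 1)| at every step instead of a uniform distribution.
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    w = 0.7  # inertia weight
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                c1, c2 = abs(rng.gauss(0.0, 1.0)), abs(rng.gauss(0.0, 1.0))
                v = (w * vel[i][d]
                     + c1 * (pbest[i][d] - pos[i][d])
                     + c2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-4.0, min(4.0, v))  # clamp (sketch's addition)
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso_gaussian(sphere)
print(round(sphere(best), 6))
```

In the paper's coevolutionary setup, two such swarms would run against each other, one over the decision variables and one over the Lagrange-multiplier-like variables of the min-max formulation.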
Reconsideration of mass-distribution models
Directory of Open Access Journals (Sweden)
Ninković S.
2014-01-01
The mass-distribution model proposed by Kuzmin and Veltmann (1973) is revisited. It is subdivided into two models which have a common case. Only one of them is the subject of the present study. The study focuses on the relation between the density ratio (the central density to that corresponding to the core radius) and the total-mass fraction within the core radius. The latter is an increasing function of the former, but it cannot exceed one quarter, which takes place when the density ratio tends to infinity. Therefore, the model is extended by representing the density as a sum of two components. The extension makes it possible to have a correspondence between an infinite density ratio and a 100% total-mass fraction. The number of parameters in the extended model exceeds that of the original model. Due to this, in the extended model, the correspondence between the density ratio and the total-mass fraction is no longer one-to-one; several values of the total-mass fraction can correspond to the same value of the density ratio. In this way, the extended model could explain the contingency of having two or more groups of real stellar systems (subsystems) in the diagram of total-mass fraction versus density ratio. [Projekat Ministarstva nauke Republike Srbije, br. 176011: Dynamics and Kinematics of Celestial Bodies and Systems]
New model for nucleon generalized parton distributions
Energy Technology Data Exchange (ETDEWEB)
Radyushkin, Anatoly V. [JLAB, Newport News, VA (United States)
2014-01-01
We describe a new type of model for nucleon generalized parton distributions (GPDs) H and E. It is based on the fact that nucleon GPDs require the use of two forms of double distribution (DD) representations. The outcome of the new treatment is that the usual DD+D-term construction should be amended by an extra term, ξE₊¹(x, ξ), which has the DD structure α/β e(β, α), with e(β, α) being the DD that generates the GPD E(x, ξ). We found that this function, unlike the D-term, has support in the whole region −1 ≤ x ≤ 1. Furthermore, it does not vanish at the border points |x| = ξ.
Ballistic model to estimate microsprinkler droplet distribution
Directory of Open Access Journals (Sweden)
Conceição Marco Antônio Fonseca
2003-01-01
Full Text Available Experimental determination of microsprinkler droplet sizes is difficult and time-consuming. This determination, however, can be achieved using ballistic models. The present study aimed to compare simulated and measured values of microsprinkler droplet diameters. Experimental measurements were made using the flour method, and simulations using the ballistic model adopted by the SIRIAS computational software. Drop diameters quantified in the experiment varied between 0.30 mm and 1.30 mm, while the simulated diameters varied between 0.28 mm and 1.06 mm. The greatest differences between simulated and measured values were registered at the greatest radial distance from the emitter. The model's performance was classified as excellent for simulating microsprinkler drop distribution.
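The ballistic approach can be illustrated with a minimal gravity-plus-drag trajectory integration, in which droplet diameter controls the drag-to-mass ratio and hence the landing distance. The drag law, coefficients, and launch parameters below are illustrative assumptions, not the SIRIAS formulation.

```python
import math

def droplet_range(diameter_mm, v0=15.0, angle_deg=25.0, dt=1e-4,
                  rho_air=1.2, rho_water=1000.0, cd=0.45, h0=0.3):
    """Horizontal range of a single droplet under quadratic air drag,
    integrated with explicit Euler until the droplet hits the ground."""
    d = diameter_mm / 1000.0
    mass = rho_water * math.pi * d**3 / 6.0
    area = math.pi * d**2 / 4.0
    k = 0.5 * rho_air * cd * area / mass    # drag factor per unit mass, ~1/d
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x, y, g = 0.0, h0, 9.81                 # launched from emitter height h0
    while y > 0.0:
        v = math.hypot(vx, vy)
        ax = -k * v * vx                    # drag opposes motion
        ay = -g - k * v * vy
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x
```

Because the drag-to-mass ratio scales as 1/diameter, larger droplets land farther from the emitter, which is why the largest discrepancies in such models tend to appear at the greatest radial distances.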
Robustness of a Distributed Knowledge Management Model
DEFF Research Database (Denmark)
Pedersen, Mogens Kuhn; Holm Larsen, Michael
2003-01-01
In globalizing competitive markets knowledge exchange between business organizations requires incentive mechanisms to ensure tactical purposes while strategic purposes are subject to joint organization and other forms of contractual obligations. Where property of knowledge (e.g. patents and copyrights......) and contract-based knowledge exchange do not obtain network effectiveness because of prohibitive transaction costs in reducing uncertainty, we suggest a robust model for peer-produced knowledge within a distributed setting. The peer-produced knowledge exchange model relies upon a double loop knowledge conversion...... with symmetric incentives in a network since the production of actor-specific knowledge makes any knowledge appropriation by use of property rights by the actors irrelevant. Without property rights in knowledge the actor network generates opportunity for incentive symmetry over a period of time. The model merges specific......
Simulation model of load balancing in distributed computing systems
Botygin, I. A.; Popov, V. N.; Frolov, S. G.
2017-02-01
The availability of high-performance computing, high-speed data transfer over networks and the widespread use of software for design and pre-production in mechanical engineering have led to a situation in which both large industrial enterprises and small engineering companies implement complex computer systems for the efficient solution of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key subjects of research, but the system-wide problems of efficiently distributing (balancing) the computational load and accommodating input, intermediate and output databases are no less important. The main tasks of such a balancing system are load and condition monitoring of compute nodes, and the selection of a node to which a user's request is routed in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system that dynamically changes its infrastructure is an important task.
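The monitor-and-select step described above can be sketched with a simple lowest load-to-capacity rule; the data layout and greedy policy are illustrative assumptions, not the algorithm of the simulation model.

```python
def pick_node(nodes):
    """Select the compute node with the lowest load-to-capacity ratio --
    a minimal stand-in for the monitoring/selection step."""
    return min(nodes, key=lambda n: nodes[n]["load"] / nodes[n]["capacity"])

def dispatch(nodes, tasks):
    """Greedy dynamic balancing: route each task (given as a load cost)
    to the currently least-loaded node and update that node's load."""
    placement = []
    for cost in tasks:
        n = pick_node(nodes)
        nodes[n]["load"] += cost
        placement.append(n)
    return placement
```

A real balancer would refresh `load` from node telemetry rather than accumulate it locally, but the selection logic has the same shape.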
A Probabilistic Model for Uncertain Problem Solving
National Research Council Canada - National Science Library
Farley, Arthur M
1981-01-01
... and provide pragmatic focusing. Search methods are generalized to produce tree-structured plans incorporating the use of such operators. Several application domains for the model also are discussed.
Solid mechanics theory, modeling, and problems
Bertram, Albrecht
2015-01-01
This textbook offers an introduction to modeling the mechanical behavior of solids within continuum mechanics and thermodynamics. To illustrate the fundamental principles, the book starts with an overview of the most important models in one dimension. Tensor calculus, which is called for in three-dimensional modeling, is concisely presented in the second part of the book. Once the reader is equipped with these essential mathematical tools, the third part of the book develops the foundations of continuum mechanics right from the beginning. Lastly, the book’s fourth part focuses on modeling the mechanics of materials and in particular elasticity, viscoelasticity and plasticity. Intended as an introductory textbook for students and for professionals interested in self-study, it also features numerous worked-out examples to aid in understanding.
Model Order Reduction: Application to Electromagnetic Problems
Paquay, Yannick
2017-01-01
With the increase in computational resources, numerical modeling has grown exponentially these last two decades. From structural analysis to combustion modeling and electromagnetics, discretization methods–in particular the finite element method–have had a tremendous impact. Their main advantage consists in a correct representation of dynamical and nonlinear behaviors by solving equations at local scale, however the spatial discretization inherent to such approaches is also its main drawbac...
AbdulJabbar, Mustafa Abdulmajeed
2017-05-11
Reduction of communication and efficient partitioning are key issues for achieving scalability in hierarchical N-Body algorithms like Fast Multipole Method (FMM). In the present work, we propose three independent strategies to improve partitioning and reduce communication. First, we show that the conventional wisdom of using space-filling curve partitioning may not work well for boundary integral problems, which constitute a significant portion of FMM’s application user base. We propose an alternative method that modifies orthogonal recursive bisection to relieve the cell-partition misalignment that has kept it from scaling previously. Secondly, we optimize the granularity of communication to find the optimal balance between a bulk-synchronous collective communication of the local essential tree and an RDMA per task per cell. Finally, we take the dynamic sparse data exchange proposed by Hoefler et al. [1] and extend it to a hierarchical sparse data exchange, which is demonstrated at scale to be faster than the MPI library’s MPI_Alltoallv that is commonly used.
Directory of Open Access Journals (Sweden)
Peiqing Li
2015-01-01
Full Text Available Fresh fruits and vegetables, perishable by nature, are subject to additional deterioration and bruising in the distribution process due to vibration and shock caused by road irregularities. A nonlinear mathematical model was developed that considered not only the vehicle routing problem with time windows but also the effect of road irregularities on the bruising of fresh fruits and vegetables. The main objective of this work was to obtain the optimal distribution routes for fresh fruits and vegetables considering different road classes with the least amount of logistics costs. An improved genetic algorithm was used to solve the problem. A fruit delivery route among the 13 cities in Jiangsu Province was used as a real analysis case. The simulation results showed that the vehicle routing problem with time windows, considering road irregularities and different classes of toll roads, can significantly influence total delivery costs compared with traditional VRP models. The comparison between four models to predict the total cost and actual total cost in distribution showed that the improved genetic algorithm is superior to the Group-based pattern, CW pattern, and O-X type cross pattern.
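The cost structure of such a time-window routing model can be sketched as a route evaluation function: travel distance, penalties for late arrival, and an extra per-edge term for bruising on rough road segments. The penalty weights, the bruising lookup, and all data shapes below are illustrative assumptions, not the paper's model.

```python
def route_cost(route, dist, windows, speed=1.0, bruise=None):
    """Cost of one delivery route starting and ending at depot index 0.
    `windows[c] = (earliest, latest)` arrival times; weights illustrative."""
    t, cost, pos = 0.0, 0.0, 0
    for city in route:
        leg = dist[pos][city]
        cost += leg
        t += leg / speed
        earliest, latest = windows[city]
        if t < earliest:
            t = earliest                      # wait for the window to open
        elif t > latest:
            cost += 10.0 * (t - latest)       # lateness penalty
        if bruise:
            cost += bruise.get((pos, city), 0.0)  # rough-road damage term
        pos = city
    return cost + dist[pos][0]                # close the tour at the depot
```

A genetic algorithm of the kind used in the paper would evolve permutations of customers and rank them with a fitness function of this general shape.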
Wright, Adam; Bates, David W
2010-01-01
BACKGROUND: Many natural phenomena demonstrate power-law distributions, where very common items predominate. Problems, medications and lab results represent some of the most important data elements in medicine, but their overall distribution has not been reported. OBJECTIVE: Our objective is to determine whether problems, medications and lab results demonstrate a power law distribution. METHODS: Retrospective review of electronic medical record data for 100,000 randomly selected patients seen at least twice in 2006 and 2007 at the Brigham and Women's Hospital in Boston and its affiliated medical practices. RESULTS: All three data types exhibited a power law distribution. The 12.5% most frequently used problems account for 80% of all patient problems, the top 11.8% of medications account for 80% of all medication orders and the top 4.5% of lab result types account for 80% of all lab results. CONCLUSION: These three data elements exhibited power law distributions with a small number of common items representing a substantial proportion of all orders and observations, which has implications for electronic health record design.
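The headline statistic reported above — the smallest share of distinct item types whose frequencies cover 80% of all occurrences — can be computed directly from a frequency table. The function and toy data are illustrative, not the study's analysis code.

```python
def top_fraction_covering(counts, share=0.8):
    """Fraction of distinct item types whose summed frequencies first
    reach `share` of all occurrences, taking items most-frequent-first."""
    freqs = sorted(counts.values(), reverse=True)
    total = sum(freqs)
    running, k = 0, 0
    for f in freqs:
        running += f
        k += 1
        if running >= share * total:
            break
    return k / len(freqs)
```

Applied to the study's data, this statistic would return roughly 0.125 for problems, 0.118 for medications, and 0.045 for lab result types.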
Discrete and Continuous Models for Partitioning Problems
Lellmann, Jan
2013-04-11
Recently, variational relaxation techniques for approximating solutions of partitioning problems on continuous image domains have received considerable attention, since they introduce significantly less artifacts than established graph cut-based techniques. This work is concerned with the sources of such artifacts. We discuss the importance of differentiating between artifacts caused by discretization and those caused by relaxation and provide supporting numerical examples. Moreover, we consider in depth the consequences of a recent theoretical result concerning the optimality of solutions obtained using a particular relaxation method. Since the employed regularizer is quite tight, the considered relaxation generally involves a large computational cost. We propose a method to significantly reduce these costs in a fully automatic way for a large class of metrics including tree metrics, thus generalizing a method recently proposed by Strekalovskiy and Cremers (IEEE conference on computer vision and pattern recognition, pp. 1905-1911, 2011). © 2013 Springer Science+Business Media New York.
Modeling and Solving the Train Pathing Problem
Directory of Open Access Journals (Sweden)
Chuen-Yih Chen
2009-04-01
Full Text Available In a railroad system, train pathing is concerned with the assignment of trains to links and tracks, and train timetabling allocates time slots to trains. In this paper, we present an optimization heuristic to solve the train pathing and timetabling problem. This heuristic allows the dwell time of trains in a station or link to be dependent on the assigned tracks. It also allows the minimum clearance time between the trains to depend on their relative status. The heuristic generates a number of alternative paths for each train service in the initialization phase. Then it uses a neighborhood search approach to find good feasible combinations of these paths. A linear program is developed to evaluate the quality of each combination that is encountered. Numerical examples are provided.
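The two-phase structure described above — generate alternative paths per train, then search neighborhoods of path combinations — can be sketched as follows. Here a fixed penalty stands in for the paper's linear-program evaluation of a combination, and the data layout, conflict test, and weights are all illustrative assumptions.

```python
import itertools

def combination_cost(choice, paths, conflict):
    """Cost of assigning each train its chosen path index, with a heavy
    penalty for every pair of mutually conflicting paths."""
    cost = sum(paths[t][i]["cost"] for t, i in choice.items())
    for a, b in itertools.combinations(list(choice), 2):
        if conflict(paths[a][choice[a]], paths[b][choice[b]]):
            cost += 100.0
    return cost

def neighborhood_search(paths, conflict):
    """Start from each train's first alternative path, then repeatedly
    re-route one train at a time while the combination cost improves."""
    choice = {t: 0 for t in paths}
    improved = True
    while improved:
        improved = False
        for t in paths:
            best_i = min(range(len(paths[t])),
                         key=lambda i: combination_cost({**choice, t: i},
                                                        paths, conflict))
            if best_i != choice[t]:
                choice[t] = best_i
                improved = True
    return choice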
Modeling Coordination Problems in a Music Ensemble
DEFF Research Database (Denmark)
Frimodt-Møller, Søren R.
2008-01-01
This paper considers in general terms, how musicians are able to coordinate through rational choices in a situation of (temporary) doubt in an ensemble performance. A fictitious example involving a 5-bar development in an unknown piece of music is analyzed in terms of epistemic logic, more...... specifically a multi-agent system, where it is shown that perfect coordination can only be certain to take place if the musicians have common knowledge of certain rules of the composition. We subsequently argue, however, that the musicians need not agree on the central features of the piece of music in order...... to coordinate. Such coordination can be described in terms of Michael Bacharach's theory of variable frames as an aid to solve game theoretic coordination problems....
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
Martinez-Luaces, Victor E.
2013-01-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…
Some Problems in Using Diffusion Models for New Products.
Bernhardt, Irwin; Mackenzie, Kenneth D.
This paper analyzes some of the problems of using diffusion models to formulate marketing strategies for new products. Though future work in this area appears justified, many unresolved problems limit its application. There is no theory for adoption and diffusion processes; such a theory is outlined in this paper. The present models are too…
Facilitating Change to a Problem-based Model
DEFF Research Database (Denmark)
Kolmos, Anette
2002-01-01
The paper presents the barriers which arise during the change process from a traditional educational system to a problem-based educational model.
Models and theory for precompound angular distributions
Energy Technology Data Exchange (ETDEWEB)
Blann, M.; Pohl, B.A.; Remington, B.A. (Lawrence Livermore National Lab., CA (USA)); Scobel, W.; Trabandt, M. (Hamburg Univ. (Germany, F.R.). 1. Inst. fuer Experimentalphysik); Byrd, R.C. (Los Alamos National Lab., NM (USA)); Foster, C.C. (Indiana Univ. Cyclotron Facility, Bloomington, IN (USA)); Bonetti, R.; Chiesa, C. (Milan Univ. (Italy). Ist. di Fisica Generale Applicata); Grimes, S.M. (Ohio Univ
1990-06-06
We compare angular distributions calculated by folding nucleon-nucleon scattering kernels, using the theory of Feshbach, Kerman and Koonin, and the systematics of Kalbach, with a wide range of data. The data range from (n,xn) at 14 MeV incident energy to (p,xn) at 160 MeV incident energy. The FKK theory works well with one adjustable parameter, the depth of the nucleon-nucleon interaction potential. The systematics work well when normalized to the hybrid model single differential cross section prediction. The nucleon-nucleon scattering approach seems inadequate. 9 refs., 10 figs.
Ghosh, Diptesh; Chakrabarti, Anindya S.
2017-10-01
In this paper, we study a large-scale distributed coordination problem and propose efficient adaptive strategies to solve the problem. The basic problem is to allocate finite number of resources to individual agents in the absence of a central planner such that there is as little congestion as possible and the fraction of unutilized resources is reduced as far as possible. In the absence of a central planner and global information, agents can employ adaptive strategies that uses only a finite knowledge about the competitors. In this paper, we show that a combination of finite information sets and reinforcement learning can increase the utilization fraction of resources substantially.
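An epsilon-greedy reinforcement scheme in this spirit can be sketched as below: each agent scores resources, mostly revisits its best-scoring one, is rewarded when it has a resource to itself and penalized under congestion. The scoring rule, reward values, and parameters are illustrative assumptions, not the authors' strategies.

```python
import random

def allocate(n_agents, n_resources, rounds, epsilon=0.05, seed=1):
    """Repeated allocation game with epsilon-greedy reinforcement.
    Returns the mean utilization fraction over the final 50 rounds."""
    rng = random.Random(seed)
    # tiny random initial scores break the symmetry between resources
    scores = [[rng.uniform(0.0, 0.01) for _ in range(n_resources)]
              for _ in range(n_agents)]
    history = []
    for _ in range(rounds):
        picks = []
        for a in range(n_agents):
            if rng.random() < epsilon:
                picks.append(rng.randrange(n_resources))      # explore
            else:                                             # exploit best-scoring
                picks.append(max(range(n_resources),
                                 key=scores[a].__getitem__))
        crowd = [picks.count(r) for r in range(n_resources)]
        for a, r in enumerate(picks):
            # reward being alone on a resource, penalize congestion
            scores[a][r] += 1.0 if crowd[r] == 1 else -1.0 / crowd[r]
        history.append(sum(1 for c in crowd if c > 0) / n_resources)
    return sum(history[-50:]) / len(history[-50:])
```

Agents that find an uncontested resource keep returning to it, so utilization climbs well above the purely random baseline without any central planner.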
Research on consumable distribution mode of shipbuilder’s shop based on vehicle routing problem
Directory of Open Access Journals (Sweden)
Xiang Su
2017-02-01
Full Text Available A distribution vehicle optimization model is established to address the long requisition periods and high shop costs caused by the existing consumable requisition mode in shipbuilders' shops. The shortest traveling distance of the distribution vehicles is calculated with a genetic algorithm (GA). A shop consumable distribution mode for shipbuilders is explored to help them effectively save production logistics costs, enhance their internal material management level and provide a reference for changing traditional practices and realizing just-in-time (JIT) production.
Model Predictive Control for Distributed Microgrid Battery Energy Storage Systems
DEFF Research Database (Denmark)
Morstyn, Thomas; Hredzak, Branislav; Aguilera, Ricardo P.
2018-01-01
-current model and linearized power flow approximations. This allows the optimal power flows to be solved as a convex optimization problem, for which fast and robust solvers exist. The proposed method does not assume that real and reactive power flows are decoupled, allowing line losses, voltage constraints...... feeder, with distributed battery ES systems and intermittent photovoltaic generation. It is shown that the proposed control strategy approaches the performance of a strategy based on nonconvex optimization, while reducing the required computation time by a factor of 1000, making it suitable for a real...
How can model comparison help improving species distribution models?
Directory of Open Access Journals (Sweden)
Emmanuel Stephan Gritti
Full Text Available Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based models could however be improved by integrating a more realistic representation of the species' resistance to water stress, for instance, advocating for pursuing efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes.
Numerical modeling of some engineering heat transfer problems
Energy Technology Data Exchange (ETDEWEB)
Eriksson, Daniel
1998-04-01
Engineering heat transfer problems are very often of a complex nature and most often no analytical solutions exist. One way to create solutions to such problems is to apply numerical methods. This study concerns heat transfer problems with coupled conduction, convection and thermal radiation. Five important but different engineering problems are considered. (1) The transient temperature distribution in a rotating cylinder which is exposed to a time-varying incident heat flux, e.g. a nuclear burst, is determined. The cylinder is cooled by mixed convection and thermal radiation. The effects of the leading parameters, such as rotation speed, the cooling parameters and the physical properties of the shell, are studied. (2) The cooling of a roll system which is transporting/casting a thin hot plastic film. The leading roll is heated by the hot film, cooled at the interior by forced convection and on the outside by forced convection, thermal radiation and contact with a support roll. The influence of the cooling parameters and the rotation are studied. (3) The heat and mass diffusion in pre-insulated district heating/cooling pipes. The task is to determine the effects of the gas mass transport through the casing of the pipes on the thermal behaviour, and the effects of condensed water due to the mass diffusion of water vapour. The importance of the density of the casing, the wall thickness of the casing, the thickness of the insulation and the surrounding temperature is revealed. (4) The development of a cooling system for an electrical unit in which a time-dependent heat is generated due to the Joule effect. (5) The heat transfer from a rectangular fin in a confined space. The fin is cooled by turbulent forced convection. The turbulence model applied is a low-Reynolds k-ε model. Predicted results are compared with experimental ones, and a correlation for the Nusselt number is proposed. The effects of thermal radiation for non-participating as well as participating media are examined.
Our evolving conceptual model of the coastal eutrophication problem
Cloern, James E.
2001-01-01
A primary focus of coastal science during the past 3 decades has been the question: How does anthropogenic nutrient enrichment cause change in the structure or function of nearshore coastal ecosystems? This theme of environmental science is recent, so our conceptual model of the coastal eutrophication problem continues to change rapidly. In this review, I suggest that the early (Phase I) conceptual model was strongly influenced by limnologists, who began intense study of lake eutrophication by the 1960s. The Phase I model emphasized changing nutrient input as a signal, and responses to that signal as increased phytoplankton biomass and primary production, decomposition of phytoplankton-derived organic matter, and enhanced depletion of oxygen from bottom waters. Coastal research in recent decades has identified key differences in the responses of lakes and coastal-estuarine ecosystems to nutrient enrichment. The contemporary (Phase II) conceptual model reflects those differences and includes explicit recognition of (1) system-specific attributes that act as a filter to modulate the responses to enrichment (leading to large differences among estuarine-coastal systems in their sensitivity to nutrient enrichment); and (2) a complex suite of direct and indirect responses including linked changes in: water transparency, distribution of vascular plants and biomass of macroalgae, sediment biogeochemistry and nutrient cycling, nutrient ratios and their regulation of phytoplankton community composition, frequency of toxic/harmful algal blooms, habitat quality for metazoans, reproduction/growth/survival of pelagic and benthic invertebrates, and subtle changes such as shifts in the seasonality of ecosystem functions. Each aspect of the Phase II model is illustrated here with examples from coastal ecosystems around the world. In the last section of this review I present one vision of the next (Phase III) stage in the evolution of our conceptual model, organized around 5
An Optimal Design Model for New Water Distribution Networks in ...
African Journals Online (AJOL)
This paper is concerned with the problem of optimizing the distribution of water in Kigali City at a minimum cost. The mathematical formulation is a Linear Programming Problem (LPP) which involves the design of a new network of water distribution considering the cost in the form of unit price of pipes, the hydraulic gradient ...
Belkina, T. A.; Konyukhova, N. B.; Kurochkin, S. V.
2016-01-01
Previous and new results are used to compare two mathematical insurance models with identical insurance company strategies in a financial market, namely, when the entire current surplus or its constant fraction is invested in risky assets (stocks), while the rest of the surplus is invested in a risk-free asset (bank account). Model I is the classical Cramér-Lundberg risk model with an exponential claim size distribution. Model II is a modification of the classical risk model (risk process with stochastic premiums) with exponential distributions of claim and premium sizes. For the survival probability of an insurance company over infinite time (as a function of its initial surplus), there arise singular problems for second-order linear integrodifferential equations (IDEs) defined on a semi-infinite interval and having nonintegrable singularities at zero: model I leads to a singular constrained initial value problem for an IDE with a Volterra integral operator, while model II leads to a more complicated nonlocal constrained problem for an IDE with a non-Volterra integral operator. A brief overview of previous results for these two problems depending on several positive parameters is given, and new results are presented. Additional results are concerned with the formulation, analysis, and numerical study of "degenerate" problems for both models, i.e., problems in which some of the IDE parameters vanish; moreover, passages to the limit with respect to the parameters through which we proceed from the original problems to the degenerate ones are singular for small and/or large argument values. Such problems are of mathematical and practical interest in themselves. Along with insurance models without investment, they describe the case of surplus completely invested in risk-free assets, as well as some noninsurance models of surplus dynamics, for example, charity-type models.
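For model I without investment, the ruin (and hence survival) probability has a classical closed form when claim sizes are exponential; the helper below evaluates that standard Cramér-Lundberg formula. The parameter names are mine, and this is the textbook baseline rather than the singular IDE problems studied in the paper.

```python
import math

def ruin_probability(u, lam, mu, c):
    """Closed-form ruin probability for the classical Cramér-Lundberg
    model with Poisson(lam) claim arrivals, exponential claims of mean
    mu, and premium rate c:  psi(u) = (lam*mu/c) * exp(-(1/mu - lam/c)*u),
    valid under the net profit condition c > lam*mu.  The survival
    probability is phi(u) = 1 - psi(u)."""
    if c <= lam * mu:
        return 1.0              # net profit condition violated: ruin is certain
    r = 1.0 / mu - lam / c      # adjustment (Lundberg) coefficient
    return (lam * mu / c) * math.exp(-r * u)
```

This closed form is what degenerate or limiting cases of the IDE problems can be checked against numerically.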
Effectiveness of discovery learning model on mathematical problem solving
Herdiana, Yunita; Wahyudin, Sispiyati, Ririn
2017-08-01
This research is aimed at describing the effectiveness of the discovery learning model on mathematical problem solving. It investigates the students' problem-solving competency before and after learning with the discovery learning model. The population used in this research was grade VII students in one junior high school in West Bandung Regency. From nine classes, class VII B was randomly selected as the experiment class and class VII C as the control class, each consisting of 35 students. The method in this research was a quasi-experiment. The instruments in this research were a pre-test, worksheets and a post-test on mathematical problem solving. Based on the research, it can be concluded that the problem-solving competency of students who receive the discovery learning model reaches the 80% level, falling in the medium category, which shows that the discovery learning model is effective in improving mathematical problem solving.
Medical problem and document model for natural language understanding.
Meystre, Stephanie; Haug, Peter J
2003-01-01
We are developing tools to help maintain a complete, accurate and timely problem list within a general-purpose Electronic Medical Record system. As a part of this project, we have designed a system to automatically retrieve medical problems from free-text documents. Here we describe an information model based on XML (eXtensible Markup Language) and compliant with the CDA (Clinical Document Architecture). This model is used to ease the exchange of clinical data between the Natural Language Understanding application that retrieves potential problems from narrative documents, and the problem list management application.
Cost Optimisation in Freight Distribution with Cross-Docking: N-Echelon Location Routing Problem
Directory of Open Access Journals (Sweden)
Jesus Gonzalez-Feliu
2012-03-01
Full Text Available Freight transportation constitutes one of the main activities that influence the economy and society, as it assures a vital link between suppliers and customers and represents a major source of employment. Multi-echelon distribution is one of the most common strategies adopted by transportation companies with the aim of reducing costs. Although vehicle routing problems are very common in operational research, they essentially relate to single-echelon cases. This paper presents the main concepts of multi-echelon distribution with cross-docks and a unified notation for the N-echelon location routing problem. A literature review is also presented, in order to list the main problems and methods that can be helpful for scientists and transportation practitioners.
Teaching model of problem solving Programming Fundamentals
Directory of Open Access Journals (Sweden)
Iván Darwin Tutillo-Arcentales
2016-10-01
Full Text Available The formation process has been studied by several authors in recent years, although seldom with a focus on technology careers, which require a pedagogical and didactic point of view that promotes behavioral changes in teachers, with an impact on the quality of graduates. The purpose of this paper is to assess the pedagogical foundations of the training in the Systems Analysis career, in order to promote qualitative and quantitative improvements in student learning. A questionnaire applied to students and teachers revealed difficulties with the contents related to algorithmic procedures, which constitute a necessary part of the students' training and to which other first-level syllabi contribute. It is therefore necessary to build a theoretical model which expresses the new relationships established, from the psychological and didactic points of view, in order to solve such situations in the teaching of programming.
Regularized multivariate regression models with skew-t error distributions
Chen, Lianfu
2014-06-01
We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.
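The L1 step at the heart of such penalized likelihood procedures is the soft-thresholding operator. Below is a minimal single-response lasso solved by coordinate descent — a deliberately simplified stand-in for the multivariate skew-t setting, with illustrative data shapes rather than the paper's ECM algorithm.

```python
def soft_threshold(z, gamma):
    """Soft-thresholding operator S(z, gamma), the building block of
    L1-penalized (lasso-type) coordinate updates."""
    if z > gamma:
        return z - gamma
    if z < -gamma:
        return z + gamma
    return 0.0

def lasso_coordinate_descent(X, y, lam, iters=20):
    """Plain lasso for a single response via cyclic coordinate descent.
    X is a list of rows; beta_j = S(rho_j, lam) / norm_j at each sweep."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            norm = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam) / norm
    return beta
```

In the paper's setting the same thresholding idea is applied to the entries of the regression coefficient and inverse scale matrices inside the ECM M-step, after the skew-t likelihood has been reduced to a penalized normal one.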
Storage Solutions for Power Quality Problems in Cyprus Electricity Distribution Network
Directory of Open Access Journals (Sweden)
Andreas Poullikkas
2014-01-01
Full Text Available In this work, the effects of introducing energy storage systems on the stability of the distribution network of Cyprus are predicted and compared, in terms of cost, with a traditional solution. In particular, for solving possible overvoltage problems, several scenarios of storage unit installation are used and compared with the alternative solution of an extra cable connection between the node with the lowest voltage and the node with the highest voltage of the distribution network. For the comparison, a case study of a typical LV distribution feeder in the power system of Cyprus is used. The results indicate that the performance indicator of each solution depends on the type, the size and the position of installation of the storage unit. Also, as more storage units are installed, the performance indicator improves and investment in storage units becomes more attractive as a means of solving power quality problems in the distribution network. In the case where the technical requirements on voltage limits set by the distribution regulations are satisfied with one storage unit, the installation of an additional storage unit will only increase the final cost. The best solution, however, still remains the alternative of an extra cable connection between the node with the lowest voltage and the node with the highest voltage of the distribution network, due to the lower investment costs compared with those of the storage units.
Benchmark problems for numerical implementations of phase field models
International Nuclear Information System (INIS)
Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.
2016-01-01
Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
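As an illustration of the physics the spinodal decomposition benchmark exercises, here is a minimal 1-D Cahn-Hilliard step with explicit Euler time stepping. This is not one of the CHiMaD/NIST benchmark specifications; grid size, time step, and gradient-energy coefficient are invented for the sketch.

```python
import numpy as np

def laplacian(c, dx):
    # periodic 1-D finite-difference Laplacian
    return (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2

def cahn_hilliard_step(c, dt=1e-4, dx=1.0, kappa=1.0):
    # explicit Euler update of dc/dt = Lap(c**3 - c - kappa * Lap(c))
    mu = c**3 - c - kappa * laplacian(c, dx)  # chemical potential
    return c + dt * laplacian(mu, dx)

rng = np.random.default_rng(0)
c = 0.5 + 0.01 * rng.standard_normal(128)  # near-uniform mixture with noise
mass0 = c.sum()
for _ in range(100):
    c = cahn_hilliard_step(c)
# the Cahn-Hilliard equation is conservative: total solute is unchanged
assert abs(c.sum() - mass0) < 1e-8
```

A quantitative benchmark would additionally compare free-energy decay and microstructure against a reference solution; the conservation check above is only the cheapest sanity test an implementation should pass.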
Reserve selection using nonlinear species distribution models.
Moilanen, Atte
2005-06-01
Reserve design is concerned with the optimal selection of sites for new conservation areas. Spatial reserve design explicitly considers the spatial pattern of the proposed reserve network and the effects of that pattern on reserve cost and/or its ability to maintain species. The vast majority of reserve selection formulations have assumed a linear problem structure, which effectively means that the biological value of a potential reserve site does not depend on the pattern of selected cells. However, spatial population dynamics and autocorrelation cause the biological values of neighboring sites to be interdependent. Habitat degradation may have indirect negative effects on biodiversity in areas neighboring the degraded site as a result of, for example, negative edge effects or lower permeability for animal movement. In this study, I present a formulation and a spatial optimization algorithm for nonlinear reserve selection problems in grid-based landscapes that accounts for interdependent site values. The method is demonstrated using habitat maps and nonlinear habitat models for threatened birds in the Netherlands, and it is shown that near-optimal solutions are found for regions consisting of up to hundreds of thousands of grid cells, a landscape size much larger than those commonly attempted even with linear reserve selection formulations.
Modeling Complex Chemical Systems: Problems and Solutions
van Dijk, Jan
2016-09-01
Non-equilibrium plasmas in complex gas mixtures are at the heart of numerous contemporary technologies. They typically contain dozens to hundreds of species, involved in hundreds to thousands of reactions. Chemists and physicists have always been interested in what are now called chemical reduction techniques (CRTs). The idea of such CRTs is that they reduce the number of species that need to be considered explicitly without compromising the validity of the model. This is usually achieved on the basis of an analysis of the reaction time scales of the system under study, which identifies species that are in partial equilibrium after a given time span. The first such CRT that was widely used in plasma physics was developed in the 1960s and resulted in the concept of effective ionization and recombination rates. It was later generalized to systems in which multiple levels are affected by transport. In recent years there has been a renewed interest in tools for chemical reduction and reaction pathway analysis. An example of the latter is the PumpKin tool. Another trend is that techniques previously developed in other fields of science are adapted to handle the plasma state of matter. Examples are the Intrinsic Low-Dimensional Manifold (ILDM) method and its derivatives, which originate from combustion engineering, and the general-purpose Principal Component Analysis (PCA) technique. In this contribution we will provide an overview of the most common reduction techniques, then critically assess the pros and cons of the methods that have gained the most popularity in recent years. Examples will be provided for plasmas in argon and carbon dioxide.
The predictive performance and stability of six species distribution models.
Directory of Open Access Journals (Sweden)
Ren-Yan Duan
Full Text Available Predicting species' potential geographical range by species distribution models (SDMs) is central to understanding their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution area using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared the predictive performance by testing the consistency between observations and simulated distributions and assessed the stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p<0.05), while the associated standard deviations and coefficients of variation were larger for BIOCLIM and DOMAIN trials (p<0.05), and the 99% confidence intervals for AUC and Kappa values were narrower for MAHAL, RF, MAXENT, and SVM. Compared to BIOCLIM and DOMAIN, the other SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.
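The Kappa and AUC statistics used above to compare SDMs can be computed from first principles. A minimal sketch for binary presence/absence evaluation; the toy labels and scores below are invented for illustration.

```python
import numpy as np

def cohen_kappa(y_true, y_pred):
    # chance-corrected agreement between observed and predicted maps
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    po = np.mean(y_true == y_pred)                       # observed agreement
    pe = (np.mean(y_true) * np.mean(y_pred)
          + np.mean(1 - y_true) * np.mean(1 - y_pred))   # chance agreement
    return (po - pe) / (1 - pe)

def auc(y_true, scores):
    # rank-based AUC: probability a presence outscores an absence (ties = 0.5)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    return ((pos[:, None] > neg[None, :]).mean()
            + 0.5 * (pos[:, None] == neg[None, :]).mean())

y = np.array([1, 1, 1, 0, 0, 0])
s = np.array([0.9, 0.8, 0.4, 0.5, 0.2, 0.1])
print(round(auc(y, s), 3))                                # 0.889 (8 of 9 pairs)
print(round(cohen_kappa(y, (s > 0.45).astype(int)), 3))   # 0.333
```

Thresholding the continuous scores (here at an arbitrary 0.45) is exactly the presence/absence transformation whose effect on Kappa the study's stability analysis measures.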
Some model problems of the dynamics of a jumping vehicle
Beletskii, V. V.; Dolganov, A. V.; Salimova, O. P.
1992-06-01
The paper considers two model problems of a vehicle moving over the surface of a planetoid by jumping. The vehicle is represented by a point mass subject to attraction from gravitational fields of different types. A characteristic feature of this problem is the effect on the vehicle of impacts with the planetoid surface, resulting in a complex, 'multipetaled' trajectory.
Models for the discrete berth allocation problem: A computational comparison
DEFF Research Database (Denmark)
Buhrkal, Katja Frederik; Zuglian, Sara; Røpke, Stefan
2011-01-01
In this paper we consider the problem of allocating arriving ships to discrete berth locations at container terminals. This problem is recognized as one of the most important processes for any container terminal. We review and describe three main models of the discrete dynamic berth allocation...
Modelling and solving an acyclic multi-period timetabling problem
Cangalovic, Mirjana; Schreuder, J.A.M.
1992-01-01
In this article a case of the class-teacher timetabling problem is described. This case takes into consideration a partial ordering between the topics of the curriculum and special requirements with respect to their daily lectures. The problem is modelled as a discrete lexicographic optimization...
Problem Resolution through Electronic Mail: A Five-Step Model.
Grandgenett, Neal; Grandgenett, Don
2001-01-01
Discusses the use of electronic mail within the general resolution and management of administrative problems and emphasizes the need for careful attention to problem definition and clarity of language. Presents a research-based five-step model for the effective use of electronic mail based on experiences at the University of Nebraska at Omaha.…
International Nuclear Information System (INIS)
Grscic, Z.
1989-01-01
Models for solving transport and dispersion problems of radioactive pollutants through the atmosphere are briefly shown. These models are also the basis for solving some special problems, such as: estimating the effective and physical heights of radioactive sources, computation of the radioactive concentration distribution from multiple sources, etc. (author)
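A common concrete instance of such dispersion models is the Gaussian plume, in which the effective source height H enters directly. A minimal sketch; the dispersion coefficients and all parameter values below are invented for illustration, and the entry does not specify which model family it uses.

```python
import math

def plume_concentration(Q, u, H, y, z, sigma_y, sigma_z):
    """Gaussian plume with ground reflection.
    Q: emission rate, u: wind speed, H: effective source height,
    (y, z): crosswind and vertical coordinates, sigma_*: dispersion widths."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))  # mirror-image term
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

c0 = plume_concentration(Q=1.0, u=5.0, H=50.0, y=0.0, z=50.0, sigma_y=30.0, sigma_z=20.0)
c1 = plume_concentration(Q=1.0, u=5.0, H=50.0, y=60.0, z=50.0, sigma_y=30.0, sigma_z=20.0)
assert c1 < c0  # concentration decays away from the plume centreline
```

Concentrations from multiple sources, as mentioned in the abstract, superpose linearly in this model: one evaluates the formula per source and sums.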
APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING MODELS
Directory of Open Access Journals (Sweden)
T. I. Aliev
2013-03-01
Full Text Available For probability distributions with a coefficient of variation not equal to unity, mathematical dependences for approximating distributions on the basis of the first two moments are derived by making use of multi-exponential distributions. It is proposed to approximate distributions with a coefficient of variation less than unity by using the hypoexponential distribution, which makes it possible to generate random variables with a coefficient of variation taking any value in the range (0; 1), as opposed to the Erlang distribution, which has only discrete values of the coefficient of variation.
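The moment relations behind the hypoexponential choice can be checked directly: a sum of independent exponential stages always has a coefficient of variation below unity, and varying the number of stages and their rates sweeps it through (0; 1). A sketch; the rate values are arbitrary.

```python
import math

def hypoexp_moments(rates):
    """First two moments of a hypoexponential variable, i.e. the sum of
    independent exponential stages with the given (distinct) rates."""
    mean = sum(1.0 / r for r in rates)          # E[X] = sum of stage means
    var = sum(1.0 / r**2 for r in rates)        # Var[X] = sum of stage variances
    cv = math.sqrt(var) / mean                  # coefficient of variation
    return mean, cv

# the CV is strictly between 0 and 1 for any choice of two or more stages
for rates in ([1.0, 2.0], [0.5, 3.0, 7.0], [1.0, 1.01]):
    _, cv = hypoexp_moments(rates)
    assert 0.0 < cv < 1.0
```

Two-moment fitting then amounts to inverting these formulas: given a target mean and CV in (0; 1), choose stage rates so that the expressions above match.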
Centrifuge Modelling of Two Civil-Environmental Problems
National Research Council Canada - National Science Library
Goodings, Deborah
2001-01-01
Research Problem 1: Frost heave and thaw-induced settlement in silt and silty clay developing over a year have been modelled correctly using a geotechnical centrifuge, with tests requiring less than a day...
Spreadsheet-Enhanced Problem Solving in Context as Modeling
Directory of Open Access Journals (Sweden)
Sergei Abramovich
2003-07-01
development through situated mathematical problem solving. Modeling activities described in this paper support the epistemological position regarding the interplay that exists between the development of mathematical concepts and available methods of calculation. The spreadsheet used is Microsoft Excel 2001
Deng, Zhenhua; Liang, Shu; Hong, Yiguang
2017-10-17
In this paper, a distributed resource allocation problem with nonsmooth local cost functions is considered, where the interaction among agents is depicted by strongly connected and weight-balanced digraphs. Here the decision variable of each agent is within a local feasibility constraint described as a convex set, and all the decision variables have to satisfy a network resource constraint, which is the sum of available resources. To solve the problem, a distributed continuous-time algorithm is developed by virtue of differentiated projection operations and differential inclusions, and its convergence to the optimal solution is proved via the set-valued LaSalle invariance principle. Furthermore, the exponential convergence of the proposed algorithm can be achieved when the local cost functions are differentiable with Lipschitz gradients and there are no local feasibility constraints. Finally, numerical examples are given to verify the effectiveness of the proposed algorithms.
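A much-simplified sketch of the resource-constrained optimization above: quadratic differentiable costs, no local feasibility sets, and a centralised discrete-time projected gradient in place of the paper's distributed continuous-time dynamics. The cost coefficients and resource level are invented.

```python
import numpy as np

def allocate(a, R, eta=0.05, iters=2000):
    """Minimise sum_i a_i * x_i**2 subject to sum(x) == R
    by projected gradient descent onto the resource hyperplane."""
    a = np.asarray(a, dtype=float)
    x = np.full(len(a), R / len(a))        # feasible starting point
    for _ in range(iters):
        x = x - eta * 2.0 * a * x          # gradient step on the quadratic costs
        x = x + (R - x.sum()) / len(a)     # project back onto sum(x) = R
    return x

a = np.array([1.0, 2.0, 4.0])              # hypothetical local cost coefficients
x = allocate(a, R=7.0)
# at the optimum the marginal costs 2*a_i*x_i are equal, so x_i ~ 1/a_i
expected = (1 / a) / (1 / a).sum() * 7.0
assert np.allclose(x, expected, atol=1e-4)
```

The equal-marginal-cost fixed point is exactly the optimality condition the paper's set-valued analysis establishes in the nonsmooth, constrained setting.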
Directory of Open Access Journals (Sweden)
Genoveva Rodríguez-Castañeda
Full Text Available Species distribution modeling (SDM) is an increasingly important tool to predict the geographic distribution of species. Even though many problems associated with this method have been highlighted and solutions have been proposed, little has been done to increase comparability among studies. We reviewed recent publications applying SDMs and found that seventy-nine percent failed to report methods that ensure comparability among studies, such as disclosing the maximum probability range produced by the models and reporting on the number of species occurrences used. We modeled six species of Falco from northern Europe and demonstrate that model results are altered by (1) spatial bias in species' occurrence data, (2) differences in the geographic extent of the environmental data, and (3) the effects of transformation of model output to presence/absence data when applying thresholds. Depending on the modeling decisions, forecasts of the future geographic distribution of Falco ranged from range contraction in 80% of the species to no net loss in any species, with the best model predicting no net loss of habitat in Northern Europe. The fact that predictions of range changes in response to climate change in published studies may be influenced by decisions in the modeling process seriously hampers the possibility of making sound management recommendations. Thus, each of the decisions made in generating SDMs should be reported and evaluated to ensure conclusions and policies are based on the biology and ecology of the species being modeled.
Menshikh, V.; Samorokovskiy, A.; Avsentev, O.
2018-03-01
A mathematical model for optimizing the allocation of resources to reduce the time for management decisions is proposed, together with algorithms for solving the general resource allocation problem. The optimization problem of choosing resources in organizational systems so as to reduce the total execution time of a job is solved. This problem is a complex three-level combinatorial problem, the solution of which requires solving several specific subproblems: estimating the duration of each action, depending on the number of performers within the group that performs this action; estimating the total execution time of all actions depending on the quantitative composition of the groups of performers; and finding a distribution of the existing resource of performers among groups that minimizes the total execution time of all actions. In addition, algorithms for solving the general resource allocation problem are proposed.
Biosocial models of adolescent problem behavior: extension to panel design.
Drigotas, S M; Udry, J R
1993-01-01
We extended the biosocial model of problem behavior tested by Udry (1990) to a panel design, following a sample of over one hundred boys in adolescence for three years. We found the expected results for sociological variables, but weaker effects for testosterone than Udry found on cross-sectional data. Using panel models with lagged hormone effects, we identified relationships between Time-1 testosterone and problem behavior one year or more later. The relationship between testosterone and problem behavior was not present for subsequent measures of testosterone, either in cross-section or with time-lagged models. Therefore we cannot interpret the results as showing testosterone effects on problem behavior. Rather it appears that testosterone level in early adolescence is a marker for a more general growth trajectory of early development.
Towards an Information Model of Consistency Maintenance in Distributed Interactive Applications
Directory of Open Access Journals (Sweden)
Xin Zhang
2008-01-01
Full Text Available A novel framework to model and explore predictive contract mechanisms in distributed interactive applications (DIAs) using information theory is proposed. In our model, the entity state update scheme is modelled as an information generation, encoding, and reconstruction process. Such a perspective facilitates a quantitative measurement of state fidelity loss as a result of the distribution protocol. Results from an experimental study on a first-person shooter game are used to illustrate the utility of this measurement process. We contend that our proposed model is a starting point to reframe and analyse consistency maintenance in DIAs as a problem in distributed interactive media compression.
Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution
Energy Technology Data Exchange (ETDEWEB)
Dall'Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Zhao, Changhong [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Zamzam, Ahmed S. [University of Minnesota]; Sidiropoulos, Nicholas D. [University of Minnesota]; Taylor, Josh A. [University of Toronto]
2018-01-12
This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically-constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.
Technology of solving multi-objective problems of control of systems with distributed parameters
Rapoport, E. Ya.; Pleshivtseva, Yu. E.
2017-07-01
A constructive technology of multi-objective optimization of control of distributed parameter plants is proposed. The technology is based on a single-criterion version in the form of the minimax convolution of normalized performance criteria. The approach under development is based on the transition to an equivalent form of the variational problem with constraints, with the problem solution being a priori Pareto-effective. Further procedures of preliminary parameterization of control actions and subsequent reduction to a special problem of semi-infinite programming make it possible to find the sought extremals with the use of their Chebyshev properties and fundamental laws of the subject domain. An example of multi-objective optimization of operation modes of an engineering thermophysics object is presented, which is of independent interest.
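In symbols, the minimax convolution the entry refers to can be sketched as follows; the normalizing values I_i* and the weights λ_i are illustrative assumptions:

```latex
J(u) \;=\; \max_{1 \le i \le k}\; \lambda_i\,\frac{I_i(u)}{I_i^{*}},
\qquad \lambda_i \ge 0,\;\; \sum_{i=1}^{k} \lambda_i = 1,
\qquad u^{\circ} \;=\; \arg\min_{u \in U} J(u),
```

with the key property, noted in the abstract, that minimizers of the scalarized criterion J are a priori Pareto-efficient for the vector of criteria (I₁, …, I_k).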
Frausto-Solis, Juan; Liñán-García, Ernesto; Sánchez-Hernández, Juan Paulo; González-Barbosa, J Javier; González-Flores, Carlos; Castilla-Valdez, Guadalupe
2016-01-01
A new hybrid Multiphase Simulated Annealing Algorithm using Boltzmann and Bose-Einstein distributions (MPSABBE) is proposed. MPSABBE was designed for solving the Protein Folding Problem (PFP) instances. This new approach has four phases: (i) Multiquenching Phase (MQP), (ii) Boltzmann Annealing Phase (BAP), (iii) Bose-Einstein Annealing Phase (BEAP), and (iv) Dynamical Equilibrium Phase (DEP). BAP and BEAP are simulated annealing searching procedures based on Boltzmann and Bose-Einstein distributions, respectively. DEP is also a simulated annealing search procedure, which is applied at the final temperature of the fourth phase, which can be seen as a second Bose-Einstein phase. MQP is a search process that ranges from extremely high to high temperatures, applying a very fast cooling process, and is not very restrictive to accept new solutions. However, BAP and BEAP range from high to low and from low to very low temperatures, respectively. They are more restrictive for accepting new solutions. DEP uses a particular heuristic to detect the stochastic equilibrium by applying a least squares method during its execution. MPSABBE parameters are tuned with an analytical method, which considers the maximal and minimal deterioration of problem instances. MPSABBE was tested with several instances of PFP, showing that the use of both distributions is better than using only the Boltzmann distribution on the classical SA.
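For context, the Boltzmann acceptance rule that underlies the BAP phase can be sketched in a plain single-phase simulated annealing loop. This is not the four-phase MPSABBE algorithm and omits the Bose-Einstein variant; the objective, cooling schedule, and parameters are invented.

```python
import math, random

def boltzmann_sa(f, x0, T0=10.0, alpha=0.95, steps=2000, seed=1):
    """Simulated annealing with Boltzmann acceptance exp(-delta/T)
    and geometric cooling; returns the best point seen."""
    rng = random.Random(seed)
    x, T = x0, T0
    best = x
    for _ in range(steps):
        cand = x + rng.uniform(-1.0, 1.0)        # local random move
        delta = f(cand) - f(x)
        # always accept improvements; accept uphill moves with Boltzmann probability
        if delta < 0 or rng.random() < math.exp(-delta / T):
            x = cand
        if f(x) < f(best):
            best = x
        T *= alpha                                # geometric cooling
    return best

f = lambda x: (x - 3.0) ** 2 + 1.0               # toy objective, minimum at x = 3
assert abs(boltzmann_sa(f, x0=-10.0) - 3.0) < 0.5
```

MPSABBE's phases differ mainly in the temperature ranges and in swapping the acceptance distribution; the loop structure stays the same.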
Distributed memory compiler methods for irregular problems: Data copy reuse and runtime partitioning
Das, Raja; Ponnusamy, Ravi; Saltz, Joel; Mavriplis, Dimitri
1991-01-01
Outlined here are two methods which we believe will play an important role in any distributed memory compiler able to handle sparse and unstructured problems. We describe how to link runtime partitioners to distributed memory compilers. In our scheme, programmers can implicitly specify how data and loop iterations are to be distributed between processors. This insulates users from having to deal explicitly with potentially complex algorithms that carry out work and data partitioning. We also describe a viable mechanism for tracking and reusing copies of off-processor data. In many programs, several loops access the same off-processor memory locations. As long as it can be verified that the values assigned to off-processor memory locations remain unmodified, we show that we can effectively reuse stored off-processor data. We present experimental data from a 3-D unstructured Euler solver run on an iPSC/860 to demonstrate the usefulness of our methods.
Dynamical Models For Prices With Distributed Delays
Directory of Open Access Journals (Sweden)
Mircea Gabriela
2015-06-01
Full Text Available In the present paper we study some models for the price dynamics of a single commodity market. The quantities supplied and demanded are regarded as functions of time. Nonlinearities in both the supply and demand functions are considered. The inventory and its level are taken into consideration. Because consumer behavior affects commodity demand, and this behavior is influenced not only by the instantaneous price but also by weighted past prices, a distributed time delay is introduced. The following kernels are taken into consideration: the demand price weak kernel and the demand price Dirac kernel. Only one positive equilibrium point is found, and its stability analysis is presented. When the demand price kernel is weak, under some conditions on the parameters, the equilibrium point is locally asymptotically stable. When the demand price kernel is Dirac, the existence of local oscillations is investigated. A change in the local stability of the equilibrium point, from stable to unstable, implies a Hopf bifurcation. A family of periodic orbits bifurcates from the positive equilibrium point when the time delay passes through a critical value. The last part contains some numerical simulations to illustrate the effectiveness of our results and conclusions.
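In standard distributed-delay notation, the two kernels named above can be written as follows, where p is the price history and the kernel parameters a and τ are assumptions of the sketch:

```latex
\bar{p}(t) \;=\; \int_{-\infty}^{t} k(t-s)\, p(s)\, ds,
\qquad
k_{\text{weak}}(u) \;=\; a\, e^{-a u} \quad (a > 0),
\qquad
k_{\text{Dirac}}(u) \;=\; \delta(u - \tau).
```

The weak kernel gives an exponentially fading memory of past prices, while the Dirac kernel reduces the distributed delay to a single discrete delay τ, which is where the Hopf bifurcation analysis applies.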
Developing a Model for Solving the Flight Perturbation Problem
Directory of Open Access Journals (Sweden)
Amirreza Nickkar
2015-02-01
Full Text Available Purpose: In the aviation and airline industry, crew costs are the second largest direct operating cost next to fuel costs. But unlike fuel costs, a considerable portion of crew costs can be saved through optimized utilization of the internal resources of an airline company. Therefore, solving the flight perturbation scheduling problem, in order to provide an optimized schedule in a comprehensive manner that covers all problem dimensions simultaneously, is very important. In this paper, we defined an integrated recovery model that is able to recover the aircraft and crew dimensions simultaneously in order to produce more economical solutions and create fewer incompatibilities between the decisions. Design/methodology/approach: The current research is based on the development of one of the flight rescheduling models with a disruption management approach, wherein two solution strategies for the flight perturbation problem are presented: Dantzig-Wolfe decomposition and a Lagrangian heuristic. Findings: According to the results of this research, the Lagrangian heuristic approach for the DW-MP solved the problem optimally in all known cases. Also, the strategy based on the Dantzig-Wolfe decomposition managed to produce a solution within an acceptable time (under 1 sec). Originality/value: This model will support the decisions of flight controllers in the operation centers of airlines. When the flight network faces a problem, the flight controllers obtain a set of ranked answers using this model; thus, applying crew conditions in the proposed model makes it closer to actual conditions.
A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.
Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio
2017-11-01
Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
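The generative structure described in the Methods section — a variance drawn from an inverse gamma distribution, then a zero-mean Gaussian sample given that variance — can be simulated directly. The shape/scale values below are invented; note that this Gaussian scale mixture has a scaled Student-t marginal, which is what gives the signal its heavy tails.

```python
import numpy as np

rng = np.random.default_rng(42)
alpha, beta = 4.0, 3.0  # hypothetical inverse-gamma shape and scale

# if G ~ Gamma(alpha, rate=beta), then 1/G ~ InvGamma(alpha, beta)
gam = rng.gamma(alpha, 1.0 / beta, size=200_000)
var = 1.0 / gam                         # per-sample signal variances
emg = rng.normal(0.0, np.sqrt(var))     # zero-mean Gaussian given each variance

# sanity checks against closed-form moments: E[var] = beta/(alpha-1) = 1
assert abs(var.mean() - beta / (alpha - 1)) < 0.02
assert abs(emg.mean()) < 0.02           # the signal itself has zero mean
```

Estimating (alpha, beta) from rectified and smoothed signals, as the paper proposes, amounts to inverting such moment relations (or maximizing the marginal likelihood) in real time.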
Atomic hydrogen distribution. [in Titan atmospheric model
Tabarie, N.
1974-01-01
Several possible H2 vertical distributions in Titan's atmosphere are considered, with the constraint of a total quantity of 5 km-A. Approximate calculations show that the hydrogen distribution is quite sensitive to two other parameters of Titan's atmosphere: the temperature and the presence of other constituents. The escape fluxes of H and H2 are also estimated, as well as the consequent distributions trapped in the Saturnian system.
Impedance model for quantum-mechanical barrier problems
International Nuclear Information System (INIS)
Nelin, Evgenii A
2007-01-01
Application of the impedance model to typical quantum-mechanical barrier problems, including those for structures with resonant electron tunneling, is discussed. The efficiency of the approach is illustrated. The physical transparency and compactness of the model and its potential as a teaching and learning tool are discussed. (methodological notes)
Data-Driven Model Order Reduction for Bayesian Inverse Problems
Cui, Tiangang
2014-01-06
One of the major challenges in using MCMC for the solution of inverse problems is the repeated evaluation of computationally expensive numerical models. We develop a data-driven projection- based model order reduction technique to reduce the computational cost of numerical PDE evaluations in this context.
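One standard data-driven projection basis is obtained from the dominant left singular vectors of a snapshot matrix (proper orthogonal decomposition). The entry does not specify its reduction technique, so the following is an illustrative sketch only, with synthetic low-rank snapshot data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 200, 3, 50                      # state dimension, true rank, snapshot count
U_true = np.linalg.qr(rng.standard_normal((n, k)))[0]
snapshots = U_true @ rng.standard_normal((k, m))   # model outputs in a k-dim subspace

# data-driven reduced basis: dominant left singular vectors of the snapshots
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :k]

# project a new full-order state onto the basis and reconstruct it
x = U_true @ rng.standard_normal(k)
x_hat = basis @ (basis.T @ x)
assert np.linalg.norm(x - x_hat) < 1e-10 * np.linalg.norm(x)
```

In an MCMC setting, the expensive PDE solve is then replaced by a small k-dimensional system projected onto this basis, which is where the computational savings come from.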
Performance prediction model for distributed applications on multicore clusters
CSIR Research Space (South Africa)
Khanyile, NP
2012-07-01
Full Text Available Distributed processing offers a way of successfully dealing with computationally demanding applications such as scientific problems. Over the years, researchers have investigated ways to predict the performance of parallel algorithms. Amdahl’s law...
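The truncated reference above is to Amdahl's law, which bounds the speedup of a distributed application by its serial fraction. A quick sketch:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup of a program with parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# a 95%-parallel workload on 8 processors
assert round(amdahl_speedup(0.95, 8), 2) == 5.93
# the serial fraction caps the speedup no matter how many cores are added
assert amdahl_speedup(0.95, 10**9) < 1.0 / (1.0 - 0.95) + 1e-6
```

Performance prediction models of the kind described in the entry refine this bound with communication and contention terms, which Amdahl's idealized formula ignores.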
Koepke, C.; Irving, J.; Roubinet, D.
2014-12-01
Geophysical methods have gained much interest in hydrology over the past two decades because of their ability to provide estimates of the spatial distribution of subsurface properties at a scale that is often relevant to key hydrological processes. Because of an increased desire to quantify uncertainty in hydrological predictions, many hydrogeophysical inverse problems have recently been posed within a Bayesian framework, such that estimates of hydrological properties and their corresponding uncertainties can be obtained. With the Bayesian approach, it is often necessary to make significant approximations to the associated hydrological and geophysical forward models such that stochastic sampling from the posterior distribution, for example using Markov-chain-Monte-Carlo (MCMC) methods, is computationally feasible. These approximations lead to model structural errors, which, so far, have not been properly treated in hydrogeophysical inverse problems. Here, we study the inverse problem of estimating unsaturated hydraulic properties, namely the van Genuchten-Mualem (VGM) parameters, in a layered subsurface from time-lapse, zero-offset-profile (ZOP) ground penetrating radar (GPR) data, collected over the course of an infiltration experiment. In particular, we investigate the effects of assumptions made for computational tractability of the stochastic inversion on model prediction errors as a function of depth and time. These assumptions are that (i) infiltration is purely vertical and can be modeled by the 1D Richards equation, and (ii) the petrophysical relationship between water content and relative dielectric permittivity is known. Results indicate that model errors for this problem are far from Gaussian and independently identically distributed, which has been the common assumption in previous efforts in this domain. In order to develop a more appropriate likelihood formulation, we use (i) a stochastic description of the model error that is obtained through
Correlation Structures of Correlated Binomial Models and Implied Default Distribution
S. Mori; K. Kitsukawa; M. Hisakado
2006-01-01
We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution h...
Application of the distributed genetic algorithm for loading pattern optimization problems
International Nuclear Information System (INIS)
Hashimoto, Hiroshi; Yamamoto, Akio
2000-01-01
The distributed genetic algorithm (DGA) is applied to loading pattern optimization problems of pressurized water reactors (PWR). Due to the stiff nature of loading pattern optimization (e.g. multi-modality and non-linearity), stochastic methods like simulated annealing or the genetic algorithm (GA) are widely applied to these problems. The basic concept of DGA follows that of GA. However, DGA equally distributes candidate solutions (i.e. loading patterns) to several independent 'islands' and evolves them in each island. Migration of some candidates among islands is performed with a certain period. Since candidate solutions evolve independently in each island while accepting different genes from migrants of other islands, the premature convergence seen in the traditional GA can be prevented. Because many candidate loading patterns must be evaluated in one generation of GA or DGA, parallelization of these calculations works efficiently. Parallel efficiency was measured using our optimization code, and good load balance was attained even in a heterogeneous cluster environment thanks to dynamic distribution of the calculation load. The optimization code is based on a client/server architecture with native TCP/IP sockets, in which the client (optimization module) and the calculation server modules exchange loading-pattern objects with each other. Through a sensitivity study on the optimization parameters of DGA, a suitable set of parameters for a test problem was identified. Finally, the optimization capabilities of DGA and the traditional GA were compared on the test problem, and DGA provided better optimization results than the traditional GA. (author)
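The island scheme described above can be illustrated with a minimal sketch: independent subpopulations evolve separately, and every few generations each island's best individual migrates to the next island in a ring, replacing its worst. The operators, population sizes and the sphere test function below are illustrative assumptions, not the authors' PWR loading-pattern code.

```python
import numpy as np

def island_ga(fitness, dim, n_islands=4, pop=20, gens=60, migrate_every=10, rng=None):
    """Minimal island-model GA sketch (minimization)."""
    if rng is None:
        rng = np.random.default_rng(0)
    islands = [rng.uniform(-5, 5, (pop, dim)) for _ in range(n_islands)]
    for g in range(gens):
        for i, P in enumerate(islands):
            f = np.array([fitness(x) for x in P])
            parents = P[np.argsort(f)[: pop // 2]]      # truncation selection
            kids = []
            for _ in range(pop - len(parents)):
                a, b = parents[rng.integers(len(parents), size=2)]
                kids.append(0.5 * (a + b) + rng.normal(0, 0.3, dim))  # crossover + mutation
            islands[i] = np.vstack([parents, np.array(kids)])
        if (g + 1) % migrate_every == 0:                # ring migration of island bests
            bests = [P[np.argmin([fitness(x) for x in P])].copy() for P in islands]
            for i, P in enumerate(islands):
                worst = np.argmax([fitness(x) for x in P])
                P[worst] = bests[(i - 1) % n_islands]
    return min((x for P in islands for x in P), key=fitness)

sphere = lambda x: float(np.sum(x ** 2))
best = island_ga(sphere, dim=5)
print(sphere(best))
```

Because each island only occasionally receives a migrant, the subpopulations explore different regions of the search space, which is the mechanism the abstract credits for avoiding premature convergence.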
Modelling human problem solving with data from an online game.
Rach, Tim; Kirsch, Alexandra
2016-11-01
Since the beginning of cognitive science, researchers have tried to understand human strategies in order to develop efficient and adequate computational methods. In the domain of problem solving, the travelling salesperson problem has been used for the investigation and modelling of human solutions. We propose to extend this effort with an online game, in which instances of the travelling salesperson problem have to be solved in the context of a game experience. We report on our effort to design and run such a game, present the data contained in the resulting openly available data set and provide an outlook on the use of games in general for cognitive science research. In addition, we present three geometrical models mapping the starting point preferences in the problems presented in the game as the result of an evaluation of the data set.
Application of a Mathematical Model to an Advertisement Reservation Problem
Directory of Open Access Journals (Sweden)
Ozlem COSGUN
2013-01-01
Full Text Available Television networks provide TV programs free of charge to the public. However, they acquire their revenue by telecasting advertisements in the midst of continuing programs or shows. A key problem faced by TV networks in Turkey is how to accept and televise the advertisements reserved by a client for a specified advertisement break, which we call the “Advertisement Reservation Problem” (ARP). The problem is complicated by limited time inventory, different rating points for different target groups, competition avoidance and the relationship between TV networks and clients. In this study we have developed a mathematical model for the advertisement reservation problem and extended this model to some cases encountered in real business life. We have also discussed how these cases affect the decisions of a TV network. A mixed integer linear programming approach is proposed to solve these problems. This approach has been applied to a case taken from one of the biggest TV networks of Turkey.
Distributed Model Predictive Control via Dual Decomposition
DEFF Research Database (Denmark)
Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle
2014-01-01
. This allows coordination of all the subsystems without the need of sharing local dynamics, objectives and constraints. To illustrate this, an example is included where dual decomposition is used to resolve power grid congestion in a distributed manner among a number of players coupled by distribution grid...
A hybrid genetic algorithm for the distributed permutation flowshop scheduling problem
Directory of Open Access Journals (Sweden)
Jian Gao
2011-08-01
Full Text Available The Distributed Permutation Flowshop Scheduling Problem (DPFSP) is a newly proposed scheduling problem that generalizes the classical permutation flowshop scheduling problem. The DPFSP is NP-hard in general, and studies on algorithms for solving it are still at an early stage. In this paper, we propose a GA-based algorithm, denoted GA_LS, for solving this problem with the objective of minimizing the maximum completion time. In the proposed GA_LS, crossover and mutation operators are designed to suit the representation of DPFSP solutions, in which a set of partial job sequences is employed. Furthermore, GA_LS utilizes an efficient local search method to explore neighboring solutions. The local search method uses three proposed rules that move jobs within a factory or between two factories. Intensive experiments on benchmark instances extended from the Taillard instances are carried out. The results indicate that the proposed hybrid genetic algorithm obtains better solutions than all existing algorithms for the DPFSP, achieving a better relative percentage deviation, with differences that are statistically significant. Best-known solutions for most instances are also updated by our algorithm. Moreover, we show the efficiency of GA_LS by comparing it with similar genetic algorithms using the existing local search methods.
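The solution representation mentioned above (a set of partial job sequences, one per factory) can be evaluated as follows: each factory runs its own permutation flowshop, and the DPFSP objective is the slowest factory's makespan. The completion-time recursion is the standard one; the toy instance is hypothetical.

```python
import numpy as np

def flowshop_makespan(seq, p):
    """Makespan of permutation flowshop schedule `seq` with processing
    times p[job, machine] (standard completion-time recursion)."""
    n_m = p.shape[1]
    c = np.zeros(n_m)                      # completion time on each machine
    for j in seq:
        c[0] += p[j, 0]
        for m in range(1, n_m):
            c[m] = max(c[m], c[m - 1]) + p[j, m]
    return c[-1]

def dpfsp_makespan(factory_seqs, p):
    """DPFSP objective: the maximum over the factories' makespans."""
    return max(flowshop_makespan(s, p) for s in factory_seqs)

# Toy instance (hypothetical data): 4 jobs, 2 machines, 2 factories.
p = np.array([[3, 2], [1, 4], [2, 2], [4, 1]])
print(dpfsp_makespan([[0, 2], [1, 3]], p))  # jobs 0,2 in factory 1; jobs 1,3 in factory 2
```

The local search rules in the abstract (moving jobs within or between factories) would repeatedly re-evaluate this objective for neighboring assignments.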
Modelling Ecuador's rainfall distribution according to geographical characteristics.
Tobar, Vladimiro; Wyseure, Guido
2017-04-01
It is known that rainfall is affected by terrain characteristics, and some studies have focused on its distribution over complex terrain. Ecuador's temporal and spatial rainfall distribution is affected by its location on the ITCZ, the marine currents in the Pacific, the Amazon rainforest, and the Andes mountain range. Although all these factors are important, we think that the latter may hold a key to modelling the spatial and temporal distribution of rainfall. The study considered 30 years of monthly data from 319 rainfall stations having at least 10 years of data available. The relatively low density of stations and their location in accessible sites near main roads or rivers leave large and important areas ungauged, making it inappropriate to rely on traditional interpolation techniques to estimate regional rainfall for water balance. The aim of this research was to develop a useful model for seasonal rainfall distribution in Ecuador based on geographical characteristics, allowing its spatial generalization. The target for modelling was seasonal rainfall, characterized by nine percentiles for each of the 12 months of the year, resulting in 108 response variables, later reduced to four principal components comprising 94% of the total variability. Predictor variables for the model were: geographic coordinates, elevation, main wind effects from the Amazon and Coast, Valley and Hill indexes, and average and maximum elevation above the selected rainfall station to the east and to the west, for each of 18 directions (50-135°, by 5°), adding up to 79 predictors. A multiple linear regression model fitted by the Elastic-net algorithm with cross-validation was applied to each PC as response, to select the most important of the 79 predictor variables. The Elastic-net algorithm deals well with collinearity problems while allowing variable selection in a blended approach between Ridge and Lasso regression. The model fitting
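The reduction of the 108 response variables (9 percentiles x 12 months) to a few principal components can be sketched with a plain SVD. The synthetic matrix below merely stands in for the real station-by-response data, which is not reproduced in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-in for the 319-station x 108-response matrix:
# a rank-4 signal plus small noise, mimicking "four PCs explain 94%".
latent = rng.normal(size=(319, 4))
loadings = rng.normal(size=(4, 108))
Y = latent @ loadings + 0.05 * rng.normal(size=(319, 108))

Yc = Y - Y.mean(axis=0)                  # center before PCA
U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)    # variance share per component
scores = Yc @ Vt[:4].T                   # first four principal components

print(scores.shape, explained[:4].sum())
```

Each column of `scores` would then serve as a response in a separate cross-validated Elastic-net regression on the 79 geographic predictors.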
International Nuclear Information System (INIS)
Liu, Sha; Liu, Shi; Tong, Guowei
2017-01-01
In industrial areas, temperature distribution information provides a powerful data support for improving system efficiency, reducing pollutant emission, ensuring safety operation, etc. As a noninvasive measurement technology, acoustic tomography (AT) has been widely used to measure temperature distribution where the efficiency of the reconstruction algorithm is crucial for the reliability of the measurement results. Different from traditional reconstruction techniques, in this paper a two-phase reconstruction method is proposed to ameliorate the reconstruction accuracy (RA). In the first phase, the measurement domain is discretized by a coarse square grid to reduce the number of unknown variables to mitigate the ill-posed nature of the AT inverse problem. By taking into consideration the inaccuracy of the measured time-of-flight data, a new cost function is constructed to improve the robustness of the estimation, and a grey wolf optimizer is used to solve the proposed cost function to obtain the temperature distribution on the coarse grid. In the second phase, the Adaboost.RT based BP neural network algorithm is developed for predicting the temperature distribution on the refined grid in accordance with the temperature distribution data estimated in the first phase. Numerical simulations and experiment measurement results validate the superiority of the proposed reconstruction algorithm in improving the robustness and RA. (paper)
Kaijser, Thomas
2013-01-01
A Hidden Markov Model generates two basic stochastic processes: a Markov chain, which is hidden, and an observation sequence. The filtering process of a Hidden Markov Model is, roughly speaking, the sequence of conditional distributions of the hidden Markov chain obtained as new observations are received. It is well known that the filtering process itself is also a Markov chain. A classical theoretical problem is to find conditions which imply that the distributions of the filter...
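The filtering process mentioned above is computed by the standard forward recursion: predict with the transition matrix, correct with the observation likelihood, renormalize. A minimal sketch with hypothetical two-state parameters:

```python
import numpy as np

def hmm_filter(obs, A, B, pi):
    """Sequence of conditional distributions of the hidden state given the
    observations so far (the filter process). A: transitions, B: emissions."""
    f = pi * B[:, obs[0]]
    f /= f.sum()
    filters = [f]
    for o in obs[1:]:
        f = (f @ A) * B[:, o]   # predict with A, correct with likelihood from B
        f /= f.sum()
        filters.append(f)
    return np.array(filters)

# Two hidden states, two observation symbols (hypothetical parameters).
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.8, 0.2], [0.3, 0.7]])
pi = np.array([0.5, 0.5])
F = hmm_filter([0, 0, 1, 1, 1], A, B, pi)
print(F[-1])
```

Each row of `F` is a probability vector, and the next row depends only on the current row and the new observation, which is exactly the Markov property of the filter process the abstract refers to.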
Identification of Chemistry Learning Problems Viewed From Conceptual Change Model
Redhana, I. W; Sudria, I. B. N; Hidayat, I; Merta, L. M
2017-01-01
This study aimed at describing and explaining chemistry learning problems viewed from conceptual change model and misconceptions of students. The study was qualitative research of case study type conducted in one class of SMAN 1 Singaraja. Subjects of the study were a chemistry teacher and students. Data were obtained through classroom observation, interviews, and conception tests. The chemistry learning problems were grouped based on aspects of necessity, intelligibility, plausibility, and f...
Deterministic Properties of Serially Connected Distributed Lag Models
Directory of Open Access Journals (Sweden)
Piotr Nowak
2013-01-01
Full Text Available Distributed lag models are an important tool in modeling dynamic systems in economics. In the analysis of composite forms of such models, the component models are ordered in parallel (with the same independent variable) and/or in series (where the independent variable is also the dependent variable in the preceding model). This paper presents an analysis of certain deterministic properties of composite distributed lag models composed of component distributed lag models arranged in sequence, and their asymptotic properties in particular. The models considered are in discrete form. Even though the paper focuses on deterministic properties of distributed lag models, the derivations are based on analytical tools commonly used in probability theory, such as probability distributions and the central limit theorem. (original abstract)
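The link to probability tools can be made concrete: connecting two lag models in series convolves their lag-weight sequences, exactly as convolution gives the distribution of a sum of independent random variables. The weights below are illustrative, not taken from the paper.

```python
import numpy as np

# Lag weights of two component models; each sums to 1, so each behaves
# like a probability distribution over lags (illustrative values).
w1 = np.array([0.5, 0.3, 0.2])
w2 = np.array([0.6, 0.4])

# Serial connection: the output of model 1 is the input of model 2, so the
# composite lag distribution is the convolution of the component ones --
# the same operation that underlies CLT-style asymptotics for long chains.
w_serial = np.convolve(w1, w2)
print(w_serial, w_serial.sum())
```

Convolving many such normalized weight sequences tends toward a bell-shaped composite lag profile, which is the kind of asymptotic behavior the central limit theorem argument captures.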
Managing problem employees: a model program and practical guide.
Miller, Laurence
2010-01-01
This article presents a model program for managing problem employees that includes a description of the basic types of problem employees and employee problems, as well as practical recommendations for (1) selection and screening, (2) education and training, (3) coaching and counseling, (4) discipline, (5) psychological fitness-for-duty evaluations, (6) mental health services, (7) termination, and (8) leadership and administrative strategies. Throughout, the emphasis is on balancing the need for order and productivity in the workplace with fairness and concern for employee health and well-being.
Directory of Open Access Journals (Sweden)
Babak Farhang Moghadam
2010-07-01
Full Text Available During the past few years, there have been tremendous efforts to reduce the cost of logistics using a variety of Vehicle Routing Problem (VRP) models. In fact, the recent rise in fuel prices has motivated many to reduce their transportation costs through improved implementation of VRP systems. We study a specific form of VRP where demand is assumed to be uncertain with unknown distribution. A Particle Swarm Optimization (PSO) algorithm is proposed to solve the VRP, and the results are compared with other existing methods. The proposed approach is also applied to a real-world case study of drug distribution, and the preliminary results indicate that the method can reduce the unmet demand significantly.
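As a reference point, the core PSO loop looks as follows. This is a generic sketch on a continuous test function, not the VRP encoding used in the study, whose details the abstract does not give.

```python
import numpy as np

def pso(f, dim, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Generic particle swarm optimizer sketch (minimization)."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n, dim))                 # particle positions
    v = np.zeros((n, dim))                           # particle velocities
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)].copy()                # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pval)].copy()
    return g, pval.min()

best, val = pso(lambda p: float(np.sum(p ** 2)), dim=3)
print(val)
```

Applying PSO to a VRP requires an additional encoding/decoding step mapping continuous particle positions to routes; the abstract does not specify which one the authors used.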
Directory of Open Access Journals (Sweden)
Xiangang Peng
2015-12-01
Full Text Available Distributed generation (DG) systems are integral parts of future distribution networks. In this paper, a novel approach integrating the crisscross optimization algorithm and Monte Carlo simulation (CSO-MCS) is implemented to solve the optimal DG allocation (ODGA) problem. The feature of applying CSO to the ODGA problem lies in three interacting operators, namely horizontal crossover, vertical crossover and the competitive operator. The horizontal crossover searches for new solutions within a hypercube space with a larger probability and in the periphery of each hypercube with a decreasing probability. The vertical crossover can effectively help stagnant dimensions of a population escape from premature convergence. The competitive operator keeps the crisscross search always anchored at historical best positions, quickening the convergence rate. It is the combination of the double search strategies and the competitive mechanism that gives CSO a significant advantage in convergence speed and accuracy. Moreover, to deal with system uncertainties such as the output power of wind turbine and photovoltaic generators, an MCS-based method is adopted to solve the probabilistic power flow. The effectiveness of the CSO-MCS method is validated on typical 33-bus and 69-bus test systems, and the results substantiate the suitability of CSO-MCS for the multi-objective ODGA problem.
Exacerbating the Cosmological Constant Problem with Interacting Dark Energy Models.
Marsh, M C David
2017-01-06
Future cosmological surveys will probe the expansion history of the Universe and constrain phenomenological models of dark energy. Such models do not address the fine-tuning problem of the vacuum energy, i.e., the cosmological constant problem (CCP), but can make it spectacularly worse. We show that this is the case for "interacting dark energy" models in which the masses of the dark matter states depend on the dark energy sector. If realized in nature, these models have far-reaching implications for proposed solutions to the CCP that require the number of vacua to exceed the fine-tuning of the vacuum energy density. We show that current estimates of the number of flux vacua in string theory, N_{vac}∼O(10^{272 000}), are far too small to realize certain simple models of interacting dark energy and solve the cosmological constant problem anthropically. These models admit distinctive observational signatures that can be targeted by future gamma-ray observatories, hence making it possible to observationally rule out the anthropic solution to the cosmological constant problem in theories with a finite number of vacua.
Models and analysis for distributed systems
Haddad, Serge; Pautet, Laurent; Petrucci, Laure
2013-01-01
Nowadays, distributed systems are increasingly present, for public software applications as well as critical systems. This title and Distributed Systems: Design and Algorithms - from the same editors - introduce the underlying concepts, the associated design techniques and the related security issues. The objective of this book is to describe the state of the art of the formal methods for the analysis of distributed systems. Numerous issues remain open and are the topics of major research projects. One current research trend consists of pro
Use of model analysis to analyse Thai students’ attitudes and approaches to physics problem solving
Rakkapao, S.; Prasitpong, S.
2018-03-01
This study applies the model analysis technique to explore the distribution of Thai students’ attitudes and approaches to physics problem solving and how those attitudes and approaches change as a result of different experiences in physics learning. We administered the Attitudes and Approaches to Problem Solving (AAPS) survey to over 700 Thai university students from five different levels, namely students entering science, first-year science students, and second-, third- and fourth-year physics students. We found that their inferred mental states were generally mixed. The largest gap between physics experts and all levels of the students was about the role of equations and formulas in physics problem solving, and in views towards difficult problems. Most participants of all levels believed that being able to handle the mathematics is the most important part of physics problem solving. Most students’ views did not change even though they gained experiences in physics learning.
RESOURCE DISTRIBUTION MODEL IN CLOUD ENVIRONMENTS
Directory of Open Access Journals (Sweden)
Ramil I. Khantimirov
2015-01-01
Full Text Available A new approach to distribution of load in computer clouds is proposed, based on the analysis of the equability of resource usage and resource usage forecast using intelligent algorithms.
A model for the distribution channels planning process
Neves, M.F.; Zuurbier, P.; Campomar, M.C.
2001-01-01
Research of existing literature reveals some models (sequence of steps) for companies that want to plan distribution channels. None of these models uses strong contributions from transaction cost economics, bringing a possibility to elaborate on a "distribution channels planning model", with these
Effect of PLISSIT Model on Solution of Sexual Problems
Directory of Open Access Journals (Sweden)
Esra Uslu
2016-03-01
Full Text Available This systematic review study aims to determine the effect of the PLISSIT model (permission, limited information, specific suggestions, intensive therapy) in the care of individuals having sexual problems. Two of the studies included in the systematic review were carried out in Iran and one in Turkey. These studies were limited to patients with a stoma and women having sexual problems. Results showed that care via the PLISSIT model improves sexual function and reduces sexual distress, and increases sexual desire, sexual arousal, lubrication, orgasm, sexual satisfaction and the frequency of sexual activity. [Psikiyatride Guncel Yaklasimlar - Current Approaches in Psychiatry 2016; 8(1): 52-63]
Testing and Modeling of Contact Problems in Resistance Welding
DEFF Research Database (Denmark)
Song, Quanfeng
As a part of the efforts towards a professional and reliable numerical tool for resistance welding engineers, this Ph.D. project is dedicated to refining the numerical models related to the interface behavior. An FE algorithm for the contact problems in resistance welding has been developed...... in this work, dealing with the coupled mechanical-electrical-thermal contact problems. The penalty method is used to impose the contact conditions in the electrical and thermal contact, as well as frictionless contact and sticking contact in the mechanical model. A node-segment contact element is the basis...
Zhou, Lin; Baldacci, Roberto; Vigo, Daniele; Wang, Xu
2018-01-01
In this paper, we introduce a new city logistics problem arising in the last mile distribution of e-commerce. The problem involves two levels of routing problems. The first requires a design of the routes for a vehicle fleet located at the depots to transport the customer demands to a subset of the
Two efficient heuristics to solve the integrated load distribution and production planning problem
International Nuclear Information System (INIS)
Gajpal, Yuvraj; Nourelfath, Mustapha
2015-01-01
This paper considers a multi-period production system where a set of machines are arranged in parallel. The machines are unreliable and the failure rate of machine depends on the load assigned to the machine. The expected production rate of the system is considered to be a non-monotonic function of its load. Because of the machine failure rate, the total production output depends on the combination of loads assigned to different machines. We consider the integration of load distribution decisions with production planning decision. The product demands are considered to be known in advance. The objective is to minimize the sum of holding costs, backorder costs, production costs, setup costs, capacity change costs and unused capacity costs while satisfying the demand over specified time horizon. The constraint is not to exceed available repair resources required to repair the machine breakdown. The paper develops two heuristics to solve the integrated load distribution and production planning problem. The first heuristic consists of a three-phase approach, while the second one is based on tabu search metaheuristic. The efficiency of the proposed heuristics is tested through the randomly generated problem instances. - Highlights: • The expected performance of the system is a non-monotonic function of its load. • We consider the integration of load distribution and production planning decisions. • The paper proposes three phase and tabu search based heuristics to solve the problem. • Lower bound has been developed for checking the effectiveness of the heuristics. • The efficiency of the heuristic is tested through randomly generated instances.
Blackboard system generator (BSG) - An alternative distributed problem-solving paradigm
Silverman, Barry G.; Feggos, Kostas; Chang, Joseph Shih
1989-01-01
A status review is presented for a generic blackboard-based distributed problem-solving environment in which multiple-agent cooperation can be effected. This environment is organized into a shared information panel, a chairman control panel, and a metaplanning panel. Each panel contains a number of embedded AI techniques that facilitate its operation and that provide heuristics for solving the underlying team-agent decision problem. The status of these panels and heuristics is described along with a number of robustness considerations. The techniques for each of the three panels and for four sets of paradigm-related advances are described, along with selected results from classroom teaching experiments and from three applications.
On the problem of finding a suitable distribution of students to universities in Germany
Schneider, Johannes J.; Hirtreiter, Christian; Morgenstern, Ingo
2009-10-01
For many years, the problem of how to distribute students to the various universities in Germany according to the preferences of the students has remained unsolved. Various approaches, like the centralized method to let a central agency organize the distribution to the various universities or the decentralized method to let the students apply directly at their preferred universities, turned out to lead to a significant fraction of frustrated students ending up at universities not being on their preference list or even not having a place to study at all. With our centralized approach, we are able to decrease the fraction of frustrated students as well as the bureaucratic expenses for applicants and universities drastically.
Modeling of Elastodynamic Problems in Finite Solid Media
International Nuclear Information System (INIS)
Cho, Youn Ho
2000-01-01
Various modeling techniques for ultrasonic wave propagation and scattering problems in finite solid media are presented. Elastodynamic boundary value problems in inhomogeneous multi-layered plate-like structures are set up for modal analysis of guided wave propagation and numerically solved to obtain dispersion curves showing the propagation characteristics of guided waves. As a powerful modeling tool to overcome numerical difficulties in wave scattering problems, such as geometrical complexity and mode conversion, the Boundary Element Method (BEM) is introduced and combined with the normal mode expansion technique to develop the hybrid BEM, an efficient technique for modeling multi-mode conversion in guided wave scattering problems. Time-dependent wave forms are obtained through the inverse Fourier transform of the numerical solutions in the frequency domain. 3D BEM program development is underway to model more practical ultrasonic wave signals. Some encouraging numerical results have recently been obtained in comparison with the analytical solutions for wave propagation in a bar subjected to time-harmonic longitudinal excitation. It is expected that the presented modeling techniques for elastic wave propagation and scattering can be applied to establish quantitative nondestructive evaluation techniques in various ways
Personality Disorder Models and their Coverage of Interpersonal Problems
Williams, Trevor F.; Simms, Leonard J.
2015-01-01
Interpersonal dysfunction is a defining feature of personality disorders (PDs) and can serve as a criterion for comparing PD models. In this study, the interpersonal coverage of four competing PD models was examined using a sample of 628 current or recent psychiatric patients who completed the NEO Personality Inventory-3 First Half (NEO-PI-3FH; McCrae & Costa, 2007), Personality Inventory for the DSM-5 (PID-5; Krueger et al., 2012), Computerized Adaptive Test of Personality Disorder-Static Form (CAT-PD-SF; Simms et al., 2011), and Structured Clinical Interview for DSM-IV Personality Questionnaire (SCID-II PQ; First, Spitzer, Gibbon, & Williams, 1995). Participants also completed the Inventory of Interpersonal Problems-Short Circumplex (IIP-SC; Soldz, Budman, Demby, & Merry, 1995) to assess interpersonal dysfunction. Analyses compared the severity and style of interpersonal problems that characterize PD models. Previous research with DSM-5 Section II and III models was generally replicated. Extraversion and Agreeableness facets related to the most well defined interpersonal problems across normal-range and pathological traits. Pathological trait models provided more coverage of dominance problems, whereas normal-range traits covered nonassertiveness better. These results suggest that more work may be needed to reconcile descriptions of personality pathology at the level of specific constructs. PMID:26168406
Stochastic reduced order models for inverse problems under uncertainty.
Warner, James E; Aquino, Wilkins; Grigoriu, Mircea D
2015-03-01
This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well.
Guidance for modeling causes and effects in environmental problem solving
Armour, Carl L.; Williamson, Samuel C.
1988-01-01
Environmental problems are difficult to solve because their causes and effects are not easily understood. When attempts are made to analyze causes and effects, the principal challenge is organization of information into a framework that is logical, technically defensible, and easy to understand and communicate. When decision makers attempt to solve complex problems before an adequate cause and effect analysis is performed there are serious risks. These risks include: greater reliance on subjective reasoning, lessened chance for scoping an effective problem solving approach, impaired recognition of the need for supplemental information to attain understanding, increased chance for making unsound decisions, and lessened chance for gaining approval and financial support for a program. Cause and effect relationships can be modeled. This type of modeling has been applied to various environmental problems, including cumulative impact assessment (Dames and Moore 1981; Meehan and Weber 1985; Williamson et al. 1987; Raley et al. 1988) and evaluation of effects of quarrying (Sheate 1986). This guidance for field users was written because of the current interest in documenting cause-effect logic as a part of ecological problem solving. Principal literature sources relating to the modeling approach are: Riggs and Inouye (1975a, b), Erickson (1981), and United States Office of Personnel Management (1986).
Directory of Open Access Journals (Sweden)
Diogo de Carvalho Bezerra
2015-12-01
Full Text Available ABSTRACT Contributions from the sensitivity analysis of the parameters of the linear programming model for the elicitation of experts' beliefs are presented. The process allows for the calibration of the family of probability distributions obtained in the elicitation process. An experiment to obtain the probability distribution of a future event (a Brazil vs. Spain soccer game in the 2013 FIFA Confederations Cup final) was conducted. The proposed sensitivity analysis step may help to reduce the vagueness of the information given by the expert.
APPLICATION OF THE PROBLEM BASED INSTRUCTION LEARNING MODEL WITH THE PREDICT-OBSERVE-EXPLAIN APPROACH
Directory of Open Access Journals (Sweden)
Ayu Dwi Listiowati
2015-11-01
Full Text Available This research aimed to determine the effect of the Problem Based Instruction learning model with the Predict-Observe-Explain approach on chemistry learning outcomes. The population was grade XI science students of a senior high school in Brebes in the 2011/2012 academic year. Initial data analysis showed that the population was normally distributed and homogeneous, so cluster random sampling was used. From this sampling, class XI Science-5 was used as the control class (Problem Based Instruction learning model without the Predict-Observe-Explain approach) and XI Science-1 as the experimental class (Problem Based Instruction with the Predict-Observe-Explain approach). Final data analysis showed that learning outcomes for both classes were normally distributed and had equal variances. In the correlation test, an r_b value of 0.433 was obtained, indicating a moderate correlation, so Problem Based Instruction with the Predict-Observe-Explain approach has a moderate effect on chemistry learning outcomes in solubility and solubility product. The contribution of this learning model to student learning outcomes is 19%. The average affective and psychomotor scores in the experimental class were better than in the control class. Based on this research, we can conclude that Problem Based Instruction with the Predict-Observe-Explain approach has a positive effect on the chemistry learning outcomes of senior high school students. Key Words: Problem Based Instruction Learning
A review of mathematical models in economic environmental problems
DEFF Research Database (Denmark)
Nahorski, Z.; Ravn, H.F.
2000-01-01
The paper presents a review of mathematical models used in economic analysis of environmental problems. This area of research combines macroeconomic models of growth, as dependent on capital, labour, resources, etc., with environmental models describing such phenomena as natural resources...... exhaustion or pollution accumulation and degradation. In simpler cases the models can be treated analytically and the utility function can be optimized using, e.g., such tools as the maximum principle. In more complicated cases calculation of the optimal environmental policies requires a computer solution....
Size distribution of dust grains: A problem of self-similarity
International Nuclear Information System (INIS)
Henning, TH.; Dorschner, J.; Guertler, J.
1989-01-01
Distribution functions describing the results of natural processes frequently show the shape of power laws. It is an open question whether this behavior is a result simply coming about by the chosen mathematical representation of the observational data or reflects a deep-seated principle of nature. The authors suppose the latter to be the case. Using a dust model consisting of silicate and graphite grains, Mathis et al. (1977) showed that the interstellar extinction curve can be represented by taking a grain radii distribution of power law type n(a) ∝ a^(-p) with 3.3 ≤ p ≤ 3.6 (example 1) as a basis. A different approach to understanding power laws like that in example 1 becomes possible by the theory of self-similar processes (scale invariance). The beta model of turbulence (Frisch et al., 1978) leads in an elementary way to the concept of the self-similarity dimension D, a special case of Mandelbrot's (1977) fractal dimension. In the frame of this beta model, it is supposed that on each stage of a cascade the system decays to N clumps and that only the portion βN remains active further on. An important feature of this model is that the active eddies become less and less space-filling. In the following, the authors assume that grain-grain collisions are such a scale-invariant process and that the remaining grains are the inactive (frozen) clumps of the cascade. In this way, a size distribution n(a) da ∝ a^(-(D+1)) da (example 2) results. It seems to be highly probable that the power law character of the size distribution of interstellar dust grains is the result of a self-similarity process. We cannot, however, exclude that the process leading to the interstellar grain size distribution is not fragmentation at all.
Modeling the economics and market adoption of distributed power generation
International Nuclear Information System (INIS)
Maribu, Karl Magnus
2006-01-01
significant value in postponing investment until larger projects are profitable. In the second paper, Combined Heat and Power in Commercial Buildings: Investment and Risk Analysis, a Monte Carlo simulation program to find the value and risk characteristics of combined heat and power units is presented. Using historical price data to estimate price process parameters, it is shown that uncertain prices should not be a barrier for investment, since on-site generators can adapt to uncertain prices and reduce the total energy cost risks. In the third paper, Optimizing Distributed Generation Systems for Commercial Buildings, which uses a mixed integer linear program, distributed generation portfolios that maximize profitability are tailored to a building's energy load. Distributed generation with heat recovery and thermally activated cooling are found profitable in an office and a health care building, using current generator data and energy tariffs from California. With the fourth paper, Distributed Energy Resources Market Diffusion Model, the analysis is taken a step further to predict distributed generation market diffusion. Market penetration is assumed to depend on economic attractiveness and knowledge and trust in the technologies. A case study based on the U.S. commercial sector depicts a large market for reciprocating engines and micro turbines, with the West and Northeast regions driving market diffusion. Technology research and outreach programs can speed up and change the path of capacity expansion. The thesis presents three different models for analyzing investments in distributed generation, all of which have benefits and disadvantages. Choice of model depends on the specific application, but the different approaches can be used on the same problem to analyze it from different viewpoints. The cases in the thesis indicate that distributed generation can reduce expected energy costs while at the same time improve cost predictability.
Further, the thesis identifies several important
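The Monte Carlo valuation described in the second paper can be caricatured in a few lines of stdlib Python. This sketch draws uncertain annual energy-cost savings from a normal distribution and discounts them; all figures (capex, savings, discount rate) are invented illustrations, not the thesis's calibrated price processes:

```python
import random
import statistics

def npv_samples(capex, mean_saving, sd_saving, years, rate, n, seed=7):
    """Monte Carlo NPV of a distributed-generation unit whose annual
    energy-cost savings are uncertain (normally distributed here)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        saving = rng.gauss(mean_saving, sd_saving)  # one scenario's annual saving
        out.append(sum(saving / (1 + rate) ** y for y in range(1, years + 1)) - capex)
    return out

npvs = npv_samples(capex=100_000, mean_saving=18_000, sd_saving=4_000,
                   years=10, rate=0.08, n=5_000)
mean_npv = statistics.mean(npvs)
p_loss = sum(v < 0 for v in npvs) / len(npvs)  # estimated downside risk
```

The distribution of sampled NPVs, rather than a single point estimate, is what supports the risk statements in the abstract (expected value versus probability of loss).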
Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution
Directory of Open Access Journals (Sweden)
Emmanuel Kidando
2017-01-01
Full Text Available Multistate models, that is, models with more than two distributions, are preferred over single-state probability models in modeling the distribution of travel time. A literature review indicated that finite multistate modeling of travel time using the lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of estimating the travel time distribution to an unbounded lognormal distribution. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with a stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. Then, the Markov Chain Monte Carlo (MCMC) sampling technique was employed to estimate the parameters’ posterior distribution. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrated that this model offers significant flexibility in modeling to account for complex mixture distributions of the travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of onset and offset of congestion periods.
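The stick-breaking representation of the Dirichlet process mentioned above can be illustrated in a few lines of stdlib Python; the truncation level and concentration parameter below are arbitrary display choices, not the paper's settings:

```python
import random

def stick_breaking(alpha, k_max, rng):
    """Sample mixture weights from a (truncated) stick-breaking process,
    the representation used to define the Dirichlet process prior."""
    weights, remaining = [], 1.0
    for _ in range(k_max - 1):
        v = rng.betavariate(1.0, alpha)  # break off fraction v of the stick
        weights.append(remaining * v)
        remaining *= 1.0 - v
    weights.append(remaining)            # last piece absorbs the rest
    return weights

rng = random.Random(42)
w = stick_breaking(alpha=1.0, k_max=6, rng=rng)
```

Each weight is a fraction of the remaining "stick", so the weights sum to one and typically decay quickly, which is how the DPMM lets the data decide how many of the (here at most six) components carry appreciable mass.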
Stieltjes electrostatic model interpretation for bound state problems
Indian Academy of Sciences (India)
In this paper, it is shown that the Stieltjes electrostatic model and the quantum Hamilton-Jacobi formalism are analogous to each other. This analogy allows the bound state problem to mimic n unit moving imaginary charges iℏ, which are placed between the two fixed imaginary charges arising due to the classical turning ...
The hierarchy problem and Physics Beyond the Standard Model
Indian Academy of Sciences (India)
Fine-tuning has to be done order by order in perturbation theory. Hierarchy problem: what guarantees the stability of v against quantum fluctuations? ⇒ Physics Beyond the Standard Model. Experimental side: dark matter, neutrino mass, matter-antimatter asymmetry, ... Gautam Bhattacharyya. IASc Annual Meeting, IISER, ...
Modeling and Identification of Harmonic Instability Problems In Wind Farms
DEFF Research Database (Denmark)
Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei
2016-01-01
to identify harmonic instability problems in wind farms, where many wind turbines, cables, transformers, capacitor banks, shunt reactors, etc., are typically located. This methodology introduces the wind farm as a Multi-Input Multi-Output (MIMO) control system, where the linearized models of fast inner control...
A psychological cascade model for persisting voice problems in teachers.
Jong, F.I.C.R.S. de; Cornelis, B.E.; Wuyts, F.L.; Kooijman, P.G.C.; Schutte, H.K.; Oudes, M.J.; Graamans, K.
2003-01-01
In 76 teachers with persisting voice problems, the maintaining factors and coping strategies were examined. Physical, functional, psychological and socioeconomic factors were assessed. A parallel was drawn to a psychological cascade model designed for patients with chronic back pain. The majority of
An examination of the developmental propensity model of conduct problems.
Rhee, Soo Hyun; Friedman, Naomi P; Corley, Robin P; Hewitt, John K; Hink, Laura K; Johnson, Daniel P; Smith Watts, Ashley K; Young, Susan E; Robinson, JoAnn; Waldman, Irwin D; Zahn-Waxler, Carolyn
2016-05-01
The present study tested specific hypotheses advanced by the developmental propensity model of the etiology of conduct problems in the Colorado Longitudinal Twin Study, a prospective, longitudinal, genetically informative sample. High negative emotionality, low behavioral inhibition, low concern and high disregard for others, and low cognitive ability assessed during toddlerhood (age 14 to 36 months) were examined as predictors of conduct problems in later childhood and adolescence (age 4 to 17 years). Each hypothesized antisocial propensity dimension predicted conduct problems, but some predictions may be context specific or due to method covariance. The most robust predictors were observed disregard for others (i.e., responding to others' distress with active, negative responses such as anger and hostility), general cognitive ability, and language ability, which were associated with conduct problems reported by parents, teachers, and adolescents, and change in observed negative emotionality (i.e., frustration tolerance), which was associated with conduct problems reported by teachers and adolescents. Furthermore, associations between the most robust early predictors and later conduct problems were influenced by the shared environment rather than genes. We conclude that shared environmental influences that promote disregard for others and detract from cognitive and language development during toddlerhood also predispose individuals to conduct problems in later childhood and adolescence. The identification of those shared environmental influences common to early antisocial propensity and later conduct problems is an important future direction, and additional developmental behavior genetic studies examining the interaction between children's characteristics and socializing influences on conduct problems are needed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Review of Arc Models in Distribution Networks
Directory of Open Access Journals (Sweden)
Yin Qi
2016-01-01
Full Text Available The incipient fault in underground cables is recognized as an arc fault, so arc model selection is very important for incipient fault detection. Arc features and some typical models are introduced in detail, including traditional thermal-based models, arc models in low voltage, and models of arcs in long free air. Kizilcay's model is recommended for analyzing the incipient fault in underground cables for its accuracy and wide use. Finally, some conclusions are summarized.
A constraint programming model for mixed model type 2 assembly line balancing problem
Directory of Open Access Journals (Sweden)
Hacı Mehmet Alağaş
2016-08-01
Full Text Available This paper presents a new constraint programming model for the mixed-model assembly line balancing problem. The proposed model minimizes the cycle time for a given number of stations. The proposed model is tested on literature problems and its performance is evaluated by comparison to a mathematical model. The best solution obtained and the elapsed CPU time are used as performance criteria. The experimental results show that the proposed constraint programming model performs well and can be used as an alternative modeling technique to solve the problem.
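For intuition, a toy type-2 balancing instance can be solved by brute force in plain Python: assign tasks to ordered stations subject to precedence and minimize the cycle time (the maximum station load). This enumerative sketch only works for tiny invented instances; the paper's point is that constraint programming handles realistic ones:

```python
from itertools import product

def min_cycle_time(times, prec, n_stations):
    """Brute-force type-2 assembly line balancing: try every assignment of
    tasks to ordered stations, keep those respecting precedence, and return
    the smallest achievable cycle time (maximum station workload)."""
    best = float("inf")
    for assign in product(range(n_stations), repeat=len(times)):
        if any(assign[i] > assign[j] for i, j in prec):
            continue  # predecessor i may not sit on a later station than j
        loads = [0.0] * n_stations
        for task, st in enumerate(assign):
            loads[st] += times[task]
        best = min(best, max(loads))
    return best

# 5 tasks in a precedence chain, balanced over 2 stations
cycle = min_cycle_time([3, 4, 2, 5, 1], [(0, 1), (1, 2), (2, 3), (3, 4)], 2)
```

For this chain the optimum splits the tasks as {3, 4} and {2, 5, 1}, giving a cycle time of 8; a CP solver reaches the same optimum without enumerating all assignments.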
Radar meteors range distribution model. I. Theory
Czech Academy of Sciences Publication Activity Database
Pecinová, Drahomíra; Pecina, Petr
2007-01-01
Roč. 37, č. 2 (2007), s. 83-106 ISSN 1335-1842 R&D Projects: GA ČR GA205/03/1405 Institutional research plan: CEZ:AV0Z10030501 Keywords : physics of meteors * radar meteors * range distribution Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics
Directory of Open Access Journals (Sweden)
Eckhard Limpert
Full Text Available BACKGROUND: The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by mean ± SD, or with the standard error of the mean, mean ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. METHODOLOGY/PRINCIPAL FINDINGS: Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-normal) approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of a new sign, ×/, times-divide, and its notation. Analogous to mean ± SD, it connects the multiplicative (or geometric) mean, mean*, and the multiplicative standard deviation, s*, in the form mean* ×/ s*, which is advantageous and recommended. CONCLUSIONS/SIGNIFICANCE: The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life.
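The multiplicative summary mean* ×/ s* described above is easy to compute: take logs, summarize, and exponentiate back. A minimal sketch on simulated log-normal data (roughly 68% of such a sample should fall within one multiplicative SD of the geometric mean):

```python
import math
import random

def geo_stats(xs):
    """Multiplicative summaries of positive data:
    mean* = exp(mean(log x)) (geometric mean), s* = exp(sd(log x))."""
    logs = [math.log(x) for x in xs]
    n = len(logs)
    mu = sum(logs) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in logs) / (n - 1))
    return math.exp(mu), math.exp(sd)

random.seed(1)
data = [random.lognormvariate(2.0, 0.4) for _ in range(5_000)]
gm, s_star = geo_stats(data)
# for log-normal data, roughly 68% lies inside [mean*/s*, mean* x s*]
inside = sum(gm / s_star <= x <= gm * s_star for x in data) / len(data)
```

The interval [mean*/s*, mean* × s*] is asymmetric on the original scale, which is exactly the point of the paper: it respects the skew that mean ± SD cannot.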
DEFF Research Database (Denmark)
Khoshfetrat Pakazad, Sina; Hansson, Anders; Andersen, Martin S.
2017-01-01
In this paper, we propose a distributed algorithm for solving coupled problems with chordal sparsity or an inherent tree structure which relies on primal–dual interior-point methods. We achieve this by distributing the computations at each iteration, using message-passing. In comparison to existing...... distributed algorithms for solving such problems, this algorithm requires far fewer iterations to converge to a solution with high accuracy. Furthermore, it is possible to compute an upper-bound for the number of required iterations which, unlike existing methods, only depends on the coupling structure...... in the problem. We illustrate the performance of our proposed method using a set of numerical examples....
Improving permafrost distribution modelling using feature selection algorithms
Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail
2016-04-01
The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with a high-dimensional dataset is the number of input features (variables) involved. Application of ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the number of factors required and improves knowledge of the adopted features and their relation to the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used indicated which variables appeared less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
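Of the three FS methods compared, Information Gain is the simplest to make concrete. A minimal stdlib sketch on an invented permafrost-style toy set (all feature and label values below are made up for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(feature) = H(labels) - sum over values v of p(v) * H(labels | v)."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        gain -= len(subset) / n * entropy(subset)
    return gain

# invented toy data: 'aspect' separates permafrost perfectly, 'noise' does not
labels = ["pf", "pf", "no", "no", "pf", "no"]
aspect = ["N", "N", "S", "S", "N", "S"]
noise = ["a", "b", "a", "b", "a", "b"]
ig_aspect = information_gain(aspect, labels)  # 1 bit: fully informative
ig_noise = information_gain(noise, labels)    # near 0: irrelevant
```

Ranking predictors by such scores and dropping the low ones is the filter-style pruning the abstract describes; CFS and RF differ mainly in scoring subsets rather than single variables.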
The model of drugs distribution dynamics in biological tissue
Ginevskij, D. A.; Izhevskij, P. V.; Sheino, I. N.
2017-09-01
The dose distribution in Neutron Capture Therapy follows the distribution of 10B in the tissue. Modern models of the pharmacokinetics of drugs describe the processes occurring in conditioned "chambers" (blood-organ-tumor), but fail to describe the spatial distribution of the drug in the tumor and in normal tissue. A mathematical model of the spatial distribution dynamics of drugs in the tissue, depending on the concentration of the drug in the blood, was developed. The modeling method represents the biological structure as a randomly inhomogeneous medium in which the 10B distribution occurs. The parameters of the model that cannot be determined rigorously in experiment are taken as quantities subject to the laws of unconnected random processes. Estimates of the 10B distribution of preparations in the tumor and healthy tissue, inside/outside the cells, are obtained.
Species Distribution modeling as a tool to unravel determinants of palm distribution in Thailand
DEFF Research Database (Denmark)
Tovaranonte, Jantrararuk; Barfod, Anders S.; Balslev, Henrik
2011-01-01
distribution under specific sets of assumptions. In this study we used maximum entropy to map potential distributions of 103 species of palms for which more than 5 herbarium records exist. Palms constitute a keystone plant group from an ecological, economic, and conservation perspective. The models were......As a consequence of the decimation of the forest cover in Thailand from 50% to ca. 20% since the 1950s, it is difficult to gain insight into the drivers behind past, present and future distribution ranges of plant species. Species distribution modeling allows visualization of potential species...
Mathematical modeling of heat transfer problems in the permafrost
Gornov, V. F.; Stepanov, S. P.; Vasilyeva, M. V.; Vasilyev, V. I.
2014-11-01
In this work we present results of numerical simulation of three-dimensional temperature fields in soils for various applied problems: a railway line under permafrost conditions for different geometries, a horizontal underground storage tunnel, and greenhouses of various designs in the Far North. The mathematical model of the process is described by a nonstationary heat equation with phase transitions of pore water. The numerical realization of the problem is based on the finite element method using the scientific computing library FEniCS. For numerical calculations we use high-performance computing systems.
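The abstract's model, nonstationary heat conduction, can be sketched in one dimension with an explicit finite-difference step (omitting the pore-water phase change and using finite differences rather than the paper's 3D finite elements in FEniCS). All material values and boundary temperatures below are invented for illustration:

```python
def heat_step(u, alpha, dx, dt):
    """One explicit finite-difference step of u_t = alpha * u_xx
    with fixed (Dirichlet) boundary temperatures."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme stability limit violated"
    return ([u[0]]
            + [u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
               for i in range(1, len(u) - 1)]
            + [u[-1]])

# toy 1 m soil column: surface held at -10 C, bottom at +2 C, 0 C inside
u = [-10.0] + [0.0] * 19 + [2.0]
for _ in range(2000):  # about 23 days of simulated time
    u = heat_step(u, alpha=1e-6, dx=0.05, dt=1000.0)
```

After enough steps the profile relaxes toward the linear steady state between the two boundary temperatures; adding the latent heat of pore-water freezing is what makes the real permafrost problem nonlinear.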
Degree distribution of a new model for evolving networks
Indian Academy of Sciences (India)
Research Articles Volume 74 Issue 3 March 2010 pp 469-474 ... Evolving networks; degree distribution; Markov chain; scale-free network. ... Based on the concept of Markov chain, we provide the exact solution of the degree distribution of this model and show that the model can generate scale-free evolving network.
Automated Decomposition of Model-based Learning Problems
Williams, Brian C.; Millar, Bill
1996-01-01
A new generation of sensor-rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.
Preliminary 2D numerical modeling of common granular problems
Wyser, Emmanuel; Jaboyedoff, Michel
2017-04-01
Granular studies received an increasing interest during the last decade. Many scientific investigations were successfully addressed to acknowledge the ubiquitous behavior of granular matter. We investigate liquid impacts onto granular beds, i.e. the influence of the packing and compaction-dilation transition. However, a physically-based model is still lacking to address complex microscopic features of granular bed response during liquid impacts such as compaction-dilation transition or granular bed uplifts (Wyser et al. in review). We present our preliminary 2D numerical modeling based on the Discrete Element Method (DEM) using a nonlinear contact force law (the Hertz-Mindlin model) for disk-shaped particles. The algorithm is written in the C programming language. Our 2D model provides an analytical tool to address granular problems such as i) granular collapses and ii) static granular assembly problems. This provides a validation framework of our numerical approach by comparing our numerical results with previous laboratory experiments or numerical works. Inspired by the work of Warnett et al. (2014) and Staron & Hinch (2005), we studied i) the axisymmetric collapse of granular columns. We addressed the scaling between the initial aspect ratio and the final runout distance. Our numerical results are in good agreement with the previous studies of Warnett et al. (2014) and Staron & Hinch (2005). ii) Reproducing static problems for regular and randomly stacked particles provides a valid comparison to results of Egholm (2007). Vertical and horizontal stresses within the assembly are quite identical to stresses obtained by Egholm (2007), thus demonstrating the consistency of our 2D numerical model. Our 2D numerical model is able to reproduce common granular case studies such as granular collapses or static problems. However, a sufficiently small timestep should be used to ensure good numerical consistency, resulting in higher computational time. The latter becomes critical
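The normal (elastic) part of the Hertz-Mindlin contact law used in such DEM codes is compact enough to state directly. This sketch covers only the normal spring, omitting the tangential (Mindlin) part and damping, and the material constants in the example call are invented:

```python
def hertz_normal_force(delta, r1, r2, e1, e2, nu1, nu2):
    """Hertzian normal contact force F = (4/3) * E* * sqrt(R*) * delta^(3/2)
    for two elastic spheres with overlap delta; no contact gives zero force.
    Only the normal spring of the Hertz-Mindlin model; tangential terms omitted."""
    if delta <= 0.0:
        return 0.0
    e_star = 1.0 / ((1 - nu1**2) / e1 + (1 - nu2**2) / e2)  # effective modulus
    r_star = r1 * r2 / (r1 + r2)                            # effective radius
    return (4.0 / 3.0) * e_star * r_star**0.5 * delta**1.5

# nonlinearity check: doubling the overlap multiplies the force by 2^(3/2)
f1 = hertz_normal_force(1e-4, 0.01, 0.01, 70e9, 70e9, 0.3, 0.3)
f2 = hertz_normal_force(2e-4, 0.01, 0.01, 70e9, 70e9, 0.3, 0.3)
```

The delta^(3/2) stiffening is what distinguishes Hertzian contact from the simpler linear-spring contact laws sometimes used in DEM.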
A model for the inverse 1-median problem on trees under uncertain costs
Directory of Open Access Journals (Sweden)
Kien Trung Nguyen
2016-01-01
Full Text Available We consider the problem of adjusting the vertex weights of a tree under uncertain costs so that a prespecified vertex becomes optimal and the total cost is optimal under the uncertainty scenario. We propose a model which delivers information about the optimal cost with respect to each confidence level α ∈ [0,1]. To obtain this goal, we first define an uncertain variable with respect to the minimum cost at each confidence level. If all costs are independently linearly distributed, we present the inverse distribution function of this uncertain variable in O(n² log n) time, where n is the number of vertices in the tree.
Karmeshu; Gupta, Varun; Kadambari, K V
2011-06-01
A single neuronal model incorporating distributed delay (memory) is proposed. The stochastic model has been formulated as a Stochastic Integro-Differential Equation (SIDE) which results in the underlying process being non-Markovian. A detailed analysis of the model when the distributed delay kernel has exponential form (weak delay) has been carried out. The selection of the exponential kernel has enabled the transformation of the non-Markovian model to a Markovian model in an extended state space. For the study of First Passage Time (FPT) with exponential delay kernel, the model has been transformed to a system of coupled Stochastic Differential Equations (SDEs) in two-dimensional state space. Simulation studies of the SDEs provide insight into the effect of the weak delay kernel on the Inter-Spike Interval (ISI) distribution. A measure based on Jensen-Shannon divergence is proposed which can be used to make a choice between two competing models, viz. the distributed delay model vis-à-vis the LIF model. An interesting feature of the model is that the behavior of the coefficient of variation CV(t) of the ISI distribution with respect to the memory kernel time constant parameter η reveals that the neuron can switch from a bursting state to a non-bursting state as the noise intensity parameter changes. The membrane potential exhibits a decaying auto-correlation structure with or without damped oscillatory behavior depending on the choice of parameters. This behavior is in agreement with the empirically observed pattern of spike counts in a fixed time window. The power spectral density derived from the auto-correlation function is found to exhibit single and double peaks. The model is also examined for the case of strong delay with a memory kernel having the form of a Gamma distribution. In contrast to the fast decay of damped oscillations of the ISI distribution for the model with weak delay kernel, the decay of damped oscillations is found to be slower for the model with strong delay kernel.
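The Markovian embedding described above (an exponential kernel turns the SIDE into two coupled SDEs) can be sketched with Euler-Maruyama. Every parameter value below is an arbitrary illustration, not taken from the paper, and the dynamics are a generic leaky integrate-and-fire form with an auxiliary memory state z:

```python
import random

def first_spike_time(eta, kappa, seed=3, dt=0.001, v_th=1.0,
                     i_ext=1.2, tau=1.0, sigma=0.3, t_max=50.0):
    """Euler-Maruyama sketch of a leaky integrate-and-fire neuron with a
    weak (exponential) delay kernel embedded as an extra Markovian state z:
        dV = (-V/tau + kappa*z + i_ext) dt + sigma dW,   dz = eta*(V - z) dt.
    Returns the first-passage time to threshold (one inter-spike interval)."""
    rng = random.Random(seed)
    v = z = 0.0
    t = 0.0
    while t < t_max:
        dw = rng.gauss(0.0, 1.0) * dt**0.5       # Wiener increment
        v += (-v / tau + kappa * z + i_ext) * dt + sigma * dw
        z += eta * (v - z) * dt                   # exponential memory of V
        t += dt
        if v >= v_th:
            return t
    return None  # no spike within the horizon

isi = first_spike_time(eta=2.0, kappa=0.5)
```

Resetting v (and repeating) after each threshold crossing would build up an empirical ISI distribution, which is the object the paper analyzes.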
Optimal model distributions in supervisory adaptive control
Ghosh, D.; Baldi, S.
2017-01-01
Several classes of multi-model adaptive control schemes have been proposed in literature: instead of one single parameter-varying controller, in this adaptive methodology multiple fixed-parameter controllers for different operating regimes (i.e. different models) are utilised. Despite advances in
Robustness of a Distributed Knowledge Management Model
DEFF Research Database (Denmark)
Pedersen, Mogens Kühn; Larsen, Michael Holm
1999-01-01
Knowledge management based on symmetric incentives is rarely found in literature. A knowledge exchange model relies upon a double loop knowledge conversion with symmetric incentives in a network. The model merges specific knowledge with knowledge from other actors into a decision support system...
Tempered stable distributions stochastic models for multiscale processes
Grabchak, Michael
2015-01-01
This brief is concerned with tempered stable distributions and their associated Lévy processes. It is a good text for researchers interested in learning about tempered stable distributions. A tempered stable distribution is one which takes a stable distribution and modifies its tails to make them lighter. The motivation for this class comes from the fact that infinite variance stable distributions appear to provide a good fit to data in a variety of situations, but the extremely heavy tails of these models are not realistic for most real world applications. The idea of using distributions that modify the tails of stable models to make them lighter seems to have originated in the influential paper of Mantegna and Stanley (1994). Since then, these distributions have been extended and generalized in a variety of ways. They have been applied to a wide variety of areas including mathematical finance, biostatistics, computer science, and physics.
Fuzzification of the Distributed Activation Energy Model Using the Fuzzy Weibull Distribution
Directory of Open Access Journals (Sweden)
Alok Dhaundiyal
2018-01-01
Full Text Available This study focuses on the influence of some of the relevant parameters of biomass pyrolysis on a fuzzified solution of the Distributed Activation Energy Model (DAEM) due to randomness and inaccuracy of data. The study investigates the fuzzified Distributed Activation Energy Model using the fuzzy Weibull distribution. The activation energy, frequency factor, and distribution variables of the 3-parameter Weibull analysis are converted into a non-crisp set. The expressions for the fuzzy sets and their α-cuts are discussed, with an initial distribution for the activation energies following the Weibull distribution function. The thermo-analytical data for pine needles are used to illustrate the methodology and to exhibit the fuzziness of some of the parameters relevant to biomass pyrolysis.
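The DAEM expresses the unreacted fraction as an integral of a double-exponential kernel against the activation-energy density; with a 3-parameter Weibull density, a crisp (non-fuzzy) isothermal version can be sketched as below. All parameter values are illustrative assumptions, not fitted pine-needle values, and the real model uses a heating ramp rather than a fixed temperature:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def weibull_pdf(E, shape, scale, loc=0.0):
    """3-parameter Weibull density used for the activation-energy distribution."""
    if E <= loc:
        return 0.0
    z = (E - loc) / scale
    return (shape / scale) * z ** (shape - 1) * math.exp(-(z ** shape))

def daem_unreacted(t, T, k0, shape, scale, loc, n=2000, E_max=4e5):
    """1 - alpha(t): unreacted fraction at time t for isothermal temperature T,
    by trapezoidal quadrature over the activation-energy axis."""
    dE = E_max / n
    total = 0.0
    for i in range(n + 1):
        E = i * dE
        w = 0.5 if i in (0, n) else 1.0
        # double-exponential DAEM kernel: exp(-k0 * t * exp(-E/(R*T)))
        kernel = math.exp(-k0 * t * math.exp(-E / (R * T)))
        total += w * weibull_pdf(E, shape, scale, loc) * kernel * dE
    return total

# Illustrative (invented) pyrolysis parameters
frac_early = daem_unreacted(t=10.0, T=600.0, k0=1e10, shape=2.0, scale=6e4, loc=1.2e5)
frac_late = daem_unreacted(t=600.0, T=600.0, k0=1e10, shape=2.0, scale=6e4, loc=1.2e5)
```

The fuzzification in the paper then replaces the crisp Weibull parameters with fuzzy numbers and propagates their α-cuts through this integral.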
Photovoltaic subsystem marketing and distribution model: programming manual. Final report
Energy Technology Data Exchange (ETDEWEB)
1982-07-01
Complete documentation of the marketing and distribution (M and D) computer model is provided. The purpose of the model is to estimate the costs of selling and transporting photovoltaic solar energy products from the manufacturer to the final customer. The model adjusts for inflation and regional differences in marketing and distribution costs. The model consists of three major components: the marketing submodel, the distribution submodel, and the financial submodel. The computer program is explained, including the input requirements, output reports, subprograms and operating environment. The program specifications discuss maintaining the validity of the data and potential improvements. An example for a photovoltaic concentrator collector demonstrates the application of the model.
Maxent modelling for predicting the potential distribution of Thai Palms
DEFF Research Database (Denmark)
Tovaranonte, Jantrararuk; Barfod, Anders S.; Overgaard, Anne Blach
2011-01-01
Increasingly, species distribution models are being used to address questions related to ecology, biogeography and species conservation on global and regional scales. We used the maximum entropy approach implemented in the MAXENT programme to build a habitat suitability model for Thai palms based...... overprediction of species distribution ranges. The models with the best predictive power were found by calculating the area under the curve (AUC) of the receiver-operating characteristic (ROC). Here, we provide examples of contrasting predicted species distribution ranges as well as a map of modeled palm diversity...
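The AUC of the ROC used above to rank models equals the probability that a randomly chosen presence site receives a higher suitability score than a randomly chosen absence site (the Mann-Whitney identity). A stdlib sketch with hypothetical scores:

```python
def auc_from_scores(presence_scores, absence_scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    P(score at a random presence site > score at a random absence site),
    counting ties as one half."""
    wins = 0.0
    for p in presence_scores:
        for a in absence_scores:
            if p > a:
                wins += 1.0
            elif p == a:
                wins += 0.5
    return wins / (len(presence_scores) * len(absence_scores))

# Hypothetical habitat-suitability scores at known presence/absence localities
auc = auc_from_scores([0.9, 0.8, 0.3], [0.5, 0.2, 0.1])
```

An AUC of 0.5 corresponds to random discrimination and 1.0 to perfect separation, which is why it serves as the model-comparison criterion in the abstract.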
Correlation Structures of Correlated Binomial Models and Implied Default Distribution
Mori, Shintaro; Kitsukawa, Kenji; Hisakado, Masato
2008-11-01
We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution has singular correlation structures, reflecting the credit market implications. We point out two possible origins of the singular behavior.
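Of the models compared, the Beta binomial distribution model is the simplest exchangeable Bernoulli mixture: conditioning the common default probability on a Beta distribution gives a closed-form default distribution. A sketch with illustrative parameters, not the iTraxx-CJ calibration:

```python
import math

def log_beta(a, b):
    """log of the Beta function via lgamma."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binomial_pmf(k, n, a, b):
    """P(K = k defaults out of n names) when the common default probability
    is Beta(a, b)-distributed (exchangeable Bernoulli mixture)."""
    return math.comb(n, k) * math.exp(log_beta(a + k, b + n - k) - log_beta(a, b))

n, a, b = 50, 1.0, 9.0  # mean default probability a/(a+b) = 0.1 (invented)
pmf = [beta_binomial_pmf(k, n, a, b) for k in range(n + 1)]
mean_defaults = sum(k * p for k, p in zip(range(n + 1), pmf))
```

The implied default distributions in the paper are compared against profiles like this one, with the correlation structure read off the mixture.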
Applications of Markov random field models for inversion problems in geosciences
Kuwatani, T.; Nagata, K.; Okada, M.; Toriumi, M.
2012-12-01
Recently, a variety of spatial and temporal data sets have become obtainable thanks to technological advances in measurement and observation in the geosciences. It is very important to invert for spatial or temporal physical variables from these imaging data sets. The Markov random field (MRF) model is a stochastic model using Markov chains that is often applied for image restoration and pattern recognition in information science. In the MRF model, the spatial or temporal variations of physical properties are assumed to be relatively small compared to the observational noise and analytical uncertainty. By the Bayesian approach, the MRF model appropriately filters out high-frequency noise, and we can obtain accurate spatial distributions or time series of physical properties. Furthermore, it has the potential advantage of incorporating prior geophysical and geological information through the evaluation function. The purpose of this study is to develop the MRF model in order to apply it to various inversion problems in geosciences. Based on Bayesian inference, we incorporated the nonlinear generation process of observational data sets into the MRF model. The Markov chain Monte Carlo (MCMC) algorithm was implemented to estimate hyperparameters and optimize target variables. Furthermore, it is important for inversion problems in geosciences to understand the discontinuous behavior of physical variables, for example, the detection of fault planes and lithospheric boundaries in the earth's interior. By introducing Potts spins as latent variables to the MRF model, we can simultaneously estimate the distributions of continuous and discontinuous variables. As examples of applications, we will introduce two inversion problems: one is a pressure-temperature inversion from compositional data of zoned minerals, and the other is an inversion of fluid distributions from observed seismic velocity structure. Based on these examples, we will discuss effectiveness and broad applicability of the
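The smoothing behavior described here, MRF filtering of high-frequency noise under a small-variation prior, can be illustrated in one dimension with a Gaussian MRF solved by coordinate descent. The profile and smoothness weight below are hypothetical:

```python
def mrf_denoise(y, lam=2.0, iters=200):
    """Gaussian MRF smoothing: minimize sum_i (x_i - y_i)^2 + lam * sum_i (x_i - x_{i+1})^2
    by coordinate descent; each update solves its local quadratic exactly."""
    x = list(y)
    n = len(x)
    for _ in range(iters):
        for i in range(n):
            neighbors = []
            if i > 0:
                neighbors.append(x[i - 1])
            if i < n - 1:
                neighbors.append(x[i + 1])
            x[i] = (y[i] + lam * sum(neighbors)) / (1.0 + lam * len(neighbors))
    return x

def roughness(v):
    """Sum of squared first differences: the MRF smoothness penalty."""
    return sum((a - b) ** 2 for a, b in zip(v, v[1:]))

# Hypothetical noisy 1-D profile of a physical property along a transect
y = [1.0, 1.2, 0.8, 1.1, 5.0, 1.0, 0.9, 1.1]
x = mrf_denoise(y)
```

The full method in the abstract replaces this quadratic prior with a hyperparameter-tuned evaluation function, adds Potts spins for discontinuities, and samples with MCMC rather than descending deterministically.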
Alloy design as an inverse problem of cluster expansion models
DEFF Research Database (Denmark)
Larsen, Peter Mahler; Kalidindi, Arvind R.; Schmidt, Søren
2017-01-01
Central to a lattice model of an alloy system is the description of the energy of a given atomic configuration, which can be conveniently developed through a cluster expansion. Given a specific cluster expansion, the ground state of the lattice model at 0 K can be solved by finding the configurat...... the inverse problem in terms of energetically distinct configurations, using a constraint satisfaction model to identify constructible configurations, and show that a convex hull can be used to identify ground states. To demonstrate the approach, we solve for all ground states for a binary alloy in a 2D...
Sholihat, Seli Siti; Murfi, Hendri
2016-01-01
Banks must be able to manage all types of banking risk; one of them is operational risk. Banks manage operational risk by estimating the capital needed to cover it, which is known as the economic capital (EC). The Loss Distribution Approach (LDA) is a popular method to estimate economic capital (EC). This paper proposes the Gaussian Mixture Model (GMM) for severity distribution estimation in the Loss Distribution Approach (LDA). The result of this research is that the EC value of the LDA method using GMM is smaller by 2 % -...
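As a rough illustration of the LDA: the annual loss is a Poisson-frequency compound of severity draws, and EC is read off a high quantile of the simulated aggregate distribution. Here a two-component lognormal mixture stands in for a fitted GMM on log-losses; all parameters are invented, and EC := VaR minus expected loss is just one of several conventions:

```python
import math
import random

random.seed(42)

def poisson(lam):
    """Knuth's algorithm; adequate for small lambda."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def severity():
    """Two-component lognormal mixture: a body and a heavy tail (invented fit)."""
    if random.random() < 0.9:
        return random.lognormvariate(8.0, 1.0)   # body losses
    return random.lognormvariate(11.0, 1.5)      # tail losses

def simulate_annual_loss(freq=20.0):
    return sum(severity() for _ in range(poisson(freq)))

losses = sorted(simulate_annual_loss() for _ in range(20000))
var_999 = losses[int(0.999 * len(losses))]   # 99.9% quantile (VaR)
expected = sum(losses) / len(losses)
ec = var_999 - expected                      # one common EC convention
```

Swapping the severity sampler for draws from a fitted GMM is the substitution the paper studies.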
Everyday ethical problems in dementia care: a teleological model.
Bolmsjö, Ingrid Agren; Edberg, Anna-Karin; Sandman, Lars
2006-07-01
In this article, a teleological model for analysis of everyday ethical situations in dementia care is used to analyse and clarify perennial ethical problems in nursing home care for persons with dementia. This is done with the aim of describing how such a model could be useful in a concrete care context. The model was developed by Sandman and is based on four aspects: the goal; ethical side-constraints to what can be done to realize such a goal; structural constraints; and nurses' ethical competency. The model contains the following main steps: identifying and describing the normative situation; identifying and describing the different possible alternatives; assessing and evaluating the different alternatives; and deciding on, implementing and evaluating the chosen alternative. Three ethically difficult situations from dementia care were used for the application of the model. The model proved useful for the analysis of nurses' everyday ethical dilemmas and will be further explored to evaluate how well it can serve as a tool to identify and handle problems that arise in nursing care.
Irving, J.; Koepke, C.; Elsheikh, A. H.
2017-12-01
Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov chain Monte Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion.
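The projection step described, separating the model-error component of the residual using the K-nearest-neighbour dictionary entries, can be sketched with Gram-Schmidt orthogonalization. The vectors below are hypothetical stand-ins for detailed-minus-approximate run differences:

```python
def subtract_projection(residual, error_samples):
    """Remove from `residual` its component in the span of the model-error
    samples, via Gram-Schmidt orthonormalization of the sample vectors."""
    basis = []
    for v in error_samples:
        w = v[:]
        for b in basis:
            c = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - c * bi for wi, bi in zip(w, b)]
        norm = sum(wi * wi for wi in w) ** 0.5
        if norm > 1e-12:  # skip linearly dependent samples
            basis.append([wi / norm for wi in w])
    out = residual[:]
    for b in basis:
        c = sum(ri * bi for ri, bi in zip(out, b))
        out = [ri - c * bi for ri, bi in zip(out, b)]
    return out

# Hypothetical K=2 nearest-neighbour model-error realizations in data space
errors = [[1.0, 0.0, 0.0], [1.0, 1.0, 0.0]]
residual = [2.0, 3.0, 4.0]
cleaned = subtract_projection(residual, errors)
```

The cleaned residual, with the model-error subspace removed, is what would enter the likelihood in place of the raw residual.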
Directory of Open Access Journals (Sweden)
Lin Zhou
2016-08-01
Full Text Available With the increasing interest in online shopping, Last Mile delivery is regarded as one of the most expensive, most polluting, and yet least efficient stages of the e-commerce supply chain. To address this challenge, a novel location-routing problem with simultaneous home delivery and customer pickup is proposed. This problem aims to build a more effective Last Mile distribution system by providing two kinds of service options when delivering packages to customers. To solve this specific problem, a hybrid evolutionary search algorithm combining a genetic algorithm (GA) and local search (LS) is presented. In this approach, a diverse population generation algorithm along with a two-phase solution initialization heuristic is first proposed to give a high-quality initial population. Then, effective solution representation, individual evaluation, crossover, and mutation operations are designed to enhance the evolution and search efficiency. Computational experiments based on a large family of instances are conducted, and the results obtained indicate the validity of the proposed model and method.
Distributed Prognostics Based on Structural Model Decomposition
National Aeronautics and Space Administration — Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based...
MODEL 9975 SHIPPING PACKAGE FABRICATION PROBLEMS AND SOLUTIONS
Energy Technology Data Exchange (ETDEWEB)
May, C; Allen Smith, A
2008-05-07
The Model 9975 Shipping Package is the latest in a series (9965, 9968, etc.) of radioactive material shipping packages that have been the mainstay for shipping radioactive materials for several years. The double containment vessels are relatively simple designs using pipe and pipe cap in conjunction with the Chalfont closure to provide a leak-tight vessel. The fabrication appears simple in nature, but the history of fabrication tells us there are pitfalls in the different fabrication methods and sequences. This paper will review the problems that have arisen during fabrication and precautions that should be taken to meet specifications and tolerances. The problems and precautions can also be applied to the Models 9977 and 9978 Shipping Packages.
Model Checking Geographically Distributed Interlocking Systems Using UMC
DEFF Research Database (Denmark)
Fantechi, Alessandro; Haxthausen, Anne Elisabeth; Nielsen, Michel Bøje Randahl
2017-01-01
The current trend of distributing computations over a network is here, as a novelty, applied to a safety critical system, namely a railway interlocking system. We show how the challenge of guaranteeing safety of the distributed application has been attacked by formally specifying and model checking...... the relevant distributed protocols. By doing that we obey the safety guidelines of the railway signalling domain, that require formal methods to support the certification of such products. We also show how formal modelling can help designing alternative distributed solutions, while maintaining adherence...
Klaim-DB: A Modeling Language for Distributed Database Applications
DEFF Research Database (Denmark)
Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto
2015-01-01
We present the modelling language, Klaim-DB, for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access and manip......
Problem gambling in adolescents: an examination of the pathways model.
Gupta, Rina; Nower, Lia; Derevensky, Jeffrey L; Blaszczynski, Alex; Faregh, Neda; Temcheff, Caroline
2013-09-01
This research tests the applicability of the Integrated Pathways Model for gambling to adolescent problem gamblers, utilizing a cross-sectional design and self-report questionnaires. Although the overall sample consisted of 1,133 adolescents (Quebec: n = 994, 87.7 %; Ontario: n = 139, 12.3 %; Male = 558, 49.5 %; Female = 569, 50.5 %), only problem gamblers were retained in testing the model (N = 109). Personality and clinical features were assessed using the Millon Adolescent Clinical Inventory, attention deficit hyperactivity disorder (ADHD) using the Conners-Wells' Adolescent Self-Report Scale, and the DSM-IV-MR-J and Gambling Activities Questionnaire were used to determine gambling severity and reasons for gambling. Latent class analysis yielded 5 classes, yet still provided preliminary support for three distinct subgroups similar to those proposed by the Pathways Model, adding a depression-only subtype and a subtype of problem gamblers experiencing both internalizing and externalizing disorders. ADHD symptoms were found to be common to 4 of the 5 classes.
Building a generalized distributed system model
Mukkamala, R.
1993-01-01
The key elements in the 1992-93 period of the project are the following: (1) extensive use of the simulator to implement and test concurrency control algorithms, the interactive user interface, and replica control algorithms; and (2) investigations into the applicability of data and process replication in real-time systems. In the 1993-94 period of the project, we intend to accomplish the following: (1) concentrate on efforts to investigate the effects of data and process replication on hard and soft real-time systems; in particular, we will concentrate on the impact of semantic-based consistency control schemes on a distributed real-time system in terms of improved reliability, improved availability, better resource utilization, and reduced missed task deadlines; and (2) use the prototype to verify the theoretically predicted performance of locking protocols, etc.
Modelling Difficulties and Their Overcoming Strategies in the Solution of a Modelling Problem
Dede, Ayse Tekin
2016-01-01
The purpose of the study is to reveal the elementary mathematics student teachers' difficulties encountered in the solution of a modelling problem, the strategies to overcome those difficulties and whether the strategies worked or not. Nineteen student teachers solved the modelling problem in their four or five-person groups, and the video records…
A Transmission Line Model for the Spherical Beltrami Problem
Papageorgiou, C. D.; Raptis, T. E.
We extend a previously introduced model for finding eigenvalues and eigenfunctions of PDEs with a certain natural symmetry set based on an analysis of an equivalent transmission line circuit. This was previously applied with success in the case of optical fibers [8], [9] as well as in the case of a linear Schrödinger equation [10], [11] and recently in the case of spherical symmetry (Ball Lightning) [12]. We explore the interpretation of eigenvalues as resonances of the corresponding transmission line model. We use the generic Beltrami problem of non-constant eigen-vorticity in spherical coordinates as a test bed and we locate the bound states and the eigen-vorticity functions.
A Contextualized Model of Headquarters-subsidiary Agency Problems
DEFF Research Database (Denmark)
Kostova, Tatiana; Nell, Phillip Christopher; Hoenen, Anne Kristin
This paper proposes an agency model for headquarters-subsidiary relationships in multinational organizations with headquarters as the principal and the subsidiary as the agent. As a departure from classical agency theory, our model is developed for the unit level of analysis and considers two root...... in which the headquarters-subsidiary dyad is embedded. We then discuss several agency scenarios that lead to different manifestations of the agency problem. The framework informs more relevant applications of agency theory in organizational studies and motivates future research....
Business model and problem about the radioactive wastes management
International Nuclear Information System (INIS)
Yoshida, Norimasa; Torii, Hiroyuki
2007-01-01
The PFI (Private Finance Initiative) is a new method to construct, maintain and manage public facilities by using private capital, management skills, and technical abilities. This article describes a business model and related problems in making use of the PFI for the management of low-level radioactive wastes produced at reactors and nuclear fuel facilities of research institutes, universities and others. These service projects could provide public services with higher quality while reducing the business costs to the country and the local authorities. Social impacts, business models and risks of the projects were assessed. (T. Tanaka)
A Distribution Line Model for Lightning Overvoltage Studies
Matsuura, Susumu; Noda, Taku; Asakawa, Akira; Yokoyama, Shigeru
Recently, the focus of lightning protection measures for distribution lines has moved from a nearby lightning stroke to a direct lightning stroke. Studies of direct lightning stroke countermeasures are generally carried out by digital simulations using the EMTP (Electro-Magnetic Transients Program). Thus, components of a distribution line must be modeled appropriately in the EMTP for accurate simulations. The authors have previously clarified the surge response of a distribution line by pulse tests using a reduced-scale distribution line model. In this paper, first, the results of the pulse tests are simulated in the EMTP using a conventional model which represents a distribution pole by a single lossless distributed-parameter line model, and comparisons with the test results show that transient overvoltages generated at the insulators cannot accurately be reproduced by the conventional model. This indicates that a special treatment is required to represent the transient response of a distribution pole and wires. Then, this paper proposes new EMTP models of the pole and wires which can reproduce the transient overvoltages at the insulators. The parameter values of the proposed models can be determined based on a pulse test result.
A model of procedural and distributive fairness
Krawczyk, M.W.
2007-01-01
This paper presents a new model aimed at predicting behavior in games involving a randomized allocation procedure. It is designed to capture the relative importance and interaction between procedural justice (defined crudely in terms of the share of one's expected outcome in the sum of all
3D Temperature Distribution Model Based on Thermal Infrared Image
Directory of Open Access Journals (Sweden)
Tong Jia
2017-01-01
Full Text Available This paper aims to study the construction of a 3D temperature distribution reconstruction system based on binocular vision technology. Initially, a traditional calibration method cannot be directly used, because the thermal infrared camera is sensitive only to temperature. Therefore, the thermal infrared camera is calibrated separately. The belief propagation algorithm is also investigated, and its smoothness model is improved in terms of stereo matching to optimize the mismatching rate. Finally, the 3D temperature distribution model is built based on the matching of the 3D point cloud and 2D thermal infrared information. Experimental results show that the method can accurately construct the 3D temperature distribution model and has strong robustness.
Energy Technology Data Exchange (ETDEWEB)
Marcondes, Eduardo; Goldbarg, Elizabeth; Goldbarg, Marco; Cunha, Thatiana [Universidade Federal do Rio Grande do Norte (UFRN), Natal, RN (Brazil)
2008-07-01
A major problem in refinery production planning is the determination of what should be done in each stage of production over a time horizon. Among such problems, the distribution of oil products through networks of pipelines is a very significant one because of its economic importance. In this work, a problem of distribution of oil through a network of pipelines is modeled. The network studied is a simplification of a real network. There are several restrictions to be met, such as storage limits, transmission and receipt limits, and transport limitations. A bi-objective model is adopted in which one wants to minimize both the fragmentation and the transmission time, given the restrictions of demand and storage capacity. Since the occupancy rate of networks is increasingly high, it is of great importance to optimize their use. In this work, the particle swarm optimization technique is applied to the problem of distribution of oil products through networks of pipelines. (author)
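Particle swarm optimization, as applied above, maintains a swarm of candidate solutions attracted toward their personal and global best positions. A minimal single-objective sketch on a toy cost function; the real problem is bi-objective and combinatorial, so this only shows the mechanics, and all parameter values are conventional defaults:

```python
import random

random.seed(7)

def pso(f, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal single-objective particle swarm optimizer (global-best topology)."""
    xs = [random.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest, pbest_f = xs[:], [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            # inertia + cognitive pull (own best) + social pull (swarm best)
            vs[i] = (w * vs[i]
                     + c1 * random.random() * (pbest[i] - xs[i])
                     + c2 * random.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i], fx
    return gbest, gbest_f

# Toy scalar cost standing in for a weighted fragmentation + transmission time
best_x, best_f = pso(lambda x: (x - 3.0) ** 2 + 1.0, 0.0, 10.0)
```

A multi-objective variant would keep an archive of non-dominated solutions instead of a single global best.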
Finite element model to study calcium distribution in oocytes ...
African Journals Online (AJOL)
A program has been developed in MATLAB 7.10 for the entire problem and executed to obtain numerical results. The numerical results have been used to study the effect of buffers, RyR and VGCC on calcium distribution in oocyte. The results indicate that buffers can significantly decrease the calcium concentration and ...
Distributed Model Predictive Control for Smart Energy Systems
DEFF Research Database (Denmark)
Halvgaard, Rasmus Fogtmann; Vandenberghe, Lieven; Poulsen, Niels Kjølstad
2016-01-01
The total power consumption is controlled through a negotiation procedure between all cooperating units and an aggregator that coordinates the overall objective. For large-scale systems, this method is faster than solving the original problem and can be distributed to include an arbitrary number of units...
Analysis and Comparison of Typical Models within Distribution Network Design
DEFF Research Database (Denmark)
Jørgensen, Hans Jacob; Larsen, Allan; Madsen, Oli B.G.
Efficient and cost effective transportation and logistics plays a vital role in the supply chains of the modern world’s manufacturers. Global distribution of goods is a very complicated matter as it involves many different distinct planning problems. The focus of this presentation is to demonstrate...
Modeling and optimization of an electric power distribution network
African Journals Online (AJOL)
possibilities of configurations that exist in the selection of network facilities. The electric distribution network expansion planning problem (EDNEPP) is to determine the optimal location and capacities of substations, and the optimal feeder route paths which would provide electric power to a given set of load demand nodes at ...
Kovalev, I. V.; Zelenkov, P. V.; Karaseva, M. V.; Tsarev, M. Yu; Tsarev, R. Yu
2015-01-01
The paper considers the problem of analyzing the reliability of distributed computer systems with client-server architecture. A distributed computer system is a set of hardware and software implementing the following main functions: processing, storage, transmission and data protection. This paper discusses the "client-server" architecture of distributed computer systems. The paper presents a scheme of the distributed computer system's functioning, represented as a graph where vertices are the functional states of the system and arcs are transitions from one state to another depending on the prevailing conditions. In the reliability analysis we consider such reliability indicators as the probabilities of the system transitioning into the stopping and accident states, as well as the intensities of these transitions. The proposed model allows us to obtain correlations for the reliability parameters of the distributed computer system without any assumptions about the distribution laws of random variables and the number of elements in the system.
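The graph model described, with transient operating states and absorbing stopping/accident states, is an absorbing Markov chain; the absorption probabilities satisfy the fixed point B = R + QB, where Q holds transient-to-transient and R transient-to-absorbing transition probabilities. A sketch with invented transition probabilities:

```python
def absorption_probabilities(Q, R, tol=1e-12, max_iter=100000):
    """P(absorb in each absorbing state | start in each transient state),
    by iterating the fixed point B <- R + Q B."""
    nt, na = len(Q), len(R[0])
    B = [[0.0] * na for _ in range(nt)]
    for _ in range(max_iter):
        new = [[R[i][j] + sum(Q[i][k] * B[k][j] for k in range(nt))
                for j in range(na)] for i in range(nt)]
        if max(abs(new[i][j] - B[i][j]) for i in range(nt) for j in range(na)) < tol:
            return new
        B = new
    return B

# Transient states: {running, degraded}; absorbing states: {stopped, accident}
Q = [[0.90, 0.05],   # running  -> running / degraded
     [0.20, 0.60]]   # degraded -> running / degraded
R = [[0.04, 0.01],   # running  -> stopped / accident
     [0.15, 0.05]]   # degraded -> stopped / accident
B = absorption_probabilities(Q, R)
```

Each row of B sums to one (the system is eventually absorbed), and B[0] gives the stopping and accident probabilities from the running state.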
Linear Power-Flow Models in Multiphase Distribution Networks: Preprint
Energy Technology Data Exchange (ETDEWEB)
Bernstein, Andrey; Dall' Anese, Emiliano
2017-05-26
This paper considers multiphase unbalanced distribution systems and develops approximate power-flow models where bus-voltages, line-currents, and powers at the point of common coupling are linearly related to the nodal net power injections. The linearization approach is grounded on a fixed-point interpretation of the AC power-flow equations, and it is applicable to distribution systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and, (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. The proposed linear models can facilitate the development of computationally-affordable optimization and control applications -- from advanced distribution management systems settings to online and distributed optimization routines. Performance of the proposed models is evaluated on different test feeders.
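The fixed-point interpretation underlying the linearization can be illustrated in the single-phase case: the load-bus voltage satisfies V = v0 - z * conj(S/V), and the first iterate from the no-load voltage is precisely a model in which voltage depends linearly on the injection. The per-unit values below are illustrative, not from the paper's test feeders:

```python
def fixed_point_voltage(v0, z_line, s_load, iters=50):
    """Single-phase analogue of the fixed-point view of the AC power-flow
    equations: iterate V = v0 - z * conj(S_load / V) from the no-load voltage."""
    V = v0
    for _ in range(iters):
        V = v0 - z_line * (s_load / V).conjugate()
    return V

v0 = 1.0 + 0.0j        # source/slack voltage (p.u.)
z_line = 0.01 + 0.05j  # line impedance (p.u.)
s_load = 0.5 + 0.2j    # load drawing P + jQ (p.u.)

V = fixed_point_voltage(v0, z_line, s_load)
# One-step linearization around the no-load voltage (the linear-model idea)
V_lin = v0 - z_line * (s_load / v0).conjugate()
mismatch = abs(v0 - V - z_line * (s_load / V).conjugate())
```

The multiphase models in the paper generalize this to wye/delta connections and matrix-valued network quantities, but the contraction argument is the same.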
Distributed modelling of shallow landslides triggered by intense rainfall
Directory of Open Access Journals (Sweden)
G. B. Crosta
2003-01-01
Full Text Available Hazard assessment of shallow landslides represents an important aspect of land management in mountainous areas. Among all the methods proposed in the literature, physically based methods are the only ones that explicitly include the dynamic factors that control landslide triggering (rainfall pattern, land-use). For this reason, they allow forecasting both the temporal and the spatial distribution of shallow landslides. Physically based methods for shallow landslides are based on the coupling of the infinite slope stability analysis with hydrological models. Three different grid-based distributed hydrological models are presented in this paper: a steady state model, a transient "piston-flow" wetting front model, and a transient diffusive model. A comparative test of these models was performed to simulate landslides that occurred during a rainfall event (27–28 June 1997) that triggered hundreds of shallow landslides within Lecco province (central Southern Alps, Italy). In order to test the potential for a completely distributed model for rainfall-triggered landslides, radar-detected rainfall intensity has been used. A new procedure for quantitative evaluation of distributed model performance is presented and used in this paper. The diffusive model proves to be the best model for the simulation of shallow landslide triggering after a rainfall event like the one that we have analysed. Finally, the radar data available for the June 1997 event greatly improved the simulation. In particular, the radar data allowed us to explain the non-uniform distribution of landslides within the study area.
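The infinite slope stability analysis coupled to the hydrological models computes, per grid cell, a factor of safety that drops as the water table rises. A sketch with illustrative soil parameters, not those of the Lecco case study:

```python
import math

def factor_of_safety(c, phi_deg, gamma, gamma_w, z, h, theta_deg):
    """Infinite-slope factor of safety with a water table at height h above the
    failure plane at depth z; FS < 1 indicates predicted instability.
    c: cohesion (kPa), phi: friction angle, gamma: unit weights (kN/m^3)."""
    theta = math.radians(theta_deg)
    phi = math.radians(phi_deg)
    resisting = c + (gamma * z - gamma_w * h) * math.cos(theta) ** 2 * math.tan(phi)
    driving = gamma * z * math.sin(theta) * math.cos(theta)
    return resisting / driving

# Illustrative soil: 2 kPa cohesion, 30 deg friction, 1 m depth, 35 deg slope
fs_dry = factor_of_safety(2.0, 30.0, 18.0, 9.81, 1.0, 0.0, 35.0)
fs_wet = factor_of_safety(2.0, 30.0, 18.0, 9.81, 1.0, 1.0, 35.0)
```

The three hydrological models in the paper differ only in how they predict h(t) from the rainfall input; the stability computation per cell is this one.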
Feng, Wenshuai; Shi, Haiyang; Xu, Baoxiang; Ding, Dongfa
2017-10-01
In this paper, factors such as fiber coil winding asymmetry, winding tension, non-ideal fiber type, adhesive glue type, and bonding method in a fiber optic gyroscope can lead fiber coils to have different temperature distributions, and hence thermally induced nonreciprocity errors (Shupe errors). The influence of the fiber coil temperature distribution in different winding states on the fiber optic gyroscope's temperature performance is studied in this paper. A temperature distribution measurement system for fiber coils is established, and coils in different winding states are tested. Compared to the true temperature distribution, the temperature distribution measurement model is relatively accurate. The measurement system can yield a more symmetrical and more uniform winding state of the fiber coil by measuring the temperature distribution. Finally, a contrast experiment on the fiber optic gyroscope is performed; the experimental results agree well with the theory.
DEFF Research Database (Denmark)
Soares, Tiago; Pereira, Fábio; Morais, Hugo
2015-01-01
The high penetration of distributed energy resources (DER) in distribution networks and the competitive environment of electricity markets impose the use of new approaches in several domains. The network cost allocation, traditionally used in transmission networks, should be adapted and used...... in the distribution networks considering the specifications of the connected resources. The main goal is to develop a fairer methodology trying to distribute the distribution network use costs to all players using the network in each period. In this paper, a model considering different types of costs (fixed......, losses, and congestion costs) is proposed comprising the use of a large set of DER, namely distributed generation (DG), demand response (DR) of the direct load control type, energy storage systems (ESS), and electric vehicles with the capability of discharging energy to the network, which is known as vehicle...
A proposed centralised distribution model for the South African automotive component industry
Directory of Open Access Journals (Sweden)
Micheline J. Naude
2009-12-01
Full Text Available Purpose: This article explores the possibility of developing a distribution model, similar to the model developed and implemented by the South African pharmaceutical industry, which could be implemented by automotive component manufacturers for supply to independent retailers. Problem Investigated: The South African automotive components distribution chain is extensive, with a number of players of varying sizes, from the larger spares distribution groups to a number of independent retailers. Distributing to the smaller independent retailers is costly for the automotive component manufacturers. Methodology: This study is based on a preliminary study of an explorative nature. Interviews were conducted with a senior staff member from a leading automotive component manufacturer in KwaZulu-Natal and nine participants at a senior management level at five of their main customers (aftermarket retailers). Findings: The findings from the empirical study suggest that the aftermarket component industry is mature, with the role players well established. The distribution chain to the independent retailer is expensive in terms of transaction and distribution costs for the automotive component manufacturer. A proposed centralised distribution model for supply to independent retailers has been developed which should reduce distribution costs for the automotive component manufacturer in terms of (1) the lowest possible freight rate; (2) timely and controlled delivery; and (3) reduced congestion at the customer's receiving dock. Originality: This research is original in that it explores the possibility of implementing a centralised distribution model for independent retailers in the automotive component industry. Furthermore, there is a dearth of published research on the South African automotive component industry, particularly addressing distribution issues. Conclusion: The distribution model as suggested is a practical one and should deliver added value to automotive
A Parallel Computational Model for Multichannel Phase Unwrapping Problem
Imperatore, Pasquale; Pepe, Antonio; Lanari, Riccardo
2015-05-01
In this paper, a parallel model for the solution of the computationally intensive multichannel phase unwrapping (MCh-PhU) problem is proposed. Firstly, the Extended Minimum Cost Flow (EMCF) algorithm for solving the MCh-PhU problem is revised within the rigorous mathematical framework of discrete calculus, thus permitting us to capture its topological structure in terms of meaningful discrete differential operators. Secondly, emphasis is placed on those methodological and practical aspects which lead to a parallel reformulation of the EMCF algorithm. Thus, a novel dual-level parallel computational model, in which the parallelism is hierarchically implemented at two different (i.e., process and thread) levels, is presented. The validity of our approach has been demonstrated through a series of experiments that have revealed a significant speedup. Therefore, the attained high-performance prototype is suitable for the solution of large-scale phase unwrapping problems in reasonable time frames, with a significant impact on the systematic exploitation of the existing, and rapidly growing, large archives of SAR data.
The effects of model and data complexity on predictions from species distributions models
DEFF Research Database (Denmark)
García-Callejas, David; Bastos, Miguel
2016-01-01
by their geometrical properties. Tests involved analysis of models' ability to predict virtual species distributions in the same region and the same time as used for training the models, and to project distributions in different times under climate change. Of the eight species distribution models analyzed five (Random...
Modeling of unified power quality conditioner (UPQC) in distribution systems load flow
International Nuclear Information System (INIS)
Hosseini, M.; Shayanfar, H.A.; Fotuhi-Firuzabad, M.
2009-01-01
This paper presents modeling of unified power quality conditioner (UPQC) in load flow calculations for steady-state voltage compensation. An accurate model for this device is derived to use in load flow calculations. The rating of this device as well as direction of reactive power injection required to compensate voltage to the desired value (1 p.u.) is derived and discussed analytically and mathematically using the phasor diagram method. Since performance of the compensator varies when it reaches its maximum capacity, modeling of UPQC in its maximum rating of reactive power injection is derived. The validity of the proposed model is examined using two standard distribution systems consisting of 33 and 69 nodes, respectively. The best location of UPQC for under voltage problem mitigation in the distribution network is determined. The results show the validity of the proposed model for UPQC in large distribution systems.
Modeling of unified power quality conditioner (UPQC) in distribution systems load flow
Energy Technology Data Exchange (ETDEWEB)
Hosseini, M.; Shayanfar, H.A. [Center of Excellence for Power System Automation and Operation, Department of Electrical Engineering, Iran University of Science and Technology, Tehran (Iran); Fotuhi-Firuzabad, M. [Department of Electrical Engineering, Sharif University of Technology, Tehran (Iran)
2009-06-15
This paper presents modeling of unified power quality conditioner (UPQC) in load flow calculations for steady-state voltage compensation. An accurate model for this device is derived to use in load flow calculations. The rating of this device as well as direction of reactive power injection required to compensate voltage to the desired value (1 p.u.) is derived and discussed analytically and mathematically using the phasor diagram method. Since performance of the compensator varies when it reaches its maximum capacity, modeling of UPQC in its maximum rating of reactive power injection is derived. The validity of the proposed model is examined using two standard distribution systems consisting of 33 and 69 nodes, respectively. The best location of UPQC for under voltage problem mitigation in the distribution network is determined. The results show the validity of the proposed model for UPQC in large distribution systems. (author)
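The direction and size of the required reactive injection follow from the phasor relations the paper derives; a much-simplified sketch using the common linearized ΔV ≈ X·ΔQ/V approximation (an illustration only, not the paper's exact model, and all numeric values are assumed):

```python
def shunt_q_for_voltage(v_node, v_target, x_line):
    """Estimate the shunt reactive power injection (p.u.) needed to raise
    a node voltage to v_target, using the common linearized relation
    dV ~= X * dQ / V. Positive result means capacitive (injected) vars."""
    dv = v_target - v_node
    return dv * v_node / x_line

# Hypothetical feeder: node sagging to 0.95 p.u. behind 0.25 p.u. reactance.
q = shunt_q_for_voltage(v_node=0.95, v_target=1.0, x_line=0.25)
```

The sign of `q` gives the direction of reactive power injection; in a full load flow the injection would be capped at the UPQC's maximum rating, as the paper discusses.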
Comparing the performance of species distribution models of
Valle, M.; van Katwijk, M.M.; de Jong, D.J.; Bouma, T.; Schipper, A.M.; Chust, G.; Benito, B.M.; Garmendia, J.M.; Borja, A.
2013-01-01
Intertidal seagrasses show high variability in their extent and location, with local extinctions and (re-)colonizations being inherent in their population dynamics. Suitable habitats are identified usually using Species Distribution Models (SDM), based upon the overall distribution of the species;
Development of a distributed air pollutant dry deposition modeling framework
Satoshi Hirabayashi; Charles N. Kroll; David J. Nowak
2012-01-01
A distributed air pollutant dry deposition modeling system was developed with a geographic information system (GIS) to enhance the functionality of i-Tree Eco (i-Tree, 2011). With the developed system, temperature, leaf area index (LAI) and air pollutant concentration in a spatially distributed form can be estimated, and based on these and other input variables, dry...
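The core dry-deposition calculation multiplies a resistance-based deposition velocity by the pollutant concentration. A minimal sketch of that step, with illustrative resistance and concentration values (not i-Tree's actual parameterization):

```python
def deposition_velocity(r_a, r_b, r_c):
    """Big-leaf resistance analogy: aerodynamic, quasi-laminar boundary
    layer, and canopy resistances (s/m) act in series."""
    return 1.0 / (r_a + r_b + r_c)

def dry_deposition_flux(conc_ug_m3, r_a, r_b, r_c):
    """Flux (ug m^-2 s^-1) = deposition velocity (m/s) * concentration."""
    return deposition_velocity(r_a, r_b, r_c) * conc_ug_m3

# Hypothetical grid cell: 40 ug/m^3 ozone over a moderately rough canopy.
flux = dry_deposition_flux(conc_ug_m3=40.0, r_a=20.0, r_b=30.0, r_c=150.0)
```

In a distributed system this calculation is repeated per grid cell, with LAI and temperature feeding the resistance terms.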
Prior distributions for item parameters in IRT models
Matteucci, M.; S. Mignani, Prof.; Veldkamp, Bernard P.
2012-01-01
The focus of this article is on the choice of suitable prior distributions for item parameters within item response theory (IRT) models. In particular, the use of empirical prior distributions for item parameters is proposed. Firstly, regression trees are implemented in order to build informative
Improved mathematical models for particle-size distribution data ...
African Journals Online (AJOL)
Prior studies have suggested that particle-size distribution data of soils is central and helpful in this regard. This study proposes two improved mathematical models to describe and represent the varied particle-size distribution (PSD) data for tropically weathered residual (TWR) soils. The theoretical analysis and the ...
Modelling aspects of distributed processing in telecommunication networks
Tomasgard, A; Audestad, JA; Dye, S; Stougie, L; van der Vlerk, MH; Wallace, SW
1998-01-01
The purpose of this paper is to formally describe new optimization models for telecommunication networks with distributed processing. Modern distributed networks put more focus on the processing of information and less on the actual transportation of data than we are traditionally used to in
Modified Normal Demand Distributions in (R,S)-Inventory Models
Strijbosch, L.W.G.; Moors, J.J.A.
2003-01-01
To model demand, the normal distribution is by far the most popular; the disadvantage that it takes negative values is taken for granted. This paper proposes two modifications of the normal distribution, both taking non-negative values only. Safety factors and order-up-to-levels for the familiar (R,
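The motivation is easy to see in code: keep demand non-negative while retaining normal-like behaviour. A sketch pairing a zero-truncated normal demand draw (one simple modification; the paper proposes its own two) with the familiar (R,S) order-up-to level; all parameter values are invented:

```python
import math
import random

def truncated_normal(mu, sigma, rng):
    """Draw a non-negative demand by rejection sampling from a normal
    truncated at zero -- one simple way to avoid negative demand."""
    while True:
        x = rng.gauss(mu, sigma)
        if x >= 0.0:
            return x

def order_up_to_level(mu, sigma, review, lead, safety_factor):
    """Classic (R,S) order-up-to level: mean demand over the risk
    period R+L plus a safety stock scaled by the safety factor."""
    horizon = review + lead
    return mu * horizon + safety_factor * sigma * math.sqrt(horizon)

rng = random.Random(1)
demands = [truncated_normal(100.0, 30.0, rng) for _ in range(1000)]
S = order_up_to_level(100.0, 30.0, review=1.0, lead=2.0, safety_factor=1.64)
```

Note that truncation shifts the mean upward, which is exactly why the paper re-derives safety factors for its modified distributions rather than reusing normal-theory values.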
Modeling and analysis of solar distributed generation
Ortiz Rivera, Eduardo Ivan
Recent changes in the global economy are creating a big impact in our daily life. The price of oil is increasing and the number of reserves is decreasing every day. Also, dramatic demographic changes are impacting the viability of the electric infrastructure and ultimately the economic future of the industry. These are some of the reasons that many countries are looking for alternative energy to produce electric energy. The most common form of green energy in our daily life is solar energy. Converting solar energy into electrical energy requires solar panels, dc-dc converters, power control, sensors, and inverters. In this work, a photovoltaic module (PVM) model using the electrical characteristics provided by the manufacturer data sheet is presented for power system applications. Experimental results from testing are shown, verifying the proposed PVM model. Also in this work, three maximum power point tracker (MPPT) algorithms are presented to obtain the maximum power from a PVM. The first MPPT algorithm is a method based on Rolle's and Lagrange's theorems and can provide at least an approximate answer to a family of transcendental functions that cannot be solved using differential calculus. The second MPPT algorithm is based on the approximation of the proposed PVM model using fractional polynomials where the shape, boundary conditions and performance of the proposed PVM model are satisfied. The third MPPT algorithm is based on the determination of the optimal duty cycle for a dc-dc converter and previous knowledge of the load or load-matching conditions. Also, four algorithms to calculate the effective irradiance level and temperature over a photovoltaic module are presented in this work. The main reasons to develop these algorithms are monitoring climate conditions, eliminating temperature and solar irradiance sensors, reducing the cost of a photovoltaic inverter system, and developing new algorithms to be integrated with maximum
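MPPT hill-climbing is easy to illustrate. Below is a minimal perturb-and-observe tracker (a standard MPPT technique, not one of the three algorithms proposed in this work) running against a simple exponential I-V model with made-up datasheet-style values:

```python
import math

def pv_current(v, i_sc=5.0, v_oc=21.0, b=0.08):
    """Simple exponential PV module I-V model driven by datasheet-style
    values (short-circuit current, open-circuit voltage, shape constant
    b). All parameter values here are illustrative."""
    return i_sc * (1.0 - math.exp((v - v_oc) / (b * v_oc)))

def perturb_and_observe(v0=10.0, dv=0.05, steps=2000):
    """Hill-climbing MPPT: nudge the operating voltage, keep the
    perturbation direction while power rises, reverse when it falls."""
    v, step = v0, dv
    p_prev = v * pv_current(v)
    for _ in range(steps):
        v += step
        p = v * pv_current(v)
        if p < p_prev:
            step = -step  # overshot the peak: reverse direction
        p_prev = p
    return v, p_prev

v_mp, p_mp = perturb_and_observe()
```

The tracker settles into a small oscillation around the maximum power point; algorithms like the duty-cycle/load-matching approach described above aim to reach the same operating point without this dithering.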
Distributed Generation Market Demand Model (dGen): Documentation
Energy Technology Data Exchange (ETDEWEB)
Sigrin, Benjamin [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gleason, Michael [National Renewable Energy Lab. (NREL), Golden, CO (United States); Preus, Robert [National Renewable Energy Lab. (NREL), Golden, CO (United States); Baring-Gould, Ian [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, Robert [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2016-02-01
The Distributed Generation Market Demand model (dGen) is a geospatially rich, bottom-up, market-penetration model that simulates the potential adoption of distributed energy resources (DERs) for residential, commercial, and industrial entities in the continental United States through 2050. The National Renewable Energy Laboratory (NREL) developed dGen to analyze the key factors that will affect future market demand for distributed solar, wind, storage, and other DER technologies in the United States. The new model builds off, extends, and replaces NREL's SolarDS model (Denholm et al. 2009a), which simulates the market penetration of distributed PV only. Unlike the SolarDS model, dGen can model various DER technologies under one platform--it currently can simulate the adoption of distributed solar (the dSolar module) and distributed wind (the dWind module) and link with the ReEDS capacity expansion model (Appendix C). The underlying algorithms and datasets in dGen, which improve the representation of customer decision making as well as the spatial resolution of analyses (Figure ES-1), also are improvements over SolarDS.
Scale Problems in Geometric-Kinematic Modelling of Geological Objects
Siehl, Agemar; Thomsen, Andreas
To reveal, to render and to handle complex geological objects and their history of structural development, appropriate geometric models have to be designed. Geological maps, sections, sketches of strain and stress patterns are such well-known analogous two-dimensional models. Normally, the set of observations and measurements supporting them is small in relation to the complexity of the real objects they derive from. Therefore, modelling needs guidance by additional expert knowledge to bridge empty spaces which are not supported by data. Generating digital models of geological objects has some substantial advantages compared to conventional methods, especially if they are supported by an efficient database management system. Consistent 3D models of some complexity can be created, and experiments with time-dependent geological geometries may help to restore coherent sequences of paleogeological states. In order to cope with the problems arising from the combined usage of 3D-geometry models of different scale and resolution within an information system on subsurface geology, geometrical objects need to be annotated with information on the context, within which the geometry model has been established and within which it is valid, and methods supporting storage and retrieval as well as manipulation of geometry at different scales must also take into account and handle such context information to achieve meaningful results. An example is given of a detailed structural study of an open pit lignite mine in the Lower Rhine Basin.
Modeling of problems of projection: A non-countercyclic approach
Directory of Open Access Journals (Sweden)
Jason Ginsburg
2016-06-01
Full Text Available This paper describes a computational implementation of the recent Problems of Projection (POP approach to the study of language (Chomsky 2013; 2015). While adopting the basic proposals of POP, notably with respect to how labeling occurs, we (a) attempt to formalize the basic proposals of POP, and (b) develop new proposals that overcome some problems with POP that arise with respect to cyclicity, labeling, and wh-movement operations. We show how this approach accounts for simple declarative sentences, ECM constructions, and constructions that involve long-distance movement of a wh-phrase (including the that-trace effect. We implemented these proposals with a computer model that automatically constructs step-by-step derivations of target sentences, thus making it possible to verify that these proposals work.
Quantization of second-order Lagrangians: Model problem
Moore, R. A.; Scott, T. C.
1991-08-01
Many aspects of a model problem, the Lagrangian of which contains a term depending quadratically on the acceleration, are examined in the regime where the classical solution consists of two independent normal modes. It is shown that the techniques of conversion to a problem of Lagrange, generalized mechanics, and Dirac's method for constrained systems all yield the same canonical form for the Hamiltonian. It is also seen that the resultant canonical equations of motion are equivalent to the Euler-Lagrange equations. In canonical form, all of the standard results apply, quantization follows in the usual way, and the interpretation of the results is straightforward. It is also demonstrated that perturbative methods fail, both classically and quantum mechanically, indicating the need for the nonperturbative techniques applied herein. Finally, it is noted that this result may have fundamental implications for certain relativistic theories.
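For a Lagrangian with a quadratic acceleration term, the canonical form reached by generalized mechanics is, in outline, the standard Ostrogradsky construction, sketched here for a generic L(q, q̇, q̈) (the paper arrives at an equivalent canonical form by several routes):

```latex
% Ostrogradsky variables for a second-order Lagrangian L(q, \dot q, \ddot q)
Q_1 = q, \qquad Q_2 = \dot q, \qquad
P_1 = \frac{\partial L}{\partial \dot q}
      - \frac{d}{dt}\frac{\partial L}{\partial \ddot q}, \qquad
P_2 = \frac{\partial L}{\partial \ddot q}
% Hamiltonian, with \ddot q expressed through (Q_1, Q_2, P_2):
H = P_1 Q_2 + P_2\, \ddot q(Q_1, Q_2, P_2) - L
```

With these variables the canonical equations reproduce the Euler-Lagrange equations, consistent with the equivalence the abstract reports.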
Fractional and multivariable calculus model building and optimization problems
Mathai, A M
2017-01-01
This textbook presents a rigorous approach to multivariable calculus in the context of model building and optimization problems. This comprehensive overview is based on lectures given at five SERC Schools from 2008 to 2012 and covers a broad range of topics that will enable readers to understand and create deterministic and nondeterministic models. Researchers, advanced undergraduate, and graduate students in mathematics, statistics, physics, engineering, and biological sciences will find this book to be a valuable resource for finding appropriate models to describe real-life situations. The first chapter begins with an introduction to fractional calculus moving on to discuss fractional integrals, fractional derivatives, fractional differential equations and their solutions. Multivariable calculus is covered in the second chapter and introduces the fundamentals of multivariable calculus (multivariable functions, limits and continuity, differentiability, directional derivatives and expansions of multivariable ...
Modeling of Drift Effects on Solar Tower Concentrated Flux Distributions
Directory of Open Access Journals (Sweden)
Luis O. Lara-Cerecedo
2016-01-01
Full Text Available A novel modeling tool for calculation of central receiver concentrated flux distributions is presented, which takes into account drift effects. This tool is based on a drift model that includes different geometrical error sources in a rigorous manner and on a simple analytic approximation for the individual flux distribution of a heliostat. The model is applied to a group of heliostats of a real field to obtain the resulting flux distribution and its variation along the day. The distributions differ strongly from those obtained assuming the ideal case without drift or a case with a Gaussian tracking error function. The time evolution of peak flux is also calculated to demonstrate the capabilities of the model. The evolution of this parameter also shows strong differences in comparison to the case without drift.
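The superposition step of such a tool can be sketched directly: each heliostat contributes an analytic Gaussian image centred on its aim point, and drift enters as an offset of that aim point. All geometry and power values below are invented for illustration:

```python
import math

def gaussian_flux(x, y, aim_x, aim_y, power=1000.0, sigma=0.15):
    """Analytic single-heliostat image: a circular Gaussian (m) centred
    on the aim point, normalized to the given total power (W)."""
    r2 = (x - aim_x) ** 2 + (y - aim_y) ** 2
    return power / (2.0 * math.pi * sigma ** 2) * math.exp(-r2 / (2.0 * sigma ** 2))

def receiver_flux(x, y, aims):
    """Superpose the individual heliostat images on the receiver plane;
    drift shows up as a time-dependent offset of each aim point."""
    return sum(gaussian_flux(x, y, ax, ay) for ax, ay in aims)

# Four heliostats: ideal aiming vs. drifted aim points (metres).
aims_ideal = [(0.0, 0.0)] * 4
aims_drift = [(0.05, 0.02), (-0.08, 0.04), (0.03, -0.06), (0.10, 0.07)]
peak_ideal = receiver_flux(0.0, 0.0, aims_ideal)
peak_drift = receiver_flux(0.0, 0.0, aims_drift)
```

Even these small offsets lower and spread the combined peak, which is the drift effect on peak flux the article tracks through the day.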
Modelling distributed energy resources in energy service networks
Acha, Salvador
2013-01-01
Focuses on modelling two key infrastructures (natural gas and electrical) in urban energy systems with embedded technologies (cogeneration and electric vehicles) to optimise the operation of natural gas and electrical infrastructures under the presence of distributed energy resources
Directory of Open Access Journals (Sweden)
Frieda Beauregard
Full Text Available Both climatic and edaphic conditions determine plant distribution, however many species distribution models do not include edaphic variables especially over large geographical extent. Using an exceptional database of vegetation plots (n = 4839 covering an extent of ∼55,000 km2, we tested whether the inclusion of fine scale edaphic variables would improve model predictions of plant distribution compared to models using only climate predictors. We also tested how well these edaphic variables could predict distribution on their own, to evaluate the assumption that at large extents, distribution is governed largely by climate. We also hypothesized that the relative contribution of edaphic and climatic data would vary among species depending on their growth forms and biogeographical attributes within the study area. We modelled 128 native plant species from diverse taxa using four statistical model types and three sets of abiotic predictors: climate, edaphic, and edaphic-climate. Model predictive accuracy and variable importance were compared among these models and for species' characteristics describing growth form, range boundaries within the study area, and prevalence. For many species both the climate-only and edaphic-only models performed well, however the edaphic-climate models generally performed best. The three sets of predictors differed in the spatial information provided about habitat suitability, with climate models able to distinguish range edges, but edaphic models able to better distinguish within-range variation. Model predictive accuracy was generally lower for species without a range boundary within the study area and for common species, but these effects were buffered by including both edaphic and climatic predictors. The relative importance of edaphic and climatic variables varied with growth forms, with trees being more related to climate whereas lower growth forms were more related to edaphic conditions. Our study
Directory of Open Access Journals (Sweden)
Aleksandr G. Podvesovskii
2017-12-01
Full Text Available The article deals with an approach for modeling and software support of the distribution of students between graduate supervisors at a large graduate department. The approach is based on the stable matching problem and the Gale-Shapley deferred acceptance algorithm, and takes into account both students' and supervisors' preferences. The formalized description of the distribution model is given, and the results of its practical verification are described. The advantages and disadvantages of the proposed approach are discussed, and the problem of preference manipulation by graduate supervisors is examined. The architecture of the distribution support software system is presented, and some features of its implementation as a Web-service within the complex information system of the graduate department are described.
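The deferred acceptance mechanism the article builds on can be sketched directly. A minimal student-proposing Gale-Shapley with supervisor capacities (the hospitals/residents variant); all names, preference lists, and capacities are invented for illustration:

```python
def gale_shapley(student_prefs, sup_prefs, capacity):
    """Student-proposing deferred acceptance with capacities: each free
    student proposes down their list; a supervisor tentatively holds the
    best students seen so far and bumps the least preferred when full."""
    rank = {s: {st: i for i, st in enumerate(p)} for s, p in sup_prefs.items()}
    free = list(student_prefs)
    nxt = {st: 0 for st in student_prefs}      # next list position to try
    held = {s: [] for s in sup_prefs}
    while free:
        st = free.pop()
        if nxt[st] >= len(student_prefs[st]):
            continue                           # list exhausted: unmatched
        s = student_prefs[st][nxt[st]]
        nxt[st] += 1
        held[s].append(st)
        held[s].sort(key=lambda x: rank[s][x])
        if len(held[s]) > capacity[s]:
            free.append(held[s].pop())         # bump least-preferred student
    return held

students = {"ana": ["dr_x", "dr_y"], "bob": ["dr_x", "dr_y"],
            "eve": ["dr_y", "dr_x"]}
supervisors = {"dr_x": ["bob", "ana", "eve"], "dr_y": ["ana", "eve", "bob"]}
match = gale_shapley(students, supervisors, {"dr_x": 1, "dr_y": 2})
```

The student-proposing variant is student-optimal, which is also why supervisor-side preference manipulation, the issue the article examines, can pay off.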
Five (or so) challenges for species distribution modelling
DEFF Research Database (Denmark)
Bastos Araujo, Miguel; Guisan, Antoine
2006-01-01
Species distribution modelling is central to both fundamental and applied research in biogeography. Despite widespread use of models, there are still important conceptual ambiguities as well as biotic and algorithmic uncertainties that need to be investigated in order to increase confidence in mo...... contribution; and (5) improved model evaluation. The challenges discussed in this essay do not preclude the need for developments of other areas of research in this field. However, they are critical for allowing the science of species distribution modelling to move forward.......Species distribution modelling is central to both fundamental and applied research in biogeography. Despite widespread use of models, there are still important conceptual ambiguities as well as biotic and algorithmic uncertainties that need to be investigated in order to increase confidence...
Statistical modelling of a new global potential vegetation distribution
Levavasseur, G.; Vrac, M.; Roche, D. M.; Paillard, D.
2012-12-01
The potential natural vegetation (PNV) distribution is required for several studies in environmental sciences. Most of the available databases are quite subjective or depend on vegetation models. We have built a new high-resolution world-wide PNV map using an objective statistical methodology based on multinomial logistic models. Our method appears as a fast and robust alternative in vegetation modelling, independent of any vegetation model. In comparison with other databases, our method provides a realistic PNV distribution in agreement with BIOME 6000 data. Among several advantages, the use of probabilities allows us to estimate the uncertainty, bringing some confidence in the modelled PNV, or to highlight the regions needing some data to improve the PNV modelling. Despite our PNV map being highly dependent on the distribution of data points, it is easily updatable as soon as additional data are available and provides very useful additional information for further applications.
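The core of a multinomial logistic model is a per-class linear score passed through a softmax, which is what yields the per-class probabilities behind the uncertainty estimates mentioned above. A minimal sketch with two climate predictors; the classes and coefficients are invented, not fitted values:

```python
import math

def softmax(scores):
    """Turn per-class linear scores into probabilities that sum to 1."""
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_vegetation(climate, coefs):
    """Multinomial logistic prediction: one (intercept, coefficients)
    pair per vegetation class, scored against the climate predictors."""
    scores = [b0 + sum(b * x for b, x in zip(bs, climate)) for b0, bs in coefs]
    return softmax(scores)

# Hypothetical classes: (intercept, [temperature, precipitation] coeffs).
coefs = [(-1.0, [0.30, 0.001]),    # "forest"
         (0.5, [-0.10, 0.0005]),   # "grassland"
         (0.0, [-0.25, -0.002])]   # "tundra"
probs = predict_vegetation([15.0, 800.0], coefs)  # 15 degC, 800 mm/yr
```

A cell whose probabilities are spread across classes is exactly the kind of region the authors flag as needing more data.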
Acoustic field distribution of sawtooth wave with nonlinear SBE model
Energy Technology Data Exchange (ETDEWEB)
Liu, Xiaozhou, E-mail: xzliu@nju.edu.cn; Zhang, Lue; Wang, Xiangda; Gong, Xiufen [Key Laboratory of Modern Acoustics, Ministry of Education, Institute of Acoustics, Nanjing University, Nanjing 210093 (China)
2015-10-28
For precise prediction of the acoustic field distribution of extracorporeal shock wave lithotripsy with an ellipsoid transducer, the nonlinear spheroidal beam equations (SBE) are employed to model acoustic wave propagation in medium. To solve the SBE model with frequency domain algorithm, boundary conditions are obtained for monochromatic and sawtooth waves based on the phase compensation. In numerical analysis, the influence of sinusoidal wave and sawtooth wave on axial pressure distributions are investigated.
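The sawtooth boundary condition for a frequency-domain scheme amounts to prescribing the Fourier coefficients of the sawtooth. A sketch of the resulting partial sum for a unit-amplitude sawtooth (the paper's phase-compensated boundary conditions are more involved than this):

```python
import math

def sawtooth_partial_sum(t, omega=1.0, n_harm=50):
    """Truncated Fourier series of a unit sawtooth:
    s(t) = (2/pi) * sum_{n=1}^{N} (-1)^(n+1) * sin(n*omega*t) / n,
    which converges to t/pi on -pi < t < pi."""
    return (2.0 / math.pi) * sum(
        (-1) ** (n + 1) * math.sin(n * omega * t) / n
        for n in range(1, n_harm + 1))

# Away from the discontinuity the 50-harmonic sum is close to the
# exact value t/pi; at t = pi/2 that exact value is 0.5.
approx = sawtooth_partial_sum(math.pi / 2)
```

Each harmonic of this expansion then propagates through the nonlinear SBE model in the frequency domain.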
Challenges and perspectives for species distribution modelling in the neotropics
Kamino, Luciana H. Y.; Stehmann, João Renato; Amaral, Silvana; De Marco, Paulo; Rangel, Thiago F.; de Siqueira, Marinez F.; De Giovanni, Renato; Hortal, Joaquín
2011-01-01
The workshop ‘Species distribution models: applications, challenges and perspectives’ held at Belo Horizonte (Brazil), 29–30 August 2011, aimed to review the state-of-the-art in species distribution modelling (SDM) in the neotropical realm. It brought together researchers in ecology, evolution, biogeography and conservation, with different backgrounds and research interests. The application of SDM in the megadiverse neotropics—where data on species occurrences are scarce—presents several chal...
The Lunar Internal Structure Model: Problems and Solutions
Nefedyev, Yuri; Gusev, Alexander; Petrova, Natalia; Varaksina, Natalia
decomposition of gravitational field of the Moon of members up to 165th order with a high degree of accuracy. Judging from the given data, the distinctive feature of the Moon’s gravitational field is that harmonics of the third and even the fourth order are comparable with harmonics of the second order, except for member J2. General conclusion: according to recent data, the true figure of the Moon is much more complex than a three-axis ellipsoid. Gravitational field and dynamic figure of the multilayered Moon: One of the main goals of selenodesy is the study of a dynamic figure of the Moon which determines distribution of the mass within the Moon’s body. A dynamic figure is shaped by the inertia ellipsoid set by values of resultant moments of inertia of the Moon A, B, C and their orientation in space. Selenoid satellites (SS) open new and most perspective opportunities in the study of gravitational field and the Moon’s figure. SSs “Moon 10”, “Apollo”, “Clementine”, “Lunar Prospector” trajectory tracking data processing has allowed for identification of coefficients in decomposition of gravitational field of the Moon of members up to 165th order with a high degree of accuracy. Judging from the given data, the distinctive feature of the Moon’s gravitational field is that harmonics of the third and even the fourth order are comparable with harmonics of the second order. Difference from zero of c-coefficients proves asymmetry of gravitational fields on the visible and invisible sides of the Moon. As a first attempt at solving the problem, the report presents the survey of internal structure of the Moon, tabulated values of geophysical parameters and geophysical profile of the Moon, including liquid lunar core, analytical solution of Clairaut’s equation for the two-layer model of the Moon; mathematical and bifurcational analysis of solution based on physically justified task options; original debugged software in VBA programming language for computer
A Complex Network Approach to Distributional Semantic Models.
Directory of Open Access Journals (Sweden)
Akira Utsumi
Full Text Available A number of studies on network analysis have focused on language networks based on free word association, which reflects human lexical knowledge, and have demonstrated the small-world and scale-free properties in the word association network. Nevertheless, there have been very few attempts at applying network analysis to distributional semantic models, despite the fact that these models have been studied extensively as computational or cognitive models of human lexical knowledge. In this paper, we analyze three network properties, namely, small-world, scale-free, and hierarchical properties, of semantic networks created by distributional semantic models. We demonstrate that the created networks generally exhibit the same properties as word association networks. In particular, we show that the distribution of the number of connections in these networks follows the truncated power law, which is also observed in an association network. This indicates that distributional semantic models can provide a plausible model of lexical knowledge. Additionally, the observed differences in the network properties of various implementations of distributional semantic models are consistently explained or predicted by considering the intrinsic semantic features of a word-context matrix and the functions of matrix weighting and smoothing. Furthermore, to simulate a semantic network with the observed network properties, we propose a new growing network model based on the model of Steyvers and Tenenbaum. The idea underlying the proposed model is that both preferential and random attachments are required to reflect different types of semantic relations in the network growth process. We demonstrate that this model provides a better explanation of network behaviors generated by distributional semantic models.
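The pipeline from a distributional model to a semantic network can be sketched on a toy corpus: connect words that share a context, then inspect the degree distribution (real analyses use large corpora, a word-context matrix, weighting, and smoothing rather than this binary sentence co-occurrence):

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_network(sentences):
    """Toy distributional network: link two word types whenever they
    occur in the same sentence (undirected, unweighted edges)."""
    edges = set()
    for sent in sentences:
        for a, b in combinations(sorted(set(sent.split())), 2):
            edges.add((a, b))
    return edges

def degree_distribution(edges):
    """Number of connections per word -- the quantity whose
    distribution is tested for (truncated) power-law behavior."""
    deg = defaultdict(int)
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return dict(deg)

corpus = ["the cat chased the mouse",
          "the dog chased the cat",
          "the mouse ate cheese"]
degrees = degree_distribution(cooccurrence_network(corpus))
```

Even on three sentences a hub emerges ("the" has the highest degree), a miniature of the heavy-tailed connectivity the paper measures.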
Directory of Open Access Journals (Sweden)
Mojtaba Ganjali
Full Text Available In this paper, the problem of identifying differentially expressed genes under different conditions using gene expression microarray data, in the presence of outliers, is discussed. For this purpose, the robust modeling of gene expression data using some powerful distributions known as normal/independent distributions is considered. These distributions include the Student's t and normal distributions which have been used previously, but also include extensions such as the slash, the contaminated normal and the Laplace distributions. The purpose of this paper is to identify differentially expressed genes by considering these distributional assumptions instead of the normal distribution. A Bayesian approach using the Markov Chain Monte Carlo method is adopted for parameter estimation. Two publicly available gene expression data sets are analyzed using the proposed approach. The use of the robust models for detecting differentially expressed genes is investigated. This investigation shows that the choice of model for differentiating gene expression data is very important. This is due to the small number of replicates for each gene and the existence of outlying data. Comparison of the performance of these models is made using different statistical criteria and the ROC curve. The method is illustrated using some simulation studies. We demonstrate the flexibility of these robust models in identifying differentially expressed genes.
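One member of the normal/independent family is easy to sketch: the contaminated normal, a two-component scale mixture whose inflated-variance component absorbs outliers. The mixture weight and scale factor below are illustrative, not values from the paper:

```python
import random
import statistics

def contaminated_normal(mu, sigma, eps=0.1, k=5.0, rng=random):
    """Contaminated normal draw: with probability eps, sample from a
    normal whose scale is inflated by k -- a simple heavy-tailed member
    of the normal/independent family used to accommodate outliers."""
    scale = sigma * (k if rng.random() < eps else 1.0)
    return rng.gauss(mu, scale)

rng = random.Random(42)
clean = [rng.gauss(0.0, 1.0) for _ in range(5000)]
heavy = [contaminated_normal(0.0, 1.0, rng=rng) for _ in range(5000)]
```

The inflated spread of the contaminated sample is what lets a robust model down-weight outlying expression values instead of letting them distort the fit.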
Numerical solution of a model for a superconductor field problem
International Nuclear Information System (INIS)
Alsop, L.E.; Goodman, A.S.; Gustavson, F.G.; Miranker, W.L.
1979-01-01
A model of a magnetic field problem occurring in connection with Josephson junction devices is derived, and numerical solutions are obtained. The model is of mathematical interest, because the magnetic vector potential satisfies inhomogeneous Helmholtz equations in part of the region, i.e., the superconductors, and the Laplace equation elsewhere. Moreover, the inhomogeneities are the gauge constants for the potential, which are different for each superconductor, and their magnitudes are proportional to the currents flowing in the superconductors. These constants are directly related to the self and mutual inductances of the superconducting elements in the device. The numerical solution is obtained by the iterative use of a fast Poisson solver. Chebyshev acceleration is used to reduce the number of iterations required to obtain a solution. A typical problem involves solving 100,000 simultaneous equations, which the algorithm used with this model does in 20 iterations, requiring three minutes of CPU time on an IBM VM/370/168. Excellent agreement is obtained between calculated and observed values for the inductances.
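The solution strategy, iterative use of a Poisson solver, can be illustrated in one dimension: each sweep moves the Helmholtz term to the right-hand side and re-solves a Poisson problem. A toy sketch with a direct tridiagonal solve standing in for the fast Poisson solver and no Chebyshev acceleration; the equation and values are invented for illustration:

```python
def solve_poisson_1d(rhs, h):
    """Direct tridiagonal solve (Thomas algorithm) of u'' = rhs on a
    uniform grid with u = 0 at both ends."""
    n = len(rhs)
    a, b, c = -1.0, 2.0, -1.0            # rows of -u'' * h^2
    d = [-(r * h * h) for r in rhs]
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c / b, d[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (d[i] - a * dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

def helmholtz_by_poisson_iteration(f, kappa2, h, iters=200):
    """Fixed-point iteration for u'' - kappa2*u = f: each sweep feeds
    f + kappa2*u back into the Poisson solver. Convergence is slow,
    which is what Chebyshev acceleration speeds up in the paper."""
    u = [0.0] * len(f)
    for _ in range(iters):
        u = solve_poisson_1d([fi + kappa2 * ui for fi, ui in zip(f, u)], h)
    return u

n, h = 63, 1.0 / 64
u = helmholtz_by_poisson_iteration([-1.0] * n, kappa2=4.0, h=h)
```

The midpoint value can be checked against the closed-form solution of u'' - 4u = -1 with zero boundary values, which is about 0.088 at x = 1/2.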
Application of oil spill model to marine pollution and risk control problems
Aseev, Nikita; Agoshkov, Valery; Sheloput, Tatyana
2017-04-01
Oil transportation by sea induces challenging problems of environmental control. Millions of tonnes of oil are yearly released during routine ship operations, not to mention vast spills due to different accidents (e.g. tanker collisions, grounding, etc.). Oil pollution is dangerous to marine organisms such as plants, fish and mammals, leading to widespread damage to our planet. In turn, fishery and travel agencies can lose money and clients, and ship operators are obliged to pay huge penalties for environmental pollution. In this work we present the method of assessing oil pollution of the marine environment using a recently developed oil spill model. The model describes basic processes of the oil slick evolution: oil transport due to currents, drift under the action of wind, spreading on the surface, evaporation, emulsification and dispersion. Such parameters as slick location, mass, density of oil, water content, viscosity and density of "water-in-oil" emulsion can be calculated. We demonstrate how to apply the model to damage calculation problems using a concept of average damage to a particular marine area. We also formulate the problem of oil spill risk control, when some accident parameters are not known, but their probability distribution is given. We propose a new algorithm to solve such problems and show results of our model simulations. The work can be interesting to a broad environmental, physics and mathematics community. The work is supported by Russian Foundation for Basic Research grant 16-31-00510.
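The transport part of such a slick model reduces, at its simplest, to advecting the slick centre by the surface current plus a wind-drift fraction. A sketch using the common few-percent wind factor; the 3% value and all inputs are illustrative, not the model's calibrated parameters:

```python
def advect_slick(x, y, current, wind, dt, wind_factor=0.03):
    """One explicit step of slick-centre transport (m): surface current
    plus a small fraction of the wind velocity, the usual wind-drift
    rule of thumb."""
    u = current[0] + wind_factor * wind[0]
    v = current[1] + wind_factor * wind[1]
    return x + u * dt, y + v * dt

# Six hours of drift: 0.2 m/s eastward current, 10 m/s northward wind.
x, y = 0.0, 0.0
for _ in range(6):
    x, y = advect_slick(x, y, current=(0.2, 0.0), wind=(0.0, 10.0), dt=3600.0)
```

The full model layers spreading, evaporation, emulsification and dispersion on top of this transport step, and the risk-control problem averages such trajectories over the accident-parameter distribution.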
Flightdeck Automation Problems (FLAP) Model for Safety Technology Portfolio Assessment
Ancel, Ersin; Shih, Ann T.
2014-01-01
NASA's Aviation Safety Program (AvSP) develops and advances methodologies and technologies to improve air transportation safety. The Safety Analysis and Integration Team (SAIT) conducts a safety technology portfolio assessment (PA) to analyze the program content, to examine the benefits and risks of products with respect to program goals, and to support programmatic decision making. The PA process includes systematic identification of current and future safety risks as well as tracking several quantitative and qualitative metrics to ensure the program goals are addressing prominent safety risks accurately and effectively. One of the metrics within the PA process involves using quantitative aviation safety models to gauge the impact of the safety products. This paper demonstrates the role of aviation safety modeling by providing model outputs and evaluating a sample of portfolio elements using the Flightdeck Automation Problems (FLAP) model. The model enables not only ranking of the quantitative relative risk reduction impact of all portfolio elements, but also highlighting the areas with high potential impact via sensitivity and gap analyses in support of the program office. Although the model outputs are preliminary and products are notional, the process shown in this paper is essential to a comprehensive PA of NASA's safety products in the current program and future programs/projects.
Performance modeling of parallel algorithms for solving neutron diffusion problems
International Nuclear Information System (INIS)
Azmy, Y.Y.; Kirk, B.L.
1995-01-01
Neutron diffusion calculations are the most common computational methods used in the design, analysis, and operation of nuclear reactors and related activities. Here, mathematical performance models are developed for the parallel algorithm used to solve the neutron diffusion equation on message passing and shared memory multiprocessors represented by the Intel iPSC/860 and the Sequent Balance 8000, respectively. The performance models are validated through several test problems, and these models are used to estimate the performance of each of the two considered architectures in situations typical of practical applications, such as fine meshes and a large number of participating processors. While message passing computers are capable of producing speedup, the parallel efficiency deteriorates rapidly as the number of processors increases. Furthermore, the speedup fails to improve appreciably for massively parallel computers, so that only small- to medium-sized message passing multiprocessors offer a reasonable platform for this algorithm. In contrast, the performance model for the shared memory architecture predicts very high efficiency over a wide range of processor counts reasonable for this architecture. Furthermore, the model efficiency of the Sequent remains superior to that of the hypercube if its model parameters are adjusted to make its processors as fast as those of the iPSC/860. It is concluded that shared memory computers are better suited for this parallel algorithm than message passing computers.
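A generic sketch of such a performance model, with a fixed serial fraction plus a communication term growing linearly with processor count, reproduces the qualitative behaviour described: speedup saturates and then degrades as processors are added. The constants are illustrative, not the paper's fitted parameters:

```python
def speedup(p, serial_frac=0.02, comm_per_proc=0.001):
    # Simple performance model: Amdahl-style serial fraction plus a
    # communication cost that grows with processor count p.
    t1 = 1.0
    tp = serial_frac + (1.0 - serial_frac) / p + comm_per_proc * (p - 1)
    return t1 / tp

def efficiency(p, **kw):
    # Parallel efficiency: speedup divided by processor count.
    return speedup(p, **kw) / p
```

With these assumed constants, speedup peaks somewhere below 256 processors, mirroring the abstract's observation that only small to medium processor counts pay off for the message passing algorithm.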
DEFF Research Database (Denmark)
Han, Xue; Sandels, Claes; Zhu, Kun
2013-01-01
operation will be changed by various parameters of DERs. This article proposed a modelling framework for an overview analysis on the correlation between DERs. Furthermore, to validate the framework, the authors described the reference models of different categories of DERs with their unique characteristics......, comprising distributed generation, active demand and electric vehicles. Subsequently, quantitative analysis was made on the basis of the current and envisioned DER deployment scenarios proposed for Sweden. Simulations are performed in two typical distribution network models for four seasons. The simulation...
Software reliability growth models with normal failure time distributions
International Nuclear Information System (INIS)
Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji
2013-01-01
This paper proposes software reliability growth models (SRGM) where the software failure time follows a normal distribution. The proposed model is mathematically tractable and has sufficient ability to fit software failure data. In particular, we consider the parameter estimation algorithm for the SRGM with normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. Numerical experiments investigate the fitting ability of the SRGMs with normal distribution on 16 types of failure time data collected in real software projects.
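As a simplified stand-in for the paper's EM procedure (which handles the truncation and grouping present in real SRGM data), the complete-data maximum-likelihood estimates of a normal failure-time model have a closed form:

```python
import math

def fit_normal_mle(times):
    # Closed-form MLE for a normal failure-time model; a simplified
    # stand-in for the paper's EM algorithm, which additionally copes
    # with incomplete (truncated/grouped) failure data.
    n = len(times)
    mu = sum(times) / n
    var = sum((t - mu) ** 2 for t in times) / n
    return mu, math.sqrt(var)

# Invented failure times (hours), for illustration only.
mu, sigma = fit_normal_mle([10.0, 12.0, 14.0, 16.0, 18.0])
```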
Directory of Open Access Journals (Sweden)
Michala Jakubcová
2015-01-01
Full Text Available The presented paper provides the analysis of selected versions of the particle swarm optimization (PSO) algorithm. The tested versions of the PSO were combined with the shuffling mechanism, which splits the model population into complexes and performs distributed PSO optimization. One of them is a newly proposed PSO modification, APartW, which enhances the global exploration and local exploitation in the parametric space during the optimization process through a new updating mechanism applied to the PSO inertia weight. The performance of four selected PSO methods was tested on 11 benchmark optimization problems, which were prepared for the special session on single-objective real-parameter optimization at CEC 2005. The results confirm that the tested new APartW PSO variant is comparable with the other existing distributed PSO versions, AdaptW and LinTimeVarW. The distributed PSO versions were developed for finding the solution of inverse problems related to the estimation of parameters of the hydrological model Bilan. The results of the case study, carried out on a selected set of 30 catchments obtained from the MOPEX database, show that the tested distributed PSO versions provide suitable estimates of the Bilan model parameters and thus can be used for solving related inverse problems during the calibration process of the studied water balance hydrological model.
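For orientation, a minimal global-best PSO (without the shuffling/complex mechanism or the adaptive inertia weights that distinguish the paper's variants) on a benchmark sphere function might look like this; all constants are the usual textbook defaults, not the paper's settings:

```python
import random

def pso(f, dim=2, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5, seed=1):
    # Minimal global-best PSO minimising f on [-5, 5]^dim.
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                 # personal best positions
    pval = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]        # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            v = f(X[i])
            if v < pval[i]:
                pbest[i], pval[i] = X[i][:], v
                if v < gval:
                    gbest, gval = X[i][:], v
    return gbest, gval

def sphere(x):
    # Classic CEC-style benchmark: sum of squares, minimum 0 at the origin.
    return sum(xi * xi for xi in x)

best, best_val = pso(sphere)
```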
Modeling highway-traffic headway distributions using superstatistics.
Abul-Magd, A Y
2007-11-01
We study traffic clearance distributions (i.e., the instantaneous gap between successive vehicles) and time-headway distributions by applying the Beck and Cohen superstatistics. We model the transition from the free phase to the congested phase with the increase of vehicle density as a transition from Poisson statistics to that of random-matrix theory. We derive an analytic expression for the spacing distributions that interpolates between the Poisson distribution and Wigner's surmise and apply it to the distributions of the net distance and time gaps among succeeding cars at different densities of traffic flow. The obtained distribution fits the experimental results for single-vehicle data of the Dutch freeway A9 and the German freeway A5.
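The paper's interpolating expression is not reproduced here, but the two limiting laws it interpolates between are standard and easy to check numerically; both are normalized to unit area and unit mean spacing:

```python
import math

def poisson_spacing(s):
    # Spacing density for uncorrelated (Poisson) headways.
    return math.exp(-s)

def wigner_surmise(s):
    # Wigner surmise: the GOE random-matrix spacing density.
    return (math.pi / 2.0) * s * math.exp(-math.pi * s * s / 4.0)

def check_norm(pdf, upper=20.0, n=40_000):
    # Trapezoidal check of unit area and unit mean spacing.
    h = upper / n
    area = mean = 0.0
    for i in range(n + 1):
        s = i * h
        w = 0.5 if i in (0, n) else 1.0
        area += w * pdf(s) * h
        mean += w * s * pdf(s) * h
    return area, mean

area_p, mean_p = check_norm(poisson_spacing)
area_w, mean_w = check_norm(wigner_surmise)
```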
International Nuclear Information System (INIS)
Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim
2014-01-01
A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs, so the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.
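A bare-bones sketch of the nested sampling core on a toy one-dimensional problem follows. The constrained-sampling step here is plain rejection sampling from the prior, which only works for easy problems; the paper replaces exactly this step with HMC. All values are illustrative:

```python
import math
import random

def log_lik(theta):
    # Standard-normal log-likelihood; the prior is uniform on [-5, 5].
    return -0.5 * theta * theta - 0.5 * math.log(2.0 * math.pi)

def nested_sampling(n_live=100, n_iter=600, seed=7):
    # Nested sampling estimate of the evidence Z = integral of the
    # likelihood over the prior, using deterministic prior-mass shrinkage.
    rng = random.Random(seed)
    live = [rng.uniform(-5.0, 5.0) for _ in range(n_live)]
    logl = [log_lik(t) for t in live]
    z, x_prev = 0.0, 1.0
    for i in range(1, n_iter + 1):
        worst = min(range(n_live), key=lambda j: logl[j])
        x_i = math.exp(-i / n_live)          # remaining prior mass
        z += math.exp(logl[worst]) * (x_prev - x_i)
        x_prev = x_i
        lmin = logl[worst]
        while True:                          # constrained step: L > L_min
            t = rng.uniform(-5.0, 5.0)
            if log_lik(t) > lmin:
                break
        live[worst], logl[worst] = t, log_lik(t)
    # the remaining live points fill the final prior volume x_prev
    z += x_prev * sum(math.exp(l) for l in logl) / n_live
    return z

evidence = nested_sampling()   # analytic value is about 0.1 for this toy case
```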
The Distributed Geothermal Market Demand Model (dGeo): Documentation
Energy Technology Data Exchange (ETDEWEB)
McCabe, Kevin [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Mooney, Meghan E [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sigrin, Benjamin O [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gleason, Michael [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Liu, Xiaobing [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2017-11-06
The National Renewable Energy Laboratory (NREL) developed the Distributed Geothermal Market Demand Model (dGeo) as a tool to explore the potential role of geothermal distributed energy resources (DERs) in meeting thermal energy demands in the United States. The dGeo model simulates the potential for deployment of geothermal DERs in the residential and commercial sectors of the continental United States for two specific technologies: ground-source heat pumps (GHP) and geothermal direct use (DU) for district heating. To quantify the opportunity space for these technologies, dGeo leverages a highly resolved geospatial database and robust bottom-up, agent-based modeling framework. This design is consistent with others in the family of Distributed Generation Market Demand models (dGen; Sigrin et al. 2016), including the Distributed Solar Market Demand (dSolar) and Distributed Wind Market Demand (dWind) models. dGeo is intended to serve as a long-term scenario-modeling tool. It has the capability to simulate the technical potential, economic potential, market potential, and technology deployment of GHP and DU through the year 2050 under a variety of user-defined input scenarios. Through these capabilities, dGeo can provide substantial analytical value to various stakeholders interested in exploring the effects of various techno-economic, macroeconomic, financial, and policy factors related to the opportunity for GHP and DU in the United States. This report documents the dGeo modeling design, methodology, assumptions, and capabilities.
Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics
DEFF Research Database (Denmark)
Khanmohammadi, Mahdieh
This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of this thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section...... transmission electron microscopy is used to acquire images from two experimental groups of rats: 1) rats subjected to a behavioral model of stress and 2) rats subjected to sham stress as the control group. The synaptic vesicle distribution and interactions are modeled by employing a point process approach...... on differences of statistical measures in section and the same measures in between sections. Three-dimensional (3D) datasets are reconstructed by using image registration techniques and estimated thicknesses. We distinguish the effect of stress by estimating the synaptic vesicle densities and modeling...
Renewable Distributed Generation Models in Three-Phase Load Flow Analysis for Smart Grid
Directory of Open Access Journals (Sweden)
K. M. Nor
2013-11-01
Full Text Available The paper presents renewable distributed generation (RDG) models as three-phase resources in load flow computation and analyzes their effect when they are connected in composite networks. The RDG models considered comprise photovoltaic (PV) and wind turbine generation (WTG). The voltage-controlled node and the complex power injection node are used in the models. These improved models are suitable for smart grid power system analysis. A combination of IEEE transmission data and IEEE test feeders is used to test the algorithm on balanced and unbalanced multi-phase distribution system problems. The simulation results show that increasing the number and size of RDG units improves the voltage profile and reduces system losses.
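To illustrate the complex-power-injection node type mentioned above, here is a Gauss-Seidel load flow on a toy single-phase 2-bus system: a slack bus at 1.0 p.u. feeding one PQ node, the node type used for an RDG unit operating at fixed power factor. The impedance and load values are invented for illustration and the example is far simpler than the paper's three-phase composite networks:

```python
def gauss_seidel_2bus(z_line=0.01 + 0.05j, s_load=0.5 + 0.2j, iters=100):
    # Gauss-Seidel load flow: slack bus (bus 1) plus one PQ bus (bus 2).
    y = 1.0 / z_line                  # line admittance; Y22 = y, Y21 = -y
    v1 = 1.0 + 0.0j                   # slack-bus voltage, per unit
    v2 = 1.0 + 0.0j                   # flat start for the PQ bus
    s_inj = -s_load                   # a load draws power: negative injection
    for _ in range(iters):
        # solve  conj(S2 / V2) = Y21*V1 + Y22*V2  for V2
        v2 = (s_inj.conjugate() / v2.conjugate() + y * v1) / y
    i2 = y * (v2 - v1)                # current injected at bus 2
    mismatch = abs(v2 * i2.conjugate() - s_inj)
    return v2, mismatch

v2, mismatch = gauss_seidel_2bus()
```

The converged voltage magnitude drops below 1.0 p.u. under load, the effect that adding RDG injections counteracts.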
Requirements and Problems in Parallel Model Development at DWD
Directory of Open Access Journals (Sweden)
Ulrich Schättler
2000-01-01
Full Text Available Nearly 30 years after introducing the first computer model for weather forecasting, the Deutscher Wetterdienst (DWD) is developing the 4th generation of its numerical weather prediction (NWP) system. It consists of a global grid point model (GME) based on a triangular grid and a non-hydrostatic Lokal Modell (LM). The operational demand for running this new system is immense and can only be met by parallel computers. From the experience gained in developing earlier NWP models, several new problems had to be taken into account during the design phase of the system. Most important were portability (including efficiency of the programs on several computer architectures) and ease of code maintainability. Also the organization and administration of the work done by developers from different teams and institutions is more complex than it used to be. This paper describes the models and gives some performance results. The modular approach used for the design of the LM is explained and the effects on the development are discussed.
Dynamic models for problems of species occurrence with multiple states
MacKenzie, D.I.; Nichols, J.D.; Seamans, M.E.; Gutierrez, R.J.
2009-01-01
Recent extensions of occupancy modeling have focused not only on the distribution of species over space, but also on additional state variables (e.g., reproducing or not, with or without disease organisms, relative abundance categories) that provide extra information about occupied sites. These biologist-driven extensions are characterized by ambiguity in both species presence and correct state classification, caused by imperfect detection. We first show the relationships between independently published approaches to the modeling of multistate occupancy. We then extend the pattern-based modeling to the case of sampling over multiple seasons or years in order to estimate state transition probabilities associated with system dynamics. The methodology and its potential for addressing relevant ecological questions are demonstrated using both maximum likelihood (occupancy and successful reproduction dynamics of California Spotted Owl) and Markov chain Monte Carlo estimation approaches (changes in relative abundance of green frogs in Maryland). Just as multistate capture-recapture modeling has revolutionized the study of individual marked animals, we believe that multistate occupancy modeling will dramatically increase our ability to address interesting questions about ecological processes underlying population-level dynamics.
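The multistate extensions build on the basic single-state occupancy likelihood, which handles the ambiguity caused by imperfect detection. A sketch of that single-site building block (standard formulation, simplified to one state; the multistate models add state-specific occupancy and detection parameters):

```python
def occupancy_likelihood(history, psi, p):
    # Likelihood of one site's detection history under the basic occupancy
    # model: psi = P(occupied), p = P(detection | occupied) per survey.
    k = len(history)
    d = sum(history)
    if d > 0:
        # At least one detection: the site must be occupied.
        return psi * (p ** d) * ((1 - p) ** (k - d))
    # Never detected: either occupied but always missed, or truly absent.
    return psi * (1 - p) ** k + (1 - psi)

L_detected = occupancy_likelihood([1, 0], psi=0.6, p=0.3)   # one detection
L_empty = occupancy_likelihood([0, 0], psi=0.6, p=0.3)      # none detected
```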
Directory of Open Access Journals (Sweden)
Christian Vögeli
2016-12-01
Full Text Available Accurate knowledge on snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or managing water supply and hydro-power. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is typically driven by spatial interpolations of observations from automatic weather stations (AWS), leading to errors in the spatial distribution of atmospheric forcing. With recent advances in remote sensing techniques, maps of snow depth can be acquired with high spatial resolution and accuracy. In this work, maps of the snow depth distribution, calculated from summer and winter digital surface models based on Airborne Digital Sensors (ADS), are used to scale precipitation input data, with the aim to improve the accuracy of simulation of the spatial distribution of snow with Alpine3D. A simple method to scale and redistribute precipitation is presented and the performance is analysed. The scaling method is only applied if it is snowing. For rainfall the precipitation is distributed by interpolation, with a simple air temperature threshold used for the determination of the precipitation phase. It was found that the accuracy of spatial snow distribution could be improved significantly for the simulated domain. The standard deviation of absolute snow depth error is reduced up to a factor 3.4 to less than 20 cm. The mean absolute error in snow distribution was reduced when using representative input sources for the simulation domain. For inter-annual scaling, the model performance could also be improved, even when using a remote sensing dataset from a different winter. In conclusion, using remote sensing data to process precipitation input, complex processes such as preferential snow deposition and snow relocation due to wind or avalanches can be substituted and modelling performance of spatial snow distribution is improved.
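The scaling idea can be sketched in a few lines: snowfall is redistributed in proportion to a normalized snow-depth map (so the domain mean, and hence total input mass, is preserved), rain is left spatially uniform, and the phase is decided by an air-temperature threshold. Function names and all values are hypothetical, not Alpine3D's:

```python
def scale_precip(precip, air_temp, snow_map, t_threshold=1.0):
    # Redistribute a uniform precipitation amount using a measured
    # snow-depth map, but only when precipitation falls as snow.
    if air_temp >= t_threshold:
        return [precip for _ in snow_map]    # rain: leave uniform
    mean_depth = sum(snow_map) / len(snow_map)
    # Snow: scale by the local depth relative to the domain mean, so the
    # domain-average precipitation is unchanged.
    return [precip * d / mean_depth for d in snow_map]

scaled = scale_precip(5.0, -2.0, [0.2, 0.4, 0.6, 0.8])
```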
The gA problem in hedgehog soliton models revisited
Wakamatsu, M.; Watabe, T.
1993-08-01
It is widely known that the considerable underestimation of the nucleon axial-vector coupling constant gA is an undesirable common feature of soliton models of the nucleon based on the so-called “hedgehog” configuration. This fundamental physical quantity is reanalyzed within the framework of the chiral quark soliton model. The path integral quantization based on the cranking procedure leads to a systematic power series expansion in the Coriolis coupling (which scales as 1/N_c) induced by the isorotation of the hedgehog mean field. We find that important 1/N_c corrections exist in this scheme, and they greatly improve the situation for the gA problem.
Solving seismological problems using sgraph program: II-waveform modeling
International Nuclear Information System (INIS)
Abdelwahed, Mohamed F.
2012-01-01
One of the seismological programs used to manipulate seismic data is the SGRAPH program. It consists of integrated tools to perform advanced seismological techniques. SGRAPH is a new system for maintaining and analyzing seismic waveform data in a stand-alone Windows-based application that manipulates a wide range of data formats. SGRAPH was described in detail in the first part of this paper. In this part, I discuss the advanced techniques included in the program and its applications in seismology. Because of the numerous tools included in the program, SGRAPH alone is sufficient to perform basic waveform analysis and to solve advanced seismological problems. In the first part of this paper, the application of source parameter estimation and hypocentral location was given. Here, I discuss the SGRAPH waveform modeling tools. This paper exhibits examples of how to apply the SGRAPH tools to perform waveform modeling for estimating the focal mechanism and crustal structure of local earthquakes.
An inverse problem for a mathematical model of aquaponic agriculture
Bobak, Carly; Kunze, Herb
2017-01-01
Aquaponic agriculture is a sustainable ecosystem that relies on a symbiotic relationship between fish and macrophytes. While the practice has been growing in popularity, relatively few mathematical models exist which aim to study the system processes. In this paper, we present a system of ODEs which aims to mathematically model the population and concentration dynamics present in an aquaponic environment. Values of the parameters in the system are estimated from the literature so that simulated results can be presented to illustrate the nature of the solutions to the system. As well, a brief sensitivity analysis is performed in order to identify redundant parameters and highlight those which may need more reliable estimates. Specifically, an inverse problem with manufactured data for fish and plants is presented to demonstrate the ability of the collage theorem to recover parameter estimates.
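The manufactured-data idea behind such an inverse problem can be sketched on a single logistic population equation: generate data with a known growth rate, then recover that rate by minimizing the misfit. The paper uses the collage theorem for the recovery step; the sketch below substitutes a plain grid search, and all equations and values are illustrative, not the paper's aquaponic system:

```python
def simulate_logistic(r, k=100.0, x0=5.0, dt=0.1, steps=200):
    # Forward Euler solution of x' = r x (1 - x/k), a one-equation
    # stand-in for one population equation of the ODE system.
    xs, x = [x0], x0
    for _ in range(steps):
        x += dt * r * x * (1.0 - x / k)
        xs.append(x)
    return xs

def recover_r(data, candidates):
    # Inverse problem with manufactured data: grid-search the growth
    # rate minimising the sum of squared errors to the observations.
    def sse(r):
        sim = simulate_logistic(r)
        return sum((a - b) ** 2 for a, b in zip(sim, data))
    return min(candidates, key=sse)

true_data = simulate_logistic(0.8)                       # manufactured data
r_hat = recover_r(true_data, [0.2 + 0.05 * i for i in range(25)])
```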
Makkonen, K
1993-01-01
Intrauterine contraceptive devices (IUDs) are a popular method of contraception worldwide. However, some serious problems have been associated with them. Finland has developed and now manufactures and exports IUDs. Therefore, drug control and the quality of drug information existing in Finland are significant for other countries, as well. This study analyzes the information in the Finnish commercial drug catalog on copper-releasing IUDs and compares it with the scientific literature, the instructions from the licensing authority, and material in its U.S. counterpart, during the last two decades. The results indicate that the distribution of scientific knowledge to the drug catalogs has often been slow. In the early 1980s Finnish manufacturers did not give any practical information on their products, and then and later the Finnish catalog was less comprehensive than the U.S. catalog. The variations in the control system in different nations were reflected in the contents of the Finnish catalog. For practitioners, drug catalogs are important sources of medical information. The results of this study demonstrate (1) that more attention should be paid to the contents of these catalogs, and (2) the continuous need for up-to-date, unbiased drug information.
Directory of Open Access Journals (Sweden)
2009-03-01
Full Text Available We define a special case of the vehicle routing problem with stochastic demands (SC-VRPSD) where customer demands are normally distributed. We propose a new linear model for computing the expected length of a tour in SC-VRPSD. The proposed model is based on the integration of the “Traveling Salesman Problem” (TSP) and the Assignment Problem. For large-scale problems, we also use an Iterated Local Search (ILS) algorithm in order to reach an effective solution.
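The normal-demand assumption is what makes the expected tour length tractable: the total demand on a route of independent normal customers is itself normal, so route-failure probabilities come from the standard normal CDF. A sketch of just that property (the values are invented; this is not the paper's expected-length model):

```python
import math

def route_failure_prob(mus, sigmas, capacity):
    # P(total demand on a route exceeds vehicle capacity) when customer
    # demands are independent normals: the sum is N(sum mu, sum sigma^2).
    mu = sum(mus)
    sigma = math.sqrt(sum(s * s for s in sigmas))
    zscore = (capacity - mu) / sigma
    phi = 0.5 * (1.0 + math.erf(zscore / math.sqrt(2.0)))  # standard normal CDF
    return 1.0 - phi

# Three hypothetical customers on one route, vehicle capacity 50.
pf = route_failure_prob([10, 15, 20], [2, 3, 2], capacity=50)
```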
Directory of Open Access Journals (Sweden)
Alok Dhaundiyal
2016-10-01
Full Text Available This article focuses on the influence of relevant parameters of biomass pyrolysis on the numerical solution of the isothermal nth-order distributed activation energy model (DAEM), using the Rayleigh distribution as the initial distribution function F(E) of the activation energies. In this study, the integral upper limit, the frequency factor, the reaction order and the scale parameters are investigated. This paper also derives an asymptotic approximation for the DAEM. The influence of these parameters is used to calculate the kinetic parameters of the isothermal nth-order DAEM with the help of thermo-analytical results of TGA/DTG analysis.
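A numerical sketch of the isothermal DAEM with a Rayleigh initial distribution follows, simplified to first-order kinetics (the article treats the general nth order) and with illustrative parameter values rather than the article's fitted ones. The unreacted fraction is the activation-energy integral of the Rayleigh density times the first-order survival factor:

```python
import math

def daem_remaining(t, k0=1e13, temp=600.0, sigma=150e3, r_gas=8.314, n_pts=4000):
    # Unreacted fraction of an isothermal first-order DAEM whose initial
    # activation-energy density is Rayleigh:
    #   f(E) = (E / sigma^2) * exp(-E^2 / (2 sigma^2)),  E in J/mol.
    e_max = 8.0 * sigma               # integral upper limit
    h = e_max / n_pts
    total = 0.0
    for i in range(n_pts + 1):
        e = i * h
        f = (e / sigma ** 2) * math.exp(-e * e / (2.0 * sigma ** 2))
        surv = math.exp(-k0 * t * math.exp(-e / (r_gas * temp)))
        w = 0.5 if i in (0, n_pts) else 1.0
        total += w * f * surv * h     # trapezoidal rule
    return total

frac_early = daem_remaining(1.0)      # short isothermal hold
frac_late = daem_remaining(1000.0)    # longer hold: more material reacted
```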
Modeling highway travel time distribution with conditional probability models
Energy Technology Data Exchange (ETDEWEB)
Oliveira Neto, Francisco Moraes [ORNL; Chin, Shih-Miao [ORNL; Hwang, Ho-Ling [ORNL; Han, Lee [University of Tennessee, Knoxville (UTK)
2014-01-01
Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits their data dumps to ATRI on a regular basis. Each data transmission contains the truck location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying the upstream-downstream speed distributions at different locations, as well as at different times of the day and days of the week. This research is focused on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions, between successive segments, can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating route travel time distributions as new data or information are added. This methodology can be useful to estimate performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
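The convolution step is the baseline the study builds on: for independent links, the route travel-time distribution is the convolution of the link distributions. The study's contribution is replacing independence with link-to-link conditional probabilities; the sketch below shows only the independent-links baseline, with invented discrete PMFs over whole minutes:

```python
def convolve_pmf(p, q):
    # Discrete convolution of two independent travel-time PMFs defined on
    # 0, 1, 2, ... minutes: the PMF of the summed route travel time.
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def pmf_mean(pmf):
    # Mean of a PMF indexed from zero.
    return sum(i * pi for i, pi in enumerate(pmf))

link_a = [0.0, 0.2, 0.5, 0.3]     # hypothetical PMF for segment A
link_b = [0.0, 0.0, 0.6, 0.4]     # hypothetical PMF for segment B
route = convolve_pmf(link_a, link_b)
```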
Scholten, H.
2008-01-01
Mathematical models are used more and more to support the solution of multidisciplinary, real-world problems of increasing complexity. They are often plagued by obstacles such as miscommunication between modellers with different disciplinary backgrounds, leading to a non-transparent modelling process. Other
Sustainable energy from biomass: Biomethane manufacturing plant location and distribution problem
International Nuclear Information System (INIS)
Wu, Bingqing; Sarker, Bhaba R.; Paudel, Krishna P.
2015-01-01
Highlights: • Optimal strategy to locate biogas reactor and allocate feedstock. • Nonlinear mixed integer programming problem structure. • Real-world supply chain of a biogas production system. • Considers construction, transportation and labor costs. • Novel heuristic improves efficiency in obtaining the optimal solution. - Abstract: As an environment-friendly and renewable energy source, biomethane plays a significant role in the supply of sustainable energy. To facilitate the decision-making process of where to build a biomethane production system (BMPS) and how to allocate the resources for the BMPS, this paper develops an analytical method to find solutions to the location and allocation problems by minimizing the supply chain cost of the BMPS. The BMPS consists of the local farms providing feedstock, the hubs for collecting and storing feedstock from the farms, and the reactors for producing biomethane from the feedstock. A mixed integer nonlinear programming (MINLP) formulation is introduced to model the supply chain by considering building, transportation, and labor costs. An alternative heuristic is proposed to obtain an optimal or sub-optimal solution from the MINLP. The validity of the proposed heuristic is demonstrated by numerical examples abstracted from practical scenarios.
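At toy scale, the location-allocation trade-off (build cost versus feedstock transport cost) can be solved by exhaustive search, which is useful for checking a heuristic on tiny instances even though it is exactly what the MINLP and the paper's heuristic avoid at realistic scale. All farms, sites and costs below are hypothetical:

```python
def best_reactor_site(farms, sites, build_cost, unit_haul_cost=1.0):
    # Exhaustive search over candidate reactor sites: pick the site
    # minimising build cost plus feedstock transport cost.
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    best = None
    for k, s in enumerate(sites):
        haul = sum(supply * dist(loc, s) * unit_haul_cost
                   for loc, supply in farms)
        cost = build_cost[k] + haul
        if best is None or cost < best[1]:
            best = (k, cost)
    return best

farms = [((0, 0), 10), ((4, 0), 20)]     # (location, tonnes of feedstock)
sites = [(0, 0), (2, 0), (4, 0)]         # candidate reactor locations
site, cost = best_reactor_site(farms, sites, build_cost=[30.0, 30.0, 30.0])
```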
An energy-based model for the image edge-histogram specification problem.
Mignotte, Max
2012-01-01
In this correspondence, we present an original energy-based model that achieves the edge-histogram specification of a real input image and thus extends the exact specification method of the image luminance (or gray level) distribution recently proposed by Coltuc et al. Our edge-histogram specification approach is stated as an optimization problem in which each edge of a real input image will tend iteratively toward some specified gradient magnitude values given by a target edge distribution (or a normalized edge histogram possibly estimated from a target image). To this end, a hybrid optimization scheme combining a global and deterministic conjugate-gradient-based procedure and a local stochastic search using the Metropolis criterion is proposed herein to find a reliable solution to our energy-based model. Experimental results are presented, and several applications follow from this procedure.
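The luminance-domain idea that the paper extends, exact histogram specification by rank ordering (in the spirit of Coltuc et al.), can be sketched on a one-dimensional "image": the i-th smallest input value is mapped to the i-th slot of the target histogram. The edge-domain version in the paper requires the energy-based optimization instead, because gradient magnitudes cannot be reassigned independently:

```python
def exact_histogram_spec(values, target_hist):
    # Exact histogram specification by rank ordering: sort the pixels,
    # then assign output levels so the result has exactly target_hist.
    assert sum(target_hist) == len(values)
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0] * len(values)
    pos = 0
    for level, count in enumerate(target_hist):
        for _ in range(count):
            out[order[pos]] = level
            pos += 1
    return out

img = [7, 3, 9, 1, 5, 2]                                # toy 1-D "image"
res = exact_histogram_spec(img, target_hist=[2, 2, 2])  # levels 0, 1, 2
```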
Calibration process of highly parameterized semi-distributed hydrological model
Vidmar, Andrej; Brilly, Mitja
2017-04-01
Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, calibration is a complex process that is not researched enough. Calibration is a procedure of determining the parameters of a model that are not known well enough. Input and output variables and mathematical model expressions are known, while some parameters are unknown and are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that leave the modeller no possibility to manage the process, and the results are not the best. We therefore developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command line interface, and couple it with PEST. PEST is a parameter estimation tool which is widely used in groundwater modelling and can also be used for surface waters. A calibration process managed directly by the expert affects the outcome of the inversion procedure in proportion to the expert's knowledge, and achieves better results than if the procedure had been left to the selected optimization algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial and forest areas. This step requires the geological, meteorological, hydraulic and hydrological knowledge of the modeller. The second step is to set initial parameter values at their preferred values based on expert knowledge. In this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. Each sub-catchment in the model has its own observation group
Assigning probability distributions to input parameters of performance assessment models
International Nuclear Information System (INIS)
Mishra, Srikanta
2002-02-01
This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
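Of the fitting techniques the report surveys, the method of moments is the simplest to sketch: equate sample moments to the moments of the parametric family. For a gamma distribution this gives shape k = mean²/variance and scale θ = variance/mean. The data values below are invented for illustration:

```python
def gamma_method_of_moments(data):
    # Method-of-moments fit of a gamma distribution:
    #   shape k = mean^2 / var,  scale theta = var / mean.
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)  # sample variance
    return mean * mean / var, var / mean

k_hat, theta_hat = gamma_method_of_moments(
    [1.2, 0.8, 2.5, 1.9, 0.6, 1.4, 2.0, 1.1])
```

By construction the fitted distribution reproduces the sample mean exactly (k·θ = mean), which is the defining property of a moment match.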
Assigning probability distributions to input parameters of performance assessment models
Energy Technology Data Exchange (ETDEWEB)
Mishra, Srikanta [INTERA Inc., Austin, TX (United States)
2002-02-01
This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, the method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness-of-fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is then discussed. A simple numerical approach is presented for facilitating practical applications of Bayes' theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) for the situation where only a limited amount of information is available.
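Two of the fitting techniques surveyed above, maximum likelihood estimation and the method of moments, can be sketched in a few lines. The snippet below is an illustration only, not code from the report; the lognormal choice, seed, and sample size are assumptions. It fits a lognormal by its closed-form MLE (a normal fit to log-data) and a normal by moment matching:

```python
import math
import random

def fit_lognormal_mle(data):
    # Closed-form MLE for a lognormal: fit a normal to the log-data
    logs = [math.log(x) for x in data]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / n)
    return mu, sigma

def fit_normal_moments(data):
    # Method of moments for a normal: match sample mean and variance
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / n
    return mean, math.sqrt(var)

random.seed(0)
sample = [random.lognormvariate(1.0, 0.5) for _ in range(5000)]
mu, sigma = fit_lognormal_mle(sample)   # recovers roughly (1.0, 0.5)
```

For heavier-tailed families such as the two-parameter Weibull or gamma, the same moment-matching idea applies but requires solving a nonlinear equation for the shape parameter.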
Hafner, Robert; Stewart, Jim
Past problem-solving research has provided a basis for helping students structure their knowledge and apply appropriate problem-solving strategies to solve problems for which their knowledge (or mental models) of scientific phenomena is adequate (model-using problem solving). This research examines how problem solving in the domain of Mendelian genetics proceeds in situations where solvers' mental models are insufficient to solve the problems at hand (model-revising problem solving). Such situations require solvers to use existing models to recognize anomalous data and to revise those models to accommodate the data. The study was conducted in the context of a 9-week high school genetics course and addressed: the heuristics characteristic of successful model-revising problem solving; the nature of the model revisions made by students, as well as the nature of model development across problem types; and the basis upon which solvers decide that a revised model is sufficient (that it has both predictive and explanatory power).
Handbook of EOQ inventory problems stochastic and deterministic models and applications
Choi, Tsan-Ming
2013-01-01
This book explores deterministic and stochastic EOQ-model based problems and applications, presenting technical analyses of single-echelon EOQ model based inventory problems, and applications of the EOQ model for multi-echelon supply chain inventory analysis.
Stand diameter distribution modelling and prediction based on Richards function.
Directory of Open Access Journals (Sweden)
Ai-guo Duan
Full Text Available The objective of this study was to introduce the application of the Richards equation to modelling and prediction of stand diameter distributions. Long-term repeated measurement data sets, consisting of 309 diameter frequency distributions from Chinese fir (Cunninghamia lanceolata) plantations in southern China, were used: 150 stands served as fitting data, and the other 159 stands were used for testing. The nonlinear regression method (NRM) or maximum likelihood estimation method (MLEM) was applied to estimate the parameters of the models, and the parameter prediction method (PPM) and parameter recovery method (PRM) were used to predict the diameter distributions of unknown stands. Four main conclusions were obtained: (1) the R distribution presented a more accurate simulation than the three-parameter Weibull function; (2) the parameters p, q and r of the R distribution proved to be its scale, location and shape parameters, and have a deep relationship with stand characteristics, which means the parameters of the R distribution have a good theoretical interpretation; (3) the ordinate of the inflection point of the R distribution is significantly related to its skewness and kurtosis, and the fitted main distribution range for the cumulative diameter distribution of Chinese fir plantations was 0.4∼0.6; (4) the goodness-of-fit test showed that diameter distributions of unknown stands can be well estimated by applying the R distribution based on PRM, or on the combination of PPM and PRM, under the condition that only the quadratic mean DBH, or the quadratic mean DBH plus stand age, is known; the non-rejection rates were near 80%, which is higher than the 72.33% non-rejection rate of the three-parameter Weibull function based on the combination of PPM and PRM.
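The abstract does not reproduce the exact form of the R (Richards) distribution, but one common Richards (generalized-logistic) parameterization makes the inflection-point property easy to verify. In the sketch below, both the parameterization F(d) = (1 + nu*exp(-k(d - d0)))^(-1/nu) and the parameter values are assumptions for illustration; its inflection ordinate is (1 + nu)^(-1/nu), which equals 0.5 at nu = 1 and stays near the 0.4–0.6 band cited above for moderate nu:

```python
import math

def richards(d, k, d0, nu):
    # Generalized-logistic (Richards) curve used as a cumulative
    # diameter distribution F(d) in [0, 1]; k, d0, nu play the roles
    # of shape, location and (second) shape parameters.
    return (1.0 + nu * math.exp(-k * (d - d0))) ** (-1.0 / nu)

def inflection_ordinate(nu):
    # Closed form for this parameterization: F at the inflection point
    return (1.0 + nu) ** (-1.0 / nu)

# Locate the inflection numerically as the maximum of dF/dd
k, d0, nu = 0.5, 15.0, 0.8
ds = [d0 - 20.0 + 0.001 * i for i in range(40001)]
slopes = [richards(d + 1e-4, k, d0, nu) - richards(d - 1e-4, k, d0, nu)
          for d in ds]
d_star = ds[slopes.index(max(slopes))]
```

For this form the inflection abscissa is exactly d0, so the numeric search should land there; the ordinate check below ties the closed form to the curve.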
Directory of Open Access Journals (Sweden)
Tian-tian Feng
2017-06-01
Full Text Available The development of distributed energy systems in China is one of the important measures to promote the revolution in energy production and utilization patterns. First, we analyze the present application status of China's distributed generation across three major types: natural gas, photovoltaic, and distributed wind. Secondly, based on an analysis of the project overview, project scale, and project effect in the three patterns of distributed generation, we summarize the policy deficiencies and development obstacles. Finally, aiming to promote the development of distributed energy in China, we propose relevant policy countermeasures to the problems existing in the development of China's distributed generation of natural gas, photovoltaic, and wind power.
Wind climate modeling using Weibull and extreme value distribution ...
African Journals Online (AJOL)
The expected number of stress cycles in the projected working life of a structure is related to the expected number of hours in the critical wind speed range and wind climate modelling is required to know this. The most popular model for this purpose is Weibull distribution. Again, wind energy is proportional to the cube of the ...
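Both quantities mentioned here, the expected hours in a critical speed range and the cubic dependence of energy on speed, follow directly from a fitted Weibull model. A minimal sketch (the shape k = 2 and scale c = 8 m/s are assumed, illustrative site parameters, not values from the article):

```python
import math

def weibull_cdf(v, k, c):
    # F(v) = 1 - exp(-(v/c)^k): fraction of time the wind speed is below v
    return 1.0 - math.exp(-((v / c) ** k))

def hours_in_range(v1, v2, k, c, hours_per_year=8760.0):
    # Expected annual hours with wind speed in [v1, v2]
    return hours_per_year * (weibull_cdf(v2, k, c) - weibull_cdf(v1, k, c))

def mean_cubed_speed(k, c):
    # E[v^3] = c^3 * Gamma(1 + 3/k); wind energy flux scales with v^3
    return c ** 3 * math.gamma(1.0 + 3.0 / k)

# Assumed critical speed range of 10-15 m/s for fatigue cycle counting
h = hours_in_range(10.0, 15.0, 2.0, 8.0)
```

The number of stress cycles is then estimated from `h` and the structure's response frequency within that range.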
Optimal dimensioning model of water distribution systems | Gomes ...
African Journals Online (AJOL)
This study is aimed at developing a pipe-sizing model for a water distribution system. The optimal solution minimises the system's total cost, which comprises the hydraulic network capital cost, plus the capitalised cost of pumping energy. The developed model, called Lenhsnet, may also be used for economical design when ...
Degree distribution of a new model for evolving networks
Indian Academy of Sciences (India)
on intuitive but realistic consideration that nodes are added to the network with both preferential and random attachments. The degree distribution of the model is between a power-law and an exponential decay. Motivated by the features of network evolution, we introduce a new model of evolving networks, incorporating the ...
A Discrete Model for HIV Infection with Distributed Delay
Directory of Open Access Journals (Sweden)
Brahim EL Boukari
2014-01-01
Full Text Available We give a consistent discretization of a continuous model of HIV infection, with distributed time delays to express the lag between the times when the virus enters a cell and when the cell becomes infected. The global stability of the steady states of the model is determined and numerical simulations are presented to illustrate our theoretical results.
Occam factors and model independent Bayesian learning of continuous distributions
International Nuclear Information System (INIS)
Nemenman, Ilya; Bialek, William
2002-01-01
Learning of a smooth but nonparametric probability density can be regularized using methods of quantum field theory. We implement a field theoretic prior numerically, test its efficacy, and show that the data and the phase space factors arising from the integration over the model space determine the free parameter of the theory ('smoothness scale') self-consistently. This persists even for distributions that are atypical in the prior and is a step towards a model independent theory for learning continuous distributions. Finally, we point out that a wrong parametrization of a model family may sometimes be advantageous for small data sets
Spatial distribution of emissions to air - the SPREAD model
Energy Technology Data Exchange (ETDEWEB)
Plejdrup, M.S.; Gyldenkaerne, S.
2011-04-15
The National Environmental Research Institute (NERI), Aarhus University, completes the annual national emission inventories for greenhouse gases and air pollutants according to Denmark's obligations under international conventions, e.g. the climate convention, UNFCCC and the convention on long-range transboundary air pollution, CLRTAP. NERI has developed a model to distribute emissions from the national emission inventories on a 1x1 km grid covering the Danish land and sea territory. The new spatial high resolution distribution model for emissions to air (SPREAD) has been developed according to the requirements for reporting of gridded emissions to CLRTAP. Spatial emission data is e.g. used as input for air quality modelling, which again serves as input for assessment and evaluation of health effects. For these purposes distributions with higher spatial resolution have been requested. Previously, a distribution on the 17x17 km EMEP grid has been set up and used in research projects combined with detailed distributions for a few sectors or sub-sectors e.g. a distribution for emissions from road traffic on 1x1 km resolution. SPREAD is developed to generate improved spatial emission data for e.g. air quality modelling in exposure studies. SPREAD includes emission distributions for each sector in the Danish inventory system; stationary combustion, mobile sources, fugitive emissions from fuels, industrial processes, solvents and other product use, agriculture and waste. This model enables generation of distributions for single sectors and for a number of sub-sectors and single sources as well. This report documents the methodologies in this first version of SPREAD and presents selected results. Further, a number of potential improvements for later versions of SPREAD are addressed and discussed. (Author)
Smoluchowski coagulation models of sea ice thickness distribution dynamics
Godlovitch, D.; Illner, R.; Monahan, A.
2011-12-01
Sea ice thickness distributions display a ubiquitous exponential decrease with thickness. This tail characterizes the range of ice thickness produced by mechanical redistribution of ice through the process of ridging, rafting, and shearing. We investigate how well the thickness distribution can be simulated by representing mechanical redistribution as a generalized stacking process. Such processes are naturally described by a well-studied class of models known as Smoluchowski Coagulation Models (SCMs), which describe the dynamics of a population of fixed-mass "particles" which combine in pairs to form a "particle" with the combined mass of the constituent pair at a rate which depends on the mass of the interacting particles. Like observed sea ice thickness distributions, the mass distribution of the populations generated by SCMs has an exponential or quasi-exponential form. We use SCMs to model sea ice, identifying mass-increasing particle combinations with thickness-increasing ice redistribution processes. Our model couples an SCM component with a thermodynamic component and generates qualitatively accurate thickness distributions with a variety of rate kernels. Our results suggest that the exponential tail of the sea ice thickness distribution arises from the nature of the ridging process, rather than specific physical properties of sea ice or the spatial arrangement of floes, and that the relative strengths of the dynamic and thermodynamic processes are key in accurately simulating the rate at which the sea ice thickness tail drops off with thickness.
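The stacking idea can be demonstrated with a direct Monte Carlo version of a constant-kernel SCM. This is a deliberately minimal sketch, not the authors' coupled model: the population size, merge count, constant kernel, and the absence of any thermodynamic component are all assumptions. Unit-thickness "floes" merge pairwise, and the resulting thickness distribution develops the quasi-exponential decay described above:

```python
import random
from collections import Counter

def stacking_process(n0=20000, n_merges=10000, seed=1):
    # Direct simulation of constant-kernel Smoluchowski coagulation:
    # repeatedly pick two floes uniformly at random and stack them
    # (thicknesses add), a crude analogue of ridging/rafting.
    random.seed(seed)
    floes = [1] * n0
    for _ in range(n_merges):
        i, j = random.sample(range(len(floes)), 2)
        floes[i] += floes[j]
        last = floes.pop()          # swap-remove the consumed floe j
        if j < len(floes):
            floes[j] = last
    return Counter(floes)

dist = stacking_process()
```

With a constant kernel the thickness histogram is monotone decreasing, the discrete analogue of the exponential tail; total ice volume (sum of thickness times count) is conserved by construction.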
Controlling sign problems in spin models using tensor renormalization
Energy Technology Data Exchange (ETDEWEB)
Denbleyker, Alan [Iowa U.; Liu, Yuzhi [Colorado U.; Meurice, Y. [Iowa U.; Qin, M. P. [Beijing, Inst. Phys.; Xiang, T. [Beijing, Inst. Phys.; Xie, Z. Y. [Beijing, Inst. Phys.; Yu, J. F. [Beijing, Inst. Phys.; Zou, Haiyuan [Iowa U.
2014-01-09
We consider the sign problem for classical spin models at complex $\\beta =1/g_0^2$ on $L\\times L$ lattices. We show that the tensor renormalization group method allows reliable calculations for larger Im$\\beta$ than the reweighting Monte Carlo method. For the Ising model with complex $\\beta$ we compare our results with the exact Onsager-Kaufman solution at finite volume. The Fisher zeros can be determined precisely with the TRG method. We check the convergence of the TRG method for the O(2) model on $L\\times L$ lattices when the number of states $D_s$ increases. We show that the finite size scaling of the calculated Fisher zeros agrees very well with the Kosterlitz-Thouless transition assumption and predict the locations for larger volume. The locations of these zeros agree with Monte Carlo reweighting calculations for small volume. The application of the method for the O(2) model with a chemical potential is briefly discussed.
Computational Model for Internal Relative Humidity Distributions in Concrete
Directory of Open Access Journals (Sweden)
Wondwosen Ali
2014-01-01
Full Text Available A computational model is developed for predicting the nonuniform internal relative humidity distribution in concrete. The internal relative humidity distribution is known to have a direct effect on nonuniform drying shrinkage strains. These nonuniform drying shrinkage strains result in the buildup of internal stresses, which may lead to cracking of the concrete. This may be particularly true at early ages of concrete, since the concrete is relatively weak while the difference in internal relative humidity is probably high. The results obtained from this model can be used by structural and construction engineers to predict critical drying shrinkage stresses induced by differential internal humidity distributions. The model uses finite element-finite difference numerical methods: the finite element method is used for spatial discretization, while the finite difference method is used to obtain transient solutions of the model. The numerical formulations are then programmed in MATLAB. The numerical results were compared with experimental results found in the literature and demonstrated very good agreement.
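The transient behaviour such a model captures can be illustrated with a one-dimensional explicit finite-difference analogue. This is a simplified sketch, not the paper's FE-FD formulation; the diffusivity, slab size, time step, and boundary values are all assumed for illustration:

```python
def humidity_profile(n=21, diff=1.0e-10, dx=0.005, dt=62500.0, steps=200):
    # Explicit finite-difference solution of dh/dt = D * d2h/dx2 in a
    # 0.1 m concrete slab: the interior starts saturated (h = 1.0), the
    # exposed face is held at ambient h = 0.5, the far face is sealed.
    r = diff * dt / dx ** 2
    assert r <= 0.5  # explicit-scheme stability limit
    h = [1.0] * n
    h[0] = 0.5                      # drying face (Dirichlet condition)
    for _ in range(steps):
        new = h[:]
        for i in range(1, n - 1):
            new[i] = h[i] + r * (h[i - 1] - 2.0 * h[i] + h[i + 1])
        new[n - 1] = new[n - 2]     # sealed face: zero-flux boundary
        h = new
    return h

profile = humidity_profile()
```

The humidity gradient near the exposed face is what drives the differential shrinkage strains discussed in the abstract; the profile is monotone from the drying face toward the still-saturated core.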
PESTLCI – A PESTICIDE DISTRIBUTION MODEL FOR LCA
DEFF Research Database (Denmark)
Birkved, Morten; Hauschild, Michael Zwicky
The aim of the presented work is to develop a model for distribution of pesticides into the environment following application to the field. Based on input of required substance characteristics and applied quantities for the pesticides, the model will estimate the emissions to the air, water, soil...... and assessment of pesticide applications. The report therefore starts with a review of the work reported by the CAPER project as described in / / in order to locate new methods amenable for: 1. Handling of pesticide screening in LCA 2. Distribution modelling of pesticides in LCA 3. Evaluation of human exposure...... in LCA Following the review of existing methods, a number of modifications and new modules are developed and integrated into the existing method for pesticide distribution modelling to arrive at PESTLCI. Finally, PESTLCI is tested on three pesticide applications and the results compared to the results...
An extensive comparison of species-abundance distribution models.
Baldridge, Elita; Harris, David J; Xiao, Xiao; White, Ethan P
2016-01-01
A number of different models have been proposed as descriptions of the species-abundance distribution (SAD). Most evaluations of these models use only one or two models, focus on only a single ecosystem or taxonomic group, or fail to use appropriate statistical methods. We use likelihood and AIC to compare the fit of four of the most widely used models to data on over 16,000 communities from a diverse array of taxonomic groups and ecosystems. Across all datasets combined the log-series, Poisson lognormal, and negative binomial all yield similar overall fits to the data. Therefore, when correcting for differences in the number of parameters the log-series generally provides the best fit to data. Within individual datasets some other distributions performed nearly as well as the log-series even after correcting for the number of parameters. The Zipf distribution is generally a poor characterization of the SAD.
An extensive comparison of species-abundance distribution models
Directory of Open Access Journals (Sweden)
Elita Baldridge
2016-12-01
Full Text Available A number of different models have been proposed as descriptions of the species-abundance distribution (SAD). Most evaluations of these models use only one or two models, focus on only a single ecosystem or taxonomic group, or fail to use appropriate statistical methods. We use likelihood and AIC to compare the fit of four of the most widely used models to data on over 16,000 communities from a diverse array of taxonomic groups and ecosystems. Across all datasets combined the log-series, Poisson lognormal, and negative binomial all yield similar overall fits to the data. Therefore, when correcting for differences in the number of parameters the log-series generally provides the best fit to data. Within individual datasets some other distributions performed nearly as well as the log-series even after correcting for the number of parameters. The Zipf distribution is generally a poor characterization of the SAD.
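The likelihood/AIC comparison can be reproduced in miniature for a single community. In the sketch below, the abundance vector is invented for illustration, and a geometric distribution stands in as the simpler competitor rather than the Poisson lognormal (whose likelihood needs numerical integration); the log-series is fitted by matching its mean, which is monotone in the parameter:

```python
import math

def fit_logseries(data):
    # MLE of the log-series parameter p by bisection on the mean
    # equation E[k] = -p / ((1 - p) * ln(1 - p)), monotone increasing in p
    mean = sum(data) / len(data)
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        m = -mid / ((1.0 - mid) * math.log(1.0 - mid))
        lo, hi = (mid, hi) if m < mean else (lo, mid)
    return 0.5 * (lo + hi)

def logseries_loglik(data, p):
    # P(k) = -p^k / (k * ln(1 - p)), k = 1, 2, ...
    a = -1.0 / math.log(1.0 - p)
    return sum(math.log(a * p ** k / k) for k in data)

def geometric_loglik(data, q):
    # P(k) = (1 - q)^(k-1) * q on k = 1, 2, ...; MLE is q = 1/mean
    return sum((k - 1) * math.log(1.0 - q) + math.log(q) for k in data)

def aic(loglik, n_params):
    return 2.0 * n_params - 2.0 * loglik

# Invented hollow-curve community: many rare species, few abundant ones
abund = [1] * 20 + [2] * 9 + [3] * 6 + [4] * 4 + [6] * 3 + [10] * 2 + [25, 60]
p = fit_logseries(abund)
aic_ls = aic(logseries_loglik(abund, p), 1)
aic_geo = aic(geometric_loglik(abund, 1.0 / (sum(abund) / len(abund))), 1)
```

For hollow-curve data like this, the heavier-tailed log-series should achieve the lower AIC.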
Applications of Skew Models Using Generalized Logistic Distribution
Directory of Open Access Journals (Sweden)
Pushpa Narayan Rathie
2016-04-01
Full Text Available We use the skew distribution generation procedure proposed by Azzalini [Scand. J. Stat., 1985, 12, 171–178] to create three new probability distribution functions. These models make use of the normal, Student-t and generalized logistic distributions; see Rathie and Swamee [Technical Research Report No. 07/2006. Department of Statistics, University of Brasilia: Brasilia, Brazil, 2006]. Expressions for the moments about the origin are derived. Graphical illustrations are also provided. The distributions derived in this paper can be seen as generalizations of the distributions given by Nadarajah and Kotz [Acta Appl. Math., 2006, 91, 1–37]. Applications with unimodal and bimodal data are given to illustrate the applicability of the results derived in this paper. The applications include the analysis of the following data sets: (a) spending on public education in various countries in 2003; (b) total expenditure on health in 2009 in various countries; and (c) waiting time between eruptions of the Old Faithful Geyser in Yellowstone National Park, Wyoming, USA. We compare the fit of the distributions introduced in this paper with the distributions given by Nadarajah and Kotz [Acta Appl. Math., 2006, 91, 1–37]. The results show that our distributions, in general, fit the data sets better. The general R codes for fitting the distributions introduced in this paper are given in Appendix A.
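Azzalini's construction is compact enough to state directly: starting from a symmetric density phi with CDF Phi, f(x) = 2*phi(x)*Phi(alpha*x) is a valid density for any skewness parameter alpha. A self-contained numerical check for the normal kernel (alpha = 3 is an arbitrary illustrative choice):

```python
import math

def phi(x):
    # Standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def skew_normal_pdf(x, alpha):
    # Azzalini's construction: f(x) = 2 * phi(x) * Phi(alpha * x).
    # alpha = 0 recovers the standard normal; swapping in a Student-t or
    # generalized-logistic kernel gives the paper's other two families.
    return 2.0 * phi(x) * Phi(alpha * x)

# Midpoint-rule check that the skewed density still integrates to one
area = sum(skew_normal_pdf(-10.0 + 0.001 * (i + 0.5), 3.0)
           for i in range(20000)) * 0.001
```

The factor 2*Phi(alpha*x) reweights the two tails without destroying normalization, which is why the recipe generalizes to the other symmetric kernels used in the paper.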
Crowd Sourcing for Challenging Technical Problems and Business Model
Davis, Jeffrey R.; Richard, Elizabeth
2011-01-01
imaging, microbial detection and even the use of pharmaceuticals for radiation protection. The internal challenges through NASA@Work drew over 6000 participants across all NASA centers. Challenges conducted by each NASA center elicited ideas and solutions from several other NASA centers and demonstrated rapid and efficient participation from employees at multiple centers to contribute to problem solving. Finally, on January 19, 2011, the SLSD conducted a workshop on open collaboration and innovation strategies and best practices through the newly established NASA Human Health and Performance Center (NHHPC). Initial projects will be described leading to a new business model for SLSD.
A Model for Determining Leakage in Water Distribution Systems
Stathis, Jonathan Alexander
1998-01-01
Leaks in pipe networks cause significant problems for utilities and water users in terms of lost revenue and interrupted service. In many cities the leakage is as high as forty percent. A water audit is carried out to assess system-wide leakage. However, to detect leakage at the level of a pipeline, a physical measurement technique is generally employed. For large cities the distribution piping length amounts to a few thousand miles. Therefore, the physical measurements can become tediou...
Distributing Correlation Coefficients of Linear Structure-Activity/Property Models
Directory of Open Access Journals (Sweden)
Sorana D. BOLBOACA
2011-12-01
Full Text Available Quantitative structure-activity/property relationships are mathematical relationships linking chemical structure and activity/property in a quantitative manner. These in silico approaches are frequently used to reduce animal testing and risk assessment, as well as to increase time- and cost-effectiveness in the characterization and identification of active compounds. The aim of our study was to investigate the pattern of the distribution of correlation coefficients associated with simple linear relationships linking compound structure with activity. A limited data set of the most common ordnance compounds found at naval facilities, with a range of toxicities to aquatic ecosystems and a set of seven properties, was studied. Statistically significant models were selected and investigated. The probability density function of the correlation coefficients was investigated using a series of possible continuous distribution laws. Almost 48% of the correlation coefficients proved to fit the Beta distribution, 40% fit the Generalized Pareto distribution, and 12% fit the Pert distribution.
Behaviour of ion velocity distributions for a simple collision model
St-Maurice, J.-P.; Schunk, R. W.
1974-01-01
Calculation of the ion velocity distributions for a weakly ionized plasma subjected to crossed electric and magnetic fields. An exact solution to Boltzmann's equation has been obtained by replacing the Boltzmann collision integral with a simple relaxation model. At altitudes above about 150 km, where the ion collision frequency is much less than the ion cyclotron frequency, the ion distribution takes the shape of a torus in velocity space for electric fields greater than 40 mV/m. This shape persists for one to two hours after application of the electric field. At altitudes where the ion collision and cyclotron frequencies are approximately equal (about 120 km), the ion velocity distribution is shaped like a bean for large electric field strengths. This bean-shaped distribution persists throughout the lifetime of ionospheric electric fields. These highly non-Maxwellian ion velocity distributions may have an appreciable effect on the interpretation of ion temperature measurements.
A Cascade-Based Emergency Model for Water Distribution Network
Directory of Open Access Journals (Sweden)
Qing Shuang
2015-01-01
Full Text Available Water distribution networks are important critical physical infrastructure systems. This paper studies emergency resource strategies for water distribution networks using the approaches of complex networks and cascading failures. A model of cascade-based emergency for water distribution networks is built. The cascade-based model combines network topology analysis with hydraulic analysis to provide a more realistic result. A load redistribution function with emergency recovery mechanisms is established. From the aspects of uniform distribution, node betweenness, and node pressure, six recovery strategies are given to reflect the network topology and the failure information, respectively. The recovery strategies are evaluated with complex network indicators to describe the failure scale and failure velocity. The proposed method is applied to an illustrative example. The results showed that the recovery strategy considering node pressure can enhance network robustness effectively. Besides, this strategy can reduce the number of failed nodes and produces the fewest failed nodes per unit time.
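The load-redistribution mechanism at the heart of such a model fits in a few lines. The sketch below is purely topological: the five-node looped network, loads, and capacities are invented, and a real water-network model would layer hydraulic analysis on top of this rule. A failed node's load is split equally among its surviving neighbours, and any neighbour pushed over capacity fails in turn:

```python
def cascade(adj, load, capacity, initial_failure):
    # Minimal load-redistribution cascade. Returns the set of failed nodes.
    failed = set()
    queue = [initial_failure]
    while queue:
        v = queue.pop()
        if v in failed:
            continue
        failed.add(v)
        alive = [u for u in adj[v] if u not in failed]
        if not alive:
            continue
        share = load[v] / len(alive)   # split the lost load equally
        for u in alive:
            load[u] += share
            if load[u] > capacity[u]:  # overloaded neighbour fails next
                queue.append(u)
    return failed

# Hypothetical 5-node looped network; node 0 acts as a hub
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
load = {v: 1.0 for v in adj}
capacity = {0: 3.0, 1: 1.4, 2: 1.6, 3: 3.0, 4: 3.0}
failed = cascade(adj, load, capacity, 0)
```

With the tight capacities chosen here the initial failure propagates through the whole loop; raising the capacities confines the failure to the initial node, which is the kind of margin a recovery strategy tries to restore.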
Bilinear reduced order approximate model of parabolic distributed solar collectors
Elmetennani, Shahrazed
2015-07-01
This paper proposes a novel, low-dimensional and accurate approximate model for the distributed parabolic solar collector, by means of a modified Gaussian interpolation along the spatial domain. The proposed reduced model, taking the form of a low-dimensional bilinear state representation, enables the reproduction of the heat transfer dynamics along the collector tube for system analysis. Moreover, presented as a reduced-order bilinear state space model, the well-established control theory for this class of systems can be applied. The approximation efficiency has been proven by several simulation tests, which have been performed considering parameters of the Acurex field with real external working conditions. Model accuracy has been evaluated by comparison to the analytical solution of the hyperbolic distributed model and its semi-discretized approximation, highlighting the benefits of using the proposed numerical scheme. Furthermore, model sensitivity to the different parameters of the Gaussian interpolation has been studied.
The Application of Phase Type Distributions for Modelling Queuing Systems
Directory of Open Access Journals (Sweden)
EIMUTIS VALAKEVICIUS
2007-12-01
Full Text Available Queuing models are important tools for studying the performance of complex systems, but despite the substantial queuing theory literature, it is often necessary to use approximations in cases where the system is non-Markovian. Phase-type distributions are by now an indispensable tool in the creation of queuing system models. The purpose of this paper is to suggest a method and software for evaluating queuing approximations. A numerical queuing model with priorities is used to explore the behaviour of an exponential phase-type approximation of the service-time distribution. A description of the queuing system in the event language is used for generating the set of states and the transition matrix between them. Two examples of numerical models are presented: a queuing system model with priorities and a queuing system model with quality control.
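The simplest phase-type family, the Erlang-k (k identical exponential phases in series), already illustrates the moment-matching idea behind such approximations: the squared coefficient of variation of an Erlang-k is 1/k, so a service time with a given mean and cv^2 maps to a phase count and per-phase rate. The target moments below (mean 2.0, cv^2 = 0.25) are assumed for illustration:

```python
import math
import random

def erlang_fit(mean, cv2):
    # Match the first two moments of a service time with an Erlang-k,
    # i.e. a k-stage series phase-type distribution with cv^2 = 1/k
    k = max(1, round(1.0 / cv2))
    rate = k / mean
    return k, rate

def erlang_sample(k, rate, rng):
    # A phase-type sample: total time to traverse all k exponential phases
    return sum(-math.log(1.0 - rng.random()) / rate for _ in range(k))

rng = random.Random(42)
k, rate = erlang_fit(mean=2.0, cv2=0.25)   # four phases, rate 2.0 each
draws = [erlang_sample(k, rate, rng) for _ in range(20000)]
m = sum(draws) / len(draws)
```

Service times with cv^2 > 1 need a different phase-type structure (e.g. a hyperexponential mixture) rather than a series of identical phases.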
Analysis and Comparison of Typical Models within Distribution Network Design
DEFF Research Database (Denmark)
Jørgensen, Hans Jacob; Larsen, Allan; Madsen, Oli B.G.
This paper investigates the characteristics of typical optimisation models within Distribution Network Design. During the paper fourteen models known from the literature will be thoroughly analysed. Through this analysis a schematic approach to categorisation of distribution network design models...... for educational purposes. Furthermore, the paper can be seen as a practical introduction to network design modelling as well as being an art manual or recipe when constructing such a model....... are covered in the categorisation include fixed vs. general networks, specialised vs. general nodes, linear vs. nonlinear costs, single vs. multi commodity, uncapacitated vs. capacitated activities, single vs. multi modal and static vs. dynamic. The models examined address both strategic and tactical planning...
Gravitational and capillary soil moisture dynamics for distributed hydrologic models
Directory of Open Access Journals (Sweden)
A. Castillo
2015-04-01
Full Text Available Distributed and continuous catchment models are used to simulate water and energy balance and fluxes across varied topography and landscape. The landscape is discretized into computational plan elements at resolutions of 10^1–10^3 m, and soil moisture is the hydrologic state variable. At the local scale, the vertical soil moisture dynamics link hydrologic fluxes and provide continuity in time. In catchment models these local-scale processes are modeled using 1-D soil columns that are discretized into layers that are usually 10^-3–10^-1 m in thickness. This creates a mismatch between the horizontal and vertical scales. For applications across large domains and in ensemble mode, this treatment can be a limiting factor due to its high computational demand. This study compares continuous multi-year simulations of soil moisture at the local scale using (i) a 1-pixel version of a distributed catchment hydrologic model and (ii) a benchmark detailed soil water physics solver. The distributed model uses a single soil layer with a novel dual-pore structure and employs linear parameterization of infiltration and some other fluxes. The detailed solver uses multiple soil layers and employs nonlinear soil physics relations to model flow in unsaturated soils. Using two sites with different climates (semiarid and sub-humid), it is shown that the efficient parameterization in the distributed model captures the essential dynamics of the detailed solver.
Specification, Model Generation, and Verification of Distributed Applications
Madelaine, Eric
2011-01-01
Since 2001, in the Oasis team, I have developed research on the semantics of applications based on distributed objects, applying my previous research in the field of process algebras in the context of a real language and applications of realistic size. The various aspects of this work naturally include behavioral semantics and the definition of procedures for model generation, taking into account the different concepts of distributed applications, but also, upstream, static code analysis a...
Information Modeling for Direct Control of Distributed Energy Resources
DEFF Research Database (Denmark)
Biegel, Benjamin; Andersen, Palle; Stoustrup, Jakob
2013-01-01
We present an architecture for an unbundled liberalized electricity market system where a virtual power plant (VPP) is able to control a number of distributed energy resources (DERs) directly through a two-way communication link. The aggregator who operates the VPP utilizes the accumulated...... for a whole range of different DERs. The devised information model can serve as input to the international standardization efforts on distributed energy resources....
QCD Sum Rules and Models for Generalized Parton Distributions
Energy Technology Data Exchange (ETDEWEB)
Anatoly Radyushkin
2004-10-01
I use QCD sum rule ideas to construct models for generalized parton distributions. To this end, the perturbative parts of QCD sum rules for the pion and nucleon electromagnetic form factors are interpreted in terms of GPDs and two models are discussed. One of them takes the double Borel transform at an adjusted value of the Borel parameter as a model for nonforward parton densities, and the other is based on the local duality relation. Possible ways of improving these Ansätze are briefly discussed.
Thinking outside the curve, part I: modeling birthweight distribution
Directory of Open Access Journals (Sweden)
Charnigo Richard
2010-07-01
Full Text Available Background: Greater epidemiologic understanding of the relationships among fetal-infant mortality and its prognostic factors, including birthweight, could have vast public health implications. A key step toward that understanding is a realistic and tractable framework for analyzing birthweight distributions and fetal-infant mortality. The present paper is the first of a two-part series that introduces such a framework. Methods: We propose describing a birthweight distribution via a normal mixture model in which the number of components is determined from the data using a model selection criterion rather than fixed a priori. Results: We address a number of methodological issues, including how the number of components selected depends on the sample size, how the choice of model selection criterion influences the results, and how estimates of mixture model parameters based on multiple samples from the same population can be combined to produce confidence intervals. As an illustration, we find that a 4-component normal mixture model reasonably describes the birthweight distribution for a population of white singleton infants born to heavily smoking mothers. We also compare this 4-component normal mixture model to two competitors from the existing literature: a contaminated normal model and a 2-component normal mixture model. In a second illustration, we discover that a 6-component normal mixture model may be more appropriate than a 4-component normal mixture model for a general population of black singletons. Conclusions: The framework developed in this paper avoids assuming the existence of an interval of birthweights over which there are no compromised pregnancies and does not constrain birthweights within compromised pregnancies to be normally distributed. Thus, the present framework can reveal heterogeneity in birthweight that is undetectable via a contaminated normal model or a 2-component normal mixture model.
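The core of such a framework, fitting a K-component normal mixture by EM and scoring it with a model selection criterion, can be sketched for K = 2. Everything below is an illustrative assumption: the EM implementation, the birthweight-like synthetic sample, and the use of BIC as the stand-in criterion:

```python
import math
import random

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def em_two_normals(data, iters=200):
    # EM for a 2-component normal mixture: weights w, means mu, sds sd
    data = sorted(data)
    n = len(data)
    mean0 = sum(data) / n
    sd0 = math.sqrt(sum((x - mean0) ** 2 for x in data) / n)
    w, mu, sd = [0.5, 0.5], [data[n // 4], data[3 * n // 4]], [sd0, sd0]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[j] * normal_pdf(x, mu[j], sd[j]) for j in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: responsibility-weighted moment updates
        for j in (0, 1):
            nj = sum(r[j] for r in resp)
            w[j] = nj / n
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            sd[j] = max(1e-6, math.sqrt(
                sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, data)) / nj))
    ll = sum(math.log(w[0] * normal_pdf(x, mu[0], sd[0]) +
                      w[1] * normal_pdf(x, mu[1], sd[1])) for x in data)
    return w, mu, sd, ll

def bic(loglik, n_params, n):
    return n_params * math.log(n) - 2.0 * loglik

random.seed(7)
# Hypothetical birthweight-like sample (grams): a small subpopulation
# near 2000 g and a dominant one near 3400 g
data = [random.gauss(2000.0, 350.0) for _ in range(150)] + \
       [random.gauss(3400.0, 450.0) for _ in range(850)]
w, mu, sd, ll = em_two_normals(data)
bic2 = bic(ll, 5, len(data))

# Single-normal baseline for the model comparison
m1 = sum(data) / len(data)
s1 = math.sqrt(sum((x - m1) ** 2 for x in data) / len(data))
ll1 = sum(math.log(normal_pdf(x, m1, s1)) for x in data)
bic1 = bic(ll1, 2, len(data))
```

Selecting the number of components then amounts to running this fit for K = 1, 2, 3, ... and keeping the K with the lowest criterion value; on a sample like this the 2-component model should beat the single normal.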
Control of the SCOLE configuration using distributed parameter models
Hsiao, Min-Hung; Huang, Jen-Kuang
1994-01-01
A continuum model for the SCOLE configuration has been derived using transfer matrices. Controller designs for distributed parameter systems have been analyzed. Pole-assignment controller design is considered easy to implement, but stability is not guaranteed. An explicit transfer function of the dynamic controllers has been obtained, and no model reduction is required before the controller is realized. One specific LQG controller for continuum models has been derived, but other optimal controllers for more general performance criteria remain to be studied.
Extending Growth Mixture Models Using Continuous Non-Elliptical Distributions
Wei, Yuhong; Tang, Yang; Shireman, Emilie; McNicholas, Paul D.; Steinley, Douglas L.
2017-01-01
Growth mixture models (GMMs) incorporate both conventional random effects growth modeling and latent trajectory classes as in finite mixture modeling; therefore, they offer a way to handle the unobserved heterogeneity between subjects in their development. GMMs with Gaussian random effects dominate the literature. When the data are asymmetric and/or have heavier tails, more than one latent class is required to capture the observed variable distribution. Therefore, a GMM with continuous non-el...
Modeling of non-linear CHP efficiency curves in distributed energy systems
DEFF Research Database (Denmark)
Milan, Christian; Stadler, Michael; Cardoso, Gonçalo
2015-01-01
Distributed energy resources are gaining increased importance in commercial and industrial building design. Combined heat and power (CHP) units are considered one of the key technologies for cost and emission reduction in buildings. In order to make optimal decisions on investment and operation...... for these technologies, detailed system models are needed. These models are often formulated as linear programming problems to keep computational costs and complexity in a reasonable range. However, CHP systems involve variations of the efficiency for large nameplate capacity ranges and in case of part load operation......, which can even be non-linear in nature. Since considering these characteristics would turn the models into non-linear problems, in most cases only constant efficiencies are assumed. This paper proposes possible solutions to address this issue. For a mixed integer linear programming problem two......
Distributed model based control of multi unit evaporation systems
International Nuclear Information System (INIS)
Yudi Samyudia
2006-01-01
In this paper, we present a new approach to the analysis and design of distributed control systems for multi-unit plants. The approach is established after treating the effect of recycled dynamics as a gap metric uncertainty from which a distributed controller can be designed sequentially for each unit to tackle the uncertainty. We then use a single effect multi-unit evaporation system to illustrate how the proposed method is used to analyze different control strategies and to systematically achieve a better closed-loop performance using a distributed model-based controller
Using the Weibull distribution reliability, modeling and inference
McCool, John I
2012-01-01
Understand and utilize the latest developments in Weibull inferential methods. While the Weibull distribution is widely used in science and engineering, most engineers do not have the necessary statistical training to implement the methodology effectively. Using the Weibull Distribution: Reliability, Modeling, and Inference fills a gap in the current literature on the topic, introducing a self-contained presentation of the probabilistic basis for the methodology while providing powerful techniques for extracting information from data. The author explains the use of the Weibull distribution
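Weibull-based reliability inference of the kind this book covers can be illustrated with median-rank regression, a standard graphical estimation technique for the shape and scale parameters. The true parameter values, sample size, and evaluation time below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
shape_true, scale_true = 2.0, 1000.0             # hypothetical beta (shape), eta (scale)
t = scale_true * rng.weibull(shape_true, 200)    # simulated failure times (hours)

# Median-rank regression: linearize F(t) = 1 - exp(-(t/eta)^beta) as
#   ln(-ln(1 - F)) = beta * ln(t) - beta * ln(eta)
t_sorted = np.sort(t)
n = len(t_sorted)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)      # Benard's median-rank approximation
y = np.log(-np.log(1.0 - F))
X = np.log(t_sorted)
beta, intercept = np.polyfit(X, y, 1)            # slope = beta, intercept = -beta*ln(eta)
eta = np.exp(-intercept / beta)

# Reliability at a mission time t0: R(t0) = exp(-(t0/eta)^beta)
R_500 = np.exp(-(500.0 / eta) ** beta)
```

With the true parameters above, R(500) = exp(-0.25) ≈ 0.78, and the regression estimates recover it closely; maximum-likelihood fitting, also treated in the book, would be the usual refinement.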
Shell model test of the Porter-Thomas distribution
International Nuclear Information System (INIS)
Grimes, S.M.; Bloom, S.D.
1981-01-01
Eigenvectors have been calculated for the A=18, 19, 20, 21, and 26 nuclei in an sd shell basis. The decomposition of these states into their shell model components shows, in agreement with other recent work, that this distribution is not a single Gaussian. We find that the largest amplitudes are distributed approximately in a Gaussian fashion. Thus, many experimental measurements should be consistent with the Porter-Thomas predictions. We argue that the non-Gaussian form of the complete distribution can be simply related to the structure of the Hamiltonian
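The Porter-Thomas prediction referenced here states that if the amplitudes are Gaussian, the normalized squared amplitudes (widths) follow a chi-squared distribution with one degree of freedom. A quick numerical check of one consequence, assuming unit-variance Gaussian amplitudes (a stand-in for the shell-model components, not the actual eigenvector data):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
# Gaussian-distributed amplitudes; normalized widths y = c^2 / <c^2>.
c = rng.normal(0.0, 1.0, 100_000)
y = c ** 2 / np.mean(c ** 2)

# Porter-Thomas: y ~ chi^2 with one degree of freedom, so
# P(y <= 1) = P(|Z| <= 1) = erf(1/sqrt(2)) ~ 0.6827.
frac_below_mean = np.mean(y <= 1.0)
```

The abstract's point is precisely that when the full amplitude distribution is *not* a single Gaussian, departures from this chi-squared-one behaviour appear, even though the largest amplitudes alone may still look Gaussian.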
An instructional model for training competence in solving clinical problems.
Ramaekers, Stephan P J; van Beukelen, Peter; Kremer, Wim D J; van Keulen, Hanno; Pilot, Albert
2011-01-01
We examined the design of a course that aims to ease the transition from pre-clinical learning into clinical work. This course is based on the premise that many of the difficulties with which students are confronted in this transition result from a lack of experience in applying knowledge in real practice situations. It is focused on the development of competence in solving clinical problems; uses an instructional model with alternating clinical practicals, demonstrations, and tutorials; and extends throughout the last pre-clinical year. We used a "proof-of-concept" approach to establish whether the core principles of the course design are feasible with regard to achieving the intended results. With the learning functions and processes as a frame of reference, retrospective analysis of the course's design features shows that this design matches the conditions from theories of the development of competence in solving clinical problems and instructional design. Three areas of uncertainty in the design are identified: the quality of the cases (information, openness), effective teaching (student and teacher roles), and adjustment to the development of competence (progress, coherence).
A distribution-free newsvendor model with balking penalty and random yield
Directory of Open Access Journals (Sweden)
Chongfeng Lan
2015-05-01
Full Text Available Purpose: The purpose of this paper is to extend the analysis of the distribution-free newsvendor problem in an environment of customer balking, which occurs when customers are reluctant to buy a product if its available inventory falls below a threshold level. Design/methodology/approach: We provide a new tradeoff tool as a replacement of the traditional one to weigh the holding cost against the goodwill cost segment: in addition to the shortage penalty, we also introduce the balking penalty. Furthermore, we extend our model to the case of random yield. Findings: A model is presented for determining both an optimal order quantity and a lower bound on the profit under the worst possible distribution of the demand. We also study the effects of the shortage penalty and the balking penalty on the optimal order quantity, which have been largely bypassed in the existing distribution-free single-period models with balking. Numerical examples are presented to illustrate the result. Originality/value: The incorporation of balking penalty and random yield represents an important improvement in inventory policy performance for the distribution-free newsvendor problem when customer balking occurs and the distributional form of demand is unknown.
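For context, the classical distribution-free (Scarf) newsvendor rule that this paper extends with balking penalties and random yield uses only the mean and standard deviation of demand. A sketch with illustrative numbers (the unit cost, selling price, and demand moments are hypothetical, and the balking extension itself is not reproduced here):

```python
import math

def scarf_order_qty(mu, sigma, c, p):
    """Scarf's distribution-free order quantity: only the demand mean mu and
    standard deviation sigma are known; c = unit cost, p = selling price."""
    ratio = (p - c) / c
    return mu + (sigma / 2.0) * (math.sqrt(ratio) - 1.0 / math.sqrt(ratio))

def worst_case_profit(mu, sigma, c, p):
    """Lower bound on expected profit at Scarf's quantity, valid for every
    demand distribution with the given mean and standard deviation."""
    return (p - c) * mu - sigma * math.sqrt(c * (p - c))

Q = scarf_order_qty(mu=100, sigma=20, c=4.0, p=10.0)
lb = worst_case_profit(100, 20, 4.0, 10.0)
```

The paper's contribution can be read as modifying the objective behind this rule so that the worst-case optimization also charges for lost sales caused by balking.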
Modeling risk and uncertainty in designing reverse logistics problem
Directory of Open Access Journals (Sweden)
Aida Nazari Gooran
2018-01-01
Full Text Available Increasing attention to environmental problems and social responsibility has brought reverse logistics (RL) issues into supply chain design, a topic that has recently received considerable attention from both academics and practitioners. In this paper, a multi-product reverse logistics network design model is developed; a hybrid method combining chance-constrained programming, a genetic algorithm and Monte Carlo simulation is then proposed to solve it. The model is solved for risk-averse and risk-seeking decision makers using conditional value at risk and the sum of the expected value and standard deviation, respectively. Comparison of the results shows that minimizing costs has no direct relation to the type of decision maker; however, in most cases risk-seeking decision makers recover more returned products than risk-averse ones. By increasing the flow of returned products into the chain, the production costs of new products and materials are reduced, and environmental benefits are created as well.
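The conditional value at risk (CVaR) criterion used above for the risk-averse decision maker can be estimated directly from Monte Carlo cost samples; the cost distribution below is purely illustrative, not the paper's network model:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical total-cost samples for a reverse logistics network design,
# produced by Monte Carlo simulation of uncertain return volumes.
costs = 1000.0 + 50.0 * rng.standard_normal(100_000) \
               + 30.0 * rng.exponential(1.0, 100_000)

def cvar(samples, alpha=0.95):
    """Conditional value at risk: expected cost in the worst (1 - alpha) tail."""
    var = np.quantile(samples, alpha)       # value at risk (tail threshold)
    return samples[samples >= var].mean()

v95 = np.quantile(costs, 0.95)
c95 = cvar(costs, 0.95)
# The paper's risk-seeking criterion by comparison: expected value plus
# standard deviation (here with unit weight on the deviation term).
mean_plus_sd = costs.mean() + costs.std()
```

By construction CVaR is at least as large as VaR, which is why minimizing it produces the more conservative (risk-averse) designs reported in the paper.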
A model problem concerning ionic transport in microstructured solid electrolytes
Curto Sillamoni, Ignacio J.; Idiart, Martín I.
2015-11-01
We consider ionic transport by diffusion and migration through microstructured solid electrolytes. The assumed constitutive relations for the constituent phases follow from convex energy and dissipation potentials which guarantee thermodynamic consistency. The effective response is determined by homogenizing the relevant field equations via the notion of multi-scale convergence. The resulting homogenized response involves several effective tensors, but they all require the solution of just one standard conductivity problem over the representative volume element. A multi-scale model for semicrystalline polymer electrolytes with spherulitic morphologies is derived by applying the theory to a specific class of two-dimensional microgeometries for which the effective response can be computed exactly. An enriched model accounting for a random dispersion of filler particles with interphases is also derived. In both cases, explicit expressions for the effective material parameters are provided. The models are used to explore the effect of crystallinity and filler content on the overall response. Predictions support recent experimental observations on doped poly-ethylene-oxide systems which suggest that the anisotropic crystalline phase can actually support faster ion transport than the amorphous phase along certain directions dictated by the morphology of the polymeric chains. Predictions also support the viewpoint that ceramic fillers improve ionic conductivity and cation transport number via interphasial effects.
Cooling problems of thermal power plants. Physical model studies
International Nuclear Information System (INIS)
Neale, L.C.
1975-01-01
The Alden Research Laboratories of Worcester Polytechnic Institute has for many years conducted physical model studies, which are normally classified as river or structural hydraulic studies. Since 1952 one aspect of these studies has involved the heated discharge from steam power plants. The early studies on such problems concentrated on improving the thermal efficiency of the system. This was accomplished by minimizing recirculation and by assuring full use of available cold water supplies. With the growing awareness of the impact of thermal power generation on the environment, attention has been redirected to reducing the effect of heated discharges on the biology of the receiving body of water. More specifically, the efforts of designers and operators of power plants are aimed at meeting or complying with standards established by various governmental agencies. Thus the studies involve developing means of minimizing surface temperatures at an outfall or establishing a local area of higher temperature with limits specified in terms of areas or distances. The physical models used for these studies have varied widely in scope, size, and operating features. These models have covered large areas with both distorted geometric scales and uniform dimensions. Instrumentation has also varied, from simple mercury thermometers to computer control and processing of hundreds of thermocouple indicators
On one model problem for the reaction-diffusion-advection equation
Davydova, M. A.; Zakharova, S. A.; Levashova, N. T.
2017-09-01
The asymptotic behavior of the solution with boundary layers in the time-independent mathematical model of reaction-diffusion-advection arising when describing the distribution of greenhouse gases in the surface atmospheric layer is studied. On the basis of the asymptotic method of differential inequalities, the existence of a boundary-layer solution and its asymptotic Lyapunov stability as a steady-state solution of the corresponding parabolic problem is proven. One of the results of this work is the determination of the local domain of the attraction of a boundary-layer solution.
The Cauchy problem for the Bogolyubov hierarchy of equations. The BCS model
International Nuclear Information System (INIS)
Vidybida, A.K.
1975-01-01
A chain of Bogolyubov's kinetic equations for an infinite quantum system of particles distributed in space with the mean density 1/V and interacting with the BCS model operator is considered as a single abstract equation in some countable normalized space bsup(v) of sequences of integral operators. In this case a unique solution of the Cauchy problem has been obtained for arbitrary initial conditions from bsup(v), stationary solutions of the equation have been derived, and the class of initial conditions which approach stationary ones is indicated
International Nuclear Information System (INIS)
1995-01-01
The Natural Gas Transmission and Distribution Model (NGTDM) is the component of the National Energy Modeling System (NEMS) that is used to represent the domestic natural gas transmission and distribution system. NEMS was developed in the Office of Integrated Analysis and Forecasting of the Energy Information Administration (EIA). NEMS is the third in a series of computer-based, midterm energy modeling systems used since 1974 by the EIA and its predecessor, the Federal Energy Administration, to analyze domestic energy-economy markets and develop projections. The NGTDM is the model within the NEMS that represents the transmission, distribution, and pricing of natural gas. The model also includes representations of the end-use demand for natural gas, the production of domestic natural gas, and the availability of natural gas traded on the international market based on information received from other NEMS models. The NGTDM determines the flow of natural gas in an aggregate, domestic pipeline network, connecting domestic and foreign supply regions with 12 demand regions. The methodology employed allows the analysis of impacts of regional capacity constraints in the interstate natural gas pipeline network and the identification of pipeline capacity expansion requirements. There is an explicit representation of core and noncore markets for natural gas transmission and distribution services, and the key components of pipeline tariffs are represented in a pricing algorithm. Natural gas pricing and flow patterns are derived by obtaining a market equilibrium across the three main elements of the natural gas market: the supply element, the demand element, and the transmission and distribution network that links them. The NGTDM consists of four modules: the Annual Flow Module, the Capacity Expansion Module, the Pipeline Tariff Module, and the Distributor Tariff Module. A model abstract is provided in Appendix A
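The market-equilibrium step described here — balancing supply, demand, and the transmission link that prices the path between them — can be sketched for a single region with linear curves and a fixed transmission tariff. All coefficients are hypothetical stand-ins, not NGTDM values:

```python
def supply(p):
    """Hypothetical regional supply curve: quantity offered at wellhead price p."""
    return 10.0 + 4.0 * p

def demand(p):
    """Hypothetical demand curve: consumers face the wellhead price plus a
    fixed transmission-and-distribution tariff of 0.8."""
    return 50.0 - 2.0 * (p + 0.8)

def equilibrium_price(lo=0.0, hi=20.0, tol=1e-9):
    """Bisection on the excess-demand function to find the clearing price."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if supply(mid) < demand(mid):
            lo = mid        # excess demand: price must rise
        else:
            hi = mid        # excess supply: price must fall
    return 0.5 * (lo + hi)

p_star = equilibrium_price()
q_star = supply(p_star)     # cleared quantity at equilibrium
```

The NGTDM performs this balancing over a multi-region pipeline network with capacity constraints rather than a single market, but the fixed-point logic is the same.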
Energy Technology Data Exchange (ETDEWEB)
NONE
1995-02-17
The Natural Gas Transmission and Distribution Model (NGTDM) is the component of the National Energy Modeling System (NEMS) that is used to represent the domestic natural gas transmission and distribution system. NEMS was developed in the Office of Integrated Analysis and Forecasting of the Energy Information Administration (EIA). NEMS is the third in a series of computer-based, midterm energy modeling systems used since 1974 by the EIA and its predecessor, the Federal Energy Administration, to analyze domestic energy-economy markets and develop projections. The NGTDM is the model within the NEMS that represents the transmission, distribution, and pricing of natural gas. The model also includes representations of the end-use demand for natural gas, the production of domestic natural gas, and the availability of natural gas traded on the international market based on information received from other NEMS models. The NGTDM determines the flow of natural gas in an aggregate, domestic pipeline network, connecting domestic and foreign supply regions with 12 demand regions. The methodology employed allows the analysis of impacts of regional capacity constraints in the interstate natural gas pipeline network and the identification of pipeline capacity expansion requirements. There is an explicit representation of core and noncore markets for natural gas transmission and distribution services, and the key components of pipeline tariffs are represented in a pricing algorithm. Natural gas pricing and flow patterns are derived by obtaining a market equilibrium across the three main elements of the natural gas market: the supply element, the demand element, and the transmission and distribution network that links them. The NGTDM consists of four modules: the Annual Flow Module, the Capacity Expansion Module, the Pipeline Tariff Module, and the Distributor Tariff Module. A model abstract is provided in Appendix A.
Directory of Open Access Journals (Sweden)
M. I. Fursanov
2014-01-01
Full Text Available This article presents algorithms for finding cost-effective replacements of consumer transformers in distribution electrical networks. Like any electrical equipment in power systems, power transformers have a limited service life, determined by the natural degradation of materials as well as by unexpected wear under overload and overvoltage conditions. According to the standards adopted in the Republic of Belarus, the rated service life of a power transformer is 25 years. There are, however, situations in which it is economically efficient to replace a transformer before that time. The possibility of such replacement is considered as a means of increasing the operating efficiency of an electrical network affected by physical wear and aging. The article discusses the shortcomings of earlier mathematical models of transformer replacement, in which displaced transformers were not reused; in practice, a transformer removed from one substation can be successfully used at another, especially when financial resources are limited and replacement requires a more detailed technical and economic justification. The authors developed an efficient algorithm for determining the optimal placement of transformers at substations of distribution networks, based on a search for the best solution over all displacement alternatives in an oriented graph. The suggested algorithm considerably reduces the design time of optimal transformer placement by using a set of simplifications. The result of the algorithm's work is a series of transformer displacements in the network, which yields a substantial economic effect compared with the replacement of a single transformer.
A two-stage stochastic programming model for the optimal design of distributed energy systems
International Nuclear Information System (INIS)
Zhou, Zhe; Zhang, Jianyun; Liu, Pei; Li, Zheng; Georgiadis, Michael C.; Pistikopoulos, Efstratios N.
2013-01-01
Highlights: ► The optimal design of distributed energy systems under uncertainty is studied. ► A stochastic model is developed using genetic algorithm and Monte Carlo method. ► The proposed system possesses inherent robustness under uncertainty. ► The inherent robustness is due to energy storage facilities and grid connection. -- Abstract: A distributed energy system is a multi-input and multi-output energy system with substantial energy, economic and environmental benefits. The optimal design of such a complex system under energy demand and supply uncertainty poses significant challenges in terms of both modelling and corresponding solution strategies. This paper proposes a two-stage stochastic programming model for the optimal design of distributed energy systems. A two-stage decomposition based solution strategy is used to solve the optimization problem, with a genetic algorithm performing the search on the first-stage variables and a Monte Carlo method dealing with uncertainty in the second stage. The model is applied to the planning of a distributed energy system in a hotel. Detailed computational results are presented and compared with those generated by a deterministic model. The impacts of demand and supply uncertainty on the optimal design of distributed energy systems are systematically investigated using the proposed modelling framework and solution approach.
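A stripped-down version of the two-stage idea can be sketched with a grid search standing in for the genetic algorithm on the first-stage capacity variable, and Monte Carlo sampling of demand in the second stage. All cost coefficients and the demand distribution are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
# Second-stage uncertainty: sampled energy demand scenarios (kW, hypothetical).
demand = np.clip(rng.normal(100, 25, 5000), 0, None)

def expected_cost(cap, invest=0.05, self_cost=0.10, grid_price=0.30):
    """First-stage investment cost plus Monte Carlo estimate of the expected
    second-stage operating cost: serve demand from own capacity where possible,
    buy any shortfall from the grid at a higher price."""
    served = np.minimum(demand, cap)
    shortfall = demand - served
    return invest * cap + np.mean(self_cost * served + grid_price * shortfall)

caps = np.arange(0, 201, 5)                    # candidate first-stage capacities
best_cap = caps[int(np.argmin([expected_cost(c) for c in caps]))]
```

The stochastic optimum deliberately leaves some peak demand to the grid connection, which is the "inherent robustness" effect the highlights mention; a deterministic model sized to mean demand would misprice that trade-off.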
Mathematical modelling and numerical simulation of oil pollution problems
2015-01-01
Written by outstanding experts in the fields of marine engineering, atmospheric physics and chemistry, fluid dynamics and applied mathematics, the contributions in this book cover a wide range of subjects, from pure mathematics to real-world applications in the oil spill engineering business. Offering a truly interdisciplinary approach, the authors present both mathematical models and state-of-the-art numerical methods for adequately solving the partial differential equations involved, as well as highly practical experiments involving actual cases of ocean oil pollution. It is indispensable that different disciplines of mathematics, like analysis and numerics, together with physics, biology, fluid dynamics, environmental engineering and marine science, join forces to solve today’s oil pollution problems. The book will be of great interest to researchers and graduate students in the environmental sciences, mathematics and physics, showing the broad range of techniques needed in order to solve these poll...
Solving the Standard Model Problems in Softened Gravity
Salvio, Alberto
2016-11-16
The Higgs naturalness problem is solved if the growth of Einstein's gravitational interaction is softened at an energy $\lesssim 10^{11}\,$GeV (softened gravity). We work here within an explicit realization where the Einstein-Hilbert Lagrangian is extended to include terms quadratic in the curvature and a non-minimal coupling with the Higgs. We show that this solution is preserved by adding three right-handed neutrinos with masses below the electroweak scale, accounting for neutrino oscillations, dark matter and the baryon asymmetry. The smallness of the right-handed neutrino masses (compared to the Planck scale) and the QCD $\theta$-term are also shown to be natural. We prove that a possible gravitational source of CP violation cannot spoil the model, thanks to the presence of right-handed neutrinos. Starobinsky inflation can occur in this context, even if we live in a metastable vacuum.
Block factorization of step response model predictive control problems
DEFF Research Database (Denmark)
Kufoalor, D. K.M.; Frison, Gianluca; Imsland, L.
2017-01-01
By introducing a stage-wise prediction formulation that enables the use of highly efficient quadratic programming (QP) solution methods, this paper expands the computational toolbox for solving step response MPC problems. We propose a novel MPC scheme that is able to incorporate step response data...... implemented in the HPMPC framework, and the performance is evaluated through simulation studies. The results confirm that a computationally fast controller is achieved, compared to the traditional step response MPC scheme that relies on an explicit prediction formulation. Moreover, the tailored condensing...... algorithm exhibits superior performance and produces solution times comparable to that achieved when using a condensing scheme for an equivalent (but much smaller) state-space model derived from first-principles. Implementation aspects necessary for high performance on embedded platforms are discussed...
The hierarchy problem of the electroweak standard model revisited
Energy Technology Data Exchange (ETDEWEB)
Jegerlehner, Fred [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)
2013-05-15
A careful renormalization group analysis of the electroweak Standard Model reveals that there is no hierarchy problem in the SM. In the broken phase a light Higgs turns out to be natural as it is self-protected and self-tuned by the Higgs mechanism. This means that the scalar Higgs need not be protected by any extra symmetry, specifically supersymmetry, in order not to be much heavier than the other SM particles, which are protected by gauge or chiral symmetry. Thus the existence of quadratic cutoff effects in the SM cannot motivate the need for supersymmetric extensions of the SM; in contrast, these effects play an important role in triggering the electroweak phase transition and in shaping the Higgs potential in the early universe to drive inflation, as supported by observation.
Jackson, C. E., Jr.
1977-01-01
A sample problem library containing 20 problems covering most facets of Nastran Thermal Analyzer modeling is presented. Areas discussed include radiative interchange, arbitrary nonlinear loads, transient temperature and steady-state structural plots, temperature-dependent conductivities, simulated multi-layer insulation, and constraint techniques. The use of the major control options and important DMAP alters is demonstrated.
MR imaging of model drug distribution in simulated vitreous
Directory of Open Access Journals (Sweden)
Stein Sandra
2015-09-01
Full Text Available The in vitro and in vivo characterization of intravitreal injections plays an important role in developing innovative therapy approaches. Using the established vitreous model (VM) and eye movement system (EyeMoS), the distribution of contrast agents with different molecular weights was studied in vitro. The impact of simulated age-related vitreal liquefaction (VL) on drug distribution in the VM was examined either with injection through the gel phase or through the liquid phase. For comparison, the distribution was studied ex vivo in the porcine vitreous. The studies were performed in a magnetic resonance (MR) scanner. As expected, with increasing molecular weight the diffusion velocity and the visual distribution of the injected substances decreased. Similar drug distribution was observed in the VM and in the porcine eye. VL causes enhanced convective flow and faster distribution in the VM. Confirming the importance of the injection technique as VL progresses, injection through the gelatinous phase caused faster distribution into peripheral regions of the VM than injection through the liquefied phase. The VM and MR scanner in combination present a new approach for the in vitro characterization of drug release and distribution of intravitreal dosage forms.
The Distribution of Carbon Monoxide in the GOCART Model
Fan, Xiaobiao; Chin, Mian; Einaudi, Franco (Technical Monitor)
2000-01-01
Carbon monoxide (CO) is an important trace gas because it is a significant source of tropospheric Ozone (O3) as well as a major sink for atmospheric hydroxyl radical (OH). The distribution of CO is set by a balance between the emissions, transport, and chemical processes in the atmosphere. The Georgia Tech/Goddard Global Ozone Chemistry Aerosol Radiation and Transport (GOCART) model is used to simulate the atmospheric distribution of CO. The GOCART model is driven by the assimilated meteorological data from the Goddard Earth Observing System Data Assimilation System (GEOS DAS) in an off-line mode. We study the distribution of CO on three time scales: (1) day to day fluctuation produced by the synoptic waves; (2) seasonal changes due to the annual cycle of CO sources and sinks; and (3) interannual variability induced by dynamics. Comparison of model results with ground based and remote sensing measurements will also be presented.
Spatial distribution of emissions to air – the SPREAD model
DEFF Research Database (Denmark)
Plejdrup, Marlene Schmidt; Gyldenkærne, Steen
The National Environmental Research Institute (NERI), Aarhus University, completes the annual national emission inventories for greenhouse gases and air pollutants according to Denmark’s obligations under international conventions, e.g. the climate convention, UNFCCC and the convention on long......-range transboundary air pollution, CLRTAP. NERI has developed a model to distribute emissions from the national emission inventories on a 1x1 km grid covering the Danish land and sea territory. The new spatial high resolution distribution model for emissions to air (SPREAD) has been developed according...... to the requirements for reporting of gridded emissions to CLRTAP. Spatial emission data is e.g. used as input for air quality modelling, which again serves as input for assessment and evaluation of health effects. For these purposes distributions with higher spatial resolution have been requested. Previously......
International Nuclear Information System (INIS)
Chen, Y W; Zhang, L F; Huang, J P
2007-01-01
By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the agreement between the characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model fits real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property
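The two quantities compared in this abstract, characteristic path length and clustering coefficient, can be computed for a basic Watts-Strogatz graph (without the degree-distribution extension the authors develop); the network size, neighbourhood size, and rewiring probabilities below are illustrative:

```python
import numpy as np
from collections import deque

def watts_strogatz(n, k, beta, rng):
    """Ring lattice (each node linked to k neighbours on each side) with every
    lattice edge rewired to a random node with probability beta."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in range(1, k + 1):
            old = (i + j) % n
            if rng.random() < beta and old in adj[i]:
                new = int(rng.integers(n))
                while new == i or new in adj[i]:
                    new = int(rng.integers(n))
                adj[i].discard(old); adj[old].discard(i)
                adj[i].add(new); adj[new].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over reachable pairs (BFS from every node)."""
    total, pairs = 0, 0
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def avg_clustering(adj):
    """Mean local clustering coefficient."""
    cs = []
    for u, nbrs in adj.items():
        nb, d = list(nbrs), len(nbrs)
        if d < 2:
            cs.append(0.0)
            continue
        links = sum(1 for a in range(d) for b in range(a + 1, d)
                    if nb[b] in adj[nb[a]])
        cs.append(2.0 * links / (d * (d - 1)))
    return sum(cs) / len(cs)

rng = np.random.default_rng(5)
ring = watts_strogatz(200, 3, 0.0, rng)   # pure lattice: high clustering, long paths
small = watts_strogatz(200, 3, 0.1, rng)  # a few shortcuts: paths shorten sharply
```

The small-world signature is that a modest rewiring probability collapses the path length while clustering stays high; the developed model in the paper additionally tunes the degree distribution so both quantities track real networks more closely.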
Directory of Open Access Journals (Sweden)
Penny Masuoka
2010-11-01
Full Text Available Over 35,000 cases of Japanese encephalitis (JE) are reported worldwide each year. Culex tritaeniorhynchus is the primary vector of the JE virus, while wading birds are natural reservoirs and swine amplifying hosts. As part of a JE risk analysis, the ecological niche modeling programme, Maxent, was used to develop a predictive model for the distribution of Cx. tritaeniorhynchus in the Republic of Korea, using mosquito collection data, temperature, precipitation, elevation, land cover and the normalized difference vegetation index (NDVI). The resulting probability maps from the model were consistent with the known environmental limitations of the mosquito, with low probabilities predicted for forest covered mountains. July minimum temperature and land cover were the most important variables in the model. Elevation, summer NDVI (July-September), precipitation in July, summer minimum temperature (May-August) and maximum temperature for fall and winter months also contributed to the model. Comparison of the Cx. tritaeniorhynchus model to the distribution of JE cases in the Republic of Korea from 2001 to 2009 showed that cases among a highly vaccinated Korean population were located in high-probability areas for Cx. tritaeniorhynchus. No recent JE cases were reported from the eastern coastline, where higher probabilities of mosquitoes were predicted, but where only small numbers of pigs are raised. The geographical distribution of reported JE cases corresponded closely with the predicted high-probability areas for Cx. tritaeniorhynchus, making the map a useful tool for health risk analysis that could be used for planning preventive public health measures.
UV Stellar Distribution Model for the Derivation of Payload
Directory of Open Access Journals (Sweden)
Young-Jun Choi
1999-12-01
Full Text Available We present the results of a model calculation of the stellar distribution in a UV band centered at 2175Å, corresponding to the well-known bump in the interstellar extinction curve. The stellar distribution model used here is based on the Bahcall-Soneira galaxy model (1980). The source code for the model calculation was designed by Brosch (1991) and modified to investigate various design factors for a UV satellite payload. The model predicts UV stellar densities in different sky directions, and its results are compared with the TD-1 star counts for a number of sky regions. From this study, we can determine the field of view, size of optics, angular resolution, and number of stars in one orbit. These provide the basic constraints for designing a satellite payload for UV observations.
The classical Stefan problem basic concepts, modelling and analysis
Gupta, SC
2003-01-01
This volume emphasises studies related to classical Stefan problems. The term "Stefan problem" is generally used for heat transfer problems with phase changes such as from the liquid to the solid. Stefan problems have some characteristics that are typical of them, but certain problems arising in fields such as mathematical physics and engineering also exhibit characteristics similar to them. The term "classical" distinguishes the formulation of these problems from their weak formulation, in which the solution need not possess classical derivatives. Under suitable assumptions, a weak solution could be as good as a classical solution. In hyperbolic Stefan problems, the characteristic features of Stefan problems are present but, unlike in Stefan problems, discontinuous solutions are allowed because of the hyperbolic nature of the heat equation. The numerical solutions of inverse Stefan problems, and the analysis of direct Stefan problems are so integrated that it is difficult to discuss one without referring to the other. So no...
Species distribution models of tropical deep-sea snappers.
Directory of Open Access Journals (Sweden)
Céline Gomez
Full Text Available Deep-sea fisheries provide an important source of protein to Pacific Island countries and territories that are highly dependent on fish for food security. However, spatial management of these deep-sea habitats is hindered by insufficient data. We developed species distribution models using spatially limited presence data for the main harvested species in the Western Central Pacific Ocean. We used bathymetric and water temperature data to develop presence-only species distribution models for the commercially exploited deep-sea snappers Etelis Cuvier 1828, Pristipomoides Valenciennes 1830, and Aphareus Cuvier 1830. We evaluated the performance of four different algorithms (CTA, GLM, MARS, and MAXENT) within the BIOMOD framework to obtain an ensemble of predicted distributions. We projected these predictions across the Western Central Pacific Ocean to produce maps of potential deep-sea snapper distributions in 32 countries and territories. Depth was consistently the best predictor of presence for all species groups across all models. Bathymetric slope was consistently the poorest predictor. Temperature at depth was a good predictor of presence for GLM only. Model precision was highest for MAXENT and CTA. There were strong regional patterns in predicted distribution of suitable habitat, with the largest areas of suitable habitat (> 35% of the Exclusive Economic Zone) predicted in seven South Pacific countries and territories (Fiji, Matthew & Hunter, Nauru, New Caledonia, Tonga, Vanuatu and Wallis & Futuna). Predicted habitat also varied among species, with the proportion of predicted habitat highest for Aphareus and lowest for Etelis. Despite data paucity, the relationship between deep-sea snapper presence and their environments was sufficiently strong to predict their distribution across a large area of the Pacific Ocean. Our results therefore provide a strong baseline for designing monitoring programs that balance resource exploitation and
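The ensemble step described above can be sketched in a few lines. This is a hedged illustration, not the BIOMOD implementation: each algorithm is assumed to emit a rescaled habitat-suitability score in [0, 1] per grid cell, the ensemble is the mean of those scores, and the consensus is then reclassified into a binary presence/absence map. The four toy score arrays are invented for illustration.

```python
import numpy as np

def ensemble_prediction(predictions):
    """Average the rescaled predictions of several algorithms per grid cell."""
    stacked = np.vstack(predictions)   # shape: (n_models, n_cells)
    return stacked.mean(axis=0)        # consensus suitability per cell

def binarize(suitability, threshold=0.5):
    """Reclassify continuous suitability into presence (1) / absence (0)."""
    return (suitability >= threshold).astype(int)

# Toy scores standing in for the four algorithms (CTA, GLM, MARS, MAXENT)
cta    = np.array([0.2, 0.8, 0.9, 0.1])
glm    = np.array([0.3, 0.7, 0.8, 0.2])
mars   = np.array([0.1, 0.9, 0.7, 0.3])
maxent = np.array([0.2, 0.8, 0.8, 0.2])

consensus = ensemble_prediction([cta, glm, mars, maxent])
presence = binarize(consensus)
print(consensus)   # [0.2 0.8 0.8 0.2]
print(presence)    # [0 1 1 0]
```

The same machinery accommodates the paper's second criterion (averaging only the best techniques) by passing a subset of the prediction arrays.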
Koper, Rob
2003-01-01
Please refer to: Koper, R. (2004). Use of the Semantic Web to Solve Some Basic Problems in Education: Increase Flexible, Distributed Lifelong Learning, Decrease Teacher's Workload. Journal of Interactive Media in Education, 2004 (6). Special Issue on the Educational Semantic Web. ISSN: 1365-893X
Real-time modeling and simulation of distribution feeder and distributed resources
Singh, Pawan
The analysis of electrical systems dates back to the days when analog network analyzers were used. With the advent of digital computers, many programs were written for power-flow and short-circuit analysis for the improvement of the electrical system. Real-time computer simulations can answer many what-if scenarios in an existing or proposed power system. In this thesis, the standard IEEE 13-node distribution feeder is developed and validated on the real-time platform OPAL-RT. The concept and the challenges of real-time simulation are studied and addressed. Distributed energy resources, including commonly used distributed generation and storage devices such as a diesel engine, a solar photovoltaic array, and a battery storage system, are modeled and simulated on the real-time platform. A microgrid encompasses a portion of an electric power distribution system located downstream of the distribution substation. Normally, the microgrid operates in parallel with the grid; however, scheduled or forced isolation can take place. In such conditions, the microgrid must have the ability to operate stably and autonomously. The microgrid can operate in grid-connected and islanded modes; both operating modes are studied in the last chapter. Towards the end, a simple microgrid controller for energy management and protection of the microgrid is developed, modeled, and simulated on the real-time platform.
International Nuclear Information System (INIS)
1996-01-01
The Natural Gas Transmission and Distribution Model (NGTDM) of the National Energy Modeling System is developed and maintained by the Energy Information Administration (EIA), Office of Integrated Analysis and Forecasting. This report documents the archived version of the NGTDM that was used to produce the natural gas forecasts presented in the Annual Energy Outlook 1996, (DOE/EIA-0383(96)). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic approach, and provides detail on the methodology employed. Previously this report represented Volume I of a two-volume set. Volume II reported on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues.
Energy Technology Data Exchange (ETDEWEB)
NONE
1996-02-26
The Natural Gas Transmission and Distribution Model (NGTDM) of the National Energy Modeling System is developed and maintained by the Energy Information Administration (EIA), Office of Integrated Analysis and Forecasting. This report documents the archived version of the NGTDM that was used to produce the natural gas forecasts presented in the Annual Energy Outlook 1996, (DOE/EIA-0383(96)). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic approach, and provides detail on the methodology employed. Previously this report represented Volume I of a two-volume set. Volume II reported on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues.
Investigating Secondary School Students’ Difficulties in Modeling Problems PISA-Model Level 5 And 6
Directory of Open Access Journals (Sweden)
Sri Imelda Edo
2013-01-01
Full Text Available The trend of Indonesian students' mathematical ability over the last four periods of the Programme for International Student Assessment (PISA) shows unstable movement. PISA aims to examine the ability of children aged 15 years in reading literacy, mathematics literacy, and science literacy. The concept of mathematical literacy is closely related to several other concepts discussed in mathematics education; the most important is mathematical modelling and its component processes. Therefore, the goal of this research is to investigate secondary school students' difficulties in modelling PISA-model problems at levels 5 and 6. Qualitative research was used as an appropriate means to achieve this goal. This type of research places greater emphasis on holistic description of the phenomenon identified to be studied: students' difficulties in modelling the real-world problems in PISA-model questions at levels 5 and 6. Twenty-six grade 9 students of SMPN 1 Palembang, 26 grade 9 students of SMPK Frater Xaverius 1 Palembang, and 31 participants of a mathematical literacy context event were involved in this research. The results showed that students have difficulty in: (1) formulating situations mathematically, such as representing a situation mathematically and recognizing mathematical structure (including regularities, relationships, and patterns) in problems; and (2) evaluating the reasonableness of a mathematical solution in the context of a real-world problem. The students had no problem solving the mathematical problems they had constructed. Keywords: Mathematical model, Modelling competence, PISA, PISA questions level 5 and 6, Students' difficulties in solving PISA-model questions, Mathematics literacy. DOI: http://dx.doi.org/10.22342/jme.4.1.561.41-58
Distributed state-space generation of discrete-state stochastic models
Ciardo, Gianfranco; Gluckman, Joshua; Nicol, David
1995-01-01
High-level formalisms such as stochastic Petri nets can be used to model complex systems. Analysis of logical and numerical properties of these models often requires the generation and storage of the entire underlying state space. This imposes practical limitations on the types of systems which can be modeled. Because of the vast amount of memory consumed, we investigate distributed algorithms for the generation of state space graphs. The distributed construction allows us to take advantage of the combined memory readily available on a network of workstations. The key technical problem is to find effective methods for on-the-fly partitioning, so that the state space is evenly distributed among processors. In this paper we report on the implementation of a distributed state-space generator that may be linked to a number of existing system modeling tools. We discuss partitioning strategies in the context of Petri net models, and report on performance observed on a network of workstations, as well as on a distributed memory multi-computer.
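The core idea of on-the-fly partitioning can be sketched as follows. This is an illustrative toy, not the authors' implementation: a hash function assigns each newly discovered state an "owner" processor, and only the owner stores and deduplicates it. The processors are simulated within a single process here, and the counter-style successor function is an invented stand-in for a Petri net firing rule.

```python
from collections import deque

def owner(state, n_procs):
    """On-the-fly partitioning: hash each state to the processor that owns it."""
    return hash(state) % n_procs

def generate_state_space(initial, successors, n_procs=4):
    """Breadth-first exploration of reachable states, stored per partition."""
    partitions = [set() for _ in range(n_procs)]
    frontier = deque([initial])
    partitions[owner(initial, n_procs)].add(initial)
    while frontier:
        s = frontier.popleft()
        for t in successors(s):
            p = owner(t, n_procs)
            if t not in partitions[p]:   # only the owning partition checks/stores t
                partitions[p].add(t)
                frontier.append(t)
    return partitions

# Toy "model": a counter modulo 10 with two transitions (+1 and +2)
succ = lambda s: [(s + 1) % 10, (s + 2) % 10]
parts = generate_state_space(0, succ)
total = sum(len(p) for p in parts)
print(total)   # 10 reachable states, spread over the 4 partitions
```

In a real distributed run the `if t not in partitions[p]` check becomes a message to processor `p`, which is why an even hash-based spread of states matters for both memory balance and communication load.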
A simplified model of saltcake moisture distribution. Letter report
International Nuclear Information System (INIS)
Simmons, C.S.
1995-09-01
This letter report describes the formulation of a simplified model for finding the moisture distribution in a saltcake waste profile that has been stabilized by pumping out the drainable interstitial liquid. The model is based on assuming that capillarity mainly governs the distribution of moisture in the porous saltcake waste. A steady upward flow of moisture driven by evaporation from the waste surface is conceptualized to occur for isothermal conditions. To obtain hydraulic parameters for unsaturated conditions, the model is calibrated or matched to the relative saturation distribution as measured by neutron probe scans. The model is demonstrated on Tanks 104-BY and 105-TX as examples. One value of the model is that it identifies the key physical parameters that control the surface moisture content in a waste profile. Moreover, the model can be used to estimate the brine application rate at the waste surface that would raise the moisture content there to a safe level. Thus, the model can be applied to help design a strategy for correcting the moisture conditions in a saltcake waste tank.
Derivation of Distributed Models of Atomic Polarizability for Molecular Simulations.
Soteras, Ignacio; Curutchet, Carles; Bidon-Chanal, Axel; Dehez, François; Ángyán, János G; Orozco, Modesto; Chipot, Christophe; Luque, F Javier
2007-11-01
The main thrust of this investigation is the development of models of distributed atomic polarizabilities for the treatment of induction effects in molecular mechanics simulations. The models are obtained within the framework of the induced dipole theory by fitting the induction energies computed via a fast but accurate MP2/Sadlej-adjusted perturbational approach in a grid of points surrounding the molecule. Particular care is paid in the examination of the atomic quantities obtained from models of implicitly and explicitly interacting polarizabilities. Appropriateness and accuracy of the distributed models are assessed by comparing the molecular polarizabilities recovered from the models and those obtained experimentally and from MP2/Sadlej calculations. The behavior of the models is further explored by computing the polarization energy for aromatic compounds in the context of cation-π interactions and for selected neutral compounds in a TIP3P aqueous environment. The present results suggest that the computational strategy described here constitutes a very effective tool for the development of distributed models of atomic polarizabilities and can be used in the generation of new polarizable force fields.
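The fitting strategy described above is linear at heart, which a small sketch can make concrete. This is a deliberately simplified, hypothetical version, not the authors' code: assuming non-interacting isotropic atomic polarizabilities, the induction energy at a probe point k is E_k = -1/2 · Σ_a α_a |F_ka|², which is linear in the α_a, so they can be fitted to reference induction energies on a grid of points by linear least squares. The field magnitudes and "reference" energies below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_atoms = 50, 3

# |F_ka|^2: squared field magnitude at grid point k due to the probe, at atom a
F2 = rng.uniform(0.1, 1.0, size=(n_points, n_atoms))

alpha_true = np.array([1.5, 0.8, 2.1])   # invented "target" polarizabilities
E_ref = -0.5 * F2 @ alpha_true           # synthetic reference induction energies

# Linear least-squares fit of the distributed polarizabilities
A = -0.5 * F2                             # design matrix: E = A @ alpha
alpha_fit, *_ = np.linalg.lstsq(A, E_ref, rcond=None)
print(np.round(alpha_fit, 6))            # recovers [1.5 0.8 2.1]
```

The paper's explicitly interacting-polarizability models make the problem nonlinear (induced dipoles polarize each other), but the grid-fitting principle, matching model induction energies to accurate reference values point by point, is the same.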
Applications of Transport/Reaction Codes to Problems in Cell Modeling; TOPICAL
International Nuclear Information System (INIS)
MEANS, SHAWN A.; RINTOUL, MARK DANIEL; SHADID, JOHN N.
2001-01-01
We demonstrate two specific examples that show how our existing capabilities in solving large systems of partial differential equations associated with transport/reaction systems can be easily applied to outstanding problems in computational biology. First, we examine a three-dimensional model for calcium wave propagation in a Xenopus laevis frog egg and verify that a proposed model for the distribution of calcium release sites agrees with experimental results as a function of both space and time. Next, we create a model of the neuron's terminus based on experimental observations and show that the sodium-calcium exchanger is not the route of sodium's modulation of neurotransmitter release. These state-of-the-art simulations were performed on massively parallel platforms and required almost no modification of existing Sandia codes.
Modeling and Solving the Liner Shipping Service Selection Problem
DEFF Research Database (Denmark)
Karsten, Christian Vad; Balakrishnan, Anant
We address a tactical planning problem, the Liner Shipping Service Selection Problem (LSSSP), facing container shipping companies. Given estimated demand between various ports, the LSSSP entails selecting the best subset of non-simple cyclic sailing routes from a given pool of candidate routes...... requirements and the hop limits to reduce problem size, and describe techniques to accelerate the solution procedure. We present computational results for realistic problem instances from the benchmark suite LINER-LIB....
Nucleon parton distributions in a light-front quark model
Energy Technology Data Exchange (ETDEWEB)
Gutsche, Thomas [Universitaet Tuebingen, Institut fuer Theoretische Physik, Kepler Center for Astro and Particle Physics, Tuebingen (Germany); Lyubovitskij, Valery E. [Universitaet Tuebingen, Institut fuer Theoretische Physik, Kepler Center for Astro and Particle Physics, Tuebingen (Germany); Tomsk State University, Department of Physics, Tomsk (Russian Federation); Tomsk Polytechnic University, Laboratory of Particle Physics, Mathematical Physics Department, Tomsk (Russian Federation); Universidad Tecnica Federico Santa Maria, Departamento de Fisica y Centro Cientifico Tecnologico de Valparaiso (CCTVal), Valparaiso (Chile); Schmidt, Ivan [Universidad Tecnica Federico Santa Maria, Departamento de Fisica y Centro Cientifico Tecnologico de Valparaiso (CCTVal), Valparaiso (Chile)
2017-02-15
Continuing our analysis of parton distributions in the nucleon, we extend our light-front quark model in order to obtain both the helicity-independent and the helicity-dependent parton distributions, analytically matching the results of global fits at the initial scale μ ∼ 1 GeV; they also contain the correct Dokshitzer-Gribov-Lipatov-Altarelli-Parisi evolution. We also calculate the transverse parton, Wigner and Husimi distributions from a unified point of view, using our light-front wave functions and expressing them in terms of the parton distributions q{sub v}(x) and δq{sub v}(x). Our results are very relevant for the current and future program of the COMPASS experiment at SPS (CERN). (orig.)
Spatio-temporal modeling of nonlinear distributed parameter systems
Li, Han-Xiong
2011-01-01
The purpose of this volume is to provide a brief review of the previous work on model reduction and identification of distributed parameter systems (DPS), and develop new spatio-temporal models and their relevant identification approaches. In this book, a systematic overview and classification on the modeling of DPS is presented first, which includes model reduction, parameter estimation and system identification. Next, a class of block-oriented nonlinear systems in traditional lumped parameter systems (LPS) is extended to DPS, which results in the spatio-temporal Wiener and Hammerstein s
Viewpoints: a framework for object oriented database modelling and distribution
Directory of Open Access Journals (Sweden)
Fouzia Benchikha
2006-01-01
Full Text Available The viewpoint concept has received widespread attention recently. Its integration into a data model improves the flexibility of the conventional object-oriented data model and allows one to improve the modelling power of objects. The viewpoint paradigm can be used as a means of providing multiple descriptions of an object and as a means of mastering the complexity of current database systems enabling them to be developed in a distributed manner. The contribution of this paper is twofold: to define an object data model integrating viewpoints in databases and to present a federated database system integrating multiple sources following a local-as-extended-view approach.
International Nuclear Information System (INIS)
Attar, Ahmad; Raissi, Sadigh; Khalili-Damghani, Kaveh
2017-01-01
A simulation-based optimization (SBO) method is proposed to handle multi-objective joint availability-redundancy allocation problem (JARAP). Here, there is no emphasis on probability distributions of time to failures and repair times for multi-state multi-component series-parallel configuration under active, cold and hot standby strategies. Under such conditions, estimation of availability is not a trivial task. First, an efficient computer simulation model is proposed to estimate the availability of the aforementioned system. Then, the estimated availability values are used in a repetitive manner as parameter of a two-objective joint availability-redundancy allocation optimization model through SBO mechanism. The optimization model is then solved using two well-known multi-objective evolutionary computation algorithms, i.e., non-dominated sorting genetic algorithm (NSGA-II), and Strength Pareto Evolutionary Algorithm (SPEA2). The proposed SBO approach is tested using non-exponential numerical example with multi-state repairable components. The results are presented and discussed through different demand scenarios under cold and hot standby strategies. Furthermore, performance of NSGA-II and SPEA2 are statistically compared regarding multi-objective accuracy, and diversity metrics. - Highlights: • A Simulation-Based Optimization (SBO) procedure is introduced for JARAP. • The proposed SBO works for any given failure and repair times. • An efficient simulation procedure is developed to estimate availability. • Customized NSGA-II and SPEA2 are proposed to solve the bi-objective JARAP. • Statistical analysis is employed to test the performance of optimization methods.
Collisionless Plasma Modeling in an Arbitrary Potential Energy Distribution
Liemohn, M. W.; Khazanov, G. V.
1997-01-01
A new technique for calculating a collisionless plasma along a field line is presented. The primary feature of the new model is that it can handle an arbitrary (including nonmonotonic) potential energy distribution. This was one of the limiting constraints on the existing models in this class, and these constraints are generalized for an arbitrary potential energy composition. The formulation for relating current density to the field-aligned potential, as well as formulas for density, temperature and energy flux calculations, are presented for several distribution functions, ranging from a bi-Lorentzian with a loss cone to an isotropic Maxwellian. A comparison of these results with previous models shows that the formulation reduces to the earlier models under similar assumptions.
A Game-Theoretic Model for Distributed Programming by Contract
DEFF Research Database (Denmark)
Henriksen, Anders Starcke; Hvitved, Tom; Filinski, Andrzej
2009-01-01
We present an extension of the programming-by-contract (PBC) paradigm to a concurrent and distributed environment. Classical PBC is characterized by absolute conformance of code to its specification, assigning blame in case of failures, and a hierarchical, cooperative decomposition model – none...... of which extend naturally to a distributed environment with multiple administrative peers. We therefore propose a more nuanced contract model based on quantifiable performance of implementations; assuming responsibility for success; and a fundamentally adversarial model of system integration, where each...... component provider is optimizing its behavior locally, with respect to potentially conflicting demands. This model gives rise to a game-theoretic formulation of contract-governed process interactions that supports compositional reasoning about contract conformance....
Modelling the distribution of fish accounting for spatial correlation and overdispersion
DEFF Research Database (Denmark)
Lewy, Peter; Kristensen, Kasper
2009-01-01
The spatial distribution of cod (Gadus morhua) in the North Sea and the Skagerrak was analysed over a 24-year period using the Log Gaussian Cox Process (LGCP). In contrast to other spatial models of the distribution of fish, LGCP avoids problems with zero observations and includes the spatial...... correlation between observations. It is therefore possible to predict and interpolate unobserved densities at any location in the area. This is important for obtaining unbiased estimates of stock concentration and other measures depending on the distribution in the entire area. Results show that the spatial...... correlation and dispersion of cod catches remained unchanged during winter throughout the period, in spite of a drastic decline in stock abundance and a movement of the centre of gravity of the distribution towards the northeast in the same period. For the age groups considered, the concentration of the stock...
Linear Model for Optimal Distributed Generation Size Predication
Directory of Open Access Journals (Sweden)
Ahmed Al Ameri
2017-01-01
Full Text Available This article presents a linear model for predicting the optimal size of Distributed Generation (DG) that achieves minimum power loss. The method is based fundamentally on the strong coupling between active power and voltage angle, as well as between reactive power and voltage magnitude. This paper proposes a simplified method to calculate the total power losses in an electrical grid for different distributed generation sizes and locations. The method has been implemented and tested on several IEEE bus test systems. The results show that the proposed method is capable of predicting the approximate optimal size of DG when compared with precise calculations. The method, which linearizes a complex model, showed good results and can reduce the processing time required. The acceptable accuracy with less time and memory required can help the grid operator to assess power systems integrating large-scale distributed generation.
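The size-versus-loss trade-off behind this kind of study can be illustrated with a deliberately simplified toy, not the paper's linearized model: for a single feeder segment of resistance R (ohm) serving load P_load (MW) at voltage V (kV), line loss is roughly R·P_flow²/V² (unity power factor assumed), where the flow is the load minus the local DG injection. Scanning candidate DG sizes and picking the loss minimizer mimics the "optimal DG size for minimum power loss" objective; all parameter values are invented.

```python
def line_loss_mw(p_flow_mw, r_ohm=0.5, v_kv=11.0):
    """Approximate I^2*R line loss: I ~ P/V at unity power factor."""
    return r_ohm * (p_flow_mw ** 2) / (v_kv ** 2)

def optimal_dg_size(p_load_mw, candidate_sizes):
    """Scan candidate DG sizes; return the one minimizing feeder loss."""
    losses = {dg: line_loss_mw(p_load_mw - dg) for dg in candidate_sizes}
    return min(losses, key=losses.get), losses

sizes = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
best, losses = optimal_dg_size(4.0, sizes)
print(best)   # 4.0: injection matching the local load zeroes the line flow
```

In a real network the losses couple across many branches, which is why the paper's contribution is a linearization that avoids repeating full power-flow calculations for every candidate size and location.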
Applied Distributed Model Predictive Control for Energy Efficient Buildings and Ramp Metering
Koehler, Sarah Muraoka
Industrial large-scale control problems present an interesting algorithmic design challenge. A number of controllers must cooperate in real-time on a network of embedded hardware with limited computing power in order to maximize system efficiency while respecting constraints and despite communication delays. Model predictive control (MPC) can automatically synthesize a centralized controller which optimizes an objective function subject to a system model, constraints, and predictions of disturbance. Unfortunately, the computations required by model predictive controllers for large-scale systems often limit its industrial implementation only to medium-scale slow processes. Distributed model predictive control (DMPC) enters the picture as a way to decentralize a large-scale model predictive control problem. The main idea of DMPC is to split the computations required by the MPC problem amongst distributed processors that can compute in parallel and communicate iteratively to find a solution. Some popularly proposed solutions are distributed optimization algorithms such as dual decomposition and the alternating direction method of multipliers (ADMM). However, these algorithms ignore two practical challenges: substantial communication delays present in control systems and also problem non-convexity. This thesis presents two novel and practically effective DMPC algorithms. The first DMPC algorithm is based on a primal-dual active-set method which achieves fast convergence, making it suitable for large-scale control applications which have a large communication delay across its communication network. In particular, this algorithm is suited for MPC problems with a quadratic cost, linear dynamics, forecasted demand, and box constraints. We measure the performance of this algorithm and show that it significantly outperforms both dual decomposition and ADMM in the presence of communication delay. The second DMPC algorithm is based on an inexact interior point method which is
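Of the distributed optimization baselines named above, consensus ADMM is easy to sketch. The toy below is a generic schematic of that baseline, not the thesis' primal-dual active-set or interior-point methods: each "local controller" i holds a private quadratic objective (1/2)(x − a_i)² and iteratively agrees with its peers on a shared decision z, which converges to the mean of the local targets. The targets are invented.

```python
import numpy as np

def consensus_admm(a, rho=1.0, iters=100):
    """Consensus ADMM for min over x_i of sum_i (1/2)(x_i - a_i)^2, x_i = z."""
    a = np.asarray(a, dtype=float)
    x = np.zeros_like(a)   # local copies of the decision variable
    u = np.zeros_like(a)   # scaled dual variables (accumulated disagreement)
    z = 0.0                # consensus variable
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)   # parallel local solves
        z = np.mean(x + u)                       # gather/average (communication)
        u = u + x - z                            # dual update per controller
    return z

z = consensus_admm([1.0, 2.0, 6.0])
print(round(z, 4))   # 3.0, the mean of the local targets
```

The `z = mean(...)` step is the iterative communication round the thesis highlights as costly: each ADMM iteration requires a network exchange, which is exactly why communication delay degrades these baselines on embedded control hardware.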
A Distributed, Developmental Model of Word Recognition and Naming
1989-07-14
whom we have collaborated on studies of the model's implications concerning acquired forms of dyslexia (Patterson, Seidenberg & McClelland, in press). Abstract: A parallel distributed processing model of visual word recognition and pronunciation ... Readers are typically aware of the results of lexical processing, not the manner in which it occurred. One of the goals of research on visual word
Do Stacked Species Distribution Models Reflect Altitudinal Diversity Patterns?
Mateo, Rubén G.; Felicísimo, Ángel M.; Pottier, Julien; Guisan, Antoine; Muñoz, Jesús
2012-01-01
The objective of this study was to evaluate the performance of stacked species distribution models in predicting the alpha and gamma species diversity patterns of two important plant clades along elevation in the Andes. We modelled the distribution of the species in the Anthurium genus (53 species) and the Bromeliaceae family (89 species) using six modelling techniques. We combined all of the predictions for the same species in ensemble models based on two different criteria: the average of the rescaled predictions by all techniques and the average of the best techniques. The rescaled predictions were then reclassified into binary predictions (presence/absence). By stacking either the original predictions or binary predictions for both ensemble procedures, we obtained four different species richness models per taxa. The gamma and alpha diversity per elevation band (500 m) was also computed. To evaluate the prediction abilities for the four predictions of species richness and gamma diversity, the models were compared with the real data along an elevation gradient that was independently compiled by specialists. Finally, we also tested whether our richness models performed better than a null model of altitudinal changes of diversity based on the literature. Stacking of the ensemble prediction of the individual species models generated richness models that proved to be well correlated with the observed alpha diversity richness patterns along elevation and with the gamma diversity derived from the literature. Overall, these models tend to overpredict species richness. The use of the ensemble predictions from the species models built with different techniques seems very promising for modelling of species assemblages. Stacking of the binary models reduced the over-prediction, although more research is needed. The randomisation test proved to be a promising method for testing the performance of the stacked models, but other implementations may still be developed.
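The "stacking" operation itself is simple enough to show directly. In this minimal sketch (the species maps are invented, not the Andean data), each row is one species' binary presence/absence prediction over the same grid cells; summing over species gives predicted richness (alpha diversity) per cell, and counting species predicted present anywhere gives a gamma-diversity figure for the area.

```python
import numpy as np

# Binary (presence/absence) predictions: rows = species, columns = grid cells
binary_maps = np.array([
    [1, 1, 0, 0],   # hypothetical species A
    [0, 1, 1, 0],   # hypothetical species B
    [0, 1, 1, 1],   # hypothetical species C
])

richness = binary_maps.sum(axis=0)                  # alpha diversity per cell
gamma = int((binary_maps.max(axis=1) == 1).sum())   # species present anywhere

print(richness)   # [1 3 2 1]
print(gamma)      # 3
```

Stacking the continuous (pre-threshold) predictions instead amounts to replacing the 0/1 rows with suitability scores before summing, which is the alternative richness model the study compares and which tends to inflate predicted richness.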
Can fire atlas data improve species distribution model projections?
Crimmins, Shawn M; Dobrowski, Solomon Z; Mynsberge, Alison R; Safford, Hugh D
2014-07-01
Correlative species distribution models (SDMs) are widely used in studies of climate change impacts, yet are often criticized for failing to incorporate disturbance processes that can influence species distributions. Here we use two temporally independent data sets of vascular plant distributions, climate data, and fire atlas data to examine the influence of disturbance history on SDM projection accuracy through time in the mountain ranges of California, USA. We used hierarchical partitioning to examine the influence of fire occurrence on the distribution of 144 vascular plant species and built a suite of SDMs to examine how the inclusion of fire-related predictors (fire occurrence and departure from historical fire return intervals) affects SDM projection accuracy. Fire occurrence provided the least explanatory power among predictor variables for predicting species' distributions, but provided improved explanatory power for species whose regeneration is tied closely to fire. A measure of the departure from historic fire return interval had greater explanatory power for calibrating modern SDMs than fire occurrence. This variable did not improve internal model accuracy for most species, although it did provide marginal improvement to models for species adapted to high-frequency fire regimes. Fire occurrence and fire return interval departure were strongly related to the climatic covariates used in SDM development, suggesting that improvements in model accuracy may not be expected due to limited additional explanatory power. Our results suggest that the inclusion of coarse-scale measures of disturbance in SDMs may not be necessary to predict species distributions under climate change, particularly for disturbance processes that are largely mediated by climate.
Improved Mathematical Models for Particle-Size Distribution Data
African Journals Online (AJOL)
BirukEdimon
Four existing curve-fitting models common to geotechnical applications are reviewed and presented first. Definitions of important parameters and variables: a given soil will be made up of grains of many different sizes and described by the grain-size distribution. The main variables are % clay, % silt, % sand, % of fine and ...
Modelling flow dynamics in water distribution networks using ...
African Journals Online (AJOL)
One such approach is the Artificial Neural Networks (ANNs) technique. The advantage of ANNs is that they are robust and can be used to model complex linear and non-linear systems without making implicit assumptions. ANNs can be trained to forecast flow dynamics in a water distribution network. Such flow dynamics ...
Business model scenarios for seamless content distribution and delivery
Wehn de Montalvo, U.W.C.; Ballon, P.J.P.; Sokol, J.
2005-01-01
This paper addresses the issue of feasible business models for seamless multimedia content distribution and delivery over mobile, wireless and fixed networks. An open, interlayer approach to content delivery networks sets the scene for a great deal of flexibility in the value network for content
Modeling customer behavior in multichannel service distribution: A rational approach
Heinhuis, D.
2013-01-01
Most organizations have innovated their distribution strategy and adopted a multichannel strategy. The success of this strategy depends to a large extent on the adoption of new channels by the consumer. This research aims to build a model that explains consumer multichannel behavior. It gives
Business models for distributed generation in a liberalized market environment
International Nuclear Information System (INIS)
Gordijn, Jaap; Akkermans, Hans
2007-01-01
The analysis of the potential of emerging innovative technologies calls for a systems-theoretic approach that takes into account technical as well as socio-economic factors. This paper reports the main findings of several business case studies of different future applications in various countries of distributed power generation technologies, all based on a common methodology for networked business modeling and analysis. (author)
Control and modelling of vertical temperature distribution in greenhouse crops
Kempkes, F.L.K.; Bakker, J.C.; Braak, van de N.J.
1998-01-01
Based on physical transport processes (radiation, convection and latent heat transfer) a model has been developed to describe the vertical temperature distribution of a greenhouse crop. The radiation exchange factors between heating pipes, crop layers, soil and roof were determined as a function of
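The kind of vertical temperature model described here can be sketched as a linear heat-exchange network: each crop layer exchanges heat with the pipes, soil, roof, and air through linearized exchange coefficients, and the steady-state layer temperatures solve a small linear system. All temperatures and coefficients below are assumed placeholder values, not figures from the paper.

```python
import numpy as np

# Boundary temperatures (deg C) -- illustrative values
T_pipe, T_soil, T_roof, T_air = 45.0, 18.0, 12.0, 20.0

# Linearized exchange coefficients (W/m^2/K) between three crop layers
# (bottom, middle, top) and each boundary -- assumed values
G = {
    "pipe": np.array([4.0, 2.0, 1.0]),  # bottom layer sees the pipes best
    "soil": np.array([3.0, 1.0, 0.5]),
    "roof": np.array([0.5, 1.0, 3.0]),  # top layer sees the roof best
    "air":  np.array([5.0, 5.0, 5.0]),  # convection to greenhouse air
}
G_layer = 2.0  # exchange between adjacent crop layers

# Steady state per layer: sum_b G_b*(T_b - T_i) + coupling terms = 0
n = 3
A, b = np.zeros((n, n)), np.zeros(n)
for i in range(n):
    for name, T in (("pipe", T_pipe), ("soil", T_soil),
                    ("roof", T_roof), ("air", T_air)):
        A[i, i] += G[name][i]
        b[i] += G[name][i] * T
for i in range(n - 1):  # adjacent-layer exchange
    A[i, i] += G_layer; A[i + 1, i + 1] += G_layer
    A[i, i + 1] -= G_layer; A[i + 1, i] -= G_layer
T_layers = np.linalg.solve(A, b)  # bottom, middle, top temperatures
```

With pipe heating from below, the sketch reproduces the qualitative vertical gradient the paper models: warmest near the pipes, coolest near the roof.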
Quasiparticle momentum distributions in the t-J model
Nishimoto, S; Eder, R; Ohta, Y
A detailed analysis is made of the momentum distribution n(k) of the low-energy states of the doped two-dimensional t-J model obtained from small-cluster diagonalizations. We show that the smoothly varying incoherent part of n(k) is practically identical for all the low-energy states with a given hole
A Species Distribution Modeling Informed Conservation Assessment of Bog Spicebush
2016-09-14
ERDC develops innovative solutions in civil and military engineering, geospatial sciences, water resources, and environmental sciences for the Army ... Natural Resources Program. This work was conducted by the Ecological Processes Branch (CNN), Installations Division (CN), Construction Engineering ... declining number of Bog Spicebush populations, as well as limited information about the species' ecology, indicate species distribution modeling would
High Resolution PV Power Modeling for Distribution Circuit Analysis
Energy Technology Data Exchange (ETDEWEB)
Norris, B. L.; Dise, J. H.
2013-09-01
NREL has contracted with Clean Power Research to provide 1-minute simulation datasets of PV systems located at three high penetration distribution feeders in the service territory of Southern California Edison (SCE): Porterville, Palmdale, and Fontana, California. The resulting PV simulations will be used to separately model the electrical circuits to determine the impacts of PV on circuit operations.
Airport acoustics: Aircraft noise distribution and modelling of some ...
African Journals Online (AJOL)
Airport acoustics: Aircraft noise distribution and modelling of some aircraft parameters. MU Onuu, EO Obisung. Abstract. No Abstract. Nigerian Journal of Physics Vol. 17 (Supplement) 2005: pp. 177-186.
Distributed Services with Foreseen and Unforeseen Tasks: The Mobile Re-allocation Problem
J.A. Larco Martinelli (Jose); R. Dekker (Rommert); U. Kaymak (Uzay)
2007-01-01
textabstractIn this paper we deal with a common problem found in the operations of security and preventive/corrective maintenance services: that of routing a number of mobile resources to serve foreseen and unforeseen tasks during a shift. We define the (Mobile Re-Allocation Problem) MRAP as the
A mathematical model of urban distribution electro-network considering its future development
Directory of Open Access Journals (Sweden)
A. P. Karpenko
2014-01-01
An urban distribution power supply network (hereafter, the power supply network) is a network of urban scale. Designed to transfer and distribute electric power, it comprises transforming and distribution substations and the power lines that connect them. We consider the problem of prospective development of the power supply network (PDPSN) as the task of defining the ways of its optimal development in terms of configuration, equipment loads, parameters, etc., as well as the need for, and timing of, putting new network objects into service.

The software systems available on the market allow one to calculate the parameters of power supply systems and network operating modes, to display power supply schemes, and to produce technical documentation, but they do not support computer-aided design of an optimal network topology that takes prospective urban development into account.

The main objective of this work is to develop a mathematical model of the power supply network that accounts for its prospective development. Based on this model, the task of optimizing the prospective development of the network is posed as a problem of multi-criteria structural and parametric optimization. We show that it is expedient to reduce this task to a single-criterion one by means of a scalar convolution of the criteria.

The resulting single-criterion PDPSN optimization problem is a problem of continuous-discrete-integer programming. The paper shows how to represent it as a discrete programming problem by discretely approximating the regions in which new transforming and distribution substations may be built.
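The scalar-convolution step mentioned in the abstract (reducing a multi-criteria task to a single-criterion one) can be sketched with a weighted sum. The candidate plans, criteria, and weights below are entirely hypothetical; in practice the criteria would first be normalized to comparable scales.

```python
# Hypothetical candidate expansion plans for a distribution network:
# (name, capital cost, expected energy losses, reliability penalty)
plans = [
    ("A", 120.0, 30.0, 5.0),
    ("B", 150.0, 22.0, 3.0),
    ("C", 200.0, 15.0, 2.0),
    ("D", 90.0, 45.0, 9.0),
]

def scalarize(plan, weights):
    """Linear (weighted-sum) convolution of the criteria into one number."""
    _, *criteria = plan
    return sum(w * c for w, c in zip(weights, criteria))

def best_plan(plans, weights):
    """Single-criterion optimization of the scalarized objective."""
    return min(plans, key=lambda p: scalarize(p, weights))
```

Changing the weight vector traces out different trade-offs: a cost-dominated weighting picks the cheapest plan, while a loss/reliability-dominated weighting picks the most heavily reinforced one.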
Beyond income distribution: an entitlement systems approach to the acquirement problem
Gaay Fortman, B. de
1999-01-01
Behind income distribution are different claiming positions that tend to change with processes of production, distribution and consumption of goods and services. There are no pure economic processes. Ignoring cultural, legal, political and other factors might lead to serious flaws in our efforts to
Energy Technology Data Exchange (ETDEWEB)
NONE
1994-02-24
The Natural Gas Transmission and Distribution Model (NGTDM) is a component of the National Energy Modeling System (NEMS) used to represent the domestic natural gas transmission and distribution system. NEMS is the third in a series of computer-based, midterm energy modeling systems used since 1974 by the Energy Information Administration (EIA) and its predecessor, the Federal Energy Administration, to analyze domestic energy-economy markets and develop projections. This report documents the archived version of NGTDM that was used to produce the natural gas forecasts used in support of the Annual Energy Outlook 1994, DOE/EIA-0383(94). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic design, provides detail on the methodology employed, and describes the model inputs, outputs, and key assumptions. It is intended to fulfill the legal obligation of the EIA to provide adequate documentation in support of its models (Public Law 94-385, Section 57.b.2). This report represents Volume 1 of a two-volume set. (Volume 2 will report on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues.) Subsequent chapters of this report provide: (1) an overview of the NGTDM (Chapter 2); (2) a description of the interface between the National Energy Modeling System (NEMS) and the NGTDM (Chapter 3); (3) an overview of the solution methodology of the NGTDM (Chapter 4); (4) the solution methodology for the Annual Flow Module (Chapter 5); (5) the solution methodology for the Distributor Tariff Module (Chapter 6); (6) the solution methodology for the Capacity Expansion Module (Chapter 7); (7) the solution methodology for the Pipeline Tariff Module (Chapter 8); and (8) a description of model assumptions, inputs, and outputs (Chapter 9).
Investigating Secondary School Students’ Difficulties in Modeling Problems PISA-Model Level 5 and 6
Directory of Open Access Journals (Sweden)
Sri Imelda Edo
2013-01-01
The chart of Indonesian students' mathematical-ability development in the Programme for International Student Assessment (PISA) over the last four cycles shows unstable movement. PISA examines the ability of 15-year-olds in reading literacy, mathematics literacy, and science literacy. The concept of mathematical literacy is closely related to several other concepts discussed in mathematics education, the most important being mathematical modelling and its component processes. The goal of this research is therefore to investigate secondary school students' difficulties in modelling PISA-model problems at levels 5 and 6. Qualitative research was used as an appropriate means to achieve this goal; this type of research places greater emphasis on holistic description of the phenomenon under study, namely students' difficulties in modelling the real-world problems in PISA-model questions at levels 5 and 6. Twenty-six grade 9 students of SMPN 1 Palembang, 26 grade 9 students of SMPK Frater Xaverius 1 Palembang, and 31 participants of a mathematical literacy context event were involved in this research. The results showed that students had difficulty (1) formulating situations mathematically, such as representing a situation mathematically and recognizing mathematical structure (including regularities, relationships, and patterns) in problems, and (2) evaluating the reasonableness of a mathematical solution in the context of a real-world problem. The students had no difficulty solving the mathematical problems they had constructed.
A distribution planning model for natural gas supply chain: A case study
International Nuclear Information System (INIS)
Hamedi, Maryam; Zanjirani Farahani, Reza; Husseini, Mohammad Moattar; Esmaeilian, Gholam Reza
2009-01-01
In this paper, a real-world case study of a natural gas supply chain is investigated. Using concepts from the natural gas industry and the relations among the components of the transmission and distribution network, a six-level supply chain is introduced and presented schematically. The defined supply chain is a single-objective, multi-period, single-product problem that is formulated as a mixed integer non-linear programming model, which can easily be linearized. The objective of this model is to minimize direct and indirect distribution costs. There are six groups of constraints, covering capacity, input and output balancing, demand satisfaction, network flow continuity, and constraints relating to the required binary variables. The solution algorithm is hierarchical: in each step, one section of the problem is solved using an exact method, and the outputs of that section are passed to the next section as inputs. Finally, it is shown that the problem is solved in reasonable time and desirable results are attained. The use of the proposed model and its solution approach has been studied on two gas trunk lines to demonstrate its cost-saving potential
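The hierarchical solve-and-pass-on idea in this abstract can be sketched on a toy two-level network: trunk lines feed city gates, and city gates feed consumers, with each level's deliveries becoming the next level's capacities. The network, names (`ref1`, `gate1`, `cons1`), quantities, and costs are all invented for illustration, and a greedy cheapest-source rule stands in for the exact methods the paper uses at each step (greedy is not optimal in general).

```python
def assign(demands, sources, cost):
    """Serve each demand from the cheapest source with remaining capacity.
    demands: {node: amount}; sources: {node: capacity};
    cost: {(source, demand): unit cost}. Returns (flows, total_cost)."""
    cap = dict(sources)
    flows, total = {}, 0.0
    for d, need in demands.items():
        for s in sorted(cap, key=lambda s: cost[s, d]):
            if need <= 0:
                break
            q = min(need, cap[s])
            if q > 0:
                flows[s, d] = flows.get((s, d), 0.0) + q
                cap[s] -= q
                need -= q
                total += q * cost[s, d]
        if need > 1e-9:
            raise ValueError("insufficient capacity")
    return flows, total

# Level 1: refineries -> city gates
f1, c1 = assign({"gate1": 70.0, "gate2": 30.0},
                {"ref1": 60.0, "ref2": 50.0},
                {("ref1", "gate1"): 2.0, ("ref2", "gate1"): 3.0,
                 ("ref1", "gate2"): 4.0, ("ref2", "gate2"): 1.0})
# Level 2: city gates -> consumers; capacities are the level-1 deliveries
f2, c2 = assign({"cons1": 55.0, "cons2": 45.0},
                {"gate1": 70.0, "gate2": 30.0},
                {("gate1", "cons1"): 1.0, ("gate2", "cons1"): 2.0,
                 ("gate1", "cons2"): 1.5, ("gate2", "cons2"): 1.0})
total_cost = c1 + c2
```

The key structural point is the data flow: level 2 never sees the refineries, only what level 1 delivered to the gates, mirroring the paper's step-by-step decomposition.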
Febriana Aqidawati, Era; Sutopo, Wahyudi; Hisjam, Muh.
2018-03-01
Newspapers are products with special characteristics: they are perishable, have a short window between production and distribution, carry zero inventory, and lose sales value as time passes. Generally, the production and distribution problem in the newspaper supply chain is the integration of production and distribution planning to minimize total cost. The approach used in this article to solve the problem is an analytical model. Several parameters and constraints are considered in calculating the total cost of integrated newspaper production and distribution over the determined time horizon. The model can be used by production and marketing managers as decision support in determining the optimal production and distribution quantities that yield the minimum cost, so that the company's competitiveness can be increased.
Distributed Hydrologic Modeling of LID in The Woodlands, Texas
Bedient, P.; Doubleday, G.; Sebastian, A.; Fang, N.
2012-12-01
As early as the 1960s, The Woodlands, TX employed stormwater management similar to modern Low Impact Development (LID) design. Innovative for its time, the master drainage plan attempted to minimize adverse impact to the 100-year floodplain and reduce the impact of development on the natural environment. Today, it is Texas's most celebrated master-planned community. This paper employs NEXRAD radar rainfall in the distributed hydrologic model Vflo(TM) to evaluate the effectiveness of The Woodlands master drainage design as a stormwater management technique. Three models were created to analyze the rainfall-runoff response of The Woodlands watershed under different development conditions: two calibrated, fully distributed hydrologic models representing (A) undeveloped and (B) 2006-development conditions, and (C) a hypothetical, highly urbanized model representing Houston-style development. Parameters such as imperviousness and land cover were varied to represent the different development conditions. The A and B models were calibrated using NEXRAD radar rainfall for two recent storm events in 2008 and 2009. All three models were used to compare peak flows, discharge volumes, and times to peak of hydrographs for the recent radar rainfall events and a historical gaged rainfall event that occurred in 1974. Results show that, compared to pre-developed conditions, the construction of The Woodlands resulted in an average increase in peak flows of only 15% during small storms and 27% during a major event. Furthermore, when compared to the highly urbanized model, peak flows are often two to three times smaller for the 2006 model. In the 2006 model, the peak flow of the 100-year event was successfully attenuated, suggesting that the design of The Woodlands effectively protects the development from the 1% annual-chance storm event using LID practices and reservoirs. This study uses a calibrated hydrologic distributed-model supported by NEXRAD radar
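The effect of imperviousness on runoff that this study quantifies with a distributed model can be illustrated at lumped scale with the standard SCS curve-number method. The storm depth and the curve numbers for the three development conditions are assumed values, chosen only to show the direction of the effect; they are not calibrated to The Woodlands.

```python
def scs_runoff(P, CN):
    """SCS curve-number direct runoff depth (inches) for storm depth P
    (inches), with the standard initial abstraction Ia = 0.2*S."""
    S = 1000.0 / CN - 10.0      # potential maximum retention
    Ia = 0.2 * S
    if P <= Ia:
        return 0.0              # all rainfall abstracted, no runoff
    return (P - Ia) ** 2 / (P - Ia + S)

P = 6.0                              # a large design storm, inches (assumed)
q_undeveloped = scs_runoff(P, 65)    # assumed CN, wooded cover
q_woodlands = scs_runoff(P, 75)      # assumed CN, LID-style development
q_urban = scs_runoff(P, 90)          # assumed CN, dense urban cover
```

Even this crude lumped sketch reproduces the ordering in the paper's results: runoff grows monotonically with development intensity, with the LID-style condition sitting between undeveloped and fully urbanized.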
Modelling the distribution of domestic ducks in Monsoon Asia
Van Bockel, Thomas P.; Prosser, Diann; Franceschini, Gianluca; Biradar, Chandra; Wint, William; Robinson, Tim; Gilbert, Marius
2011-01-01
Domestic ducks are considered to be an important reservoir of highly pathogenic avian influenza (HPAI), as shown by a number of geospatial studies in which they have been identified as a significant risk factor associated with disease presence. Despite their importance in HPAI epidemiology, their large-scale distribution in Monsoon Asia is poorly understood. In this study, we created a spatial database of domestic duck census data in Asia and used it to train statistical distribution models for domestic duck distributions at a spatial resolution of 1 km. The method was based on a modelling framework used by the Food and Agriculture Organisation to produce the Gridded Livestock of the World (GLW) database, and relies on stratified regression models between domestic duck densities and a set of agro-ecological explanatory variables. We evaluated different ways of stratifying the analysis and of combining the predictions to optimize the goodness of fit. We found that domestic duck density could be predicted with reasonable accuracy (mean RMSE and correlation coefficient between log-transformed observed and predicted densities being 0.58 and 0.80, respectively), using a stratification based on livestock production systems. We tested the use of artificially degraded data on duck distributions in Thailand and Vietnam as training data, and compared the modelled outputs with the original high-resolution data. This showed, for these two countries at least, that these approaches could be used to accurately disaggregate provincial level (administrative level 1) statistical data to provide high resolution model distributions.
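The stratified-regression core of this approach can be sketched on synthetic data: fit one log-linear regression per production-system stratum and evaluate RMSE and correlation in log space, as the abstract does. The strata labels, covariates, and coefficients below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_loglinear(X, y_log):
    """OLS fit of log-density on covariates, with intercept."""
    A = np.column_stack([np.ones(len(y_log)), X])
    beta, *_ = np.linalg.lstsq(A, y_log, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

# Synthetic "census" data: log duck density depends on two agro-ecological
# covariates, with different coefficients in each production-system stratum
n = 300
strata = rng.integers(0, 2, n)   # 0 = rice-based, 1 = mixed (labels assumed)
X = rng.normal(size=(n, 2))
true_b = {0: np.array([3.0, 1.2, -0.4]), 1: np.array([1.5, 0.2, 0.8])}
y_log = np.array([
    true_b[s] @ np.concatenate([[1.0], x]) for s, x in zip(strata, X)
]) + rng.normal(0, 0.1, n)

# One regression per stratum, then pooled predictions
pred = np.empty(n)
for s in (0, 1):
    m = strata == s
    pred[m] = predict(fit_loglinear(X[m], y_log[m]), X[m])
rmse = np.sqrt(np.mean((pred - y_log) ** 2))
corr = np.corrcoef(pred, y_log)[0, 1]

# Unstratified baseline for comparison
pooled_rmse = np.sqrt(np.mean((predict(fit_loglinear(X, y_log), X) - y_log) ** 2))
```

When the strata genuinely have different density-covariate relationships, as assumed here, the stratified fit beats the pooled one, which is the rationale for stratifying by livestock production system.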
Evaluation of water vapor distribution in general circulation models using satellite observations
Soden, Brian J.; Bretherton, Francis P.
1994-01-01
This paper presents a comparison of the water vapor distribution obtained from two general circulation models, the European Centre for Medium-Range Weather Forecasts (ECMWF) model and the National Center for Atmospheric Research (NCAR) Community Climate Model (CCM), with satellite observations of total precipitable water (TPW) from Special Sensor Microwave/Imager (SSM/I) and upper tropospheric relative humidity (UTH) from GOES. Overall, both models are successful in capturing the primary features of the observed water vapor distribution and its seasonal variation. For the ECMWF model, however, a systematic moist bias in TPW is noted over well-known stratocumulus regions in the eastern subtropical oceans. Comparison with radiosonde profiles suggests that this problem is attributable to difficulties in modeling the shallowness of the boundary layer and large vertical water vapor gradients which characterize these regions. In comparison, the CCM is more successful in capturing the low values of TPW in the stratocumulus regions, although it tends to exhibit a dry bias over the eastern half of the subtropical oceans and a corresponding moist bias in the western half. The CCM also significantly overestimates the daily variability of the moisture fields in convective regions, suggesting a problem in simulating the temporal nature of moisture transport by deep convection. Comparison of the monthly mean UTH distribution indicates generally larger discrepancies than were noted for TPW owing to the greater influence of large-scale dynamical processes in determining the distribution of UTH. In particular, the ECMWF model exhibits a distinct dry bias along the Intertropical Convergence Zone (ITCZ) and a moist bias over the subtropical descending branches of the Hadley cell, suggesting an underprediction in the strength of the Hadley circulation. The CCM, on the other hand, demonstrates greater discrepancies in UTH than are observed for the ECMWF model, but none that are as
Directory of Open Access Journals (Sweden)
Wen-Xiang Wu
2014-01-01
The cost-based system optimum problem in networks with continuously distributed values of time is formulated in a path-based form, which cannot be solved by the Frank-Wolfe algorithm. In light of the order-of-magnitude improvements in the availability of computer memory in recent years, path-based algorithms have been regarded as a viable approach for traffic assignment problems of reasonably large network size. We develop a path-based gradient projection algorithm for solving the cost-based system optimum model, based on the Goldstein-Levitin-Polyak method, which has been successfully applied to solve standard user equilibrium and system optimum problems. The Sioux Falls test network is used to verify the effectiveness of the algorithm.
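The gradient-projection idea can be sketched on a deliberately tiny instance: two parallel paths, linear link costs, and a single origin-destination demand (not the Sioux Falls network, and not the paper's cost-based formulation with distributed values of time). The step shifts flow toward the path with the lower marginal total cost and projects back onto the feasible set, the same move the Goldstein-Levitin-Polyak method makes per path.

```python
# System optimum on two parallel paths with assumed linear costs
# t1(x) = 1 + x and t2(x) = 2 + 0.5 x; total cost
# T(x1) = x1*t1(x1) + x2*t2(x2) with x1 + x2 = d.
d = 10.0  # demand (assumed)

def marginal_costs(x1):
    x2 = d - x1
    mc1 = 1.0 + 2.0 * x1   # d/dx1 [x1 * (1 + x1)]
    mc2 = 2.0 + 1.0 * x2   # d/dx2 [x2 * (2 + 0.5*x2)]
    return mc1, mc2

x1, step = 5.0, 0.1
for _ in range(200):
    mc1, mc2 = marginal_costs(x1)
    # Gradient step on the marginal-cost difference, then projection
    # onto the feasible interval [0, d]
    x1 = min(max(x1 - step * (mc1 - mc2), 0.0), d)

# At the system optimum the marginal costs equalize:
# 1 + 2*x1 = 2 + (d - x1)  =>  x1 = 11/3 for d = 10.
```

On a real network the projection is onto the simplex of path flows for each origin-destination pair, but the fixed point is the same condition: equal marginal costs on all used paths.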