Using Model Checking for Analyzing Distributed Power Control Problems
DEFF Research Database (Denmark)
Brihaye, Thomas; Jungers, Marc; Lasaulce, Samson
2010-01-01
Model checking (MC) is a formal verification technique which has achieved, and continues to achieve, resounding success in the computer science community. Realizing that the distributed power control (PC) problem can be modeled by a timed game between a given transmitter and its environment, the authors state some objectives a transmitter-receiver pair would like to reach. The network is modeled by a game where transmitters are considered as timed automata interacting with each other. The objectives are then translated into timed alternating-time temporal logic formulae, and MC is exploited to know whether the desired properties are verified.
Using Model Checking for Analyzing Distributed Power Control Problems
Directory of Open Access Journals (Sweden)
Thomas Brihaye
2010-01-01
Full Text Available Model checking (MC) is a formal verification technique which has achieved, and continues to achieve, resounding success in the computer science community. Realizing that the distributed power control (PC) problem can be modeled by a timed game between a given transmitter and its environment, the authors wanted to know whether this approach can be applied to distributed PC. It turns out that it can be applied successfully and allows one to analyze realistic scenarios, including the case of discrete transmit powers and games with incomplete information. The proposed methodology is as follows. We state some objectives a transmitter-receiver pair would like to reach. The network is modeled by a game where transmitters are considered as timed automata interacting with each other. The objectives are then translated into timed alternating-time temporal logic formulae, and MC is exploited to know whether the desired properties are verified and to determine a winning strategy.
Predicting weed problems in maize cropping by species distribution modelling
Directory of Open Access Journals (Sweden)
Bürger, Jana
2014-02-01
Full Text Available Increasing maize cultivation and changed cropping practices promote the selection of typical maize weeds that may also profit strongly from climate change. Predicting potential weed problems is of high interest for plant production. Within the project KLIFF, experiments were combined with species distribution modelling for this task in the region of Lower Saxony, Germany. For our study, we modelled the ecological and damage niches of nine weed species that are significant and widespread in maize cropping in a number of European countries. Species distribution models describe the ecological niche of a species, i.e. the environmental conditions under which a species can maintain a vital population. It is also possible to estimate a damage niche, i.e. the conditions under which a species causes damage in agricultural crops. For this, we combined occurrence data from European national databases with high-resolution climate, soil and land use data. Models were also projected to simulated climate conditions for the time horizon 2070-2100 in order to estimate climate change effects. Modelling results indicate favourable conditions for typical maize weed occurrence virtually all over the study region, but only a few species are important in maize cropping. This is in good accordance with the findings of an earlier maize weed monitoring. Reaction to changing climate conditions is species-specific: for some species it is neutral (E. crus-galli), other species may gain (Polygonum persicaria) or lose (Viola arvensis) large areas of suitable habitat. All species with damage potential under present conditions will remain important in maize cropping, and some more species will gain regional importance (Calystegia sepium, Setaria viridis).
Liao, Li; Li, Jianfeng; Wu, Yaohua
2013-01-01
Mathematical models of the inventory-distribution routing problem for a two-echelon agriculture products distribution network are established, based on two management modes (franchise chain and regular chain), one-to-many delivery, interval periodic orders, inventory-dependent demand, deteriorating-treatment costs of agriculture products, vehicle start-up costs, and so forth. A heuristic adaptive genetic algorithm is then presented for the franchise chain model. For the regular chain model, a two-layer genetic algorithm based on oddment modification is proposed.
Directory of Open Access Journals (Sweden)
Li Liao
2013-01-01
Full Text Available Mathematical models of the inventory-distribution routing problem for a two-echelon agriculture products distribution network are established, based on two management modes (franchise chain and regular chain), one-to-many delivery, interval periodic orders, inventory-dependent demand, deteriorating-treatment costs of agriculture products, vehicle start-up costs, and so forth. A heuristic adaptive genetic algorithm is then presented for the franchise chain model. For the regular chain model, a two-layer genetic algorithm based on oddment modification is proposed, in which the upper layer determines the distribution period and quantity and the lower layer seeks the optimal order cycle, order quantity, distribution routes, and the rational oddment modification number for the distributor. Simulation experiments demonstrate the validity of the algorithms, and the two management modes are compared.
A Data Flow Model to Solve the Data Distribution Changing Problem in Machine Learning
Directory of Open Access Journals (Sweden)
Shang Bo-Wen
2016-01-01
Full Text Available Continuous prediction is widely used in broad communities, from social applications to business, and machine learning is an important method for this problem. When we use machine learning for prediction, we fit the model on the training set and estimate the distribution of data in the test set. But when we use machine learning for continuous prediction, we acquire new data over time and use it to predict future data, which raises a problem: as the data set grows over time, the distribution changes and the training set accumulates garbage data. This garbage data should be removed, as it reduces the accuracy of the prediction. The main contribution of this article is to use the new data to detect the timeliness of historical data and remove the garbage data. We build a data flow model that describes how the data flows among the test set, training set, validation set and garbage set, improving the accuracy of prediction. As the data set changes, the best machine learning model changes as well. We design a hybrid voting algorithm that fits the data set better: seven machine learning models predict the same problem, and the validation set is used to put different weights on the models, giving better models more weight. Experimental results show that, when the distribution of the data set changes over time, our data flow model removes most of the garbage data and achieves a better result than the traditional method of adding all data to the data set; our hybrid voting algorithm also achieves better prediction than the average accuracy of the individual models.
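The validation-weighted voting step summarized above can be sketched as follows. This is a minimal illustration, not the authors' code: the seven concrete models used in the paper are not specified, so the threshold "models", the validation data, and the accuracy-based weighting rule below are all assumptions.

```python
# Minimal sketch of a validation-weighted voting ensemble in the spirit of
# the hybrid voting algorithm summarized above. The stand-in predictors and
# the accuracy-based weights are illustrative assumptions.

def validation_weights(models, X_val, y_val):
    """Weight each model by its accuracy on the validation set."""
    weights = []
    for model in models:
        correct = sum(1 for x, y in zip(X_val, y_val) if model(x) == y)
        weights.append(correct / len(y_val))
    return weights

def weighted_vote(models, weights, x):
    """Each model votes for its prediction with its validation weight."""
    scores = {}
    for model, w in zip(models, weights):
        pred = model(x)
        scores[pred] = scores.get(pred, 0.0) + w
    return max(scores, key=scores.get)

# Stand-in "models": simple threshold rules on a single scalar feature.
models = [lambda x: int(x > 0.5), lambda x: int(x > 0.3), lambda x: 1]
X_val, y_val = [0.2, 0.4, 0.6, 0.8], [0, 0, 1, 1]
weights = validation_weights(models, X_val, y_val)   # [1.0, 0.75, 0.5]
print(weighted_vote(models, weights, 0.4))
```

Because weights are recomputed on the current validation set, the ensemble automatically shifts weight toward models that fit the present data distribution, which is the point of the hybrid scheme.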
A heterogeneous fleet vehicle routing model for solving the LPG distribution problem: A case study
International Nuclear Information System (INIS)
Onut, S; Kamber, M R; Altay, G
2014-01-01
Vehicle Routing Problem (VRP) is an important management problem in the field of distribution and logistics. In VRPs, routes from a distribution point to geographically distributed points are designed with minimum cost while considering customer demands. Each point should be visited exactly once, by one vehicle, on one route, and the total demand on a route should not exceed the capacity of the vehicle assigned to it. VRP variants arise from real-life constraints related to vehicle types, number of depots, transportation conditions, time periods, etc. The heterogeneous fleet vehicle routing problem is a kind of VRP in which vehicles have different capacities and costs; our problem involves two vehicle types. This study uses real-world data obtained from a company operating in the LPG sector in Turkey. An optimization model is established for planning daily routes and assigning vehicles. The model is solved with GAMS, and the optimal solution is found in a reasonable time
A heterogeneous fleet vehicle routing model for solving the LPG distribution problem: A case study
Onut, S.; Kamber, M. R.; Altay, G.
2014-03-01
Vehicle Routing Problem (VRP) is an important management problem in the field of distribution and logistics. In VRPs, routes from a distribution point to geographically distributed points are designed with minimum cost while considering customer demands. Each point should be visited exactly once, by one vehicle, on one route, and the total demand on a route should not exceed the capacity of the vehicle assigned to it. VRP variants arise from real-life constraints related to vehicle types, number of depots, transportation conditions, time periods, etc. The heterogeneous fleet vehicle routing problem is a kind of VRP in which vehicles have different capacities and costs; our problem involves two vehicle types. This study uses real-world data obtained from a company operating in the LPG sector in Turkey. An optimization model is established for planning daily routes and assigning vehicles. The model is solved with GAMS, and the optimal solution is found in a reasonable time.
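The study above solves an exact optimization model in GAMS; purely as an illustration of the problem structure, the sketch below builds heterogeneous-fleet routes with a nearest-neighbour greedy rule. The data and function names are invented for the example, and a greedy construction gives no optimality guarantee.

```python
# A toy greedy construction for a heterogeneous-fleet VRP, illustrating the
# problem structure described above (not the authors' method).
import math

def route_cost(depot, route, points):
    """Travel distance of the closed tour depot -> customers -> depot."""
    tour = [depot] + [points[c] for c in route] + [depot]
    return sum(math.dist(a, b) for a, b in zip(tour, tour[1:]))

def greedy_routes(depot, points, demands, capacities):
    """Serve every customer once, respecting each vehicle's capacity."""
    unserved = set(points)
    routes = []
    for cap in capacities:                 # one route per available vehicle
        load, pos, route = 0, depot, []
        while True:
            feasible = [c for c in unserved if load + demands[c] <= cap]
            if not feasible:
                break
            # visit the nearest customer that still fits in this vehicle
            nxt = min(feasible, key=lambda c: math.dist(pos, points[c]))
            route.append(nxt)
            load += demands[nxt]
            pos = points[nxt]
            unserved.discard(nxt)
        routes.append(route)
    assert not unserved, "total fleet capacity is insufficient"
    return routes

depot = (0.0, 0.0)
points = {"a": (1, 0), "b": (2, 0), "c": (0, 3)}   # customer coordinates
demands = {"a": 2, "b": 2, "c": 1}
print(greedy_routes(depot, points, demands, capacities=[4, 2]))
```

With a vehicle of capacity 4 and one of capacity 2, the large vehicle picks up the two nearby customers and the small one serves the remaining light demand.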
DEFF Research Database (Denmark)
Chemi, Tatiana
2016-01-01
This chapter aims to deconstruct some persistent myths about creativity: the myth of individualism and of the genius. By looking at literature that approaches creativity as a participatory and distributed phenomenon, and by bringing empirical evidence from artists’ studios, the author presents a perspective that is relevant to higher education. The focus here is on how artists solve problems in distributed paths and on the elements of creative collaboration. Creative problem-solving is looked at as an ongoing dialogue that artists engage in with themselves, with others, and with recipients. What can educators in higher education learn from the ways creative groups solve problems? How can artists contribute to inspiring higher education?
International Nuclear Information System (INIS)
Taylor, C.B.
1984-01-01
The paper assesses the use of the author's data by Rozanski and Sonntag to support a multi-box model of the vertical distribution of deuterium in atmospheric water vapour, in which exchange between vapour and falling precipitation produces a steeper deuterium concentration profile than simpler condensation models. The mean deuterium/altitude profile adopted by Rozanski and Sonntag for this purpose is only one of several very different mean profiles obtainable from the data by arbitrary selection and weighting procedures; although it can be made to match the specified multi-box model calculations for deuterium, there is a wide discrepancy between the actual and model mean mixing ratio profiles which cannot be ignored. Taken together, the mixing ratio and deuterium profiles indicate that mean vapour of the middle troposphere has been subjected to condensation at greater heights and lower temperatures than those considered in the model calculations. When this is taken into account, the data actually fit much better to the simpler condensation models. But the vapour samples represent meteorological situations too remote in time from primary precipitation events to permit definite conclusions on cloud system mechanisms. (Auth.)
Integral equation models for the inverse problem of biological ion channel distributions
International Nuclear Information System (INIS)
French, D A; Groetsch, C W
2007-01-01
Olfactory cilia are thin hair-like filaments that extend from olfactory receptor neurons into the nasal mucus. Transduction of an odor into an electrical signal is accomplished by a depolarizing influx of ions through cyclic-nucleotide-gated channels in the membrane that forms the lateral surface of the cilium. In an experimental procedure developed by S. Kleene, a cilium is detached at its base and drawn into a recording pipette. The cilium base is then immersed in a bath of a channel activating agent (cAMP) which is allowed to diffuse into the cilium interior, opening channels as it goes and initiating a transmembrane current. The total current is recorded as a function of time and serves as data for a nonlinear integral equation of the first kind modeling the spatial distribution of ion channels along the length of the cilium. We discuss some linear Fredholm integral equations that result from simplifications of this model. A numerical procedure is proposed for a class of integral equations suggested by this simplified model and numerical results using simulated and laboratory data are presented
The distributed wireless gathering problem
Bonifaci, V.; Korteweg, P.; Marchetti Spaccamela, A.; Stougie, L.
2011-01-01
We address the problem of data gathering in a wireless network using multi-hop communication; our main goal is the analysis of simple algorithms suitable for implementation in realistic scenarios. We study the performance of distributed algorithms, which do not use any form of local coordination,
Distributed Systems: The Hard Problems
CERN. Geneva
2015-01-01
**Nicholas Bellerophon** works as a client services engineer at Basho Technologies, helping customers setup and run distributed systems at scale in the wild. He has also worked in massively multiplayer games, and recently completed a live scalable simulation engine. He is an avid TED-watcher with interests in many areas of the arts, science, and engineering, including of course high-energy physics.
A Framework for Distributed Problem Solving
Leone, Joseph; Shin, Don G.
1989-03-01
This work explores a distributed problem solving (DPS) approach, namely the AM/AG model, to cooperative memory recall. The AM/AG model is a hierarchic social system metaphor for DPS based on Mintzberg's model of organizations. At the core of the model are information flow mechanisms, named amplification and aggregation. Amplification is a process of expounding a given task, called an agenda, into a set of subtasks with a magnified degree of specificity and distributing them to multiple processing units downward in the hierarchy. Aggregation is a process of combining the results reported from multiple processing units into a unified view, called a resolution, and promoting the conclusion upward in the hierarchy. The combination of amplification and aggregation can account for a memory recall process which primarily relies on the ability of making associations between vast amounts of related concepts, sorting out the combined results, and promoting the most plausible ones. The amplification process is discussed in detail, and an implementation of it is presented and illustrated by an example.
Problem solving environment for distributed interactive applications
Rycerz, K.; Bubak, M.; Sloot, P.; Getov, V.; Gorlatch, S.; Bubak, M.; Priol, T.
2008-01-01
Interactive Problem Solving Environments (PSEs) offer an integrated approach for constructing and running complex systems, such as distributed simulation systems. To achieve efficient execution of High Level Architecture (HLA)-based distributed interactive simulations on the Grid, we introduce a PSE
Problems in Cybersemiotic Modelling
DEFF Research Database (Denmark)
Brier, Søren
2014-01-01
Going from an empirical to an informational paradigm of cognition and communication does not really help us to analyze how living systems manage to make a meaningful interpretation of their environment that is useful for their survival and procreation. Other models are needed. 1. There is von… …the Peircean theory of the observer as the phaneroscopic foundation. 4. Cobley points out that both models, as they are combined in Cybersemiotics, fail to integrate a theory of interest and power; they are too consensual in their view of communication. This is a general problem in both theories. Still, Luhmann does work with the power problem in his triple autopoietic communicative system theory, as he sees communication specialized into generalized symbolic media, with no controlling center in the modern industrialized media society. Another way to go is Habermas’ critical theory in a social semiotic theory…
Hierarchical species distribution models
Hefley, Trevor J.; Hooten, Mevin B.
2016-01-01
Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.
Schwartz distributions in the Lagrange variational problem
International Nuclear Information System (INIS)
Anton, H.; Bahar, L.Y.
1978-01-01
Schwartz distributions are used to eliminate the necessity of imposing a priori conditions on the class of admissible functions in the Lagrange fixed end-point variational problem. This makes it possible to defer the imposition of conditions on the extremals until such conditions become apparent from physical considerations
Distributed optimisation problem with communication delay and external disturbance
Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu
2017-12-01
This paper investigates the distributed optimisation problem for multi-agent systems (MASs) with the simultaneous presence of an external disturbance and a communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate for the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed to solve the problem for MASs subject to both the disturbance and the communication delay. In the proposed algorithm, each agent interacts with its neighbours through the connected topology, and the delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delays to ensure the convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.
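To make the problem setting above concrete, the sketch below runs a plain consensus-plus-gradient iteration for minimising a sum of local costs over a line graph. It deliberately omits the paper's disturbance, delay, and internal-model compensator, and all numbers are illustrative: with a constant step size the agents settle near, but not exactly at, the optimiser, while the average of their estimates converges to it.

```python
# Bare-bones consensus-plus-gradient iteration for distributed optimisation.
# Agent i holds f_i(x) = (x - a_i)^2, so the minimiser of sum_i f_i is the
# mean of the a_i (here 4.0). This is only an assumption-laden sketch of the
# general setting, not the algorithm proposed in the paper.
a = [1.0, 3.0, 5.0, 7.0]
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a line graph
x = [0.0, 0.0, 0.0, 0.0]                              # local estimates

step, mix = 0.05, 0.3
for _ in range(2000):
    x_next = []
    for i in range(4):
        consensus = sum(x[j] - x[i] for j in neighbours[i])
        grad = 2.0 * (x[i] - a[i])        # gradient of agent i's local cost
        x_next.append(x[i] + mix * consensus - step * grad)
    x = x_next

# With a constant step the agents settle near, not exactly at, the global
# optimiser; the average of their estimates does converge to it.
print([round(v, 2) for v in x])
```

A diminishing step size (or a gradient-tracking correction) would drive every agent exactly to the optimum; the constant-step version is shown because its steady state is easy to inspect.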
Integrating packing and distribution problems and optimization through mathematical programming
Directory of Open Access Journals (Sweden)
Fabio Miguel
2016-06-01
Full Text Available This paper analyzes the integration of two combinatorial problems that frequently arise in production and distribution systems. One is the Bin Packing Problem (BPP), which involves finding an ordering of objects of different volumes to be packed into the minimal number of containers of the same or different sizes. An optimal solution to this NP-Hard problem can be approximated by means of meta-heuristic methods. On the other hand, we consider the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW), a variant of the Travelling Salesman Problem (again an NP-Hard problem) with extra constraints. Here we model the two problems in a single framework and use an evolutionary meta-heuristic to solve them jointly. Furthermore, we use data from a real-world company as a test-bed for the method introduced here.
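For the packing side of the integrated model above, a classic constructive baseline is first-fit decreasing (FFD). The paper itself uses an evolutionary meta-heuristic over the joint BPP/CVRPTW model; the sketch below, on made-up volumes, only shows the kind of packing subproblem being solved.

```python
# First-fit decreasing (FFD): sort items by volume, largest first, and place
# each into the first open bin with enough room, opening a new bin otherwise.
# A simple baseline for the Bin Packing Problem, not the paper's method.

def first_fit_decreasing(volumes, capacity):
    """Greedily pack items (largest first) into the first bin with room."""
    free = []        # remaining free capacity of each open bin
    packing = []     # parallel list: the items placed in each bin
    for v in sorted(volumes, reverse=True):
        for i, room in enumerate(free):
            if v <= room:                  # first bin that fits
                free[i] -= v
                packing[i].append(v)
                break
        else:                              # no open bin fits: open a new one
            free.append(capacity - v)
            packing.append([v])
    return packing

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10))
```

FFD is guaranteed to use at most 11/9 of the optimal number of bins plus a constant, which is why it is a common seed or baseline for meta-heuristic packing methods.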
Distributed Parameter Modelling Applications
DEFF Research Database (Denmark)
Sales-Cruz, Mauricio; Cameron, Ian; Gani, Rafiqul
2011-01-01
The applications discussed include oil shale processing and the development of a short-path evaporator. The oil shale processing problem illustrates the interplay amongst particle flows in rotating drums and heat and mass transfer between solid and gas phases. The industrial application considers the dynamics of an Alberta-Taciuk processor, commonly used in shale oil and oil… A further application considers the steady-state, distributed behaviour of a short-path evaporator.
Simulating quantum correlations as a distributed sampling problem
International Nuclear Information System (INIS)
Degorre, Julien; Laplante, Sophie; Roland, Jeremie
2005-01-01
It is known that quantum correlations exhibited by a maximally entangled qubit pair can be simulated with the help of shared randomness, supplemented with additional resources, such as communication, postselection or nonlocal boxes. For instance, in the case of projective measurements, it is possible to solve this problem with protocols using one bit of communication or making one use of a nonlocal box. We show that this problem reduces to a distributed sampling problem. We give a new method to obtain samples from a biased distribution, starting with shared random variables following a uniform distribution, and use it to build distributed sampling protocols. This approach allows us to derive, in a simpler and unified way, many existing protocols for projective measurements, and extend them to positive operator-valued measurements. Moreover, this approach naturally leads to a local hidden variable model for Werner states
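The generic primitive underlying the abstract above, obtaining samples from a biased distribution starting from shared uniform randomness, can be illustrated with inverse-transform sampling: two parties holding the same uniform number deterministically obtain the same biased sample without communicating. The distribution below is invented for the example; the paper's actual protocols for quantum correlations are more involved.

```python
# Inverse-transform sampling as a "distributed sampling" primitive: map a
# shared u ~ Uniform[0,1) through the inverse CDF of a biased discrete
# distribution. Both parties get identical samples from shared randomness.
import random

def sample_from_shared_uniform(u, outcomes, probabilities):
    """Return the outcome whose cumulative probability first exceeds u."""
    cumulative = 0.0
    for outcome, p in zip(outcomes, probabilities):
        cumulative += p
        if u < cumulative:
            return outcome
    return outcomes[-1]        # guard against floating-point round-off

# Both parties seed identical "shared randomness" and never communicate.
shared_alice = random.Random(42)
shared_bob = random.Random(42)
alice = sample_from_shared_uniform(shared_alice.random(), [+1, -1], [0.25, 0.75])
bob = sample_from_shared_uniform(shared_bob.random(), [+1, -1], [0.25, 0.75])
print(alice == bob)        # identical samples from shared randomness
```

Since the map from u to the outcome is deterministic, any number of parties holding the same random variable agree on every sample, which is exactly the resource a local-hidden-variable-style simulation exploits.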
Cooperated Bayesian algorithm for distributed scheduling problem
Institute of Scientific and Technical Information of China (English)
QIANG Lei; XIAO Tian-yuan
2006-01-01
This paper presents a new distributed Bayesian optimization algorithm (BOA) to overcome the efficiency problem of solving NP-hard scheduling problems. The proposed approach integrates BOA into a co-evolutionary schema, which builds up a concurrent computing environment. A new search strategy is also introduced for the local optimization process. It integrates a reinforcement learning (RL) mechanism into the BOA search process, and then uses the mixed probability information from BOA (post-probability) and RL (pre-probability) to enhance the cooperation between different local controllers, which improves the optimization ability of the algorithm. Experiments show that the new algorithm does better in both optimization (2.2%) and convergence (11.7%), compared with the classic BOA.
Bounding species distribution models
Directory of Open Access Journals (Sweden)
Thomas J. STOHLGREN, Catherine S. JARNEVICH, Wayne E. ESAIAS, Jeffrey T. MORISETTE
2011-10-01
Full Text Available Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for “clamping” model extrapolations. We relied on two commonly used modeling approaches, classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57(5): 642–647, 2011].
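The "clamping" alteration tested above amounts to clipping each environmental predictor to the range observed in the training data before the model is asked to extrapolate. A minimal sketch, assuming NumPy arrays of predictors; the values (e.g. a temperature and a vegetation index) are illustrative:

```python
# Bound model extrapolations by clipping each environmental predictor column
# to the minimum and maximum seen during model development ("clamping").
import numpy as np

def clamp_predictors(X_new, X_train):
    """Clip every predictor column of X_new to its training-data range."""
    lo = X_train.min(axis=0)
    hi = X_train.max(axis=0)
    return np.clip(X_new, lo, hi)   # per-column bounds broadcast over rows

X_train = np.array([[10.0, 0.2], [25.0, 0.8], [18.0, 0.5]])
X_new = np.array([[30.0, 0.1],    # both predictors outside training bounds
                  [15.0, 0.6]])   # inside the bounds: left unchanged
print(clamp_predictors(X_new, X_train))
```

Points inside the training envelope pass through unchanged, so clamping only affects the extrapolated fringe of the map, which is the conservative behaviour the study argues for.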
Bounding Species Distribution Models
Stohlgren, Thomas J.; Jarnevich, Catherine S.; Morisette, Jeffrey T.; Esaias, Wayne E.
2011-01-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].
Sen, Sedat
2018-01-01
Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…
Vaginal drug distribution modeling.
Katz, David F; Yuan, Andrew; Gao, Yajing
2015-09-15
This review presents and applies fundamental mass transport theory describing the diffusion and convection driven mass transport of drugs to the vaginal environment. It considers sources of variability in the predictions of the models. It illustrates use of model predictions of microbicide drug concentration distribution (pharmacokinetics) to gain insights about drug effectiveness in preventing HIV infection (pharmacodynamics). The modeling compares vaginal drug distributions after different gel dosage regimens, and it evaluates consequences of changes in gel viscosity due to aging. It compares vaginal mucosal concentration distributions of drugs delivered by gels vs. intravaginal rings. Finally, the modeling approach is used to compare vaginal drug distributions across species with differing vaginal dimensions. Deterministic models of drug mass transport into and throughout the vaginal environment can provide critical insights about the mechanisms and determinants of such transport. This knowledge, and the methodology that obtains it, can be applied and translated to multiple applications, involving the scientific underpinnings of vaginal drug distribution and the performance evaluation and design of products, and their dosage regimens, that achieve it.
Modeled ground water age distributions
Woolfenden, Linda R.; Ginn, Timothy R.
2009-01-01
The age of ground water in any given sample is a distributed quantity representing distributed provenance (in space and time) of the water. Conventional analysis of tracers such as unstable isotopes or anthropogenic chemical species gives discrete or binary measures of the presence of water of a given age. Modeled ground water age distributions provide a continuous measure of contributions from different recharge sources to aquifers. A numerical solution of the ground water age equation of Ginn (1999) was tested both on a hypothetical simplified one-dimensional flow system and under real world conditions. Results from these simulations yield the first continuous distributions of ground water age using this model. Complete age distributions as a function of one and two space dimensions were obtained from both numerical experiments. Simulations in the test problem produced mean ages that were consistent with the expected value at the end of the model domain for all dispersivity values tested, although the mean ages for the two highest dispersivity values deviated slightly from the expected value. Mean ages in the dispersionless case also were consistent with the expected mean ages throughout the physical model domain. Simulations under real world conditions for three dispersivity values resulted in decreasing mean age with increasing dispersivity. This likely is a consequence of an edge effect. However, simulations for all three dispersivity values tested were mass balanced and stable demonstrating that the solution of the ground water age equation can provide estimates of water mass density distributions over age under real world conditions.
Algorithms and ordering heuristics for distributed constraint satisfaction problems
Wahbi , Mohamed
2013-01-01
DisCSP (Distributed Constraint Satisfaction Problem) is a general framework for solving distributed problems arising in Distributed Artificial Intelligence. A wide variety of problems in artificial intelligence are solved using the constraint satisfaction problem paradigm. However, several applications in multi-agent coordination are of a distributed nature. In this type of application, the knowledge about the problem, that is, variables and constraints, may be logically or geographically distributed among physically distributed agents. This distribution is mainly due to p…
B. Kaynar; S.I. Birbil (Ilker); J.B.G. Frenk (Hans)
2007-01-01
In this paper, portfolio problems with linear loss functions and multivariate elliptically distributed returns are studied. We consider two risk measures, Value-at-Risk and Conditional-Value-at-Risk, and two types of decision makers, risk neutral and risk averse. For Value-at-Risk, we show…
A Generalized Cauchy Distribution Framework for Problems Requiring Robust Behavior
Directory of Open Access Journals (Sweden)
Carrillo, Rafael E.
2010-01-01
Full Text Available Statistical modeling is at the heart of many engineering problems. The importance of statistical modeling emanates not only from the desire to accurately characterize stochastic events, but also from the fact that distributions are the central models utilized to derive sample processing theories and methods. The generalized Cauchy distribution (GCD) family has a closed-form pdf expression across the whole family as well as algebraic tails, which makes it suitable for modeling many real-life impulsive processes. This paper develops a GCD theory-based approach that allows challenging problems to be formulated in a robust fashion. Notably, the proposed framework subsumes generalized Gaussian distribution (GGD) family-based developments, thereby guaranteeing performance improvements over traditional GCD-based problem formulation techniques. This robust framework can be adapted to a variety of applications in signal processing. As examples, we formulate four practical applications under this framework: (1) filtering for power line communications, (2) estimation in sensor networks with noisy channels, (3) reconstruction methods for compressed sensing, and (4) fuzzy clustering.
Optimizing Distribution Problems using WinQSB Software
Directory of Open Access Journals (Sweden)
Daniel Mihai Amariei
2015-07-01
Full Text Available In the present paper we solve a distribution problem using the Network Modeling module of the WinQSB software: 5 athletes must be assigned to events optimally, as a function of their recorded times, so as to obtain the maximum performance from the athletes. We also analyze the case in which 2 athletes are injured, so that the remaining 3 athletes must be matched with 5 different athletic events; the maximum matching is obtained using the Hungarian algorithm.
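The athlete-to-event assignment above can be reproduced in a few lines: with 3 remaining athletes and 5 events, exhaustive search over all injective assignments is trivial. The times below are made-up illustration data, not those from the paper.

```python
# Brute-force solution of a small rectangular assignment problem: choose one
# distinct event per athlete so the summed times are minimal.
from itertools import permutations

def best_assignment(times):
    """Return (total_time, events) minimising the summed times."""
    n_athletes, n_events = len(times), len(times[0])
    best = None
    for events in permutations(range(n_events), n_athletes):
        total = sum(times[i][e] for i, e in enumerate(events))
        if best is None or total < best[0]:
            best = (total, events)
    return best

# times[i][j]: recorded time of athlete i on event j (lower is better).
times = [
    [12.1, 24.5, 51.0, 11.9, 25.2],
    [12.4, 23.9, 50.2, 12.2, 25.0],
    [12.0, 24.8, 52.1, 12.5, 24.4],
]
total, events = best_assignment(times)
print(events)    # the event chosen for each athlete
```

For larger instances, the Hungarian algorithm solves the same rectangular assignment problem in polynomial time; `scipy.optimize.linear_sum_assignment` is a readily available implementation.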
Modelling of a collage problem
Directory of Open Access Journals (Sweden)
Abdelaziz Ait Moussa
2006-09-01
Full Text Available In this paper we study the behavior of elastic adherents connected with an adhesive. We use the $\Gamma$-convergence method to approximate the problem modelling the assemblage, with density energies assumed to be quasiconvex. In particular, for the adhesive problem we assume a periodic density energy and some growth conditions with respect to the spherical and deviatoric components of the gradient. We obtain a problem depending on small parameters linked to the thickness and the stiffness of the adhesive.
Distribution-Preserving Stratified Sampling for Learning Problems.
Cervellera, Cristiano; Maccio, Danilo
2017-06-09
The need for extracting a small sample from a large amount of real data, possibly streaming, arises routinely in learning problems, e.g., for storage, to cope with computational limitations, obtain good training/test/validation sets, and select minibatches for stochastic gradient neural network training. Unless we have reasons to select the samples in an active way dictated by the specific task and/or model at hand, it is important that the distribution of the selected points is as similar as possible to the original data. This is obvious for unsupervised learning problems, where the goal is to gain insights on the distribution of the data, but it is also relevant for supervised problems, where the theory explains how the training set distribution influences the generalization error. In this paper, we analyze the technique of stratified sampling from the point of view of distances between probabilities. This allows us to introduce an algorithm, based on recursive binary partition of the input space, aimed at obtaining samples that are distributed as much as possible as the original data. A theoretical analysis is proposed, proving the (greedy) optimality of the procedure together with explicit error bounds. An adaptive version of the algorithm is also introduced to cope with streaming data. Simulation tests on various data sets and different learning tasks are also provided.
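The recursive binary-partition idea can be sketched in a few lines. This is a simplified illustration (median splits on 1-D data, one draw per stratum), not the authors' exact algorithm or its error bounds.

```python
import random

def stratified_sample(data, k):
    """Recursively median-split sorted 1-D data into strata of roughly n/k
    points, then draw one random point per stratum, so the subsample follows
    the shape of the original distribution. Returns approximately k points."""
    n = len(data)
    min_stratum = max(1, n // k)

    def split(points):
        if len(points) <= min_stratum:
            return [points]
        mid = len(points) // 2
        return split(points[:mid]) + split(points[mid:])

    strata = split(sorted(data))
    return [random.choice(s) for s in strata]

random.seed(0)
population = [random.gauss(0.0, 1.0) for _ in range(1000)]
subsample = stratified_sample(population, 100)
print(len(subsample))
```

Because every stratum is a contiguous quantile block, the subsample's empirical distribution tracks the population's far more closely than a plain uniform draw of the same size, which is the distance-between-probabilities property the paper analyzes.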
Mathematical problems in meteorological modelling
Csomós, Petra; Faragó, István; Horányi, András; Szépszó, Gabriella
2016-01-01
This book deals with mathematical problems arising in the context of meteorological modelling. It gathers and presents some of the most interesting and important issues from the interaction of mathematics and meteorology. It is unique in that it features contributions on topics like data assimilation, ensemble prediction, numerical methods, and transport modelling, from both mathematical and meteorological perspectives. The derivation and solution of all kinds of numerical prediction models require the application of results from various mathematical fields. The present volume is divided into three parts, moving from mathematical and numerical problems through air quality modelling, to advanced applications in data assimilation and probabilistic forecasting. The book arose from the workshop “Mathematical Problems in Meteorological Modelling” held in Budapest in May 2014 and organized by the ECMI Special Interest Group on Numerical Weather Prediction. Its main objective is to highlight the beauty of the de...
Negotiation as a metaphor for distributed problem solving
Energy Technology Data Exchange (ETDEWEB)
Davis, R.; Smith, R.G.
1983-01-01
The authors describe the concept of distributed problem solving and define it as the cooperative solution of problems by a decentralized and loosely coupled collection of problem solvers. This approach to problem solving offers the promise of increased performance and provides a useful medium for exploring and developing new problem-solving techniques. A framework called the contract net is presented that specifies communication and control in a distributed problem solver. Task distribution is viewed as an interactive process, a discussion carried on between a node with a task to be executed and a group of nodes that may be able to execute the task. The kinds of information that must be passed between nodes during the discussion in order to obtain effective problem-solving behavior are described. This discussion is the origin of the negotiation metaphor: task distribution is viewed as a form of contract negotiation. 32 references.
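The announce-bid-award cycle of the contract net can be sketched as follows; the load-based bidding rule and the node names are invented for illustration, not taken from Davis and Smith.

```python
class Contractor:
    """A node that can bid on announced tasks and execute awarded ones."""
    def __init__(self, name, load):
        self.name, self.load = name, load

    def bid(self, task):
        # Hypothetical bidding rule: a lower current load means a cheaper bid.
        return self.load + task["effort"]

    def execute(self, task):
        self.load += task["effort"]
        return f"{self.name} completed {task['id']}"

def announce(task, contractors):
    """One contract-net round: broadcast the task announcement, collect
    bids from all eligible nodes, and award the contract to the best bid."""
    bids = {c: c.bid(task) for c in contractors}
    winner = min(bids, key=bids.get)
    return winner.execute(task)

nodes = [Contractor("node-A", 3), Contractor("node-B", 1), Contractor("node-C", 5)]
for t in ({"id": "t1", "effort": 2}, {"id": "t2", "effort": 4}):
    print(announce(t, nodes))
```

Note how awarding t1 raises node-B's load, changing its bid for t2: the "negotiation" is exactly this interplay between announcements and each node's evolving local state.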
Friedman, Robert S.; Deek, Fadi P.
2002-01-01
Discusses how the design and implementation of problem-solving tools used in programming instruction are complementary with both the theories of problem-based learning (PBL), including constructivism, and the practices of distributed education environments. Examines how combining PBL, Web-based distributed education, and a problem-solving…
Achievements and Problems of Conceptual Modelling
Thalheim, Bernhard
Database and information systems technology has substantially changed. Nowadays, content management systems, (information-intensive) web services, collaborating systems, internet databases, OLAP databases etc. have become buzzwords. At the same time, object-relational technology has gained the maturity for being widely applied. Conceptual modelling has not (yet) covered all these novel topics. For more than two decades it has been concentrated around the specification of structures. Meanwhile, functionality, interactivity and distribution must be included in conceptual modelling of information systems. Also, some of the open problems that were already discussed in 1987 [15, 16] still remain open. At the same time, novel models such as object-relational models or XML-based models have been developed. They did not overcome all the problems but have been sharpening and extending the variety of open problems. The open problems presented are given for classical areas of database research, i.e., structuring and functionality. The entire area of distribution and interaction is currently an area of very intensive research.
From Logical to Distributional Models
Directory of Open Access Journals (Sweden)
Anne Preller
2014-12-01
Full Text Available The paper relates two variants of semantic models for natural language, logical functional models and compositional distributional vector space models, by transferring the logic and reasoning from the logical to the distributional models. The geometrical operations of quantum logic are reformulated as algebraic operations on vectors. A map from functional models to vector space models makes it possible to compare the meaning of sentences word by word.
Obtaining sparse distributions in 2D inverse problems
Reci, A; Sederman, Andrew John; Gladden, Lynn Faith
2017-01-01
The mathematics of inverse problems has relevance across numerous estimation problems in science and engineering. L1 regularization has attracted recent attention in reconstructing the system properties in the case of sparse inverse problems; i.e., when the true property sought is not adequately described by a continuous distribution, in particular in Compressed Sensing image reconstruction. In this work, we focus on the application of L1 regularization to a class of inverse problems; relaxat...
Distributed Graphs for Solving Co-modal Transport Problems
Jeribi, Karama; Mejri, Hinda; Zgaya, Hayfa; Hammadi, Slim
2011-01-01
International audience; The paper presents a new approach based on special distributed graphs in order to solve co-modal transport problems. The co-modal transport system consists of combining different transport modes effectively in terms of economic, environmental, service and financial efficiency, etc. However, the problem is that these systems must deal with different distributed information sources stored in different locations and provided by different public and private companies. In...
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
Energy Technology Data Exchange (ETDEWEB)
Hamadameen, Abdulqader Othman [Optimization, Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia); Zainuddin, Zaitul Marlizawati [Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia)
2014-06-19
This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and the two solution strategies are the fuzzy transformation via a ranking function and the stochastic transformation, in which the α-cut technique and linguistic hedges are used in the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
Numerical models for differential problems
Quarteroni, Alfio
2017-01-01
In this text, we introduce the basic concepts for the numerical modelling of partial differential equations. We consider the classical elliptic, parabolic and hyperbolic linear equations, but also the diffusion, transport, and Navier-Stokes equations, as well as equations representing conservation laws, saddle-point problems and optimal control problems. Furthermore, we provide numerous physical examples which underline such equations. We then analyze numerical solution methods based on finite elements, finite differences, finite volumes, spectral methods and domain decomposition methods, and reduced basis methods. In particular, we discuss the algorithmic and computer implementation aspects and provide a number of easy-to-use programs. The text does not require any previous advanced mathematical knowledge of partial differential equations: the absolutely essential concepts are reported in a preliminary chapter. It is therefore suitable for students of bachelor and master courses in scientific disciplines, an...
Distributed Interior-point Method for Loosely Coupled Problems
DEFF Research Database (Denmark)
Pakazad, Sina Khoshfetrat; Hansson, Anders; Andersen, Martin Skovgaard
2014-01-01
In this paper, we put forth distributed algorithms for solving loosely coupled unconstrained and constrained optimization problems. Such problems are usually solved using algorithms that are based on a combination of decomposition and first order methods. These algorithms are commonly very slow a...
Distribution system modeling and analysis
Kersting, William H
2001-01-01
For decades, distribution engineers did not have the sophisticated tools developed for analyzing transmission systems-often they had only their instincts. Things have changed, and we now have computer programs that allow engineers to simulate, analyze, and optimize distribution systems. Powerful as these programs are, however, without a real understanding of the operating characteristics of a distribution system, engineers using the programs can easily make serious errors in their designs and operating procedures. Distribution System Modeling and Analysis helps prevent those errors. It gives readers a basic understanding of the modeling and operating characteristics of the major components of a distribution system. One by one, the author develops and analyzes each component as a stand-alone element, then puts them all together to analyze a distribution system comprising the various shunt and series devices for power-flow and short-circuit studies. He includes the derivation of all models and includes many num...
Modelling Dynamic Forgetting in Distributed Information Systems
N.F. Höning (Nicolas); M.C. Schut
2010-01-01
We describe and model a new aspect in the design of distributed information systems. We build upon a previously described problem on the microlevel, which asks how quickly agents should discount (forget) their experience: If they cherish their memories, they can build their reports on
Integrated Production-Distribution Scheduling Problem with Multiple Independent Manufacturers
Directory of Open Access Journals (Sweden)
Jianhong Hao
2015-01-01
Full Text Available We consider the nonstandard parts supply chain with a public service platform for machinery integration in China. The platform assigns orders placed by a machinery enterprise to multiple independent manufacturers who produce nonstandard parts, and makes a production schedule and batch delivery schedule for each manufacturer in a coordinated manner. Each manufacturer has only one plant with parallel machines and is located far away from the other manufacturers. Orders are first processed at the plants and then directly shipped from the plants to the enterprise in order to be finished before a given deadline. We study the above integrated production-distribution scheduling problem with multiple manufacturers to maximize a weighted sum of the profits of the manufacturers, under the constraints that all orders are finished before the deadline and the profit of each manufacturer is not negative. Based on the optimal condition analysis, we formulate the problem as a mixed integer programming model and use CPLEX to solve it.
An Evolving Asymmetric Game for Modeling Interdictor-Smuggler Problems
2016-06-01
Master's thesis by Richard J. Allain, Naval Postgraduate School, Monterey, California, June 2016. Thesis Advisor: David L. Alderson; Second Reader: W... Approved for public release; distribution is unlimited.
A DISTRIBUTED HYPERMAP MODEL FOR INTERNET GIS
Institute of Scientific and Technical Information of China (English)
[No author listed]
2000-01-01
The rapid development of Internet technology makes it possible to integrate GIS with the Internet, forming Internet GIS. Internet GIS is based on a distributed client/server architecture and TCP/IP & IIOP. When constructing and designing Internet GIS, we face the problem of how to express information units of Internet GIS. In order to solve this problem, this paper presents a distributed hypermap model for Internet GIS. This model provides a solution to organize and manage Internet GIS information units. It also illustrates relations between two information units and in an internal information unit, both on clients and servers. On the basis of this model, the paper contributes to the expressions of hypermap relations and hypermap operations. The usage of this model is shown in the implementation of a prototype system.
Incorporating uncertainty in predictive species distribution modelling.
Beale, Colin M; Lennon, Jack J
2012-01-19
Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.
B. Kaynar; S.I. Birbil (Ilker); J.B.G. Frenk (Hans)
2007-01-01
We discuss a class of risk measures for portfolio optimization with linear loss functions, where the random returns of financial instruments have a multivariate elliptical distribution. Under this setting we pay special attention to two risk measures, Value-at-Risk and
Vehicle Routing Problem Using Genetic Algorithm with Multi Compartment on Vegetable Distribution
Kurnia, Hari; Gustri Wahyuni, Elyza; Cergas Pembrani, Elang; Gardini, Syifa Tri; Kurnia Aditya, Silfa
2018-03-01
The problem often faced by industries managing and distributing vegetables is how to distribute the vegetables so that their quality can be maintained properly. The problems encountered include optimal route selection with little travel time, the so-called TSP (Traveling Salesman Problem). These problems can be modeled using the Vehicle Routing Problem (VRP) with rank-based selection, order-based crossover, and order-based mutation on selected chromosomes. This study is limited to 20 market points, 2 warehouse points (multi-compartment), and 5 vehicles. It is determined that, for one distribution, one vehicle can distribute to only 4 market points from 1 particular warehouse, and one such vehicle can accommodate only 100 kg of capacity.
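Order-based crossover, one of the operators mentioned, keeps every child a valid permutation of market points. A minimal sketch of the classic order crossover (OX); the parent tours and cut points are arbitrary examples, not data from the study:

```python
def order_crossover(p1, p2, i, j):
    """Order crossover (OX): copy the segment p1[i:j] into the child, then
    fill the remaining positions with p2's genes in p2's order, skipping any
    gene already copied, so no market point is duplicated or lost."""
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

p1 = [0, 1, 2, 3, 4, 5, 6, 7]   # parent tour 1: market points in visit order
p2 = [3, 7, 0, 6, 2, 5, 1, 4]   # parent tour 2
child = order_crossover(p1, p2, 2, 5)
print(child)                     # [7, 0, 2, 3, 4, 6, 5, 1]
```

This permutation-preserving property is why order-based operators are preferred over naive bit-style crossover for routing chromosomes.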
Organizational problems of Water Distribution in Khorezm, Uzbekistan
Wegerich, K.
2004-01-01
The paper addresses problems of water resource management on the district and provincial level in the Khorezm province, Uzbekistan. The district water organizations are responsible for equitable water distribution to the agricultural users. These organizations do not have the necessary logistical
Impacts of Transportation Cost on Distribution-Free Newsboy Problems
Directory of Open Access Journals (Sweden)
Ming-Hung Shu
2014-01-01
Full Text Available A distribution-free newsboy problem (DFNP) has been launched for a vendor to decide a product's stock quantity in a single-period inventory system so as to maximize its worst-case expected profit when combating fierce and diverse market circumstances. Nowadays, the impact of transportation cost on the determination of the optimal inventory quantity has attracted attention, but its influence on the DFNP has not been fully investigated. By borrowing an economic theory from the transportation discipline, in this paper the DFNP is tackled in consideration of the transportation cost, formulated as a function of shipping quantity and modeled in a nonlinear regression form from UPS's on-site shipping-rate data. An optimal solution for the order quantity is computed on the basis of Newton's method to reduce the complexity of computation. As a result of comparative studies, the lower bounds of the maximal expected profit of the proposed methodologies surpass those of existing work. Finally, we extend the analysis to several practical inventory cases including fixed ordering cost, random yield, and a multiproduct condition.
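The classical starting point for the DFNP is Scarf's max-min order quantity, which work of this kind typically extends. The sketch below implements that well-known rule with hypothetical demand and price figures, and omits the paper's transportation-cost extension.

```python
import math

def scarf_order_quantity(mu, sigma, price, cost):
    """Scarf's max-min order quantity for the distribution-free newsboy:
    only the demand mean (mu) and standard deviation (sigma) are assumed
    known; the order maximizes worst-case expected profit over all demand
    distributions with those two moments (unit price > unit cost, no salvage)."""
    assert price > cost > 0
    r = (price - cost) / cost          # profit-to-cost ratio
    return mu + (sigma / 2.0) * (math.sqrt(r) - 1.0 / math.sqrt(r))

# Hypothetical data: mean demand 100 units, std dev 20, price 15, cost 5.
q = scarf_order_quantity(100, 20, 15, 5)
print(round(q, 2))  # 107.07
```

Note that when the margin is thin (r < 1) the rule orders below the mean, and when sigma = 0 it simply orders the mean demand, both of which match intuition.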
A Distributed Particle Swarm Optimization Algorithm for Flexible Job-shop Scheduling Problem
Directory of Open Access Journals (Sweden)
LIU Sheng-hui
2017-06-01
Full Text Available According to the characteristics of the flexible job-shop scheduling problem, with minimum makespan as the measure, we propose a distributed particle swarm optimization algorithm for solving the flexible job-shop scheduling problem. The algorithm adopts a distributed approach to problem solving, for which we establish two multi-agent particle swarm optimization models; with them, the algorithm can overcome the weakness of the traditional particle swarm optimization algorithm in making real-time decisions in response to emergencies. Finally, some benchmark problems were run and the results compared with the traditional algorithm. Experimental results prove that the proposed distributed PSO is effective and efficient for solving the FJSP, and also verify the reasonableness of the multi-agent particle swarm optimization model.
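For contrast with the distributed variant, the traditional baseline the paper compares against is standard PSO. A minimal sketch on a continuous test function follows; the parameter values are conventional textbook choices, not taken from the paper, and real FJSP encodings map particles to operation sequences rather than real vectors.

```python
import random

def pso(f, dim, n_particles=20, iters=200, seed=0):
    """Minimal (non-distributed) particle swarm optimization on [-5, 5]^dim.
    Standard inertia/cognitive/social velocity update; a baseline sketch only."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                  # inertia, cognitive, social weights
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    g = min(P, key=f)[:]                       # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

sphere = lambda x: sum(v * v for v in x)       # minimum 0 at the origin
best = pso(sphere, dim=3)
print(sphere(best))
```

A distributed version along the paper's lines would run several such swarms as agents, exchanging their global bests, which is what lets the search react to disruptions without restarting from scratch.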
Modeling a four-layer location-routing problem
Directory of Open Access Journals (Sweden)
Mohsen Hamidi
2012-01-01
Full Text Available Distribution is an indispensable component of logistics and supply chain management. Location-Routing Problem (LRP is an NP-hard problem that simultaneously takes into consideration location, allocation, and vehicle routing decisions to design an optimal distribution network. Multi-layer and multi-product LRP is even more complex as it deals with the decisions at multiple layers of a distribution network where multiple products are transported within and between layers of the network. This paper focuses on modeling a complicated four-layer and multi-product LRP which has not been tackled yet. The distribution network consists of plants, central depots, regional depots, and customers. In this study, the structure, assumptions, and limitations of the distribution network are defined and the mathematical optimization programming model that can be used to obtain the optimal solution is developed. Presented by a mixed-integer programming model, the LRP considers the location problem at two layers, the allocation problem at three layers, the vehicle routing problem at three layers, and a transshipment problem. The mathematical model locates central and regional depots, allocates customers to plants, central depots, and regional depots, constructs tours from each plant or open depot to customers, and constructs transshipment paths from plants to depots and from depots to other depots. Considering realistic assumptions and limitations such as producing multiple products, limited production capacity, limited depot and vehicle capacity, and limited traveling distances enables the user to capture the real world situations.
Dynamic models for distributed generation resources
Energy Technology Data Exchange (ETDEWEB)
Morched, A.S. [BPR Energie, Sherbrooke, PQ (Canada)
2010-07-01
Distributed resources can impact the performance of host power systems during both normal and abnormal system conditions. This PowerPoint presentation discussed the use of dynamic models for identifying potential interaction problems between interconnected systems. The models were designed to simulate steady state behaviour as well as transient responses to system disturbances. The distributed generators included directly coupled and electronically coupled generators. The directly coupled generator was driven by wind turbines. Simplified models of grid-side inverters, electronically coupled wind generators and doubly-fed induction generators (DFIGs) were presented. The responses of DFIGs to wind variations were evaluated. Synchronous machine and electronically coupled generator responses were compared. The system model components included load models, generators, protection systems, and system equivalents. Frequency responses to islanding events were reviewed. The study demonstrated that accurate simulations are needed to predict the impact of distributed generation resources on the performance of host systems. Advances in distributed generation technology have outpaced the development of models needed for integration studies. tabs., figs.
Stochastic inverse problems: Models and metrics
International Nuclear Information System (INIS)
Sabbagh, Elias H.; Sabbagh, Harold A.; Murphy, R. Kim; Aldrin, John C.; Annis, Charles; Knopp, Jeremy S.
2015-01-01
In past work, we introduced model-based inverse methods, and applied them to problems in which the anomaly could be reasonably modeled by simple canonical shapes, such as rectangular solids. In these cases the parameters to be inverted would be length, width and height, as well as the occasional probe lift-off or rotation. We are now developing a formulation that allows more flexibility in modeling complex flaws. The idea consists of expanding the flaw in a sequence of basis functions, and then solving for the expansion coefficients of this sequence, which are modeled as independent random variables, uniformly distributed over their range of values. There are a number of applications of such modeling: 1. Connected cracks and multiple half-moons, which we have noted in a POD set. Ideally we would like to distinguish connected cracks from one long shallow crack. 2. Cracks of irregular profile and shape which have appeared in cold work holes during bolt-hole eddy-current inspection. One side of such cracks is much deeper than other. 3. L or C shaped crack profiles at the surface, examples of which have been seen in bolt-hole cracks. By formulating problems in a stochastic sense, we are able to leverage the stochastic global optimization algorithms in NLSE, which is resident in VIC-3D®, to answer questions of global minimization and to compute confidence bounds using the sensitivity coefficient that we get from NLSE. We will also address the issue of surrogate functions which are used during the inversion process, and how they contribute to the quality of the estimation of the bounds
Modeling error distributions of growth curve models through Bayesian methods.
Zhang, Zhiyong
2016-06-01
Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99, is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
Video distribution system cost model
Gershkoff, I.; Haspert, J. K.; Morgenstern, B.
1980-01-01
A cost model that can be used to systematically identify the costs of procuring and operating satellite linked communications systems is described. The user defines a network configuration by specifying the location of each participating site, the interconnection requirements, and the transmission paths available for the uplink (studio to satellite), downlink (satellite to audience), and voice talkback (between audience and studio) segments of the network. The model uses this information to calculate the least expensive signal distribution path for each participating site. Cost estimates are broken down by capital, installation, lease, operations and maintenance. The design of the model permits flexibility in specifying network and cost structure.
PROBLEM IDENTIFICATION OF FOREIGN TOURIST DISTRIBUTION IN INDONESIA
Directory of Open Access Journals (Sweden)
Supriono
2017-07-01
Full Text Available Indonesia should be able to distribute evenly the visits of foreign tourists so that visits are not focused merely on certain places. It is expected that all the tourism objects in Indonesia can attract and be visited by foreign tourists in the same quantity in every tourist destination. In the first year, this study aimed to identify the motivation of foreign tourists visiting Indonesia and to identify the problems of distribution of foreign tourists in Indonesia. The study sites were in DKI Jakarta, Batam, and Bali. In the second year, a distribution channel strategy will be developed in order to create tourism competitiveness. This study was conducted using qualitative research methods with descriptive analysis. The data were collected using in-depth interviews with tourism stakeholders (the Government, International Travelers, and Tourism Bureaus/Travel Agencies). The research results show that the motivation of foreign tourists visiting Indonesia was related to business or purely to vacation. Additionally, the problems of foreign tourist distribution in Indonesia emerged because of several aspects, including the limited entrances for foreign tourists to Indonesia, lack of connectivity between airports in Indonesia and international flights, lack of inter-regional cooperation between tourism actors, lack of infrastructure, and the ignorance of foreign tourists of many tourist destinations in Indonesia due to less effective and efficient promotion activities.
Turboelectric Distributed Propulsion System Modelling
Liu, Chengyuan
2013-01-01
The Blended-Wing-Body is a conceptual aircraft design with rear-mounted, over wing engines. Turboelectric distributed propulsion system with boundary layer ingestion has been considered for this aircraft. It uses electricity to transmit power from the core turbine to the fans, therefore dramatically increases bypass ratio to reduce fuel consumption and noise. This dissertation presents methods on designing the TeDP system, evaluating effects of boundary layer ingestion, modelling engine perfo...
Brane world model and hierarchy problem
International Nuclear Information System (INIS)
Alba, V.
2007-01-01
In this paper I give a description of the Kaluza-Klein model. I also describe how the hierarchy problem can be solved in the Randall-Sundrum model. This is, in fact, my motivation for studying this part of theoretical physics.
Crack problem in superconducting cylinder with exponential distribution of critical-current density
Zhao, Yufeng; Xu, Chi; Shi, Liang
2018-04-01
The general problem of a center crack in a long cylindrical superconductor with inhomogeneous critical-current distribution is studied based on the extended Bean model for zero-field cooling (ZFC) and field cooling (FC) magnetization processes, in which the inhomogeneous parameter η is introduced for characterizing the critical-current density distribution in the inhomogeneous superconductor. The effect of the inhomogeneous parameter η on both the magnetic field distribution and the variations of the normalized stress intensity factors is also obtained based on the plane strain approach and J-integral theory. The numerical results indicate that the exponential distribution of critical-current density will lead to a larger trapped field inside the inhomogeneous superconductor and cause the center of the cylinder to fracture more easily. In addition, comparing the curve shapes of the magnetization loop with homogeneous and inhomogeneous critical-current distributions, it is worth pointing out that the nonlinear field distribution is unique to the Bean model.
A Multivariate Model of Physics Problem Solving
Taasoobshirazi, Gita; Farley, John
2013-01-01
A model of expertise in physics problem solving was tested on undergraduate science, physics, and engineering majors enrolled in an introductory-level physics course. Structural equation modeling was used to test hypothesized relationships among variables linked to expertise in physics problem solving including motivation, metacognitive planning,…
Water Distribution and Removal Model
International Nuclear Information System (INIS)
Y. Deng; N. Chipman; E.L. Hardin
2005-01-01
The design of the Yucca Mountain high level radioactive waste repository depends on the performance of the engineered barrier system (EBS). To support the total system performance assessment (TSPA), the Engineered Barrier System Degradation, Flow, and Transport Process Model Report (EBS PMR) is developed to describe the thermal, mechanical, chemical, hydrological, biological, and radionuclide transport processes within the emplacement drifts, which includes the following major analysis/model reports (AMRs): (1) EBS Water Distribution and Removal (WD and R) Model; (2) EBS Physical and Chemical Environment (P and CE) Model; (3) EBS Radionuclide Transport (EBS RNT) Model; and (4) EBS Multiscale Thermohydrologic (TH) Model. Technical information, including data, analyses, models, software, and supporting documents will be provided to defend the applicability of these models for their intended purpose of evaluating the postclosure performance of the Yucca Mountain repository system. The WD and R model AMR is important to the site recommendation. Water distribution and removal represents one component of the overall EBS. Under some conditions, liquid water will seep into emplacement drifts through fractures in the host rock and move generally downward, potentially contacting waste packages. After waste packages are breached by corrosion, some of this seepage water will contact the waste, dissolve or suspend radionuclides, and ultimately carry radionuclides through the EBS to the near-field host rock. Lateral diversion of liquid water within the drift will occur at the inner drift surface, and more significantly from the operation of engineered structures such as drip shields and the outer surface of waste packages. If most of the seepage flux can be diverted laterally and removed from the drifts before contacting the wastes, the release of radionuclides from the EBS can be controlled, resulting in a proportional reduction in dose release at the accessible environment.
Water Distribution and Removal Model
Energy Technology Data Exchange (ETDEWEB)
Y. Deng; N. Chipman; E.L. Hardin
2005-08-26
The design of the Yucca Mountain high level radioactive waste repository depends on the performance of the engineered barrier system (EBS). To support the total system performance assessment (TSPA), the Engineered Barrier System Degradation, Flow, and Transport Process Model Report (EBS PMR) is developed to describe the thermal, mechanical, chemical, hydrological, biological, and radionuclide transport processes within the emplacement drifts, which includes the following major analysis/model reports (AMRs): (1) EBS Water Distribution and Removal (WD&R) Model; (2) EBS Physical and Chemical Environment (P&CE) Model; (3) EBS Radionuclide Transport (EBS RNT) Model; and (4) EBS Multiscale Thermohydrologic (TH) Model. Technical information, including data, analyses, models, software, and supporting documents will be provided to defend the applicability of these models for their intended purpose of evaluating the postclosure performance of the Yucca Mountain repository system. The WD&R model AMR is important to the site recommendation. Water distribution and removal represents one component of the overall EBS. Under some conditions, liquid water will seep into emplacement drifts through fractures in the host rock and move generally downward, potentially contacting waste packages. After waste packages are breached by corrosion, some of this seepage water will contact the waste, dissolve or suspend radionuclides, and ultimately carry radionuclides through the EBS to the near-field host rock. Lateral diversion of liquid water within the drift will occur at the inner drift surface, and more significantly from the operation of engineered structures such as drip shields and the outer surface of waste packages. If most of the seepage flux can be diverted laterally and removed from the drifts before contacting the wastes, the release of radionuclides from the EBS can be controlled, resulting in a proportional reduction in dose release at the accessible environment. The purposes
Modeling of uncertainties in statistical inverse problems
International Nuclear Information System (INIS)
Kaipio, Jari
2008-01-01
In all real-world problems, the models that tie the measurements to the unknowns of interest are at best only approximations of reality. While moderate modeling and approximation errors can be tolerated in stable problems, inverse problems are a notorious exception. Typical modeling errors include inaccurate geometry, unknown boundary and initial data, properties of noise and other disturbances, and simply the numerical approximations of the physical models. In principle, the Bayesian approach to inverse problems, in which all uncertainties are modeled as random variables, is capable of handling these uncertainties. Depending on the type of uncertainties, however, different strategies may be adopted. In this paper we give an overview of typical modeling errors and related strategies within the Bayesian framework.
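The idea of modeling the approximation error itself as a random variable can be sketched numerically. The following is a minimal illustration, not taken from the paper: the discrepancy between an accurate and a coarse linear forward model is treated as Gaussian, its mean and covariance are estimated from prior samples, and both are folded into the noise model before inversion. All matrices, sizes, and noise levels are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear inverse problem: y = A_fine @ x + noise.
# Only a coarse forward model A_coarse is available for inversion;
# the discrepancy (A_fine - A_coarse) @ x is the "modeling error".
n = 20
A_fine = np.tril(np.ones((n, n))) / n                         # accurate forward map
A_coarse = A_fine + 0.05 * rng.standard_normal((n, n)) / n    # crude approximation

prior_cov = np.eye(n)            # prior: x ~ N(0, I)
noise_cov = 1e-4 * np.eye(n)     # measurement-noise covariance

# Estimate the modeling-error statistics over draws from the prior,
# then fold them into the effective noise model.
samples = rng.multivariate_normal(np.zeros(n), prior_cov, size=500)
errs = samples @ (A_fine - A_coarse).T
err_mean = errs.mean(axis=0)
err_cov = np.cov(errs, rowvar=False)

def posterior_mean(y, A, noise_cov, prior_cov):
    """Posterior mean for a linear-Gaussian model with zero prior mean."""
    G = prior_cov @ A.T @ np.linalg.inv(A @ prior_cov @ A.T + noise_cov)
    return G @ y

x_true = rng.multivariate_normal(np.zeros(n), prior_cov)
y = A_fine @ x_true + rng.multivariate_normal(np.zeros(n), noise_cov)

naive = posterior_mean(y, A_coarse, noise_cov, prior_cov)
corrected = posterior_mean(y - err_mean, A_coarse, noise_cov + err_cov, prior_cov)
print(float(np.linalg.norm(corrected - x_true)))
```

The corrected estimate accounts for the model discrepancy instead of silently attributing it to measurement noise, which is the strategy the abstract refers to for handling uncertainties within the Bayesian framework.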
The Aalborg Model and The Problem
DEFF Research Database (Denmark)
Qvist, Palle
To know the definition of a problem has important implications for the possibility to identify and formulate the problem, the starting point of the learning process in the Aalborg Model. For certification it has been suggested that a problem grows out of students' wondering within differ...... – a wondering – that something is different from what is expected, something novel and unexpected or inexplicable; astonishment mingled with perplexity or bewildered curiosity?...
Directory of Open Access Journals (Sweden)
Joan C. Durrance
2006-01-01
Introduction. This article results from a qualitative study of (1) information behaviour in community problem-solving framed as a distributed information use environment and (2) approaches used by a best-practice library to anticipate information needs associated with community problem solving. Method. Several approaches to data collection were used: focus groups, interviews, observation of community and library meetings, and analysis of supporting documents. We focused first on the information behaviour of community groups. Finding that the library supported these activities, we sought to understand its approach. Analysis. Data were coded thematically for both information behaviour concepts and themes germane to problem-solving activity. A grounded theory approach was taken to capture aspects of the library staff's practice. Themes evolved from the data; supporting documentation (reports, articles and library communication) was also coded. Results. The study showed (1) how information use environment components (people, setting, problems, problem resolutions) combine in this distributed information use environment to determine specific information needs and uses; and (2) how the library contributed to the viability of this distributed information use environment. Conclusion. Community problem solving, here explicated as a distributed IUE, is likely to be seen in multiple communities. The library model presented demonstrates that by reshaping its information practice within the framework of an information use environment, a library can anticipate community information needs as they are generated and where they are most relevant.
Distribution-valued weak solutions to a parabolic problem arising in financial mathematics
Directory of Open Access Journals (Sweden)
Michael Eydenberg
2009-07-01
We study distribution-valued solutions to a parabolic problem that arises from a model of the Black-Scholes equation in option pricing. We give a minor generalization of known existence and uniqueness results for solutions in bounded domains $\Omega \subset \mathbb{R}^{n+1}$ to give existence of solutions for certain classes of distributions $f \in \mathcal{D}'(\Omega)$. We also study growth conditions for smooth solutions of certain parabolic equations on $\mathbb{R}^n \times (0,T)$ that have initial values in the space of distributions.
Bezier Curve Modeling for Neutrosophic Data Problem
Directory of Open Access Journals (Sweden)
Ferhat Tas
2017-02-01
The neutrosophic set concept is defined with membership, non-membership and indeterminacy degrees, and serves to represent and solve problems in various fields. In this paper, a geometric model is introduced for the neutrosophic data problem for the first time. This model is based on neutrosophic sets and neutrosophic relations. Neutrosophic control points are defined accordingly, resulting in neutrosophic Bezier curves.
Directional Overcurrent Relays Coordination Problems in Distributed Generation Systems
Directory of Open Access Journals (Sweden)
Jakub Ehrenberger
2017-09-01
This paper proposes a new approach to distributed generation system protection coordination based on directional overcurrent protections with inverse-time characteristics. The key question of protection coordination is the determination of correct values of all inverse-time characteristic coefficients. The coefficients must be chosen to ensure sufficiently short tripping times and sufficiently long selectivity times. In the paper a new approach to protection coordination is designed, in which not only some, but all the required types of short-circuit contributions are taken into account. In radial systems, if the pickup currents are correctly chosen, protection coordination for maximum contributions is enough to ensure selectivity times for all the required short-circuit types. In distributed generation systems, due to different contributions flowing through the primary and selective protections, coordination for maximum contributions is not enough; all the short-circuit types must be taken into account, and protection coordination becomes a complex problem. A possible solution to the problem, based on an appropriately designed optimization, is proposed in the paper. By repeating a simple optimization considering only one short-circuit type, protection coordination considering all the required short-circuit types has been achieved. To show the importance of considering all the types of short-circuit contributions, setting optimizations with one (the highest) and all the types of short-circuit contributions have been performed. Finally, selectivity time values are explored throughout the entire protected section, and both settings are compared.
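Inverse-time characteristics of the kind coordinated here are commonly of the IEC standard-inverse form t = TMS * 0.14 / ((I/Ip)^0.02 - 1). As a hedged illustration (the pickup currents, time multiplier settings, and grading margin below are invented, not taken from the paper), a coordination check verifies that a backup relay stays slower than the primary relay by at least the selectivity margin across several fault-current levels:

```python
def trip_time(i_fault, i_pickup, tms, a=0.14, b=0.02):
    """IEC standard-inverse overcurrent characteristic (time in seconds)."""
    return tms * a / ((i_fault / i_pickup) ** b - 1.0)

# Hypothetical primary/backup relay pair and selectivity (grading) margin.
primary = dict(i_pickup=200.0, tms=0.05)
backup = dict(i_pickup=300.0, tms=0.2)
margin = 0.3  # seconds

# Check coordination over a range of fault currents, as the paper argues
# one must do for every relevant short-circuit contribution, not just the maximum.
for i_fault in (1000.0, 2000.0, 5000.0):
    t_p = trip_time(i_fault, **primary)
    t_b = trip_time(i_fault, **backup)
    print(f"{i_fault:6.0f} A: primary {t_p:.3f} s, backup {t_b:.3f} s, "
          f"selective={t_b - t_p >= margin}")
```

Note how the time gap between the two curves shrinks as the fault current grows, which is why checking only one contribution level is insufficient.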
Distributional equity problems at the proposed Yucca Mountain facility
International Nuclear Information System (INIS)
Kasperson, R.E.; Abdollahzadeh, S.
1988-07-01
This paper addresses one quite specific part of this broad range of issues: the distribution of impacts to the state of Nevada and to the nation likely to be associated with the proposed Yucca Mountain repository. As such, it is one of four needed analyses of the overall equity problems and needs to be read in conjunction with our proposed overall framework for equity studies. The objective of this report is to consider how an analysis might be made of the distribution of projected outcomes between the state and the nation. At the same time, it needs to be clear that no attempt will be made to actually implement the analysis that is proposed. What follows is a conceptual statement that identifies the analytical issues and proposes an approach for overcoming them. Significantly, it must also be noted that this report will not address procedural equity issues between the state and the nation, for this is the subject of a separate analysis. 14 refs., 8 figs., 3 tabs.
Real-time modeling of heat distributions
Hamann, Hendrik F.; Li, Hongfei; Yarlanki, Srinivas
2018-01-02
Techniques for real-time modeling of temperature distributions based on streaming sensor data are provided. In one aspect, a method for creating a three-dimensional temperature distribution model for a room having a floor and a ceiling is provided. The method includes the following steps. A ceiling temperature distribution in the room is determined. A floor temperature distribution in the room is determined. An interpolation between the ceiling temperature distribution and the floor temperature distribution is used to obtain the three-dimensional temperature distribution model for the room.
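The interpolation step described above can be sketched as a linear blend between the floor and ceiling temperature maps. This is a minimal sketch, not the patented method; the sensor grids, room height, and linear weighting are invented for illustration:

```python
import numpy as np

def temperature_model(ceiling, floor, heights, room_height):
    """Linearly interpolate a 3-D temperature field between a floor
    temperature map (z = 0) and a ceiling map (z = room_height).

    ceiling, floor: 2-D arrays on the same x-y grid.
    heights: z coordinates at which to evaluate the field.
    Returns an array of shape (len(heights),) + floor.shape.
    """
    w = np.asarray(heights)[:, None, None] / room_height  # 0 at floor, 1 at ceiling
    return (1.0 - w) * floor[None, :, :] + w * ceiling[None, :, :]

# Illustrative 2 x 2 sensor grids (degrees Celsius).
floor = np.array([[18.0, 19.0], [18.5, 19.5]])
ceiling = np.array([[24.0, 25.0], [24.5, 25.5]])

field = temperature_model(ceiling, floor, heights=[0.0, 1.5, 3.0], room_height=3.0)
print(field[1])  # mid-height plane: elementwise midpoint of floor and ceiling
```

Streaming sensor updates would simply replace the two boundary maps and re-evaluate, which is what makes the interpolation cheap enough for real time.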
Improved Testing of Distributed Lag Model in Presence of ...
African Journals Online (AJOL)
Finite distributed lag models (DLM) are often used in econometrics and statistics. Applying ordinary least squares (OLS) directly to the DLM for estimation may cause serious problems. To overcome these problems, some alternative estimation procedures are available in the literature. One popular method to ...
Multi-choice stochastic transportation problem involving general form of distributions.
Quddoos, Abdul; Ull Hasan, Md Gulzar; Khalid, Mohammad Masood
2014-01-01
Many authors have presented studies of the multi-choice stochastic transportation problem (MCSTP) where availability and demand parameters follow a particular probability distribution (such as exponential, Weibull, Cauchy or extreme value). In this paper an MCSTP is considered where availability and demand parameters follow a general form of distribution, and a generalized equivalent deterministic model (GMCSTP) of the MCSTP is obtained. It is also shown that all previous models obtained by different authors can be deduced with the help of the GMCSTP. MCSTPs with Pareto, power function or Burr-XII distributions are also considered and equivalent deterministic models are obtained. To illustrate the proposed model two numerical examples are presented and solved using the LINGO 13.0 software package.
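One common way such deterministic equivalents arise: a chance constraint P(availability >= a) >= 0.95 reduces to bounding a by a quantile of the availability distribution. A minimal sketch of that reduction, assuming a Weibull availability with its closed-form inverse CDF (the parameters and the 95% confidence level are invented, not taken from the paper):

```python
import math

def weibull_quantile(p, shape, scale):
    """Inverse CDF of the Weibull distribution, F^-1(p)."""
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)

# Chance constraint P(availability >= a) >= 0.95 on a stochastic supply:
# 1 - F(a) >= 0.95  <=>  a <= F^-1(0.05), so the deterministic equivalent
# caps the planned shipment at the 5% quantile of the availability.
risk = 0.05
a_max = weibull_quantile(risk, shape=2.0, scale=100.0)
print(round(a_max, 2))  # → 22.65
```

Swapping in a different inverse CDF (Pareto, power function, Burr-XII) changes only `weibull_quantile`, which mirrors how the general model specializes to the particular distributions listed in the abstract.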
Economic problems of the distribution of gas. Ekonomicheskie problemy raspredeleniya gaza
Energy Technology Data Exchange (ETDEWEB)
Ryps, G S
1978-01-01
Statistical data are presented which characterize the distribution and utilization of gas for the period 1965-1975. Basic trends for gas distribution in the 10th five-year plan up to 1980 are also defined. Principal attention is given to the use of quantitative methods for solving economic problems of gas resource distribution. A method is suggested for solving problems by the use of mathematical modeling wherein the criterion of optimality is the maximum of national economic effectiveness. The systems approach toward the study of the energy-economic phenomenon and the hierarchical principle make it possible to effect an exchange of information between tasks at various territorial and sector levels. A special chapter is concerned with the design and use of closing outlays for fuel and its refined products. Examples are given that illustrate the solution of the problem of the distribution of gas and other types of fuels within an economic region. A determination is made of the consistency of gas-line and branch-line construction and the effective depth for the gasification of populated areas. The book is intended for personnel at design-research organizations and economic-planning bodies engaged in problems concerned with the distribution of energy carriers. It will also be useful to students at institutions of higher learning. 78 references, 13 figures, 61 tables.
Directory of Open Access Journals (Sweden)
Mi Gan
2018-01-01
The rapid growth of logistics distribution highlights several problems in China, including the imperfect infrastructure of the logistics distribution network, the serious shortage of distribution capacity of individual enterprises, and the high cost of distribution. While the development of the sharing economy makes it possible to integrate the logistics resources of the whole society, big data technology can capture customers' logistics demand accurately by analyzing their distribution preferences, which contributes to the integration and optimization of the whole set of logistics resources. This paper proposes a kind of intensive distribution logistics network considering the sharing economy, which assumes that all social logistics suppliers build a strategic alliance and that individual idle logistics resources are also used to meet distribution needs. Customer shopping behavior is analyzed with big data technology to determine each customer's logistics preference, classified as high speed, low cost, or low pollution, and the corresponding objective function is constructed for each preference. We obtain the intensive distribution logistics network model and solve it with a heuristic algorithm. Furthermore, this paper analyzes the mechanism of interest distribution among the participants in the distribution network and puts forward an improved interval Shapley value method considering both satisfaction and contribution, with a case verifying the feasibility and effectiveness of the model. The results showed that, compared with the traditional Shapley method, the distribution coefficient calculated by the improved model is fairer, improves stakeholder satisfaction, and promotes the sustainable development of the alliance.
Modeling visual problem solving as analogical reasoning.
Lovett, Andrew; Forbus, Kenneth
2017-01-01
We present a computational model of visual problem solving, designed to solve problems from the Raven's Progressive Matrices intelligence test. The model builds on the claim that analogical reasoning lies at the heart of visual problem solving, and intelligence more broadly. Images are compared via structure mapping, aligning the common relational structure in 2 images to identify commonalities and differences. These commonalities or differences can themselves be reified and used as the input for future comparisons. When images fail to align, the model dynamically rerepresents them to facilitate the comparison. In our analysis, we find that the model matches adult human performance on the Standard Progressive Matrices test, and that problems which are difficult for the model are also difficult for people. Furthermore, we show that model operations involving abstraction and rerepresentation are particularly difficult for people, suggesting that these operations may be critical for performing visual problem solving, and reasoning more generally, at the highest level. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Problem Solving Model for Science Learning
Alberida, H.; Lufri; Festiyed; Barlian, E.
2018-04-01
This research aims to develop a problem-solving model for science learning in junior high school. The learning model was developed using the ADDIE model. The analysis phase includes curriculum analysis, analysis of students of SMP Kota Padang, analysis of SMP science teachers, learning analysis, as well as a literature review. The design phase includes planning a science-learning problem-solving model, which consists of syntax, reaction principle, social system, support system, and instructional impact and support. The problem-solving model is implemented in science learning to improve students' science process skills. The development stage consists of three steps: a) designing a prototype, b) performing a formative evaluation and c) revising the prototype. The implementation stage was done through a limited trial, conducted on 24 and 26 August 2015 in Class VII 2 of SMPN 12 Padang. The evaluation phase was conducted in the form of experiments at SMPN 1 Padang, SMPN 12 Padang and SMP National Padang. Based on the development research, the syntax of the problem-solving model for science learning at junior high school consists of introduction, observation, initial problems, data collection, data organization, data analysis/generalization, and communication.
A Hybrid Autonomic Computing-Based Approach to Distributed Constraint Satisfaction Problems
Directory of Open Access Journals (Sweden)
Abhishek Bhatia
2015-03-01
Distributed constraint satisfaction problems (DisCSPs) are among the widely studied problems using agent-based simulation. Fernandez et al. formulated the sensor and mobile tracking problem as a DisCSP, known as SensorDCSP. In this paper, we adopt a customized ERE (environment, reactive rules and entities) algorithm for SensorDCSP, which is otherwise proven to be a computationally intractable problem. An amalgamation of the autonomy-oriented computing (AOC) based algorithm (ERE) and a genetic algorithm (GA) provides an early solution of the modeled DisCSP. Incorporation of the GA into ERE facilitates auto-tuning of the simulation parameters, thereby leading to an early solution of constraint satisfaction. This study further contributes a model, built in the NetLogo simulation environment, to infer the efficacy of the proposed approach.
Problems In Indoor Mapping and Modelling
Zlatanova, S.; Sithole, G.; Nakagawa, M.; Zhu, Q.
2013-11-01
Research in support of indoor mapping and modelling (IMM) has been active for over thirty years. This research has come in the form of as-built surveys, data structuring, visualisation techniques, navigation models and so forth. Much of this research is founded on advancements in photogrammetry, computer vision and image analysis, computer graphics, robotics, laser scanning and many others. While IMM used to be the preserve of engineers, planners, consultants, contractors, and designers, this is no longer the case, as commercial enterprises and individuals are also beginning to apply indoor models in their business processes and applications. There are three main reasons for this. Firstly, the last two decades have seen greater use of spatial information by enterprises and the public. Secondly, IMM has been complemented by advancements in mobile computing and internet communications, making it easier than ever to access and interact with spatial information. Thirdly, indoor modelling has been advanced geometrically and semantically, opening doors for developing user-oriented, context-aware applications. This reshaping of the public's attitude and expectations with regard to spatial information has realised new applications and spurred demand for indoor models and the tools to use them. This paper examines the present state of IMM and considers the research areas that deserve attention in the future. In particular the paper considers problems in IMM that are relevant to commercial enterprises and the general public, groups this paper expects will emerge as the greatest users of IMM. The subject of indoor modelling and mapping is discussed here in terms of acquisition and sensors, data structures and modelling, visualisation, applications, legal issues and standards. Problems are discussed in terms of those that exist and those that are emerging. Existing problems are those that are currently being researched. Emerging problems are those problems or demands that are
Finding Multiple Optimal Solutions to Optimal Load Distribution Problem in Hydropower Plant
Directory of Open Access Journals (Sweden)
Xinhao Jiang
2012-05-01
Optimal load distribution (OLD) among the generator units of a hydropower plant is a vital task for hydropower generation scheduling and management. Traditional optimization methods for solving this problem focus on finding a single optimal solution. However, many practical constraints on hydropower plant operation are very difficult, if not impossible, to model, and the optimal solution found by those models might be of limited practical use. This motivates us to find multiple optimal solutions to the OLD problem, which can provide more flexible choices for decision-making. Based on a special dynamic programming model, we use a modified shortest path algorithm to produce multiple solutions to the problem. It is shown that multiple optimal solutions exist for the case study of China's Geheyan hydropower plant, and they are valuable for assessing the stability of generator units, showing the potential of reducing the number of times units cross vibration areas.
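The core idea of extracting several optimal solutions from a dynamic programming model can be illustrated on a toy DAG. This sketch is not the authors' algorithm; the graph and costs are invented. During the forward pass, every tying predecessor is kept, and backtracking then enumerates every minimum-cost path:

```python
from collections import defaultdict

def all_optimal_paths(edges, source, target):
    """Dynamic programming over a DAG whose integer node labels are
    already in topological order; cost ties are recorded so every
    minimum-cost path can be reconstructed by backtracking."""
    best = {source: 0.0}
    preds = defaultdict(list)
    for u, v, w in sorted(edges):  # ascending u == topological order here
        if u not in best:
            continue
        cand = best[u] + w
        if v not in best or cand < best[v] - 1e-12:
            best[v] = cand
            preds[v] = [u]
        elif abs(cand - best[v]) <= 1e-12:
            preds[v].append(u)     # tie: remember every optimal predecessor
    def backtrack(v):
        if v == source:
            return [[source]]
        return [path + [v] for u in preds[v] for path in backtrack(u)]
    return best[target], backtrack(target)

# Hypothetical scheduling DAG: three distinct routes tie at total cost 3.
edges = [(0, 1, 1.0), (0, 2, 2.0), (1, 2, 1.0), (1, 3, 2.0), (2, 3, 1.0)]
cost, paths = all_optimal_paths(edges, 0, 3)
print(cost, sorted(paths))  # → 3.0 [[0, 1, 2, 3], [0, 1, 3], [0, 2, 3]]
```

Presenting all tying schedules rather than one gives the operator the flexible choices the abstract describes, e.g. preferring the alternative that keeps units out of vibration areas.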
Comparative Distributions of Hazard Modeling Analysis
Directory of Open Access Journals (Sweden)
Rana Abdul Wajid
2006-07-01
In this paper we present a comparison among the distributions used in hazard analysis. A simulation technique has been used to study the behavior of hazard distribution models. The fundamentals of hazard issues are discussed using failure criteria. We present the flexibility of the hazard modeling distribution in approximating different distributions.
One-dimensional computational modeling on nuclear reactor problems
International Nuclear Information System (INIS)
Alves Filho, Hermes; Baptista, Josue Costa; Trindade, Luiz Fernando Santos; Heringer, Juan Diego dos Santos
2013-01-01
In this article, we present a computational model which gives a dynamic view of some applications of Nuclear Engineering, specifically power distribution and effective multiplication factor (keff) calculations. We work with one-dimensional problems of deterministic neutron transport theory, with the linearized Boltzmann equation in the discrete ordinates (SN) formulation, independent of time, with isotropic scattering, and we built a software simulator for modeling the computational problems used in typical calculations. The program used in the implementation of the simulator was Matlab, version 7.0. (author)
Problems with models of the radiation belts
International Nuclear Information System (INIS)
Daly, E.J.; Lemaire, J.; Heynderickx, D.; Rodgers, D.J.
1996-01-01
The current standard models of the radiation-belt environment have many shortcomings, not the least of which is their extreme age. Most of the data used for them were acquired in the 1960's and early 1970's. Problems with the present models, and the ways in which data from more recent missions are being or can be used to create new models with improved functionality, are described. The phenomenology of the radiation belts, the effects on space systems, and geomagnetic coordinates and modeling are discussed. Errors found in present models, their functional limitations, and problems with their implementation and use are detailed. New modeling must address problems at low altitudes with the south Atlantic anomaly, east-west asymmetries and solar cycle variations and at high altitudes with the highly dynamic electron environment. The important issues in space environment modeling from the point of view of usability and relationship with effects evaluation are presented. New sources of data are discussed. Future requirements in the data, models, and analysis tools areas are presented
Directory of Open Access Journals (Sweden)
Kenan Karagül
2014-07-01
In this study, the Fleet Size and Mix Vehicle Routing Problem is considered in order to optimize the distribution of tourists traveling between the airport and the hotels over the shortest distance at minimum cost. The initial solution space for the related methods is formed as a combination of the Savings algorithm, the Sweep algorithm and random permutation alignment. Then, two well-known solution methods, standard genetic algorithms and random search algorithms, are used to improve the initial solutions. Computational power and heuristic algorithms are used instead of human experience and intuition to solve the problem of distributing tourists arriving at Antalya airport to hotels in the Alanya region. For this case study, daily data of tourist distributions performed by an agency operating in the Alanya region are considered. These distributions are then modeled as a Vehicle Routing Problem to calculate the solutions for various applications. From comparisons with the decisions of a human expert, it is seen that the proposed methods produce better solutions with respect to human experience and insight. The random search method produces a solution more favorable in terms of time. In conclusion, owing to the distribution plans offered by the obtained solutions, agencies may reduce costs by achieving savings of up to 35%.
Implementing Problem Resolution Models in Remedy
Marquina, M A; Ramos, R
2000-01-01
This paper defines the concept of a Problem Resolution Model (PRM) and describes the current implementation made by the User Support unit at CERN. One of the main challenges of User Support services in any High Energy Physics institute/organization is to address the solving of the computing-related problems faced by their researchers. The User Support group at CERN is the IT unit in charge of modeling the operations of the Help Desk and acts as a second-level support to some of the support lines whose problems are received at the Help Desk. The motivation behind the use of a PRM is to provide well defined procedures and methods to react in an efficient way to a request for solving a problem, providing advice, information etc. A PRM is materialized as a workflow which has a set of defined states in which a problem can be. Problems move from one state to another according to actions decided by the person who is handling them. A PRM can be implemented by a computer application, generally referred to as a Problem Report...
Optimizing a Biobjective Production-Distribution Planning Problem Using a GRASP
Directory of Open Access Journals (Sweden)
Martha-Selene Casas-Ramírez
2018-01-01
This paper addresses a biobjective production-distribution planning problem. The problem is formulated as a mixed integer programming problem with two objectives: to minimize the total costs and to balance the total workload of the supply chain, which consists of plants and depots, representing a vertically integrated company. In order to solve the model, we propose an adapted biobjective GRASP to obtain an approximation of the Pareto front. To evaluate the performance of the proposed algorithm, numerical experiments are conducted over a set of instances used for similar problems. Results indicate that the proposed GRASP obtains a relatively small number of nondominated solutions for each tested instance in very short computational time. The approximated Pareto fronts are discontinuous and nonconvex. Moreover, the solutions clearly show the compromise between both objective functions.
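The Pareto-front bookkeeping inside such a biobjective GRASP can be sketched as a nondominated filter over (cost, workload-imbalance) pairs. The candidate values below are invented for illustration and are not from the paper's instances:

```python
def dominates(a, b):
    """True if a is at least as good as b in both minimized objectives
    and strictly better in at least one."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(points):
    """Keep only nondominated (total cost, workload imbalance) pairs."""
    return sorted(p for p in points if not any(dominates(q, p) for q in points))

# Hypothetical objective values produced by successive GRASP iterations.
candidates = [(10.0, 5.0), (8.0, 7.0), (10.0, 4.0), (12.0, 3.0), (9.0, 7.0)]
print(pareto_front(candidates))  # → [(8.0, 7.0), (10.0, 4.0), (12.0, 3.0)]
```

The surviving points form exactly the kind of approximated front the abstract describes: no member improves one objective without worsening the other, making the cost-versus-workload compromise explicit.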
Electric Power Distribution System Model Simplification Using Segment Substitution
Energy Technology Data Exchange (ETDEWEB)
Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; Reed, Gregory F.
2018-05-01
Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
Electric Power Distribution System Model Simplification Using Segment Substitution
International Nuclear Information System (INIS)
Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; Reed, Gregory F.
2017-01-01
Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
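As a rough illustration of what a QSTS loop does (repeatedly re-solving a load flow as the load changes from one time step to the next), here is a deliberately tiny two-bus sketch. The feeder impedance, load profile, and fixed-point solver are all assumptions for illustration; they are not part of the segment substitution method itself.

```python
# Toy QSTS loop on a hypothetical two-bus feeder: source at 1.0 pu behind
# a series impedance z (pu). At each time step the load p (pu) changes and
# the load-flow fixed point V = 1.0 - z * (p / V) is re-solved, warm-started
# from the previous step's solution, as is typical in QSTS simulation.
def solve_load_flow(p, z=0.05, v0=1.0, tol=1e-10):
    v = v0
    for _ in range(100):
        v_new = 1.0 - z * (p / v)
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    return v

load_profile = [0.4, 0.6, 0.9, 0.7]   # assumed pu load at each time step
voltages = []
v = 1.0
for p in load_profile:
    v = solve_load_flow(p, v0=v)      # warm start from last solution
    voltages.append(round(v, 4))
print(voltages)
```

The computational burden discussed above comes from running such a solve for a full system model at every step of a long profile, which is why reducing the bus count via segment substitution pays off.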
Solving Vertex Cover Problem Using DNA Tile Assembly Model
Directory of Open Access Journals (Sweden)
Zhihua Chen
2013-01-01
Full Text Available DNA tile assembly models are a class of mathematically distributed and parallel biocomputing models based on DNA tiles. In previous works, tile assembly models have been proved to be Turing-universal; that is, such a system can do what a Turing machine can do. In this paper, we use tile systems to solve a computationally hard problem. Mathematically, we construct three tile subsystems, which can be combined together to solve the vertex cover problem. As a result, each of the proposed tile subsystems consists of Θ(1) types of tiles, and the assembly process is executed in a parallel way (like DNA's biological function in cells); thus the systems can generate the solution of the problem in linear time with respect to the size of the graph.
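For comparison with the tile-assembly construction, the vertex cover problem itself can be stated as a conventional exhaustive search. The sketch below is the ordinary brute-force algorithm (exponential in the worst case), not the parallel DNA computation described above; the example graph is invented.

```python
from itertools import combinations

def min_vertex_cover(vertices, edges):
    """Smallest set of vertices touching every edge (exhaustive search)."""
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            cover = set(subset)
            if all(u in cover or v in cover for u, v in edges):
                return cover
    return set(vertices)

edges = [(1, 2), (1, 3), (2, 3), (3, 4)]
print(min_vertex_cover([1, 2, 3, 4], edges))
```

The appeal of the tile-assembly approach is precisely that it sidesteps this exponential sequential search by exploring candidate covers in parallel.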
Analysis and Comparison of Typical Models within Distribution Network Design
DEFF Research Database (Denmark)
Jørgensen, Hans Jacob; Larsen, Allan; Madsen, Oli B.G.
Efficient and cost effective transportation and logistics plays a vital role in the supply chains of the modern world's manufacturers. Global distribution of goods is a very complicated matter as it involves many different distinct planning problems. The focus of this presentation is to demonstrate a number of important issues which have been identified when addressing the Distribution Network Design problem from a modelling angle. More specifically, we present an analysis of the research which has been performed in utilizing operational research in developing and optimising distribution systems.
Lamdjaya, T.; Jobiliong, E.
2017-01-01
PT Anugrah Citra Boga is a food processing company that produces meatballs as its main product. The distribution system for its products must be considered, because it needs to be more efficient in order to reduce shipment costs. The purpose of this research is to optimize the distribution time by simulating the distribution channels with the capacitated vehicle routing problem method. Firstly, the distribution route is observed in order to calculate the average speed, time capacity and shipping costs. Then the model is built using the AIMMS software. A few things that are required to simulate the model are the customer locations, distances, and process times. Finally, the total distribution cost obtained by the simulation is compared with the historical data. It is concluded that the company can reduce the shipping cost by around 4.1%, or Rp 529,800 per month. By using this model, the utilization rate can be made more optimal. The current value for the first vehicle is 104.6%, and after the simulation it becomes 88.6%. Meanwhile, the utilization rate of the second vehicle increases from 59.8% to 74.1%. The simulation model is able to produce the optimal shipping route under time restrictions, vehicle capacity, and the number of vehicles available.
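A minimal capacitated-routing sketch conveys the kind of model involved. This is a simple nearest-neighbour construction heuristic with invented customer data, not the AIMMS model of the study; it only illustrates how capacity limits split deliveries into routes.

```python
import math

def routes_nearest_neighbour(customers, Q):
    """Greedy route construction under a vehicle capacity Q.

    customers: list of (x, y, demand); vehicles start/end at depot (0, 0).
    """
    remaining = dict(enumerate(customers))
    routes = []
    while remaining:
        pos, load, route = (0.0, 0.0), 0.0, []
        while True:
            feasible = {i: c for i, c in remaining.items() if load + c[2] <= Q}
            if not feasible:
                break
            j = min(feasible, key=lambda i: math.dist(pos, feasible[i][:2]))
            x, y, d = remaining.pop(j)
            route.append(j)
            load += d
            pos = (x, y)
        routes.append(route)
    return routes

# Hypothetical customers: (x, y, demand in kg); capacity 100 kg per vehicle.
customers = [(1, 1, 40), (2, 0, 30), (5, 5, 50), (6, 4, 20), (0, 6, 35)]
print(routes_nearest_neighbour(customers, Q=100))
```

Utilization per vehicle then follows as route load divided by capacity, which is the quantity the study balances between its two vehicles.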
Mathematical model in economic environmental problems
Energy Technology Data Exchange (ETDEWEB)
Nahorski, Z. [Polish Academy of Sciences, Systems Research Inst. (Poland); Ravn, H.F. [Risoe National Lab. (Denmark)
1996-12-31
The report contains a review of basic models and mathematical tools used in economic regulation problems. It starts with a presentation of basic models of capital accumulation, resource depletion, pollution accumulation, and population growth, as well as the construction of utility functions. Then the one-state-variable model is discussed in detail. The basic mathematical methods used consist of the application of the maximum principle and phase-plane analysis of the differential equations obtained as the necessary conditions of optimality. A summary of basic results connected with these methods is given in the appendices. (au) 13 ills.; 17 refs.
Distributions with given marginals and statistical modelling
Fortiana, Josep; Rodriguez-Lallena, José
2002-01-01
This book contains a selection of the papers presented at the meeting `Distributions with given marginals and statistical modelling', held in Barcelona (Spain), July 17-20, 2000. In 24 chapters, this book covers topics such as the theory of copulas and quasi-copulas, the theory and compatibility of distributions, models for survival distributions and other well-known distributions, time series, categorical models, definition and estimation of measures of dependence, monotonicity and stochastic ordering, shape and separability of distributions, hidden truncation models, diagonal families, orthogonal expansions, tests of independence, and goodness of fit assessment. These topics share the use and properties of distributions with given marginals, this being the fourth specialised text on this theme. The innovative aspect of the book is the inclusion of statistical aspects such as modelling, Bayesian statistics, estimation, and tests.
Problem-Solving Methods for the Prospective Development of Urban Power Distribution Network
Directory of Open Access Journals (Sweden)
A. P. Karpenko
2014-01-01
Full Text Available This article succeeds the former publication by A. P. Karpenko and A. I. Kuzmina titled "A mathematical model of urban distribution electro-network considering its future development" (electronic scientific and technical magazine "Science and Education", No. 5, 2014). The article offers a model of an urban power distribution network as a set of transformer and distribution substations and cable lines. All elements of the network and new consumers are described by vectors of parameters associated with them. A problem of urban power distribution network design, taking into account the prospective development of the city, is presented as a problem of discrete programming. It consists in deciding on the optimal option to connect new consumers to the power supply network, on the number and sites of new substations to build, and on the option to include them in the power supply network. Two methods, namely a reduction method to a set of nested global minimization subtasks and a decomposition method, are offered to solve the problem. In the reduction method the problem of prospective development of the power supply network breaks into three subtasks of smaller dimension: a subtask to define the number and sites of new transformer and distribution substations, a subtask to define the option to connect new consumers to the power supply network, and a subtask to include new substations in the power supply network. The vector of the varied parameters is broken into three subvectors consistent with the subtasks. Each subtask is solved over the area of admissible values of its subvector of varied parameters, with the components of the subvectors obtained when solving the higher subtasks held fixed. In the decomposition method the task is presented as a set of three subtasks, similar to the reduction method, and a coordination problem. The coordination problem specifies the sequence in which the subtasks are solved and defines the moment of calculation termination. Coordination is realized by
Solving Dynamic Battlespace Movement Problems Using Dynamic Distributed Computer Networks
National Research Council Canada - National Science Library
Bradford, Robert
2000-01-01
.... The thesis designs a system using this architecture that invokes operations research network optimization algorithms to solve problems involving movement of people and equipment over dynamic road networks...
A problem of optimal control and observation for distributed homogeneous multi-agent system
Kruglikov, Sergey V.
2017-12-01
The paper considers the implementation of an algorithm for controlling a distributed complex of several mobile multi-robots. The concept of a unified information space of the controlling system is applied. The presented information and mathematical models of participants and obstacles, as real agents, and of goals and scenarios, as virtual agents, form the algorithmic and software basis for a computer decision support system. The controlling scheme assumes indirect management of the robotic team on the basis of an optimal control and observation problem predicting intellectual behavior in a dynamic, hostile environment. The basic model problem is compound cargo transportation by a group of participants under a distributed control scheme in terrain with multiple obstacles.
Distributing Flexibility to Enhance Robustness in Task Scheduling Problems
Wilmer, D.; Klos, T.B.; Wilson, M.
2013-01-01
Temporal scheduling problems occur naturally in many diverse application domains such as manufacturing, transportation, health and education. A scheduling problem arises if we have a set of temporal events (or variables) and some constraints on those events, and we have to find a schedule, which is
H-infinity Tracking Problems for a Distributed Parameter System
DEFF Research Database (Denmark)
Larsen, Mikael
1997-01-01
The thesis considers the problem of finding a finite-dimensional controller for an infinite-dimensional system (a tunnel pasteurizer), combined with a robustness analysis.
A Distributional Representation Model For Collaborative Filtering
Junlin, Zhang; Heng, Cai; Tongwen, Huang; Huiping, Xue
2015-01-01
In this paper, we propose a very concise deep learning approach for collaborative filtering that jointly models distributional representations for users and items. The proposed framework obtains better performance when compared against current state-of-the-art algorithms, which makes the distributional representation model a promising direction for further research in collaborative filtering.
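A classical latent-factor baseline illustrates what "distributional representations for users and items" means in collaborative filtering: each user and item is a learned vector, and a rating is predicted from their inner product. The sketch below is plain matrix factorisation trained by SGD on invented ratings, not the deep model of the paper; all hyperparameters are arbitrary.

```python
import random

random.seed(0)
# Observed ratings: (user, item) -> rating, on an invented 3x3 catalogue.
ratings = {(0, 0): 5, (0, 1): 3, (1, 0): 4, (1, 2): 1, (2, 1): 2, (2, 2): 5}
k, lr, reg = 2, 0.02, 0.01
P = {u: [random.gauss(0, 0.1) for _ in range(k)] for u in range(3)}  # users
Q = {i: [random.gauss(0, 0.1) for _ in range(k)] for i in range(3)}  # items

def predict(u, i):
    return sum(pu * qi for pu, qi in zip(P[u], Q[i]))

def sse():
    return sum((r - predict(u, i)) ** 2 for (u, i), r in ratings.items())

before = sse()
for _ in range(1000):                    # SGD epochs
    for (u, i), r in ratings.items():
        err = r - predict(u, i)
        for f in range(k):
            pu, qi = P[u][f], Q[i][f]
            P[u][f] += lr * (err * qi - reg * pu)
            Q[i][f] += lr * (err * pu - reg * qi)
after = sse()
print(after < before)
```

Deep approaches such as the paper's replace the inner product and the shallow embeddings with learned nonlinear mappings, but the representation idea is the same.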
Rapid Prototyping of Formally Modelled Distributed Systems
Buchs, Didier; Buffo, Mathieu; Titsworth, Frances M.
1999-01-01
This paper presents various kinds of prototypes, used in the prototyping of formally modelled distributed systems. It presents the notions of prototyping techniques and prototype evolution, and shows how to relate them to the software life-cycle. It is illustrated through the use of the formal modelling language for distributed systems CO-OPN/2.
Antamoshkin, O. A.; Kilochitskaya, T. R.; Ontuzheva, G. A.; Stupina, A. A.; Tynchenko, V. S.
2018-05-01
This study reviews the problem of allocation of resources in heterogeneous distributed information processing systems, which may be formalized as a multicriterion multi-index problem with linear constraints of the transport type. The algorithms for solving this problem involve a search for the entire set of Pareto-optimal solutions. For some classes of hierarchical systems, it is possible to significantly speed up the procedure of verifying a system of linear algebraic inequalities for consistency, owing to their reducibility to flow models or to the application of other solution schemes (for strongly connected structures) that take into account the specifics of the hierarchies under consideration.
Shao, Zhongshi; Pi, Dechang; Shao, Weishi
2017-11-01
This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance the local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design of experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
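The largest-order-value rule mentioned above can be sketched directly, together with the standard permutation flow-shop makespan recursion it feeds into: each job gets a continuous score, and the permutation visits jobs in decreasing score order, which is what lets a continuous EDA sample discrete schedules. The numbers below are illustrative, not the paper's benchmarks.

```python
def largest_order_value(x):
    """Decode a continuous vector into a job permutation (descending values)."""
    return sorted(range(len(x)), key=lambda j: -x[j])

def makespan(perm, proc):
    """proc[j][m] is the processing time of job j on machine m."""
    finish = [0.0] * len(proc[0])
    for j in perm:
        for m in range(len(finish)):
            start = max(finish[m], finish[m - 1] if m else 0.0)
            finish[m] = start + proc[j][m]
    return finish[-1]

x = [0.3, 1.7, 0.9, 1.2]                  # continuous EDA sample
perm = largest_order_value(x)             # -> [1, 3, 2, 0]
proc = [[2, 3], [1, 2], [4, 1], [3, 3]]   # 4 jobs, 2 machines
print(perm, makespan(perm, proc))
```

An EDA then biases its sampling distribution toward continuous vectors that decode to low-makespan permutations.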
New light on an old problem: Reflections on barriers and enablers of distributed energy
International Nuclear Information System (INIS)
Szatow, Anthony; Quezada, George; Lilley, Bill
2012-01-01
This viewpoint article, New light on an Old Problem, aims to stimulate thought and discussion on pathways to rapid emission reduction trajectories. It considers briefly the history of the Australian energy system and recent attempts to support emerging, distributed energy supply systems, before exploring the importance of new energy supply models and how they may emerge organically, ahead of further policy and regulatory shifts in Australia. The article is shaped by extensive primary research, literature review and engagement with policy makers, industry and community organisations, energy market institutions, colleagues and others over a period of four years. It outlines how new business models may reduce emissions ahead of policy and regulation, and the importance of keeping an open mind when considering ‘barriers’ to distributed energy. We hope this article will spark interest and dialogue with colleagues who may be experiencing and grappling with similar challenges. - Research highlights: ► We discuss documented barriers to distributed energy. ► We draw on socio-technical system literature and our research experience to outline a possible solution to distributed energy barriers. ► We describe a hypothetical energy service business model, led by the property sector, as a catalyst for energy market change. ► We outline reasons for our confidence in this property sector led energy services model.
Rationalisation of distribution functions for models of nanoparticle magnetism
International Nuclear Information System (INIS)
El-Hilo, M.; Chantrell, R.W.
2012-01-01
A formalism is presented which reconciles the use of different distribution functions of particle diameter in analytical models of the magnetic properties of nanoparticle systems. For the lognormal distribution a transformation is derived which shows that a distribution of volume fraction transforms into a lognormal distribution of particle number, albeit with a modified median diameter. This transformation resolves an apparent discrepancy reported in Tournus and Tamion [Journal of Magnetism and Magnetic Materials 323 (2011) 1118]. - Highlights: ► We resolve a problem resulting from a misunderstanding of the nature of dispersion functions in models of nanoparticle magnetism. ► The derived transformation between distributions will be of benefit in comparing models and experimental results.
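The kind of transformation described can be checked numerically for the lognormal case: weighting a lognormal number density by D³ (volume weighting) yields another lognormal with the same width and the log-median shifted by 3σ². The median diameter and width below are assumed purely for illustration, not taken from the paper.

```python
import math

def lognormal_pdf(d, mu, sigma):
    """Lognormal density with log-median mu and log-width sigma."""
    return math.exp(-(math.log(d) - mu) ** 2 / (2 * sigma ** 2)) / (
        d * sigma * math.sqrt(2 * math.pi))

mu, sigma = math.log(8.0), 0.35          # assumed: median 8 nm, width 0.35
ratios = []
for d in [2.0, 5.0, 8.0, 12.0, 20.0]:
    weighted = d ** 3 * lognormal_pdf(d, mu, sigma)          # volume-weighted
    shifted = lognormal_pdf(d, mu + 3 * sigma ** 2, sigma)   # shifted median
    ratios.append(weighted / shifted)
# If the property holds, the ratio is the same constant at every diameter.
print(all(abs(r - ratios[0]) < 1e-9 * ratios[0] for r in ratios))
```

Algebraically the constant ratio equals exp(3μ + 4.5σ²), which is why a volume-fraction distribution and a number distribution can both be lognormal with different median diameters.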
Problems in Modelling Charge Output Accelerometers
Directory of Open Access Journals (Sweden)
Tomczyk Krzysztof
2016-12-01
Full Text Available The paper presents major issues associated with the problem of modelling charge output accelerometers. The presented solutions are based on the weighted least squares (WLS) method using a transformation of the complex frequency response of the sensors. The main assumptions of the WLS method and a mathematical model of charge output accelerometers are presented in the first two sections of this paper. In the following sections, the application of the WLS method to the estimation of the accelerometer model parameters is discussed and the associated uncertainties are determined. Finally, the results of modelling a PCB357B73 charge output accelerometer are analysed in the last section of this paper. All calculations were executed using the MathCad software program. The main stages of these calculations are presented in Appendices A−E.
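A generic weighted-least-squares line fit illustrates the estimation principle used: observations with larger weights (for example w = 1/variance) pull the fit harder. This sketch is not the accelerometer frequency-response model itself; the data and weights are invented.

```python
def wls_line(xs, ys, ws):
    """Closed-form WLS fit of y = a + b*x with per-point weights ws."""
    sw = sum(ws)
    swx = sum(w * x for w, x in zip(ws, xs))
    swy = sum(w * y for w, y in zip(ws, ys))
    swxx = sum(w * x * x for w, x in zip(ws, xs))
    swxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    b = (sw * swxy - swx * swy) / (sw * swxx - swx ** 2)
    a = (swy - b * swx) / sw
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0, 5.0, 8.0, 11.0]                 # exactly y = 2 + 3x
a, b = wls_line(xs, ys, [1.0, 4.0, 1.0, 0.25])
print(round(a, 6), round(b, 6))
```

For noise-free data the weights do not change the answer; with noisy sensor responses they let the better-characterised frequency points dominate the parameter estimates.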
towards solving the problem of transmission and distribution of ...
African Journals Online (AJOL)
DISTRIBUTION OF ELECTRIC POWER IN NIGERIA VIA. SUPERCONDUCTOR POWER ... that Nigerian power transmission network is characterized by prolonged and .... (a) The design of superconducting cables generally includes flexibility ...
Smith, C. W.
1992-01-01
The adaptation of the frozen stress photoelastic method to the determination of the distribution of stress intensity factors in three dimensional problems is briefly reviewed. The method is then applied to several engineering problems of practical significance.
Business Models and Regulation | Distributed Generation Interconnection
Collaborative | NREL
The growing role of distributed resources in the electricity system is leading to a shift in business models and regulation for electric utilities. These
An Improved Distribution Policy with a Maintenance Aspect for an Urban Logistic Problem
Directory of Open Access Journals (Sweden)
Nadia Ndhaief
2017-07-01
Full Text Available In this paper, we present an improved distribution plan supporting an urban distribution center (UDC) to solve the last-mile problem of urban freight. This is motivated by the need of UDCs to satisfy daily demand in time under a high service level in their allocated urban areas. Moreover, these demands cannot always be satisfied by a UDC on its own, because the delivery rate can be less than the daily demand and/or affected by random failures or maintenance actions of vehicles. The scope of our work is a UDC which needs to satisfy demands over a finite horizon. To that end, we consider a distribution policy built on two sequential plans: a distribution plan correlated with a maintenance plan, using a subcontracting strategy with several potential urban distribution centers (UDCs) and performing preventive maintenance to ensure deliveries for their allocated urban area. The choice of subcontractor depends on distance, environmental and availability criteria. In doing so, we define a mathematical model for finding the best distribution and maintenance plans under a subcontracting strategy. Moreover, we allow delays to later periods with an expensive penalty. Finally, we present a numerical example illustrating the benefits of our approach.
Modeling crowdsourcing as collective problem solving
Guazzini, Andrea; Vilone, Daniele; Donati, Camillo; Nardi, Annalisa; Levnajić, Zoran
2015-11-01
Crowdsourcing is a process of accumulating the ideas, thoughts or information from many independent participants, with aim to find the best solution for a given challenge. Modern information technologies allow for massive number of subjects to be involved in a more or less spontaneous way. Still, the full potentials of crowdsourcing are yet to be reached. We introduce a modeling framework through which we study the effectiveness of crowdsourcing in relation to the level of collectivism in facing the problem. Our findings reveal an intricate relationship between the number of participants and the difficulty of the problem, indicating the optimal size of the crowdsourced group. We discuss our results in the context of modern utilization of crowdsourcing.
Optimization model for the design of distributed wastewater treatment networks
Directory of Open Access Journals (Sweden)
Ibrić Nidret
2012-01-01
Full Text Available In this paper we address the synthesis problem of distributed wastewater networks using a mathematical programming approach based on superstructure optimization. We present a generalized superstructure and optimization model for the design of distributed wastewater treatment networks. The superstructure includes splitters, treatment units and mixers, with all feasible interconnections including water recirculation. Based on the superstructure, the optimization model is presented as a nonlinear programming (NLP) problem whose objective function can be defined to minimize the total amount of wastewater treated in treatment operations or to minimize the total treatment costs. The NLP model is extended to a mixed integer nonlinear programming (MINLP) problem in which binary variables are used for the selection of the wastewater treatment technologies. The bounds for all flowrates and concentrations in the wastewater network are specified as general equations. The proposed models are solved using the global optimization solvers BARON and LINDOGlobal. The application of the proposed models is illustrated on two wastewater network problems of different complexity. The first is formulated as an NLP and the second as an MINLP. For the second, parametric and structural optimization is performed at the same time, so that optimal flowrates and concentrations as well as optimal technologies for the wastewater treatment are selected. Using the proposed model both problems are solved to global optimality.
A Formal Model and Verification Problems for Software Defined Networks
Directory of Open Access Journals (Sweden)
V. A. Zakharov
2013-01-01
Full Text Available Software-defined networking (SDN) is an approach to building computer networks that separates and abstracts the data planes and control planes of these systems. In an SDN a centralized controller manages a distributed set of switches. A set of open commands for packet forwarding and flow-table updating was defined in the form of a protocol known as OpenFlow. In this paper we describe an abstract formal model of SDN, introduce a tentative language for specification of SDN forwarding policies, and formally set up model-checking problems for SDN.
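The flavour of such verification problems can be conveyed with a toy forwarding-policy check: given each switch's flow table, verify the safety property that every packet eventually reaches its destination host without loops or black holes. The topology, table format, and property below are hypothetical simplifications, not the paper's formal model.

```python
def delivers(tables, start, dst, hosts):
    """Check that a packet for dst injected at start reaches dst."""
    node, seen = start, set()
    while node not in hosts:
        if node in seen or dst not in tables.get(node, {}):
            return False           # forwarding loop or black hole
        seen.add(node)
        node = tables[node][dst]   # follow the flow-table entry
    return node == dst

# Hypothetical two-switch network: each table maps destination -> next hop.
tables = {
    "s1": {"h1": "h1", "h2": "s2"},
    "s2": {"h1": "s1", "h2": "h2"},
}
hosts = {"h1", "h2"}
print(all(delivers(tables, s, h, hosts) for s in tables for h in hosts))
```

A real SDN model checker explores all interleavings of controller updates and packet events rather than a single static table, but the reachability question per packet is of this shape.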
Directory of Open Access Journals (Sweden)
J.S. Pahwa
2006-01-01
Full Text Available In the Biodiversity World (BDW project we have created a flexible and extensible Web Services-based Grid environment for biodiversity researchers to solve problems in biodiversity and analyse biodiversity patterns. In this environment, heterogeneous and globally distributed biodiversity-related resources such as data sets and analytical tools are made available to be accessed and assembled by users into workflows to perform complex scientific experiments. One such experiment is bioclimatic modelling of the geographical distribution of individual species using climate variables in order to explain past and future climate-related changes in species distribution. Data sources and analytical tools required for such analysis of species distribution are widely dispersed, available on heterogeneous platforms, present data in different formats and lack inherent interoperability. The present BDW system brings all these disparate units together so that the user can combine tools with little thought as to their original availability, data formats and interoperability. The new prototype BDW system architecture not only brings together heterogeneous resources but also enables utilisation of computational resources and provides a secure access to BDW resources via a federated security model. We describe features of the new BDW system and its security model which enable user authentication from a workflow application as part of workflow execution.
Distributed modeling for road authorities
Luiten, G.T.; Bõhms, H.M.; Nederveen, S. van; Bektas, E.
2013-01-01
A great challenge for road authorities is to improve the effectiveness and efficiency of their core processes by improving data exchange and sharing using new technologies such as building information modeling (BIM). BIM has already been successfully implemented in other sectors, such as
New trends in species distribution modelling
Zimmermann, Niklaus E.; Edwards, Thomas C.; Graham, Catherine H.; Pearman, Peter B.; Svenning, Jens-Christian
2010-01-01
Species distribution modelling has its origin in the late 1970s when computing capacity was limited. Early work in the field concentrated mostly on the development of methods to model effectively the shape of a species' response to environmental gradients (Austin 1987, Austin et al. 1990). The methodology and its framework were summarized in reviews 10–15 yr ago (Franklin 1995, Guisan and Zimmermann 2000), and these syntheses are still widely used as reference landmarks in the current distribution modelling literature. However, enormous advancements have occurred over the last decade, with hundreds – if not thousands – of publications on species distribution model (SDM) methodologies and their application to a broad set of conservation, ecological and evolutionary questions. With this special issue, originating from the third of a set of specialized SDM workshops (2008 Riederalp) entitled 'The Utility of Species Distribution Models as Tools for Conservation Ecology', we reflect on current trends and the progress achieved over the last decade.
Energy-momentum distribution: A crucial problem in general relativity
Sharif, M.; Fatima, T.
2005-01-01
This paper is aimed to elaborate the problem of energy–momentum in general relativity. In this connection, we use the prescriptions of Einstein, Landau–Lifshitz, Papapetrou and Möller to compute the energy–momentum densities for two exact solutions of Einstein field equations. The space–times under
Economic Models and Algorithms for Distributed Systems
Neumann, Dirk; Altmann, Jorn; Rana, Omer F
2009-01-01
Distributed computing models for sharing resources such as Grids, Peer-to-Peer systems, or voluntary computing are becoming increasingly popular. This book intends to discover fresh avenues of research and amendments to existing technologies, aiming at the successful deployment of commercial distributed systems
Distributionally Robust Joint Chance Constrained Problem under Moment Uncertainty
Directory of Open Access Journals (Sweden)
Ke-wei Ding
2014-01-01
Full Text Available We discuss and develop the convex approximation for robust joint chance constraints under uncertainty of the first- and second-order moments. Robust chance constraints are approximated by Worst-Case CVaR constraints, which can be reformulated as a semidefinite program. The chance constrained problem can then be presented as a semidefinite program as well. We also find that the approximation for robust joint chance constraints has an equivalent individual quadratic approximation form.
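The risk measure underlying the approximation can be illustrated with its empirical version: CVaR at level α (expected shortfall) is the mean of the worst (1 − α) fraction of losses. The paper's Worst-Case CVaR takes the supremum of this quantity over a moment-based ambiguity set of distributions; the sample sketch below, with invented losses, only shows what the measure evaluates.

```python
import math

def cvar(losses, alpha):
    """Empirical CVaR: mean of the worst (1 - alpha) share of losses."""
    tail = max(1, math.ceil((1 - alpha) * len(losses)))
    worst = sorted(losses, reverse=True)[:tail]
    return sum(worst) / len(worst)

losses = [1.0, 2.0, 2.5, 3.0, 4.0, 4.5, 5.0, 6.0, 8.0, 12.0]
print(cvar(losses, alpha=0.8))
```

Constraining CVaR ≤ 0 is a conservative convex surrogate for requiring the chance constraint to hold with probability at least α, which is what makes the semidefinite reformulation possible.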
Distributional, differential and integral problems: Equivalence and existence results
Czech Academy of Sciences Publication Activity Database
Monteiro, Giselle Antunes; Satco, B. R.
2017-01-01
Roč. 2017, č. 7 (2017), s. 1-26 ISSN 1417-3875 Institutional support: RVO:67985840 Keywords: derivative with respect to functions * distribution * Kurzweil-Stieltjes integral Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.926, year: 2016 http://www.math.u-szeged.hu/ejqtde/periodica.html?periodica=1&paramtipus_ertek=publication&param_ertek=4753
Directional Overcurrent Relays Coordination Problems in Distributed Generation Systems
Jakub Ehrenberger; Jan Švec
2017-01-01
This paper proposes a new approach to the distributed generation system protection coordination based on directional overcurrent protections with inverse-time characteristics. The key question of protection coordination is the determination of correct values of all inverse-time characteristics coefficients. The coefficients must be correctly chosen considering the sufficiently short tripping times and the sufficiently long selectivity times. In the paper a new approach to protection coordinat...
Distributed simulation a model driven engineering approach
Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent
2016-01-01
Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.
Ebola Virus Infection Modelling and Identifiability Problems
Directory of Open Access Journals (Sweden)
Van-Kinh Nguyen
2015-04-01
Full Text Available The recent outbreaks of Ebola virus (EBOV) infections have underlined the impact of the virus as a major threat to human health. Due to the high biosafety classification of EBOV (level 4), basic research is very limited. Therefore, the development of new avenues of thinking to advance quantitative comprehension of the virus and its interaction with host cells is urgently needed to tackle this lethal disease. Mathematical modelling of EBOV dynamics can be instrumental for interpreting Ebola infection kinetics on quantitative grounds. To the best of our knowledge, a mathematical modelling approach to unravel the interaction between EBOV and the host cells is still missing. In this paper, a mathematical model based on differential equations is used to represent the basic interactions between EBOV and wild-type Vero cells in vitro. Parameter sets that represent the infectivity of pathogens are estimated for EBOV infection and compared with influenza virus infection kinetics. The average infecting time of wild-type Vero cells by EBOV is slower than in influenza infection. Simulation results suggest that the slow infecting time of EBOV could be compensated by its efficient replication. This study reveals several identifiability problems and indicates what kind of experiments are necessary to advance the quantification of EBOV infection. A first mathematical approach to EBOV dynamics and the estimation of standard parameters in viral infection kinetics is the key contribution of this work, paving the way for future modelling of EBOV infection.
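Differential-equation models of this kind are typically variants of the target-cell-limited infection model: susceptible cells T, infected cells I, and free virus V. The forward-Euler sketch below uses assumed illustrative rate constants, not the paper's estimated EBOV parameters.

```python
def simulate(beta=1e-6, delta=0.5, p=100.0, c=3.0, dt=0.01, days=10.0):
    """Forward-Euler integration of a target-cell-limited infection model."""
    T, I, V = 1e6, 0.0, 10.0           # target cells, infected cells, virus
    for _ in range(int(days / dt)):
        dT = -beta * T * V             # infection of target cells
        dI = beta * T * V - delta * I  # gain and death of infected cells
        dV = p * I - c * V             # virion production and clearance
        T, I, V = T + dT * dt, I + dI * dt, V + dV * dt
    return T, I, V

T, I, V = simulate()
print(T < 1e6, V > 10.0)
```

Identifiability problems arise because different combinations of β, p, and c can produce nearly the same viral-load curve, which is why the choice of measured quantities and sampling times matters so much.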
Sample sizes and model comparison metrics for species distribution models
B.B. Hanberry; H.S. He; D.C. Dey
2012-01-01
Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....
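The kappa statistic referred to here can be computed directly from a presence/absence confusion matrix: it compares observed agreement with the agreement expected by chance from the marginal totals. The counts below are invented for illustration.

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa from a 2x2 confusion matrix of predictions vs observations."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                   # observed agreement
    pe = ((tp + fp) * (tp + fn) +
          (fn + tn) * (fp + tn)) / n ** 2                # chance agreement
    return (po - pe) / (1 - pe)

print(round(cohens_kappa(tp=40, fp=10, fn=5, tn=45), 4))
```

The controversy mentioned above stems partly from kappa's sensitivity to prevalence: the same model evaluated on data with different presence/absence ratios can receive quite different kappa scores.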
Mathematical problems in modeling artificial heart
Directory of Open Access Journals (Sweden)
Ahmed N. U.
1995-01-01
Full Text Available In this paper we discuss some problems arising in mathematical modeling of artificial hearts. The hydrodynamics of blood flow in an artificial heart chamber is governed by the Navier-Stokes equation, coupled with an equation of hyperbolic type subject to moving boundary conditions. The flow is induced by the motion of a diaphragm (membrane) inside the heart chamber, attached to a part of the boundary and driven by a compressor (pusher plate). On one side of the diaphragm is the blood and on the other side is the compressor fluid. For a complete mathematical model it is necessary to write the equation of motion of the diaphragm and all the dynamic couplings that exist between its position, velocity and the blood flow in the heart chamber. This gives rise to a system of coupled nonlinear partial differential equations; the Navier-Stokes equation being of parabolic type and the equation for the membrane being of hyperbolic type. The system is completed by introducing all the necessary static and dynamic boundary conditions. The ultimate objective is to control the flow pattern so as to minimize hemolysis (damage to red blood cells) by optimal choice of geometry, and by optimal control of the membrane for a given geometry. Other clinical problems, such as compatibility of the material used in the construction of the heart chamber and the membrane, are not considered in this paper. The dynamics of the valve is also not considered here, though it too is an important element in the overall design of an artificial heart. We hope to model the valve dynamics in a later paper.
Coastal erosion problem, modelling and protection
Yılmaz, Nihal; Balas, Lale; İnan, Asu
2015-09-01
Göksu Delta, located in the south of Silifke County of Mersin on the coastal plain formed by Göksu River, is one of the Specially Protected Areas in Turkey. Along the coastal area of the Delta, coastline changes at significant rates are observed, concentrating especially at four regions; headland of İncekum, coast of Paradeniz Lagoon, river mouth of Göksu and coast of Altınkum. The coast of Paradeniz Lagoon is suffering significantly from erosion and the consequent coastal retreating problem. Therefore, the narrow barrier beach which separates Paradeniz Lagoon from the Mediterranean Sea is getting narrower, creating a risk of uniting with the sea, thus causing the disappearance of the Lagoon. The aim of this study was to understand the coastal transport processes along the coastal area of Göksu Delta to determine the coastal sediment transport rates, and accordingly, to propose solutions to prevent the loss of coastal lands in the Delta. To this end, field measurements of currents and sediment grain sizes were carried out, and wind climate, wave climate, circulation patterns and longshore sediment transport rates were numerically modeled by HYDROTAM-3D, which is a three dimensional hydrodynamic transport model. Finally, considering its special importance as an environmentally protected region, some coastal structures of gabions were proposed as solutions against the coastal erosion problems of the Delta. The effects of proposed structures on future coastline changes were also modeled, and the coastlines predicted for the year 2017 are presented and discussed in the paper.
Modelling refrigerant distribution in minichannel evaporators
DEFF Research Database (Denmark)
Brix, Wiebke
This thesis is concerned with numerical modelling of flow distribution in a minichannel evaporator for air-conditioning. The study investigates the impact of non-uniform airflow and non-uniform distribution of the liquid and vapour phases in the inlet manifold on the refrigerant mass flow...... of the liquid and vapour in the inlet manifold. Combining non-uniform airflow and non-uniform liquid and vapour distribution shows that a non-uniform airflow distribution can to some degree be compensated by a suitable liquid and vapour distribution. Controlling the superheat out of the individual channels...... to be equal results in a cooling capacity very close to the optimum. A sensitivity study considering parameter changes shows that the course of the pressure gradient in the channel is significant, considering the magnitude of the capacity reductions due to non-uniform liquid and vapour distribution and non......
Heuristic for solving capacitor allocation problems in electric energy radial distribution networks
Directory of Open Access Journals (Sweden)
Maria A. Biagio
2012-04-01
Full Text Available The goal of the capacitor allocation problem in radial distribution networks is to minimize technical losses, with consequential positive impacts in economic and environmental areas. The main objective is to define the size and location of the capacitors while considering load variations over a given horizon. The mathematical formulation of this planning problem is an integer nonlinear programming model that demands great computational effort to solve. To solve this problem, this paper proposes a methodology composed of heuristics and Tabu Search procedures. The methodology explores characteristics of the network's reactive loads to identify regions where local and intensive search procedures should be performed. A description of the proposed methodology and an analysis of the computational results, based on several test systems including actual systems, are presented. The solutions reached are as good as or better than those reported by well-referenced methodologies. The proposed technique is simple to use and does not require calibrating an excessive number of parameters, making it an attractive alternative for companies involved in the planning of radial distribution networks.
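The Tabu Search component of such a methodology can be sketched generically. The skeleton below is a toy illustration, not the authors' algorithm: it searches for k capacitor buses minimizing an arbitrary loss function, using swap moves and a tabu list that forbids recently reversed swaps:

```python
import random

def tabu_search(loss, n_buses, k, iters=200, tenure=5, seed=0):
    """Toy Tabu Search: pick k capacitor buses out of n_buses minimizing `loss`.

    `loss` maps a frozenset of bus indices to a cost; all names and settings
    here are illustrative, not the paper's implementation.
    """
    rng = random.Random(seed)
    current = frozenset(rng.sample(range(n_buses), k))
    best, best_cost = current, loss(current)
    tabu = {}  # move -> iteration until which it is forbidden
    for it in range(iters):
        moves = []
        for out in current:
            for inn in set(range(n_buses)) - current:
                cand = (current - {out}) | {inn}
                c = loss(cand)
                # aspiration criterion: allow tabu moves that beat the best known
                if tabu.get((out, inn), -1) < it or c < best_cost:
                    moves.append((c, out, inn, cand))
        if not moves:
            continue
        c, out, inn, current = min(moves)
        tabu[(inn, out)] = it + tenure  # forbid the reverse swap for a while
        if c < best_cost:
            best, best_cost = current, c
    return best, best_cost

# Toy usage: the loss counts mismatches against a known optimum {1, 3, 5}.
target = frozenset({1, 3, 5})
sol, cost = tabu_search(lambda s: len(s ^ target), n_buses=10, k=3)
print(sorted(sol), cost)  # -> [1, 3, 5] 0
```

In a real planning tool the loss function would be a load-flow evaluation of technical losses, which is where the computational effort mentioned in the abstract comes from.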
On the Inverse EEG Problem for a 1D Current Distribution
Directory of Open Access Journals (Sweden)
George Dassios
2014-01-01
Full Text Available Albanese and Monk (2006) have shown that it is impossible to recover the support of a three-dimensional current distribution within a conducting medium from knowledge of the electric potential outside the conductor. On the other hand, it is possible to obtain the support of a current which lives in a subspace of dimension lower than three. In the present work, we demonstrate this possibility by assuming a one-dimensional current distribution supported on a small line segment having arbitrary location and orientation within a uniform spherical conductor. The immediate representation of this problem is the inverse problem of electroencephalography (EEG) with a linear current distribution and the spherical model of the brain-head system. It is shown that the support is identified through the solution of a nonlinear algebraic system, which is investigated thoroughly. Numerical tests show that this system has exactly one real solution. Exact solutions are obtained analytically for a couple of special cases.
Fells, Stephanie
2012-01-01
The design of online or distributed problem-based learning (dPBL) is a nascent, complex design problem. Instructional designers are challenged to effectively unite the constructivist principles of problem-based learning (PBL) with appropriate media in order to create quality dPBL environments. While computer-mediated communication (CMC) tools and…
Mathematical Models for Room Air Distribution
DEFF Research Database (Denmark)
Nielsen, Peter V.
1982-01-01
A number of different models on the air distribution in rooms are introduced. This includes the throw model, a model on penetration length of a cold wall jet and a model for maximum velocity in the dimensioning of an air distribution system in highly loaded rooms and shows that the amount of heat...... removed from the room at constant penetration length is proportional to the cube of the velocities in the occupied zone. It is also shown that a large number of diffusers increases the amount of heat which may be removed without affecting the thermal conditions. Control strategies for dual duct and single...... duct systems are given and the paper is concluded by mentioning a computer-based prediction method which gives the velocity and temperature distribution in the whole room....
Mathematical Models for Room Air Distribution - Addendum
DEFF Research Database (Denmark)
Nielsen, Peter V.
1982-01-01
A number of different models on the air distribution in rooms are introduced. This includes the throw model, a model on penetration length of a cold wall jet and a model for maximum velocity in the dimensioning of an air distribution system in highly loaded rooms and shows that the amount of heat...... removed from the room at constant penetration length is proportional to the cube of the velocities in the occupied zone. It is also shown that a large number of diffusers increases the amount of heat which may be removed without affecting the thermal conditions. Control strategies for dual duct and single...... duct systems are given and the paper is concluded by mentioning a computer-based prediction method which gives the velocity and temperature distribution in the whole room....
Estimating Predictive Variance for Statistical Gas Distribution Modelling
International Nuclear Information System (INIS)
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-01-01
Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but rather a significant step forward for the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows one to evaluate model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn the meta-parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
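Evaluating model quality by data likelihood, as the paper advocates, reduces to scoring held-out measurements under the predicted per-point mean and variance. A minimal sketch for Gaussian predictive distributions (the concrete numbers are illustrative):

```python
import math

def gaussian_nll(y, mean, var):
    """Average negative log-likelihood of observations y under per-point
    Gaussian predictions (mean_i, var_i): the model-quality score usable
    when a ground-truth concentration map is unavailable."""
    return sum(0.5 * (math.log(2 * math.pi * v) + (yi - m) ** 2 / v)
               for yi, m, v in zip(y, mean, var)) / len(y)

# A model predicting realistic variance scores better (lower NLL) than an
# overconfident one, even though both predict identical means.
y    = [0.0, 2.0, -2.0]
mean = [0.0, 0.0,  0.0]
print(gaussian_nll(y, mean, [4.0, 4.0, 4.0]) <
      gaussian_nll(y, mean, [0.1, 0.1, 0.1]))  # -> True
```

This is why variance estimation "enables solid comparisons of different modelling approaches": two models can be ranked on the same held-out data without any reference map.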
Benchmark problems for repository siting models
International Nuclear Information System (INIS)
Ross, B.; Mercer, J.W.; Thomas, S.D.; Lester, B.H.
1982-12-01
This report describes benchmark problems to test computer codes used in siting nuclear waste repositories. Analytical solutions, field problems, and hypothetical problems are included. Problems are included for the following types of codes: ground-water flow in saturated porous media, heat transport in saturated media, ground-water flow in saturated fractured media, heat and solute transport in saturated porous media, solute transport in saturated porous media, solute transport in saturated fractured media, and solute transport in unsaturated porous media.
Convergence diagnostics for Eigenvalue problems with linear regression model
International Nuclear Information System (INIS)
Shi, Bo; Petrovic, Bojan
2011-01-01
Although the Monte Carlo method has been extensively used for criticality/eigenvalue problems, a reliable, robust, and efficient convergence diagnostics method is still desired. Most methods are based on integral parameters (multiplication factor, entropy) and either condense the local distribution information into a single value (e.g., entropy) or disregard it entirely. We propose to employ the detailed cycle-by-cycle local flux evolution, obtained using a mesh tally mechanism, to assess source and flux convergence. By applying a linear regression model to each individual mesh in a mesh tally for convergence diagnostics, a global convergence criterion can be obtained. We exemplify this method on two problems and obtain promising diagnostics results. (author)
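The per-mesh regression idea can be illustrated with a toy diagnostic: fit a least-squares slope to each mesh's cycle-by-cycle tally and declare global convergence only when every slope is flat. The tolerance and the data below are illustrative, not the authors' criterion:

```python
def slope(y):
    """Least-squares slope of tally values y over cycle index 0..n-1."""
    n = len(y)
    xbar, ybar = (n - 1) / 2, sum(y) / n
    num = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def converged(mesh_tallies, tol=1e-3):
    """Global convergence criterion: every mesh's cycle-by-cycle trend
    must be flat (|slope| < tol). Names and tolerance are illustrative."""
    return all(abs(slope(y)) < tol for y in mesh_tallies)

# A single drifting mesh blocks the global criterion; flat meshes do not.
flat  = [1.0, 1.001, 0.999, 1.0, 1.0005]
drift = [0.5, 0.6, 0.7, 0.8, 0.9]
print(converged([flat, flat]), converged([flat, drift]))  # -> True False
```

The point of the mesh-wise test is that a drifting local region can be masked when information is condensed into a single integral parameter such as entropy.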
A Hierarchy Model of Income Distribution
Fix, Blair
2018-01-01
Based on worldly experience, most people would agree that firms are hierarchically organized, and that pay tends to increase as one moves up the hierarchy. But how this hierarchical structure affects income distribution has not been widely studied. To remedy this situation, this paper presents a new model of income distribution that explores the effects of social hierarchy. This ‘hierarchy model’ takes the limited available evidence on the structure of firm hierarchies and generalizes it to c...
The process model of problem solving difficulty
Pala, O.; Rouwette, E.A.J.A.; Vennix, J.A.M.
2002-01-01
Groups and organizations, or in general multi-actor decision-making groups, frequently come across complex problems in which neither the problem definition nor the interrelations of parts that make up the problem are well defined. In these kinds of situations, members of a decision-making group
Modeling a Distribution of Mortgage Credit Losses
Czech Academy of Sciences Publication Activity Database
Gapko, Petr; Šmíd, Martin
2012-01-01
Roč. 60, č. 10 (2012), s. 1005-1023 ISSN 0013-3035 R&D Projects: GA ČR GD402/09/H045; GA ČR(CZ) GBP402/12/G097 Grant - others:Univerzita Karlova(CZ) 46108 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords : credit risk * mortgage * delinquency rate * generalized hyperbolic distribution * normal distribution Subject RIV: AH - Economics Impact factor: 0.194, year: 2012 http://library.utia.cas.cz/separaty/2013/E/smid-modeling a distribution of mortgage credit losses.pdf
Modeling a Distribution of Mortgage Credit Losses
Czech Academy of Sciences Publication Activity Database
Gapko, Petr; Šmíd, Martin
2010-01-01
Roč. 23, č. 23 (2010), s. 1-23 R&D Projects: GA ČR GA402/09/0965; GA ČR GD402/09/H045 Grant - others:Univerzita Karlova - GAUK(CZ) 46108 Institutional research plan: CEZ:AV0Z10750506 Keywords : Credit Risk * Mortgage * Delinquency Rate * Generalized Hyperbolic Distribution * Normal Distribution Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2010/E/gapko-modeling a distribution of mortgage credit losses-ies wp.pdf
Advanced Distribution Network Modelling with Distributed Energy Resources
O'Connell, Alison
The addition of new distributed energy resources, such as electric vehicles, photovoltaics, and storage, to low voltage distribution networks means that these networks will undergo major changes in the future. Traditionally, distribution systems would have been a passive part of the wider power system, delivering electricity to the customer and not needing much control or management. However, the introduction of these new technologies may cause unforeseen issues for distribution networks, due to the fact that they were not considered when the networks were originally designed. This thesis examines different types of technologies that may begin to emerge on distribution systems, as well as the resulting challenges that they may impose. Three-phase models of distribution networks are developed and subsequently utilised as test cases. Various management strategies are devised for the purposes of controlling distributed resources from a distribution network perspective. The aim of the management strategies is to mitigate those issues that distributed resources may cause, while also keeping customers' preferences in mind. A rolling optimisation formulation is proposed as an operational tool which can manage distributed resources, while also accounting for the uncertainties that these resources may present. Network sensitivities for a particular feeder are extracted from a three-phase load flow methodology and incorporated into an optimisation. Electric vehicles are the focus of the work, although the method could be applied to other types of resources. The aim is to minimise the cost of electric vehicle charging over a 24-hour time horizon by controlling the charge rates and timings of the vehicles. The results demonstrate the advantage that controlled EV charging can have over an uncontrolled case, as well as the benefits provided by the rolling formulation and updated inputs in terms of cost and energy delivered to customers. Building upon the rolling optimisation, a
Amallynda, I.; Santosa, B.
2017-11-01
This paper proposes a new generalization of the distributed parallel machine and assembly scheduling problem (DPMASP) with eligibility constraints, referred to as the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints. Within this generalization, we assume that there is a set of non-identical factories or production lines, each with a set of unrelated parallel machines of different processing speeds, arranged in series with a single assembly machine. A set of different products is manufactured through an assembly program from a set of components (jobs) according to the requested demand. Each product requires several kinds of jobs of different sizes. Besides that, we also consider the multi-objective problem (MOP) of simultaneously minimizing mean flow time and the number of tardy products. The problem is known to be NP-hard and is important in practice, as these criteria reflect the customer's demand and the manufacturer's perspective. Since this is a realistic and complex problem with a wide range of possible solutions, we propose four simple heuristics and two metaheuristics to solve it. Various parameters of the proposed metaheuristic algorithms are discussed and calibrated by means of the Taguchi technique. All proposed algorithms are tested in Matlab. Our computational experiments indicate that the proposed problem and the four proposed algorithms can be implemented and used to solve moderately-sized instances, giving efficient solutions which are close to optimal in most cases.
Distributed models coupling soakaways, urban drainage and groundwater
DEFF Research Database (Denmark)
Roldin, Maria Kerstin
Alternative methods for stormwater management in urban areas, also called Water Sensitive Urban Design (WSUD) methods, have become increasingly important for the mitigation of urban stormwater management problems such as high runoff volumes, combined sewage overflows, poor water quality...... in receiving waters, urban flooding etc. WSUD structures are generally small, decentralized systems intended to manage stormwater near the source. Many of these alternative techniques are based on infiltration, which can affect both the urban sewer system and urban groundwater levels if widely implemented......, and how these can be modeled in an integrated environment with distributed urban drainage and groundwater flow models. The thesis: 1. Identifies appropriate models of soakaways for use in an integrated and distributed urban water and groundwater modeling system 2. Develops a modeling concept that is able...
The effects of model and data complexity on predictions from species distribution models
DEFF Research Database (Denmark)
García-Callejas, David; Bastos, Miguel
2016-01-01
How complex a model needs to be to provide useful predictions is a matter of continuous debate across the environmental sciences. In the species distribution modelling literature, studies have demonstrated that more complex models tend to provide better fits. However, studies have also shown...... that predictive performance does not always increase with complexity. Testing of species distribution models is challenging because independent data for testing are often lacking, but a more general problem is that model complexity has never been formally described in such studies. Here, we systematically...
Distributed model predictive control made easy
Negenborn, Rudy
2014-01-01
The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems. This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...
Applications of species distribution modeling to paleobiology
DEFF Research Database (Denmark)
Svenning, Jens-Christian; Fløjgaard, Camilla; Marske, Katharine Ann
2011-01-01
Species distribution modeling (SDM: statistical and/or mechanistic approaches to the assessment of range determinants and prediction of species occurrence) offers new possibilities for estimating and studying past organism distributions. SDM complements fossil and genetic evidence by providing (i......) quantitative and potentially high-resolution predictions of past organism distributions, (ii) statistically formulated, testable ecological hypotheses regarding past distributions and communities, and (iii) statistical assessment of range determinants. In this article, we provide an overview......-Pleistocene megafaunal extinctions, past community assembly, human paleobiogeography, Holocene paleoecology, and even deep-time biogeography (notably, providing insights into biogeographic dynamics >400 million years ago). We discuss important assumptions and uncertainties that affect the SDM approach to paleobiology......
The distribution of prime numbers and associated problems in number theory
International Nuclear Information System (INIS)
Nair, M.
1991-01-01
Some problems in number theory are discussed, namely the gaps between consecutive primes, the distribution of primes in arithmetic progressions, the Brun-Titchmarsh theorem, Fermat's last theorem, the Thue equation, and the gaps between square-free numbers.
Modeling Word Burstiness Using the Dirichlet Distribution
DEFF Research Database (Denmark)
Madsen, Rasmus Elsborg; Kauchak, David; Elkan, Charles
2005-01-01
Multinomial distributions are often used to model text documents. However, they do not capture well the phenomenon that words in a document tend to appear in bursts: if a word appears once, it is more likely to appear again. In this paper, we propose the Dirichlet compound multinomial model (DCM......) as an alternative to the multinomial. The DCM model has one additional degree of freedom, which allows it to capture burstiness. We show experimentally that the DCM is substantially better than the multinomial at modeling text data, measured by perplexity. We also show using three standard document collections...
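The burstiness advantage of the DCM can be checked numerically: with small Dirichlet parameters, a document that repeats one word receives a larger likelihood gain over the multinomial than a document spreading the same count evenly. A sketch (the two-word vocabulary and parameter values are illustrative, not the paper's experiments):

```python
from math import lgamma, log

def dcm_logpmf(x, alpha):
    """Dirichlet compound multinomial log-probability of count vector x
    (without the multinomial coefficient, which cancels in comparisons)."""
    A, n = sum(alpha), sum(x)
    return (lgamma(A) - lgamma(A + n)
            + sum(lgamma(a + xi) - lgamma(a) for a, xi in zip(alpha, x)))

def multinomial_logpmf(x, p):
    """Multinomial log-probability, same normalization convention."""
    return sum(xi * log(pi) for xi, pi in zip(x, p) if xi)

# Two documents of equal length over 2 words: one bursty, one evenly spread.
bursty, spread = [10, 0], [5, 5]
alpha, p = [0.1, 0.1], [0.5, 0.5]   # small alpha values favour burstiness
ratio = lambda x: dcm_logpmf(x, alpha) - multinomial_logpmf(x, p)
print(ratio(bursty) > ratio(spread))  # -> True
```

The single extra degree of freedom mentioned in the abstract is the overall scale of alpha: shrinking it concentrates probability on count vectors dominated by few words.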
Overhead distribution line models for harmonics studies
Energy Technology Data Exchange (ETDEWEB)
Nagpal, M.; Xu, W.; Dommel, H.W.
1994-01-01
Carson's formulae and Maxwell's potential coefficients are used to calculate the per-unit-length series impedances and shunt capacitances of overhead lines. The per-unit-length values are then used to build the nominal pi-circuit and equivalent pi-circuit models at the harmonic frequencies. This paper studies the accuracy of these models for representing overhead distribution lines in steady-state harmonic solutions at frequencies up to 5 kHz. The models are verified with a field test on a 25 kV distribution line, and the sensitivity of the models to ground resistivity, skin effect, and multiple grounding is reported.
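A nominal pi-circuit at a given harmonic order is simple to assemble once the per-unit-length parameters are known; a sketch with illustrative line constants (not the paper's 25 kV test line):

```python
import math

def nominal_pi(r, l, c, length_km, h, f0=60.0):
    """Nominal pi-circuit of a line section at harmonic order h.

    r [ohm/km], l [H/km], c [F/km] are per-unit-length parameters, e.g. as
    obtained from Carson's formulae and Maxwell's potential coefficients.
    Returns (total series impedance, half shunt admittance per end).
    """
    w = 2 * math.pi * f0 * h                      # angular frequency at order h
    z_series = (r + 1j * w * l) * length_km       # lumped series branch
    y_shunt = 1j * w * c * length_km / 2          # shunt branch at each end
    return z_series, y_shunt

z1, _ = nominal_pi(0.3, 1.0e-3, 10e-9, 10.0, h=1)
z5, _ = nominal_pi(0.3, 1.0e-3, 10e-9, 10.0, h=5)
print(abs(z5) > abs(z1))  # series reactance grows with harmonic order -> True
```

The abstract's accuracy question arises precisely because lumping the line into a single pi-section degrades at high harmonic orders, where the line length becomes comparable to the wavelength.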
Directory of Open Access Journals (Sweden)
Amir Salehipour
2012-01-01
Full Text Available This paper presents a novel application of operations research to support decision making in blood distribution management. The rapidly increasing demand, the criticality of the product, storage, handling, and distribution requirements, and the different geographical locations of hospitals and medical centers have made blood distribution a complex and important problem. In this study, a real blood distribution problem involving 24 hospitals was tackled by the authors, and an exact approach is presented. The objective of the problem is to distribute blood and its products among hospitals and medical centers such that the total waiting time of those requiring the product is minimized. Following the exact solution, a hybrid heuristic algorithm is proposed. Computational experiments showed that optimal solutions could be obtained for medium-size instances, while for larger instances the proposed hybrid heuristic is very competitive.
Halldane, J. F.
1972-01-01
Technology is considered as a culture for changing a physical world and technology assessment questions the inherent cultural capability to modify power and material in support of living organisms. A comprehensive goal-parameter-synthesis-criterion specification is presented as a basis for a rational assessment of technology. The thesis queries the purpose of the assessed problems, the factors considered, the relationships between factors, and the values assigned those factors to accomplish the appropriate purpose. Stationary and sequential evaluation of enviro-organismic systems are delegated to the responsible personalities involved in design; from promoter/designer through contractor to occupant. Discussion includes design goals derived from organismic factors, definitions of human responses which establish viable criteria and relevant correlation models, linking stimulus parameters, and parallel problem-discipline centered design organization. A consistent concept of impedance, as a degradation in the performance of a specified parameter, is introduced to overcome the arbitrary inoperative connotations of terms like noise, discomfort, and glare. Applications of the evaluative specification are illustrated through design problems related to auditory impedance and sound distribution.
Programming model for distributed intelligent systems
Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.
1988-01-01
A programming model and architecture which was developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared memory based parallel computational models and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.
Comparison of sparse point distribution models
DEFF Research Database (Denmark)
Erbou, Søren Gylling Hemmingsen; Vester-Christensen, Martin; Larsen, Rasmus
2010-01-01
This paper compares several methods for obtaining sparse and compact point distribution models suited for data sets containing many variables. These are evaluated on a database consisting of 3D surfaces of a section of the pelvic bone obtained from CT scans of 33 porcine carcasses. The superior m...
A Distributive Model of Treatment Acceptability
Carter, Stacy L.
2008-01-01
A model of treatment acceptability is proposed that distributes overall treatment acceptability into three separate categories of influence. The categories are comprised of societal influences, consultant influences, and influences associated with consumers of treatments. Each of these categories are defined and their inter-relationships within…
Finessing atlas data for species distribution models
Niamir, A.; Skidmore, A.K.; Toxopeus, A.G.; Munoz, A.R.; Real, R.
2011-01-01
Aim The spatial resolution of species atlases and therefore resulting model predictions are often too coarse for local applications. Collecting distribution data at a finer resolution for large numbers of species requires a comprehensive sampling effort, making it impractical and expensive. This
Distributionally Robust Return-Risk Optimization Models and Their Applications
Directory of Open Access Journals (Sweden)
Li Yang
2014-01-01
Full Text Available Based on the risk control of conditional value-at-risk, distributionally robust return-risk optimization models with box constraints on the random vector are proposed. They describe uncertainty in both the distribution form and the moments (mean and covariance matrix) of the random vector. Such models are difficult to solve directly. Using conic duality theory and the minimax theorem, the models are reformulated as semidefinite programming problems, which can be solved by interior point algorithms in polynomial time. An important theoretical basis is therefore provided for applications of the models. Moreover, an application of the models to a practical example of portfolio selection is considered, and the example is evaluated using a historical data set of four stocks. Numerical results show that the proposed methods are robust and the resulting investment strategy is safe.
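The conditional value-at-risk measure underlying these models has a simple empirical form: the mean of the worst (1 - beta) fraction of losses. A minimal sketch of that building block (the SDP reformulation itself is beyond its scope, and the sample data are illustrative):

```python
def cvar(losses, beta=0.95):
    """Empirical conditional value-at-risk at level beta: the mean of the
    worst (1 - beta) fraction of the observed losses."""
    s = sorted(losses, reverse=True)
    k = max(1, int(round(len(s) * (1 - beta))))   # number of tail samples
    return sum(s[:k]) / k

losses = list(range(1, 101))           # losses 1..100
print(cvar(losses, beta=0.95))         # mean of the worst 5 losses -> 98.0
```

Unlike value-at-risk, CVaR is convex in the portfolio weights, which is what makes the robust reformulation into a tractable conic program possible.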
Distributed Solutions for Loosely Coupled Feasibility Problems Using Proximal Splitting Methods
DEFF Research Database (Denmark)
Pakazad, Sina Khoshfetrat; Andersen, Martin Skovgaard; Hansson, Anders
2014-01-01
In this paper,we consider convex feasibility problems (CFPs) where the underlying sets are loosely coupled, and we propose several algorithms to solve such problems in a distributed manner. These algorithms are obtained by applying proximal splitting methods to convex minimization reformulations ...
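The simplest member of the proximal-splitting family for a convex feasibility problem is alternating projections onto the individual sets; a self-contained sketch on a two-set toy problem (the sets and the iteration count are illustrative, not the paper's loosely coupled setting):

```python
def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def project_halfspace(x, a, b):
    """Euclidean projection onto the half-space {x : <a, x> <= b}."""
    viol = sum(ai * xi for ai, xi in zip(a, x)) - b
    if viol <= 0:
        return list(x)                       # already feasible
    s = viol / sum(ai * ai for ai in a)
    return [xi - s * ai for xi, ai in zip(x, a)]

def pocs(x, steps=100):
    """Alternating projections (POCS) for the toy CFP:
    find x in the box [0, 1]^2 with x1 + x2 <= 1.2."""
    for _ in range(steps):
        x = project_box(x, [0, 0], [1, 1])
        x = project_halfspace(x, [1, 1], 1.2)
    return x

x = pocs([3.0, 3.0])
print(all(0 <= xi <= 1 + 1e-9 for xi in x), x[0] + x[1] <= 1.2 + 1e-9)  # -> True True
```

In the loosely coupled setting of the paper, each agent would hold only the sets involving its own variables, so the projections can be computed in parallel with limited communication.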
The stress distribution in shell bodies and wings as an equilibrium problem
Wagner, H
1937-01-01
This report treats the stress distribution in shell-shaped airplane components (fuselage, wings) as an equilibrium problem; it includes both cylindrical and non-cylindrical shells. In particular, it treats the stress distribution at the point of stress application and at cut-out points.
PROGRAMMING OF METHODS FOR THE NEEDS OF LOGISTICS DISTRIBUTION SOLVING PROBLEMS
Directory of Open Access Journals (Sweden)
Andrea Štangová
2014-06-01
Full Text Available Logistics has become one of the dominant factors affecting successful management, competitiveness and the mentality of the global economy. Distribution logistics materializes the connection between production and the consumer market. It uses different methodologies and methods of multicriterial evaluation and allocation. This thesis addresses the problem of the costs of securing the distribution of a product. It was therefore relevant to design a software product that would be helpful in solving problems related to distribution logistics. Elodis, an electronic distribution logistics program, was designed on the basis of a theoretical analysis of distribution logistics and an analysis of the software products market. The program uses multicriterial evaluation methods to determine the appropriate type, and mathematical and geometrical methods to determine an appropriate allocation, of the distribution center, warehouse and company.
Analysis of Jingdong Mall Logistics Distribution Model
Shao, Kang; Cheng, Feng
In recent years, the development of electronic commerce in our country has accelerated. The role of logistics has been highlighted, and more and more electronic commerce enterprises are beginning to realize the importance of logistics to the success or failure of the enterprise. In this paper, the author takes Jingdong Mall as an example, performing a SWOT analysis of the current situation of its self-built logistics system, identifying the problems existing in the current Jingdong Mall logistics distribution, and giving appropriate recommendations.
EFFECT OF PROBLEM BASED LEARNING AND MODEL CRITICAL THINKING ABILITY TO PROBLEM SOLVING SKILLS
Directory of Open Access Journals (Sweden)
Unita S. Zuliani Nasution
2016-12-01
Full Text Available The purposes of this research were to analyze the difference in physics problem-solving ability between the problem-based learning model and the direct instruction model, the difference in physics problem-solving ability between students with above-average and below-average critical thinking ability, and the interaction of the problem-based learning model with critical thinking ability on students' physics problem-solving ability. This was a quasi-experimental study using critical thinking ability tests and physics problem-solving ability tests as the instruments. The results showed that students' physics problem-solving ability with the problem-based learning model was better than with the direct instruction model, that students with above-average critical thinking ability performed better than those with below-average critical thinking ability, and that there was an interaction between the problem-based learning model and critical thinking ability in improving students' physics problem-solving ability.
Modelling simple helically delivered dose distributions
International Nuclear Information System (INIS)
Fenwick, John D; Tome, Wolfgang A; Kissick, Michael W; Mackie, T Rock
2005-01-01
In a previous paper, we described quality assurance procedures for Hi-Art helical tomotherapy machines. Here, we develop further some ideas discussed briefly in that paper. Simple helically generated dose distributions are modelled, and relationships between these dose distributions and underlying characteristics of Hi-Art treatment systems are elucidated. In particular, we describe the dependence of dose levels along the central axis of a cylinder aligned coaxially with a Hi-Art machine on fan beam width, couch velocity and helical delivery lengths. The impact on these dose levels of angular variations in gantry speed or output per linear accelerator pulse is also explored.
A Conceptual Model for Solving Percent Problems.
Bennett, Albert B., Jr.; Nelson, L. Ted
1994-01-01
Presents an alternative method to teaching percent problems which uses a 10x10 grid to help students visualize percents. Offers a means of representing information and suggests different approaches for finding solutions. Includes reproducible student worksheet. (MKR)
A void distribution model-flashing flow
International Nuclear Information System (INIS)
Riznic, J.; Ishii, M.; Afgan, N.
1987-01-01
A new model for flashing flow based on wall nucleation is proposed here and the model predictions are compared with some experimental data. In order to calculate the bubble number density, the bubble number transport equation with a distributed source from the wall nucleation sites was used. Thus it was possible to avoid the usual assumption of a constant bubble number density. Comparison of the model with the data shows that the model based on the nucleation site density correlation appears to be acceptable for describing the vapor generation in flashing flow. For the limited data examined, the comparisons show rather satisfactory agreement without using a floating parameter to adjust the model. This result indicates that, at least for the experimental conditions considered here, a mechanistic prediction of the flashing phenomenon is possible with the present wall-nucleation-based model.
A Reference Model for Distribution Grid Control in the 21st Century
Energy Technology Data Exchange (ETDEWEB)
Taft, Jeffrey D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); De Martini, Paul [California Inst. of Technology (CalTech), Pasadena, CA (United States); Kristov, Lorenzo [California Independent System Operator, Folsom, CA (United States)
2015-07-01
Intensive changes in the structure of the grid due to the penetration of new technologies, coupled with changing societal needs, are outpacing the capabilities of traditional grid control systems. The gap is widening at an accelerating rate, with the biggest impacts occurring at the distribution level due to the widespread adoption of diverse distribution-connected energy resources (DER). This paper outlines the emerging distribution grid control environment, defines the new distribution control problem, and provides a distribution control reference model. The reference model offers a schematic representation of the problem domain to inform development of system architecture and control solutions for the high-DER electric system.
Problem of uniqueness in the renewal process generated by the uniform distribution
Directory of Open Access Journals (Sweden)
D. Ugrin-parac
1992-01-01
Full Text Available The renewal process generated by the uniform distribution, when interpreted as a transformation of the uniform distribution into a discrete distribution, gives rise to the question of uniqueness of the inverse image. The paper deals with a particular problem from the described domain, that arose in the construction of a complex stochastic test intended to evaluate pseudo-random number generators. The connection of the treated problem with the question of a unique integral representation of Gamma-function is also mentioned.
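A classical instance of a renewal process generated by the uniform distribution: the number of U(0,1) draws needed before their running sum exceeds 1 has expectation e. The quick simulation below illustrates the setting only, not the paper's uniqueness argument; the function name is mine.

```python
import random

def renewals_until(threshold=1.0, rng=random):
    """Count U(0,1) draws until their running sum exceeds the threshold."""
    total, n = 0.0, 0
    while total <= threshold:
        total += rng.random()
        n += 1
    return n

random.seed(0)
trials = 200_000
mean_count = sum(renewals_until() for _ in range(trials)) / trials
print(round(mean_count, 2))  # close to e ~ 2.718
```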
Distributed parallel computing in stochastic modeling of groundwater systems.
Dong, Yanhui; Li, Guomin; Xu, Haizhen
2013-03-01
Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
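The batch pattern described above (many independent stochastic realizations farmed out to workers) can be sketched with Python's standard library standing in for the Java Parallel Processing Framework; the per-realization function here is a toy stand-in, not a MODFLOW run.

```python
from multiprocessing import Pool
import random

def run_realization(seed):
    """Stand-in for one stochastic model realization (e.g., one groundwater
    simulation): averages noisy samples to mimic a per-realization result."""
    rng = random.Random(seed)
    return sum(rng.gauss(10.0, 2.0) for _ in range(1000)) / 1000

if __name__ == "__main__":
    seeds = range(500)  # 500 realizations, as in the case study
    with Pool(processes=4) as pool:   # workers replace the distributed cluster
        results = pool.map(run_realization, seeds)
    print(len(results))
```

Each realization is seeded independently, so results are reproducible and order-independent, which is what makes this embarrassingly parallel.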
The Higgs transverse momentum distribution in gluon fusion as multiscale problem
International Nuclear Information System (INIS)
Bagnaschi, E.; Vicini, A.
2015-05-01
We consider Higgs production in gluon fusion and in particular the prediction of the Higgs transverse momentum distribution. We discuss the ambiguities affecting the matching procedure between fixed order matrix elements and the resummation to all orders of the terms enhanced by log(p_T^H / m_H) factors. Following a recent proposal (Grazzini et al., hep-ph/1306.4581), we argue that the gluon fusion process, computed considering two active quark flavors, is a multiscale problem from the point of view of the resummation of the collinear singular terms. We perform an analysis at parton level of the collinear behavior of the real emission amplitudes and we derive an upper limit to the range of transverse momenta where the collinear approximation is valid. This scale is then used as the value of the resummation scale in the analytic resummation framework or as the value of the h parameter in the POWHEG-BOX code. Finally, we provide a phenomenological analysis in the Standard Model, in the Two Higgs Doublet Model and in the Minimal Supersymmetric Standard Model. In the latter two cases, we provide an ansatz for the central value of the matching parameters not only for a Standard Model-like Higgs boson, but also for heavy scalars and in scenarios where the bottom quark may play the dominant role.
A Multiple Period Problem in Distributed Energy Management Systems Considering CO2 Emissions
Muroda, Yuki; Miyamoto, Toshiyuki; Mori, Kazuyuki; Kitamura, Shoichi; Yamamoto, Takaya
Consider a special district (group) composed of multiple companies (agents), where each agent responds to an energy demand and has a CO2 emission allowance imposed. A distributed energy management system (DEMS) optimizes the energy consumption of a group through energy trading within the group. In this paper, we extended the energy distribution decision and optimal planning problem in DEMSs from a single-period problem to a multiple-period one. The extension enabled us to consider more realistic constraints such as demand patterns, start-up costs, and minimum running/outage times of equipment. First, we extended the market-oriented programming (MOP) method for deciding energy distribution to the multiple-period problem. The bidding strategy of each agent is formulated as a 0-1 mixed non-linear programming problem. Secondly, we proposed decomposing the problem into a set of single-period problems in order to solve it faster. To decompose the problem, we proposed a CO2 emission allowance distribution method, called the EP method. We confirmed by computational experiments that the proposed method was able to produce solutions whose group costs were close to lower-bound group costs. In addition, we verified that the EP method reduced computational time without losing solution quality.
On the formulation and numerical simulation of distributed-order fractional optimal control problems
Zaky, M. A.; Machado, J. A. Tenreiro
2017-11-01
In a fractional optimal control problem, the integer order derivative is replaced by a fractional order derivative. The fractional derivative embeds implicitly the time delays in an optimal control process. The order of the fractional derivative can be distributed over the unit interval, to capture delays of distinct sources. The purpose of this paper is twofold. Firstly, we derive the generalized necessary conditions for optimal control problems with dynamics described by ordinary distributed-order fractional differential equations (DFDEs). Secondly, we propose an efficient numerical scheme for solving an unconstrained convex distributed optimal control problem governed by the DFDE. We convert the problem under consideration into an optimal control problem governed by a system of DFDEs, using the pseudo-spectral method and the Jacobi-Gauss-Lobatto (J-G-L) integration formula. Next, we present the numerical solutions for a class of optimal control problems of systems governed by DFDEs. The convergence of the proposed method is graphically analyzed showing that the proposed scheme is a good tool for the simulation of distributed control problems governed by DFDEs.
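For reference, the distributed-order fractional derivative the abstract refers to is usually written as a weighted average of fractional derivatives over the unit interval; this is a standard form (with a nonnegative weight function w), and the paper's exact conventions may differ:

```latex
D^{w(\alpha)} x(t) \;=\; \int_{0}^{1} w(\alpha)\, {}^{C}\!D^{\alpha} x(t)\, d\alpha,
\qquad w(\alpha) \ge 0,\quad \int_{0}^{1} w(\alpha)\, d\alpha > 0,
```

where ${}^{C}\!D^{\alpha}$ denotes a fractional (e.g. Caputo) derivative of order $\alpha$. Distributing the order over $[0,1]$ is what lets the dynamics capture delays of distinct sources at once.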
Ji, Yu; Sheng, Wanxing; Jin, Wei; Wu, Ming; Liu, Haitao; Chen, Feng
2018-02-01
A coordinated optimal control method for the active and reactive power of a distribution network with distributed PV clusters, based on model predictive control, is proposed in this paper. The method divides the control process into long-time-scale optimal control and short-time-scale optimal control with multi-step optimization. Because the optimization models are non-convex and nonlinear, and therefore hard to solve, they are transformed into a second-order cone programming problem. An improved IEEE 33-bus distribution network system is used to analyse the feasibility and effectiveness of the proposed control method.
Mechanistic model for void distribution in flashing flow
International Nuclear Information System (INIS)
Riznic, J.; Ishii, M.; Afgan, N.
1987-01-01
The problem of discharging an initially subcooled liquid from a high-pressure condition into a low-pressure environment is quite important in several industrial systems such as nuclear reactors and chemical reactors. A new model for the flashing process is proposed here based on wall nucleation theory, a bubble growth model and a drift-flux bubble transport model. In order to calculate the bubble number density, the bubble number transport equation with a distributed source from the wall nucleation sites is used. The model predictions in terms of the void fraction are compared to Moby Dick and BNL experimental data. Satisfactory agreement is obtained with the present model without any floating parameter adjusted to the data. This result indicates that, at least for the experimental conditions considered here, a mechanistic prediction of the flashing phenomenon is possible based on the present wall nucleation model. 43 refs., 4 figs
Modelling refrigerant distribution in microchannel evaporators
DEFF Research Database (Denmark)
Brix, Wiebke; Kærn, Martin Ryhl; Elmegaard, Brian
2009-01-01
The effects of refrigerant maldistribution in parallel evaporator channels on the heat exchanger performance are investigated numerically. For this purpose a 1D steady state model of refrigerant R134a evaporating in a microchannel tube is built and validated against other evaporator models. A study of the refrigerant distribution is carried out for two channels in parallel and for two different cases. In the first case maldistribution of the inlet quality into the channels is considered, and in the second case a non-uniform airflow on the secondary side is considered. In both cases the total mixed superheat out of the evaporator is kept constant. It is shown that the cooling capacity of the evaporator is reduced significantly, both in the case of unevenly distributed inlet quality and for the case of non-uniform airflow on the outside of the channels.
Fuzzy Approximate Model for Distributed Thermal Solar Collectors Control
Elmetennani, Shahrazed
2014-07-01
This paper deals with the problem of controlling concentrated solar collectors, where the objective consists of making the outlet temperature of the collector track a desired reference. The performance of the novel approximate model based on fuzzy theory, introduced by the authors in [1], is evaluated in comparison with other methods in the literature. The proposed approximation is a low-order state representation derived from the physical distributed model. It reproduces the temperature transfer dynamics through the collectors accurately and allows the simplification of the control design. Simulation results show interesting performance of the proposed controller.
Quantifying Distributional Model Risk via Optimal Transport
Blanchet, Jose; Murthy, Karthyek R. A.
2016-01-01
This paper deals with the problem of quantifying the impact of model misspecification when computing general expected values of interest. The methodology that we propose is applicable in great generality, in particular, we provide examples involving path dependent expectations of stochastic processes. Our approach consists in computing bounds for the expectation of interest regardless of the probability measure used, as long as the measure lies within a prescribed tolerance measured in terms ...
Language and modeling word problems in mathematics among bilinguals.
Bernardo, Allan B I
2005-09-01
The study was conducted to determine whether the language of math word problems would affect how Filipino-English bilingual problem solvers would model the structure of these word problems. Modeling the problem structure was studied using the problem-completion paradigm, which involves presenting problems without the question. The paradigm assumes that problem solvers can infer the appropriate question of a word problem if they correctly grasp its problem structure. Arithmetic word problems in Filipino and English were given to bilingual students, some of whom had Filipino as a first language and others who had English as a first language. The problem-completion data and solution data showed similar results. The language of the problem had no effect on problem-structure modeling. The results were discussed in relation to a more circumscribed view about the role of language in word problem solving among bilinguals. In particular, the results of the present study showed that linguistic factors do not affect the more mathematically abstract components of word problem solving, although they may affect the other components such as those related to reading comprehension and understanding.
Statistical models based on conditional probability distributions
International Nuclear Information System (INIS)
Narayanan, R.S.
1991-10-01
We present a formulation of statistical mechanics models based on conditional probability distribution rather than a Hamiltonian. We show that it is possible to realize critical phenomena through this procedure. Closely linked with this formulation is a Monte Carlo algorithm, in which a configuration generated is guaranteed to be statistically independent from any other configuration for all values of the parameters, in particular near the critical point. (orig.)
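Driving a statistical-mechanics model by conditional distributions rather than a Hamiltonian-based acceptance rule can be illustrated with a minimal heat-bath (Gibbs) sweep for a 1D chain of ±1 spins. This is a generic sketch of conditional-distribution sampling, not the formulation or algorithm of the paper, and all names are mine:

```python
import math
import random

def heat_bath_step(spins, beta, rng):
    """Resample each site from its conditional distribution given its
    two neighbours (periodic boundary), i.e. P(s_i = +1 | neighbours)."""
    n = len(spins)
    for i in range(n):
        field = spins[(i - 1) % n] + spins[(i + 1) % n]
        p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
        spins[i] = 1 if rng.random() < p_up else -1
    return spins

rng = random.Random(42)
spins = [rng.choice([-1, 1]) for _ in range(100)]
for _ in range(200):
    heat_bath_step(spins, beta=0.5, rng=rng)
print(sum(spins))
```

Because every update draws directly from a conditional distribution, the chain needs no Hamiltonian-based accept/reject step.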
The Effect of Problem Solving and Problem Posing Models and Innate Ability to Students Achievement
Directory of Open Access Journals (Sweden)
Ratna Kartika Irawati
2015-04-01
Full Text Available Pengaruh Model Problem Solving dan Problem Posing serta Kemampuan Awal terhadap Hasil Belajar Siswa Abstract: Understanding chemistry concepts, which are abstract in nature, requires higher order thinking skills, yet chemistry learning has not fostered students' higher order thinking skills. The use of the Problem Solving and Problem Posing learning models, taking account of students' innate ability, is expected to resolve this issue. This study aims to determine which learning model is effective in improving the achievement of students with different levels of innate ability. The study used a quasi-experimental design. The data were collected with a class test consisting of 14 multiple-choice questions and 5 essay questions and were analyzed with a two-way ANOVA. The results showed that Problem Posing is more effective than Problem Solving in improving student achievement; that students with a high level of innate ability achieve better learning outcomes than students with a low level of innate ability after the Problem Solving and Problem Posing models are applied; and that Problem Solving and Problem Posing are more suitable for students with a high level of innate ability. Key Words: problem solving, problem posing, higher order thinking skills, innate ability, learning outcomes
Nourifar, Raheleh; Mahdavi, Iraj; Mahdavi-Amiri, Nezam; Paydar, Mohammad Mahdi
2017-09-01
Decentralized supply chain management is found to be significantly relevant in today's competitive markets. Production and distribution planning is posed as an important optimization problem in supply chain networks. Here, we propose a multi-period decentralized supply chain network model with uncertainty. The imprecision related to uncertain parameters such as demand and price of the final product is captured with stochastic and fuzzy numbers. We provide a mathematical formulation of the problem as a bi-level mixed integer linear programming model. Due to the problem's complexity, a solution structure is developed that incorporates a novel heuristic algorithm based on the Kth-best algorithm, a fuzzy approach and a chance constraint approach. Ultimately, a numerical example is constructed and worked through to demonstrate the applicability of the optimization model. A sensitivity analysis is also made.
International Nuclear Information System (INIS)
Guilani, Pedram Pourkarim; Azimi, Parham; Niaki, S.T.A.; Niaki, Seyed Armin Akhavan
2016-01-01
The redundancy allocation problem (RAP) is a useful method to enhance system reliability. In most works involving RAP, failure rates of the system components are assumed to follow either exponential or k-Erlang distributions. In real-world problems, however, many systems have components with increasing failure rates. This indicates that as time passes, the failure rates of the system components increase in comparison to their initial failure rates. In this paper, the redundancy allocation problem of a series–parallel system with components having an increasing failure rate based on the Weibull distribution is investigated. An optimization method via simulation is proposed for modeling and a genetic algorithm is developed to solve the problem. - Highlights: • The redundancy allocation problem of a series–parallel system is addressed. • Components possess an increasing failure rate based on the Weibull distribution. • An optimization method via simulation is proposed for modeling. • A genetic algorithm is developed to solve the problem.
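A rough Monte Carlo sketch of evaluating a series–parallel design with Weibull components (shape > 1 gives an increasing failure rate): each subsystem survives as long as its longest-lived redundant component, and the system survives as long as its shortest-lived subsystem. This illustrates only the simulation-based evaluation step; it is not the paper's genetic algorithm, and the function names, scale and shape values are mine.

```python
import random

def system_lifetime(rng, design, scale=100.0, shape=1.5):
    """Lifetime of a series of subsystems; each subsystem has k parallel
    components with Weibull(scale, shape) lifetimes. shape > 1 means an
    increasing failure rate over time."""
    return min(                                  # series: weakest subsystem
        max(rng.weibullvariate(scale, shape)     # parallel: longest survivor
            for _ in range(k))
        for k in design
    )

def reliability(design, t, trials=20000, seed=1):
    """Estimated probability that the system survives past time t."""
    rng = random.Random(seed)
    ok = sum(system_lifetime(rng, design) > t for _ in range(trials))
    return ok / trials

# Two subsystems: no redundancy vs. triple redundancy in each.
print(reliability((1, 1), 50.0), reliability((3, 3), 50.0))
```

A GA as in the abstract would search over `design` (the redundancy levels) using such a simulated reliability as its fitness.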
Problems in physical modeling of magnetic materials
International Nuclear Information System (INIS)
Della Torre, E.
2004-01-01
Physical modeling of magnetic materials should give insights into the basic processes involved and should be able to extrapolate results to new situations that the models were not necessarily intended to solve. Thus, for example, if a model is designed to describe a static magnetization curve, it should also be able to describe aspects of magnetization dynamics. Both micromagnetic modeling and Preisach modeling, the two most popular magnetic models, fulfill this requirement, but in the process of fulfilling this requirement, they both had to be modified in some ways. Hence, we should view physical modeling as an iterative process whereby we start with some simple assumptions and refine them as reality requires. In the process of refining these assumptions, we should try to appeal to physical arguments for the modifications, if we are to come up with good models. If we consider phenomenological models, on the other hand, that is, as axiomatic models requiring no physical justification, we can follow them logically to the end and examine the consequences of their assumptions. In this way, we can learn the properties, limitations and achievements of the particular model. Physical and phenomenological models complement each other in furthering our understanding of the behavior of magnetic materials.
Ranking multivariate GARCH models by problem dimension
M. Caporin (Massimiliano); M.J. McAleer (Michael)
2010-01-01
textabstractIn the last 15 years, several Multivariate GARCH (MGARCH) models have appeared in the literature. The two most widely known and used are the Scalar BEKK model of Engle and Kroner (1995) and Ding and Engle (2001), and the DCC model of Engle (2002). Some recent research has begun to
Shao, Zhongshi; Pi, Dechang; Shao, Weishi
2018-05-01
This article presents an effective estimation of distribution algorithm, named P-EDA, to solve the blocking flow-shop scheduling problem (BFSP) with the makespan criterion. In the P-EDA, a Nawaz-Enscore-Ham (NEH)-based heuristic and the random method are combined to generate the initial population. Based on several superior individuals provided by a modified linear rank selection, a probabilistic model is constructed to describe the probabilistic distribution of the promising solution space. The path relinking technique is incorporated into EDA to avoid blindness of the search and improve the convergence property. A modified referenced local search is designed to enhance the local exploitation. Moreover, a diversity-maintaining scheme is introduced into EDA to avoid deterioration of the population. Finally, the parameters of the proposed P-EDA are calibrated using a design of experiments approach. Simulation results and comparisons with some well-performing algorithms demonstrate the effectiveness of the P-EDA for solving BFSP.
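A generic permutation-EDA skeleton conveys the family of algorithms the abstract describes: sample permutations from a job-position probability model, then shift the model toward the elite solutions. This is a simplified sketch with a toy cost function standing in for the blocking-flow-shop makespan; the NEH seeding, path relinking and local search of the P-EDA are omitted, and all names are mine.

```python
import random

def sample_perm(prob, rng):
    """Sample a permutation job-by-job from a job-position probability matrix."""
    n = len(prob)
    remaining = list(range(n))
    perm = []
    for pos in range(n):
        weights = [prob[job][pos] for job in remaining]
        job = rng.choices(remaining, weights=weights)[0]
        perm.append(job)
        remaining.remove(job)
    return perm

def update_model(prob, elites, rate=0.3):
    """Move the probabilistic model toward the positions used by the elites."""
    n = len(prob)
    for job in range(n):
        for pos in range(n):
            freq = sum(1 for p in elites if p[pos] == job) / len(elites)
            prob[job][pos] = (1 - rate) * prob[job][pos] + rate * freq
    return prob

def eda(cost, n, iters=50, pop=30, elite=5, seed=0):
    rng = random.Random(seed)
    prob = [[1.0 / n] * n for _ in range(n)]  # uniform initial model
    best = None
    for _ in range(iters):
        population = [sample_perm(prob, rng) for _ in range(pop)]
        population.sort(key=cost)
        if best is None or cost(population[0]) < cost(best):
            best = population[0]
        prob = update_model(prob, population[:elite])
    return best

# Toy cost: prefer sorted order (stands in for a schedule's makespan).
cost = lambda p: sum(abs(p[i] - i) for i in range(len(p)))
best = eda(cost, 6)
print(best, cost(best))
```

In the real algorithm, diversity maintenance keeps `prob` from collapsing prematurely; the convex update with `rate` above is the simplest such smoothing.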
Modeling the Structure and Complexity of Engineering Routine Design Problems
Jauregui Becker, Juan Manuel; Wits, Wessel Willems; van Houten, Frederikus J.A.M.
2011-01-01
This paper proposes a model to structure routine design problems as well as a model of its design complexity. The idea is that having a proper model of the structure of such problems enables understanding its complexity, and likewise, a proper understanding of its complexity enables the development
The two-model problem in rational decision making
Boumans, Marcel
2011-01-01
A model of a decision problem frames that problem in three dimensions: sample space, target probability and information structure. Each specific model imposes a specific rational decision. As a result, different models may impose different, even contradictory, rational decisions, creating choice
Nonparametric Estimation of Distributions in Random Effects Models
Hart, Jeffrey D.
2011-01-01
We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.
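Minimum distance estimation can be illustrated on a single location parameter: choose the parameter whose model CDF is closest to the empirical CDF. This toy sketch (normal location model, grid search, Cramér–von Mises-style distance) only conveys the principle, not the article's random-effects estimator or its algorithm; function names are mine.

```python
import math
import random

def ecdf(data):
    """Return the empirical CDF of the data as a function."""
    xs = sorted(data)
    n = len(xs)
    return lambda t: sum(x <= t for x in xs) / n

def norm_cdf(x, mu, sigma=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def min_distance_mu(data, grid):
    """Pick mu minimising a squared distance between the empirical CDF and
    the model CDF, evaluated at the data points."""
    F = ecdf(data)
    def dist(mu):
        return sum((F(x) - norm_cdf(x, mu)) ** 2 for x in data)
    return min(grid, key=dist)

rng = random.Random(3)
data = [rng.gauss(2.0, 1.0) for _ in range(300)]
grid = [i / 10 for i in range(0, 41)]  # candidate means 0.0 .. 4.0
print(min_distance_mu(data, grid))
```

The estimate lands near the true mean of 2.0; in the article's setting the same idea is applied to the common distribution underlying many small datasets.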
A distributed snow-evolution modeling system (SnowModel)
Glen E. Liston; Kelly. Elder
2006-01-01
SnowModel is a spatially distributed snow-evolution modeling system designed for application in landscapes, climates, and conditions where snow occurs. It is an aggregation of four submodels: MicroMet defines meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowPack simulates snow depth and water-equivalent evolution, and SnowTran-3D...
Modeling Coordination Problems in a Music Ensemble
DEFF Research Database (Denmark)
Frimodt-Møller, Søren R.
2008-01-01
This paper considers in general terms, how musicians are able to coordinate through rational choices in a situation of (temporary) doubt in an ensemble performance. A fictitious example involving a 5-bar development in an unknown piece of music is analyzed in terms of epistemic logic, more...... to coordinate. Such coordination can be described in terms of Michael Bacharach's theory of variable frames as an aid to solve game theoretic coordination problems....
Optimal consumption problem in the Vasicek model
Directory of Open Access Journals (Sweden)
Jakub Trybuła
2015-01-01
Full Text Available We consider the problem of an optimal consumption strategy on the infinite time horizon based on the hyperbolic absolute risk aversion utility when the interest rate is an Ornstein-Uhlenbeck process. Using the method of subsolution and supersolution we obtain the existence of solutions of the dynamic programming equation. We illustrate the paper with a numerical example of the optimal consumption strategy and the value function.
Krohling, Renato A; Coelho, Leandro dos Santos
2006-12-01
In this correspondence, an approach based on coevolutionary particle swarm optimization to solve constrained optimization problems formulated as min-max problems is presented. In standard or canonical particle swarm optimization (PSO), a uniform probability distribution is used to generate random numbers for the accelerating coefficients of the local and global terms. We propose a Gaussian probability distribution to generate the accelerating coefficients of PSO. Two populations of PSO using Gaussian distribution are used on the optimization algorithm that is tested on a suite of well-known benchmark constrained optimization problems. Results have been compared with the canonical PSO (constriction factor) and with a coevolutionary genetic algorithm. Simulation results show the suitability of the proposed algorithm in terms of effectiveness and robustness.
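The core idea above, drawing the PSO acceleration coefficients from a Gaussian rather than a uniform distribution, can be sketched with a single swarm; the paper's coevolutionary two-population scheme and min-max constraint handling are omitted, and the inertia weight and test function are my own choices.

```python
import random

def pso_gauss(f, dim=2, swarm=20, iters=200, seed=7):
    """Minimise f with PSO whose acceleration coefficients are |N(0,1)|
    draws instead of the canonical U(0,1) draws."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                # Gaussian draws replace the usual uniform coefficients.
                c1, c2 = abs(rng.gauss(0, 1)), abs(rng.gauss(0, 1))
                vel[i][d] = (0.7 * vel[i][d]
                             + c1 * (pbest[i][d] - pos[i][d])
                             + c2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso_gauss(sphere)
print(round(sphere(best), 4))
```

The heavier tail of |N(0,1)| occasionally produces large pulls toward the best positions, which is the exploration/exploitation trade-off the abstract exploits.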
Directory of Open Access Journals (Sweden)
Hao Zhang
2017-01-01
Full Text Available The problem of locating distribution centers for delivering fresh food as a part of electronic commerce is a strategic decision problem for enterprises. This paper establishes a model for locating distribution centers that considers the uncertainty of customer demands for fresh goods in terms of time-sensitiveness and freshness. Based on the methodology of robust optimization in dealing with uncertain problems, this paper optimizes the location model in discrete demand probabilistic scenarios. In this paper, an improved fruit fly optimization algorithm is proposed to solve the distribution center location problem. An example is given to show that the proposed model and algorithm are robust and can effectively handle the complications caused by uncertain demand. The model proposed in this paper proves valuable both theoretically and practically in the selection of locations of distribution centers.
Li, Zejing
2012-01-01
This dissertation is mainly devoted to the research of two problems - the continuous-time portfolio optimization in different Wishart models and the effects of discrete rebalancing on portfolio wealth distribution and optimal portfolio strategy.
Directory of Open Access Journals (Sweden)
S. M. J. Mirzapour Al-e-Hashem
2011-01-01
Full Text Available A multi-objective two-stage stochastic programming model is proposed to deal with a multi-period multi-product multi-site production-distribution planning problem over a midterm planning horizon. The presented model involves the majority of supply chain cost parameters such as transportation cost, inventory holding cost, shortage cost and production cost. Moreover, aspects such as lead time, outsourcing, employment, dismissal, worker productivity and training are considered. Due to the uncertain nature of the supply chain, it is assumed that cost parameters and demand fluctuations are random variables that follow a pre-defined probability distribution. To develop a robust stochastic model, an additional objective function is added to the traditional production-distribution planning problem. Thus, our multi-objective model includes (i) the minimization of the expected total cost of the supply chain, (ii) the minimization of the variance of the total cost of the supply chain and (iii) the maximization of worker productivity through training courses that could be held during the planning horizon. The proposed model is then solved by applying a hybrid algorithm that combines a Monte Carlo sampling method, a modified ε-constraint method and the L-shaped method. Finally, a numerical example is solved to demonstrate the validity of the model as well as the efficiency of the hybrid algorithm.
BAYESIAN MODELS FOR SPECIES DISTRIBUTION MODELLING WITH ONLY-PRESENCE RECORDS
Directory of Open Access Journals (Sweden)
Bartolo de Jesús Villar-Hernández
2015-08-01
Full Text Available One of the central issues in ecology is the study of the geographical distribution of species of flora and fauna through Species Distribution Models (SDM). Recently, scientific interest has focused on presence-only records. Two recent approaches have been proposed for this problem: a model based on the maximum likelihood method (Maxlike) and an inhomogeneous Poisson process model (IPP). In this paper we discuss two Bayesian approaches, called MaxBayes and IPPBayes, based on the Maxlike and IPP models, respectively. To illustrate these proposals, we implemented two study examples: (1) both models were implemented on a simulated dataset, and (2) we modeled the potential distribution of the genus Dalea in the Tehuacán-Cuicatlán biosphere reserve with both models, and the results were compared with those of Maxent. The results show that both models, MaxBayes and IPPBayes, are viable alternatives when species distributions are modeled with presence-only records. For the simulated dataset, MaxBayes achieved prevalence estimation even when the number of records was small. In the real dataset example, both models predict potential distributions similar to those of Maxent.
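The IPP approach scores an intensity surface against presence-only points: the log-likelihood is the sum of log intensities at the observed records minus the intensity integrated over the study region. A minimal numeric sketch (illustrative values only, not the paper's data or its Bayesian machinery; the grid-cell approximation of the integral is my simplification):

```python
import math

def ipp_loglik(point_intensities, cell_intensities, cell_area):
    """Log-likelihood of an inhomogeneous Poisson process:
    sum of log intensity at the observed presence points, minus the
    integrated intensity, here approximated by a sum over grid cells."""
    integral = sum(cell_intensities) * cell_area
    return sum(math.log(lam) for lam in point_intensities) - integral

# Toy example: intensity evaluated at 3 presence records and on a 4-cell grid.
print(ipp_loglik([2.0, 1.5, 0.8], [1.0, 2.0, 0.5, 0.5], 0.25))
```

In a real SDM the intensities would come from a fitted log-linear model of environmental covariates, and the likelihood would be maximized (or sampled, in the Bayesian variants) over its coefficients.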
AbdulJabbar, Mustafa Abdulmajeed
2017-05-11
Reduction of communication and efficient partitioning are key issues for achieving scalability in hierarchical N-Body algorithms like Fast Multipole Method (FMM). In the present work, we propose three independent strategies to improve partitioning and reduce communication. First, we show that the conventional wisdom of using space-filling curve partitioning may not work well for boundary integral problems, which constitute a significant portion of FMM’s application user base. We propose an alternative method that modifies orthogonal recursive bisection to relieve the cell-partition misalignment that has kept it from scaling previously. Secondly, we optimize the granularity of communication to find the optimal balance between a bulk-synchronous collective communication of the local essential tree and an RDMA per task per cell. Finally, we take the dynamic sparse data exchange proposed by Hoefler et al. [1] and extend it to a hierarchical sparse data exchange, which is demonstrated at scale to be faster than the MPI library’s MPI_Alltoallv that is commonly used.
Reconsideration of mass-distribution models
Directory of Open Access Journals (Sweden)
Ninković S.
2014-01-01
Full Text Available The mass-distribution model proposed by Kuzmin and Veltmann (1973) is revisited. It is subdivided into two models which have a common case; only one of them is the subject of the present study. The study focuses on the relation between the density ratio (the central density to that at the core radius) and the total-mass fraction within the core radius. The latter is an increasing function of the former, but it cannot exceed one quarter, which takes place when the density ratio tends to infinity. Therefore, the model is extended by representing the density as a sum of two components. The extension makes it possible to have a correspondence between an infinite density ratio and a 100% total-mass fraction. The number of parameters in the extended model exceeds that of the original model. Due to this, in the extended model, the correspondence between the density ratio and the total-mass fraction is no longer one-to-one; several values of the total-mass fraction can correspond to the same value of the density ratio. In this way, the extended model could explain the possibility of having two, or more, groups of real stellar systems (subsystems) in the diagram of total-mass fraction versus density ratio. [Projekat Ministarstva nauke Republike Srbije, br. 176011: Dynamics and Kinematics of Celestial Bodies and Systems
Anger in Middle School: The Solving Problems Together Model
Hall, Kimberly R.; Rushing, Jeri L.; Owens, Rachel B.
2009-01-01
Problem-focused interventions are considered to be one of the most effective group counseling strategies with adolescents. This article describes a problem-focused group counseling model, Solving Problems Together (SPT), with a small group of adolescent African American boys struggling with anger management. Adapted from the teaching philosophy of…
A Problem-Solving Model for Literacy Coaching Practice
Toll, Cathy A.
2017-01-01
Literacy coaches are more effective when they have a clear plan for their collaborations with teachers. This article provides details of such a plan, which involves identifying a problem, understanding the problem, deciding what to do differently, and trying something different. For each phase of the problem-solving model, there are key tasks for…
Ballistic model to estimate microsprinkler droplet distribution
Directory of Open Access Journals (Sweden)
Conceição Marco Antônio Fonseca
2003-01-01
Full Text Available Experimental determination of microsprinkler droplets is difficult and time-consuming. This determination, however, can be achieved using ballistic models. The present study aimed to compare simulated and measured values of microsprinkler droplet diameters. Experimental measurements were made using the flour method, and simulations using a ballistic model adopted by the SIRIAS computational software. Drop diameters quantified in the experiment varied between 0.30 mm and 1.30 mm, while the simulated diameters varied between 0.28 mm and 1.06 mm. The greatest differences between simulated and measured values were registered at the greatest radial distance from the emitter. The model's performance was classified as excellent for simulating microsprinkler drop distribution.
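A ballistic droplet model of this kind integrates gravity and air drag along the droplet's flight. The sketch below uses quadratic drag with a fixed drag coefficient and explicit Euler steps (illustrative assumptions of ours; SIRIAS uses its own drag formulation and parameters):

```python
import math

def droplet_range(diameter_m, v0, angle_deg, dt=1e-4,
                  rho_air=1.2, rho_water=1000.0, cd=0.45, g=9.81):
    """Horizontal distance travelled by a water droplet launched from
    the emitter, integrating gravity plus quadratic air drag
    (F_drag = 0.5 * rho_air * cd * A * v^2, opposing the velocity)
    with explicit Euler steps until the droplet returns to launch height."""
    r = diameter_m / 2.0
    mass = rho_water * (4.0 / 3.0) * math.pi * r ** 3
    area = math.pi * r ** 2
    k = 0.5 * rho_air * cd * area
    x, y = 0.0, 0.0
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    while y >= 0.0:
        v = math.hypot(vx, vy)
        ax = -k * v * vx / mass
        ay = -g - k * v * vy / mass
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x
```

Because drag scales with area while inertia scales with volume, smaller droplets decelerate faster and land nearer the emitter, which is why the largest simulation-versus-measurement differences appear at the greatest radial distances.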
Wright, Adam; Bates, David W
2010-01-01
BACKGROUND: Many natural phenomena demonstrate power-law distributions, where very common items predominate. Problems, medications and lab results represent some of the most important data elements in medicine, but their overall distribution has not been reported. OBJECTIVE: Our objective is to determine whether problems, medications and lab results demonstrate a power law distribution. METHODS: Retrospective review of electronic medical record data for 100,000 randomly selected patients seen at least twice in 2006 and 2007 at the Brigham and Women's Hospital in Boston and its affiliated medical practices. RESULTS: All three data types exhibited a power law distribution. The 12.5% most frequently used problems account for 80% of all patient problems, the top 11.8% of medications account for 80% of all medication orders and the top 4.5% of lab result types account for 80% of all lab results. CONCLUSION: These three data elements exhibited power law distributions with a small number of common items representing a substantial proportion of all orders and observations, which has implications for electronic health record design.
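Coverage figures of this kind come from sorting type frequencies in descending order and counting how many distinct types are needed to reach a target share of all occurrences. A minimal sketch:

```python
def top_fraction_covering(counts, coverage=0.8):
    """Fraction of distinct item types whose (descending) frequencies
    account for at least `coverage` of all occurrences -- the statistic
    behind statements like 'the top 12.5% of problems account for 80%
    of all patient problems'. `counts` maps item type -> frequency."""
    freqs = sorted(counts.values(), reverse=True)
    total = sum(freqs)
    running, k = 0, 0
    for f in freqs:
        running += f
        k += 1
        if running >= coverage * total:
            break
    return k / len(freqs)
```

For a heavily skewed distribution such as {'a': 8, 'b': 1, 'c': 1}, a single type out of three (one third of the types) already covers 80% of occurrences.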
Simulation model of load balancing in distributed computing systems
Botygin, I. A.; Popov, V. N.; Frolov, S. G.
2017-02-01
The availability of high-performance computing, high-speed data transfer over networks, and the widespread use of software for design and pre-production in mechanical engineering have led large industrial enterprises and small engineering companies alike to implement complex computer systems for the efficient solution of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key subjects of research, but the system-wide problems of efficiently distributing (balancing) the computational load and of accommodating input, intermediate, and output databases are no less important. The main tasks of such a balancing system are load and condition monitoring of the compute nodes and the selection of a node to which a user's request is assigned in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system that dynamically changes its infrastructure is an important task.
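A balancing system of the kind described monitors node loads and picks a target node by a predetermined rule; the simplest such rule is least-relative-load dispatch (a minimal sketch of ours, not the paper's algorithm — real systems add capacity models, queueing, and staleness handling):

```python
def pick_node(loads, weights=None):
    """Select the compute node for the next task: the node with the
    lowest current relative load (load divided by a capacity weight).
    With equal weights this is plain least-loaded dispatch."""
    weights = weights or [1.0] * len(loads)
    return min(range(len(loads)), key=lambda i: loads[i] / weights[i])
```

A node with twice the capacity weight tolerates twice the absolute load before losing its preference, which is the usual way heterogeneous nodes are folded into one ranking.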
Solid mechanics theory, modeling, and problems
Bertram, Albrecht
2015-01-01
This textbook offers an introduction to modeling the mechanical behavior of solids within continuum mechanics and thermodynamics. To illustrate the fundamental principles, the book starts with an overview of the most important models in one dimension. Tensor calculus, which is called for in three-dimensional modeling, is concisely presented in the second part of the book. Once the reader is equipped with these essential mathematical tools, the third part of the book develops the foundations of continuum mechanics right from the beginning. Lastly, the book’s fourth part focuses on modeling the mechanics of materials and in particular elasticity, viscoelasticity and plasticity. Intended as an introductory textbook for students and for professionals interested in self-study, it also features numerous worked-out examples to aid in understanding.
A Probabilistic Model for Uncertain Problem Solving
National Research Council Canada - National Science Library
Farley, Arthur M
1981-01-01
... and provide pragmatic focusing. Search methods are generalized to produce tree-structured plans incorporating the use of such operators. Several application domains for the model also are discussed.
Distributed Bayesian Networks for User Modeling
DEFF Research Database (Denmark)
Tedesco, Roberto; Dolog, Peter; Nejdl, Wolfgang
2006-01-01
The World Wide Web is a popular platform for providing eLearning applications to a wide spectrum of users. However – as users differ in their preferences, background, requirements, and goals – applications should provide personalization mechanisms. In the Web context, user models used by such adaptive applications are often partial fragments of an overall user model. The fragments have then to be collected and merged into a global user profile. In this paper we investigate and present algorithms able to cope with distributed, fragmented user models – based on Bayesian Networks – in the context of Web-based eLearning platforms. The scenario we are tackling assumes learners who use several systems over time, which are able to create partial Bayesian Networks for user models based on the local system context. In particular, we focus on how to merge these partial user models. Our merge mechanism…
Data-Driven Model Order Reduction for Bayesian Inverse Problems
Cui, Tiangang; Youssef, Marzouk; Willcox, Karen
2014-01-01
One of the major challenges in using MCMC for the solution of inverse problems is the repeated evaluation of computationally expensive numerical models. We develop a data-driven projection- based model order reduction technique to reduce
Discrete and Continuous Models for Partitioning Problems
Lellmann, Jan
2013-04-11
Recently, variational relaxation techniques for approximating solutions of partitioning problems on continuous image domains have received considerable attention, since they introduce significantly less artifacts than established graph cut-based techniques. This work is concerned with the sources of such artifacts. We discuss the importance of differentiating between artifacts caused by discretization and those caused by relaxation and provide supporting numerical examples. Moreover, we consider in depth the consequences of a recent theoretical result concerning the optimality of solutions obtained using a particular relaxation method. Since the employed regularizer is quite tight, the considered relaxation generally involves a large computational cost. We propose a method to significantly reduce these costs in a fully automatic way for a large class of metrics including tree metrics, thus generalizing a method recently proposed by Strekalovskiy and Cremers (IEEE conference on computer vision and pattern recognition, pp. 1905-1911, 2011). © 2013 Springer Science+Business Media New York.
Modeling and Solving the Train Pathing Problem
Directory of Open Access Journals (Sweden)
Chuen-Yih Chen
2009-04-01
Full Text Available In a railroad system, train pathing is concerned with the assignment of trains to links and tracks, and train timetabling allocates time slots to trains. In this paper, we present an optimization heuristic to solve the train pathing and timetabling problem. This heuristic allows the dwell time of trains in a station or link to be dependent on the assigned tracks. It also allows the minimum clearance time between the trains to depend on their relative status. The heuristic generates a number of alternative paths for each train service in the initialization phase. Then it uses a neighborhood search approach to find good feasible combinations of these paths. A linear program is developed to evaluate the quality of each combination that is encountered. Numerical examples are provided.
Model of bidirectional reflectance distribution function for metallic materials
International Nuclear Information System (INIS)
Wang Kai; Zhu Jing-Ping; Liu Hong; Hou Xun
2016-01-01
Based on the three-component assumption that the reflection is divided into specular reflection, directional diffuse reflection, and ideal diffuse reflection, a bidirectional reflectance distribution function (BRDF) model of metallic materials is presented. Compared with the two-component assumption that the reflection is composed of specular reflection and diffuse reflection, the three-component assumption divides the diffuse reflection into directional diffuse and ideal diffuse reflection. This model effectively resolves the problem that constant diffuse reflection leads to considerable error for metallic materials. Simulation and measurement results validate that this three-component BRDF model can improve the modeling accuracy significantly and describe the reflection properties in the hemisphere space precisely for the metallic materials. (paper)
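The three-component decomposition can be written as a sum of a specular lobe, a directional-diffuse lobe, and a Lambertian ideal-diffuse term. The sketch below uses Gaussian lobes in the plane of incidence purely for illustration; the paper's exact lobe models and parameters are not reproduced here:

```python
import math

def brdf_three_component(theta_i, theta_r, ks, kdd, kd,
                         sigma=0.2, m=0.3):
    """Illustrative three-component BRDF in the plane of incidence:
    a narrow Gaussian specular lobe centred on the mirror direction
    (theta_r = theta_i), a broader Gaussian directional-diffuse lobe,
    and a Lambertian ideal-diffuse term kd/pi. Angles in radians;
    ks, kdd, kd are the component weights."""
    specular = ks * math.exp(-((theta_r - theta_i) ** 2) / (2 * sigma ** 2))
    directional = (kdd * math.exp(-((theta_r - theta_i) ** 2) / (2 * m ** 2))
                   * math.cos(theta_r))
    ideal = kd / math.pi
    return specular + directional + ideal
```

The point of the third term is visible even in this toy form: far from the mirror direction the reflectance does not collapse to a constant kd/pi, because the directional-diffuse lobe still contributes, which is what reduces the error for metallic surfaces relative to a two-component model.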
Ghosh, Diptesh; Chakrabarti, Anindya S.
2017-10-01
In this paper, we study a large-scale distributed coordination problem and propose efficient adaptive strategies to solve it. The basic problem is to allocate a finite number of resources to individual agents in the absence of a central planner such that there is as little congestion as possible and the fraction of unutilized resources is reduced as far as possible. In the absence of a central planner and global information, agents can employ adaptive strategies that use only finite knowledge about their competitors. In this paper, we show that a combination of finite information sets and reinforcement learning can increase the utilization fraction of resources substantially.
Research on consumable distribution mode of shipbuilder’s shop based on vehicle routing problem
Directory of Open Access Journals (Sweden)
Xiang Su
2017-02-01
Full Text Available A distribution vehicle optimization model is established to address the long requisition periods and high shop costs caused by the existing consumable requisition mode in shipbuilders' shops. The shortest traveling distance of the distribution vehicles is calculated with a genetic algorithm (GA). Explorations are made into a shop consumable distribution mode to help shipbuilders effectively reduce their production logistics costs and enhance their internal material management, and to provide a reference for moving away from traditional practices toward just-in-time (JIT) production.
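A GA-style search for the shortest distribution tour can be sketched as follows (a deliberately stripped-down evolutionary loop with selection and swap mutation only; the paper's GA and its shop-consumable constraints are not reproduced):

```python
import random

def route_length(route, dist):
    """Total closed-tour distance: depot (node 0) -> stops -> depot."""
    tour = [0] + route + [0]
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def evolve_route(dist, pop_size=30, generations=200, seed=1):
    """Minimize tour length over stops 1..n-1 with a minimal
    evolutionary search: keep the shorter half of the population each
    generation and refill it with swap-mutated copies of the survivors."""
    rng = random.Random(seed)
    stops = list(range(1, len(dist)))
    pop = []
    for _ in range(pop_size):
        r = stops[:]
        rng.shuffle(r)
        pop.append(r)
    for _ in range(generations):
        pop.sort(key=lambda r: route_length(r, dist))
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(len(child)), 2)  # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda r: route_length(r, dist))
```

On a unit square with the depot at one corner, the search recovers the perimeter tour of length 4, which is the known optimum for that instance.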
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
Martinez-Luaces, Victor E.
2013-01-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…
Following the Template: Transferring Modeling Skills to Nonstandard Problems
Tyumeneva, Yu. A.; Goncharova, M. V.
2017-01-01
This study seeks to analyze how students apply a mathematical modeling skill that was previously learned by solving standard word problems to the solution of word problems with nonstandard contexts. During the course of an experiment involving 106 freshmen, we assessed how well they were able to transfer the mathematical modeling skill that is…
Facilitating Change to a Problem-based Model
DEFF Research Database (Denmark)
Kolmos, Anette
2002-01-01
The paper presents the barriers which arise during the change process from a traditional educational system to a problem-based educational model.
An ontological framework for model-based problem-solving
Scholten, H.; Beulens, A.J.M.
2012-01-01
Multidisciplinary projects to solve real world problems of increasing complexity are more and more plagued by obstacles such as miscommunication between modellers with different disciplinary backgrounds and bad modelling practices. To tackle these difficulties, a body of knowledge on problems, on
Broadband model of the distribution network
DEFF Research Database (Denmark)
Jensen, Martin Høgdahl
for circular conductors involving Bessel series. The two methods show equal values of resistance, but there is considerable difference in the values of internal inductance. A method for calculation of the proximity effect is derived for a two-conductor configuration. This method is expanded to the use...... of frequencies up to 200 kHz. The square wave measurements reveal the complete capacitance matrix at a frequency of approximately 12.5 MHz as well as the series inductance between the four conductors. The influence of non-ideal ground could not be measured due to the high impedance of the grounding device...... measurement and simulation, once the Phase model is used. No explanation is found for why the new material properties cause error in the Phase model. At the Kyndby 10 kV test site a non-linear load is inserted on the secondary side of a normal distribution transformer and the phase voltage and current
Characteristics-based modelling of flow problems
International Nuclear Information System (INIS)
Saarinen, M.
1994-02-01
The method of characteristics is an exact way to proceed to the solution of hyperbolic partial differential equations. The numerical solutions, however, are obtained in the fixed computational grid where interpolations of values between the mesh points cause numerical errors. The Piecewise Linear Interpolation Method, PLIM, the utilization of which is based on the method of characteristics, has been developed to overcome these deficiencies. The thesis concentrates on the computer simulation of the two-phase flow. The main topics studied are: (1) the PLIM method has been applied to study the validity of the numerical scheme through solving various flow problems to achieve knowledge for the further development of the method, (2) the mathematical and physical validity and applicability of the two-phase flow equations based on the SFAV (Separation of the two-phase Flow According to Velocities) approach has been studied, and (3) The SFAV approach has been further developed for particular cases such as stratified horizontal two-phase flow. (63 refs., 4 figs.)
1991-06-01
Interim Report: Distributed Problem Solving: Adaptive Networks With a Computer Intermediary Resource: Intelligent Executive Computer Communication. John Lyman and Carla J. Conaway, University of California at Los Angeles. Proceedings of The National Conference on Artificial Intelligence, pages 181-184, The American Association for Artificial Intelligence, Pittsburgh.
Modelling the joint distribution of competing risks survival times using copula functions
Kaishev, V. K.; Haberman, S.; Dimitrova, D. S.
2005-01-01
The problem of modelling the joint distribution of survival times in a competing risks model, using copula functions is considered. In order to evaluate this joint distribution and the related overall survival function, a system of non-linear differential equations is solved, which relates the crude and net survival functions of the modelled competing risks, through the copula. A similar approach to modelling dependent multiple decrements was applied by Carriere (1994) who used a Gaussian cop...
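The copula construction links the marginal (net) survival functions into a joint survival function. As an illustration with one concrete one-parameter family (the Clayton copula; the paper treats general copulas, including the Gaussian one mentioned above):

```python
def clayton_survival_copula(s1, s2, theta):
    """Joint survival probability obtained by linking two marginal
    survival probabilities s1, s2 with a Clayton copula,
    C(u, v) = (u**-theta + v**-theta - 1)**(-1/theta), theta > 0.
    As theta -> 0 this approaches independence, C(u, v) -> u * v."""
    return (s1 ** (-theta) + s2 ** (-theta) - 1.0) ** (-1.0 / theta)
```

Larger theta induces stronger positive dependence between the two competing-risk survival times; the independence limit is a convenient numerical check.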
How can model comparison help improving species distribution models?
Directory of Open Access Journals (Sweden)
Emmanuel Stephan Gritti
Full Text Available Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie on the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based models could however be improved by integrating a more realistic representation of the species' resistance to water stress, for instance, advocating for pursuing efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes.
Model Reduction using Vorobyev Moment Problem
Czech Academy of Sciences Publication Activity Database
Strakoš, Zdeněk
2009-01-01
Roč. 51, č. 3 (2009), s. 363-379 ISSN 1017-1398 R&D Projects: GA AV ČR IAA100300802 Institutional research plan: CEZ:AV0Z10300504 Keywords : matching moments * model reduction * Krylov subspace methods * conjugate gradient method * Lanczos method * Arnoldi method * Gauss-Christoffel quadrature * scattering amplitude Subject RIV: BA - General Mathematics Impact factor: 0.716, year: 2009
Cost Optimisation in Freight Distribution with Cross-Docking: N-Echelon Location Routing Problem
Directory of Open Access Journals (Sweden)
Jesus Gonzalez-Feliu
2012-03-01
Full Text Available Freight transportation constitutes one of the main activities that influence the economy and society, as it assures a vital link between suppliers and customers and represents a major source of employment. Multi-echelon distribution is one of the most common strategies adopted by transportation companies with the aim of reducing costs. Although vehicle routing problems are very common in operational research, they are essentially related to single-echelon cases. This paper presents the main concepts of multi-echelon distribution with cross-docks and a unified notation for the N-echelon location routing problem. A literature review is also presented, in order to list the main problems and methods that can be helpful for scientists and transportation practitioners.
Our evolving conceptual model of the coastal eutrophication problem
Cloern, James E.
2001-01-01
A primary focus of coastal science during the past 3 decades has been the question: How does anthropogenic nutrient enrichment cause change in the structure or function of nearshore coastal ecosystems? This theme of environmental science is recent, so our conceptual model of the coastal eutrophication problem continues to change rapidly. In this review, I suggest that the early (Phase I) conceptual model was strongly influenced by limnologists, who began intense study of lake eutrophication by the 1960s. The Phase I model emphasized changing nutrient input as a signal, and responses to that signal as increased phytoplankton biomass and primary production, decomposition of phytoplankton-derived organic matter, and enhanced depletion of oxygen from bottom waters. Coastal research in recent decades has identified key differences in the responses of lakes and coastal-estuarine ecosystems to nutrient enrichment. The contemporary (Phase II) conceptual model reflects those differences and includes explicit recognition of (1) system-specific attributes that act as a filter to modulate the responses to enrichment (leading to large differences among estuarine-coastal systems in their sensitivity to nutrient enrichment); and (2) a complex suite of direct and indirect responses including linked changes in: water transparency, distribution of vascular plants and biomass of macroalgae, sediment biogeochemistry and nutrient cycling, nutrient ratios and their regulation of phytoplankton community composition, frequency of toxic/harmful algal blooms, habitat quality for metazoans, reproduction/growth/survival of pelagic and benthic invertebrates, and subtle changes such as shifts in the seasonality of ecosystem functions. Each aspect of the Phase II model is illustrated here with examples from coastal ecosystems around the world. In the last section of this review I present one vision of the next (Phase III) stage in the evolution of our conceptual model, organized around 5
Decoding Problem Gamblers' Signals: A Decision Model for Casino Enterprises.
Ifrim, Sandra
2015-12-01
The aim of the present study is to offer a validated decision model for casino enterprises. The model enables its users to detect problem gamblers early and to fulfill their ethical duty of social cost minimization. To this end, the interpretation of casino customers' nonverbal communication is understood as a signal-processing problem. Indicators of problem gambling recommended by Delfabbro et al. (Identifying problem gamblers in gambling venues: final report, 2007) are combined with the Viterbi algorithm into an interdisciplinary model that helps decode signals emitted by casino customers. The model output consists of a historical path of mental states and the cumulated social costs associated with a particular client. Groups of problem and non-problem gamblers were simulated to investigate the model's diagnostic capability and its cost-minimization ability. Each group consisted of 26 subjects and was subsequently enlarged to 100 subjects. In approximately 95% of the cases, mental states were correctly decoded for problem gamblers. Statistical analysis using planned contrasts revealed that the model is relatively robust both to the suppression of signals by casino clientele facing gambling problems and to misjudgments made by staff regarding the clients' mental states. Only if the last-mentioned source of error is very pronounced, i.e., judgment is extremely faulty, might the cumulated social costs be distorted.
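The Viterbi step of such a model decodes the most likely path of hidden mental states from a sequence of observed behavioural indicators. A standard log-space Viterbi sketch (the state names, signals, and probabilities in the usage example are invented for illustration, not taken from the paper):

```python
import math

def viterbi(observations, states, start_p, trans_p, emit_p):
    """Most likely sequence of hidden states given observed signals:
    the standard Viterbi dynamic program over log-probabilities.
    start_p[s], trans_p[s][s'], emit_p[s][obs] are probabilities."""
    # Each layer maps state -> (best log-prob so far, best path so far).
    V = [{s: (math.log(start_p[s]) + math.log(emit_p[s][observations[0]]), [s])
          for s in states}]
    for obs in observations[1:]:
        layer = {}
        for s in states:
            best_prev, (best_lp, best_path) = max(
                ((p, V[-1][p]) for p in states),
                key=lambda kv: kv[1][0] + math.log(trans_p[kv[0]][s]))
            lp = (best_lp + math.log(trans_p[best_prev][s])
                  + math.log(emit_p[s][obs]))
            layer[s] = (lp, best_path + [s])
        V.append(layer)
    return max(V[-1].values(), key=lambda v: v[0])[1]
```

With two hypothetical states ('normal', 'problem-gambling') and observable signals such as visible agitation, a run of agitated signals flips the decoded path to the problem state even when the prior strongly favours the normal state.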
Effectiveness of discovery learning model on mathematical problem solving
Herdiana, Yunita; Wahyudin, Sispiyati, Ririn
2017-08-01
This research aims to describe the effectiveness of the discovery learning model for mathematical problem solving. It investigates students' problem-solving competency before and after learning with the discovery learning model. The population was grade VII students in a junior high school in West Bandung Regency. From nine classes, class VII B was randomly selected as the experiment class and class VII C as the control class, each consisting of 35 students. The method was a quasi-experiment. The instruments were a pre-test, a worksheet, and a post-test on mathematical problem solving. Based on the research, it can be concluded that the problem-solving competency of students taught with the discovery learning model reaches the 80% level, which falls in the medium category, showing that the discovery learning model is effective in improving mathematical problem solving.
Collaborative problem solving with a total quality model.
Volden, C M; Monnig, R
1993-01-01
A collaborative problem-solving system committed to the interests of those involved complies with the teachings of the total quality management movement in health care. Deming espoused that any quality system must become an integral part of routine activities. A process that is used consistently in dealing with problems, issues, or conflicts provides a mechanism for accomplishing total quality improvement. The collaborative problem-solving process described here results in quality decision-making. This model incorporates Ishikawa's cause-and-effect (fishbone) diagram, Moore's key causes of conflict, and the steps of the University of North Dakota Conflict Resolution Center's collaborative problem solving model.
Belkina, T. A.; Konyukhova, N. B.; Kurochkin, S. V.
2016-01-01
Previous and new results are used to compare two mathematical insurance models with identical insurance company strategies in a financial market, namely, when the entire current surplus or a constant fraction of it is invested in risky assets (stocks), while the rest of the surplus is invested in a risk-free asset (bank account). Model I is the classical Cramér-Lundberg risk model with an exponential claim size distribution. Model II is a modification of the classical risk model (a risk process with stochastic premiums) with exponential distributions of claim and premium sizes. For the survival probability of an insurance company over infinite time (as a function of its initial surplus), there arise singular problems for second-order linear integrodifferential equations (IDEs) defined on a semi-infinite interval and having nonintegrable singularities at zero: model I leads to a singular constrained initial value problem for an IDE with a Volterra integral operator, while model II leads to a more complicated nonlocal constrained problem for an IDE with a non-Volterra integral operator. A brief overview of previous results for these two problems depending on several positive parameters is given, and new results are presented. Additional results are concerned with the formulation, analysis, and numerical study of "degenerate" problems for both models, i.e., problems in which some of the IDE parameters vanish; moreover, the passages to the limit with respect to the parameters through which we proceed from the original problems to the degenerate ones are singular for small and/or large argument values. Such problems are of mathematical and practical interest in themselves. Along with insurance models without investment, they describe the case of a surplus completely invested in risk-free assets, as well as some noninsurance models of surplus dynamics, for example, charity-type models.
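For orientation, the no-investment baseline that model I generalizes has a closed-form survival probability when claim sizes are exponential. A sketch under that textbook setting (classical Cramér-Lundberg with no investment and net profit condition c > lam/beta; this is background, not the paper's IDE problems):

```python
import math

def survival_probability(u, lam, beta, c):
    """Infinite-horizon survival probability in the classical
    Cramér-Lundberg model: Poisson claim rate `lam`, exponential claim
    sizes with rate `beta` (mean 1/beta), premium rate `c`, and no
    investment. The textbook ruin probability is
    psi(u) = (lam/(c*beta)) * exp(-(beta - lam/c) * u),
    and survival is 1 - psi(u), valid for c > lam/beta."""
    ruin = (lam / (c * beta)) * math.exp(-(beta - lam / c) * u)
    return 1.0 - ruin
```

At zero initial surplus the survival probability is 1 - lam/(c*beta), and it increases toward 1 as the initial surplus grows, which is the qualitative behaviour the singular IDE problems recover in the investment setting.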
Storage Solutions for Power Quality Problems in Cyprus Electricity Distribution Network
Directory of Open Access Journals (Sweden)
Andreas Poullikkas
2014-01-01
Full Text Available In this work, a prediction of the effects of introducing energy storage systems on the network stability of the distribution network of Cyprus and a comparison in terms of cost with a traditional solution is carried out. In particular, for solving possible overvoltage problems, several scenarios of storage units' installation are used and compared with the alternative solution of extra cable connection between the node with the lowest voltage and the node with the highest voltage of the distribution network. For the comparison, a case study of a typical LV distribution feeder in the power system of Cyprus is used. The results indicated that the performance indicator of each solution depends on the type, the size and the position of installation of the storage unit. Also, as more storage units are installed the better the performance indicator and the more attractive is the investment in storage units to solve power quality problems in the distribution network. In the case where the technical requirements in voltage limitations according to distribution regulations are satisfied with one storage unit, the installation of an additional storage unit will only increase the final cost. The best solution, however, still remains the alternative solution of extra cable connection between the node with the lowest voltage and the node with the highest voltage of the distribution network, due to the lower investment costs compared to that of the storage units.
Nasution, M. L.; Yerizon, Y.; Gusmiyanti, R.
2018-04-01
One of the purposes of mathematics learning is to develop problem-solving abilities. Problem solving is acquired through experience with non-routine questions. Improving students' mathematical problem-solving abilities requires an appropriate strategy in learning activities, one of which is the problem-based learning (PBL) model. Thus, the purpose of this research is to determine whether the mathematical problem-solving abilities of students who learn with PBL are better than those of students taught with conventional learning. This research was a quasi-experiment with a static group design, and the population was students of class XI MIA SMAN 1 Lubuk Alung. The experimental class was XI MIA 5 and the control class was XI MIA 6. The final test of students' mathematical problem solving used essay form. The final test data were analyzed with a t-test. The result is that the mathematical problem-solving abilities of students taught with PBL are better than those of students taught with conventional learning. This is seen from the higher percentage achieved, for each indicator of mathematical problem solving, by the group of students who learned with PBL.
Regularized multivariate regression models with skew-t error distributions
Chen, Lianfu
2014-06-01
We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.
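The L1-penalized updates at the heart of such procedures reduce, coordinate by coordinate, to the soft-thresholding operator. The sketch below shows plain coordinate-descent lasso for a single response; it illustrates the penalization mechanism only and is not the paper's ECM algorithm for skew-t errors.

```python
def soft_threshold(z, t):
    """Proximal operator of t*|.|: shrinks z toward zero by t."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_cd(X, y, lam, iters=100):
    """Coordinate-descent lasso: minimize (1/2n)||y - X b||^2 + lam*||b||_1."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # residual with coordinate j removed
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            zj = sum(X[i][j] * r[i] for i in range(n)) / n
            nj = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(zj, lam) / nj
    return beta
```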
Model Predictive Control for Distributed Microgrid Battery Energy Storage Systems
DEFF Research Database (Denmark)
Morstyn, Thomas; Hredzak, Branislav; Aguilera, Ricardo P.
2018-01-01
This brief proposes a new convex model predictive control (MPC) strategy for dynamic optimal power flow between battery energy storage (ES) systems distributed in an ac microgrid. The proposed control strategy uses a new problem formulation, based on a linear $d$–$q$ reference frame voltage......, and converter current constraints to be addressed. In addition, nonlinear variations in the charge and discharge efficiencies of lithium ion batteries are analyzed and included in the control strategy. Real-time digital simulations were carried out for an islanded microgrid based on the IEEE 13 bus prototypical...... feeder, with distributed battery ES systems and intermittent photovoltaic generation. It is shown that the proposed control strategy approaches the performance of a strategy based on nonconvex optimization, while reducing the required computation time by a factor of 1000, making it suitable for a real...
A Distributed Snow Evolution Modeling System (SnowModel)
Liston, G. E.; Elder, K.
2004-12-01
A spatially distributed snow-evolution modeling system (SnowModel) has been specifically designed to be applicable over a wide range of snow landscapes, climates, and conditions. To reach this goal, SnowModel is composed of four sub-models: MicroMet defines the meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowMass simulates snow depth and water-equivalent evolution, and SnowTran-3D accounts for snow redistribution by wind. While other distributed snow models exist, SnowModel is unique in that it includes a well-tested blowing-snow sub-model (SnowTran-3D) for application in windy arctic, alpine, and prairie environments where snowdrifts are common. These environments comprise 68% of the seasonally snow-covered Northern Hemisphere land surface. SnowModel also accounts for snow processes occurring in forested environments (e.g., canopy interception related processes). SnowModel is designed to simulate snow-related physical processes occurring at spatial scales of 5-m and greater, and temporal scales of 1-hour and greater. These include: accumulation from precipitation; wind redistribution and sublimation; loading, unloading, and sublimation within forest canopies; snow-density evolution; and snowpack ripening and melt. To enhance its wide applicability, SnowModel includes the physical calculations required to simulate snow evolution within each of the global snow classes defined by Sturm et al. (1995), e.g., tundra, taiga, alpine, prairie, maritime, and ephemeral snow covers. The three, 25-km by 25-km, Cold Land Processes Experiment (CLPX) mesoscale study areas (MSAs: Fraser, North Park, and Rabbit Ears) are used as SnowModel simulation examples to highlight model strengths, weaknesses, and features in forested, semi-forested, alpine, and shrubland environments.
Benchmark problems for numerical implementations of phase field models
International Nuclear Information System (INIS)
Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.
2016-01-01
Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
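As a flavor of what such a benchmark exercises, here is a minimal explicit 1-D Cahn-Hilliard step for spinodal decomposition, c_t = M ∇²(c³ − c − κ∇²c), with periodic boundaries. The grid, time step, and parameters are illustrative, not the CHiMaD/NIST benchmark specification; one property any correct implementation must preserve is conservation of total solute.

```python
import random

def cahn_hilliard_1d(n=64, steps=200, dt=0.01, kappa=1.0, mobility=1.0, seed=3):
    """Explicit Euler, dx = 1, periodic boundaries. Returns (initial, final)
    concentration fields so conservation can be checked."""
    rng = random.Random(seed)
    c0 = [0.01 * (rng.random() - 0.5) for _ in range(n)]  # noisy uniform state
    c = list(c0)

    def lap(u):  # standard 3-point discrete Laplacian
        return [u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n] for i in range(n)]

    for _ in range(steps):
        mu = [c[i] ** 3 - c[i] - kappa * l for i, l in enumerate(lap(c))]
        c = [c[i] + dt * mobility * l for i, l in enumerate(lap(mu))]
    return c0, c
```

Because the update adds a discrete Laplacian of the chemical potential, the sum of the field is conserved to rounding error, which makes a natural first benchmark assertion.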
Hierarchical Model Predictive Control for Resource Distribution
DEFF Research Database (Denmark)
Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob
2010-01-01
This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous...... units. The approach is inspired by smart-grid electric power production and consumption systems, where the flexibility of a large number of power producing and/or power consuming units can be exploited in a smart-grid solution. The objective is to accommodate the load variation on the grid, arising...... on one hand from varying consumption, on the other hand by natural variations in power production e.g. from wind turbines. The approach presented is based on quadratic optimization and possesses the properties of low algorithmic complexity and of scalability. In particular, the proposed design methodology...
Modeling Complex Chemical Systems: Problems and Solutions
van Dijk, Jan
2016-09-01
Non-equilibrium plasmas in complex gas mixtures are at the heart of numerous contemporary technologies. They typically contain dozens to hundreds of species, involved in hundreds to thousands of reactions. Chemists and physicists have always been interested in what are now called chemical reduction techniques (CRT's). The idea of such CRT's is that they reduce the number of species that need to be considered explicitly without compromising the validity of the model. This is usually achieved on the basis of an analysis of the reaction time scales of the system under study, which identifies species that are in partial equilibrium after a given time span. The first such CRT that has been widely used in plasma physics was developed in the 1960's and resulted in the concept of effective ionization and recombination rates. It was later generalized to systems in which multiple levels are effected by transport. In recent years there has been a renewed interest in tools for chemical reduction and reaction pathway analysis. An example of the latter is the PumpKin tool. Another trend is that techniques that have previously been developed in other fields of science are adapted as to be able to handle the plasma state of matter. Examples are the Intrinsic Low Dimension Manifold (ILDM) method and its derivatives, which originate from combustion engineering, and the general-purpose Principle Component Analysis (PCA) technique. In this contribution we will provide an overview of the most common reduction techniques, then critically assess the pros and cons of the methods that have gained most popularity in recent years. Examples will be provided for plasmas in argon and carbon dioxide.
Unleashing spatially distributed ecohydrology modeling using Big Data tools
Miles, B.; Idaszak, R.
2015-12-01
Physically based spatially distributed ecohydrology models are useful for answering science and management questions related to the hydrology and biogeochemistry of prairie, savanna, forested, as well as urbanized ecosystems. However, these models can produce hundreds of gigabytes of spatial output for a single model run over decadal time scales when run at regional spatial scales and moderate spatial resolutions (~100-km2+ at 30-m spatial resolution) or when run for small watersheds at high spatial resolutions (~1-km2 at 3-m spatial resolution). Numerical data formats such as HDF5 can store arbitrarily large datasets. However even in HPC environments, there are practical limits on the size of single files that can be stored and reliably backed up. Even when such large datasets can be stored, querying and analyzing these data can suffer from poor performance due to memory limitations and I/O bottlenecks, for example on single workstations where memory and bandwidth are limited, or in HPC environments where data are stored separately from computational nodes. The difficulty of storing and analyzing spatial data from ecohydrology models limits our ability to harness these powerful tools. Big Data tools such as distributed databases have the potential to surmount the data storage and analysis challenges inherent to large spatial datasets. Distributed databases solve these problems by storing data close to computational nodes while enabling horizontal scalability and fault tolerance. Here we present the architecture of and preliminary results from PatchDB, a distributed datastore for managing spatial output from the Regional Hydro-Ecological Simulation System (RHESSys). The initial version of PatchDB uses message queueing to asynchronously write RHESSys model output to an Apache Cassandra cluster. Once stored in the cluster, these data can be efficiently queried to quickly produce both spatial visualizations for a particular variable (e.g. maps and animations), as well
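The message-queueing ingestion described above decouples the simulation from the datastore: model output is enqueued and background workers drain the queue into the cluster. A minimal threaded sketch, where the `sink` list stands in for a Cassandra session (names and structure are hypothetical, not PatchDB's actual API):

```python
import queue
import threading

def async_writer(sink, n_workers=2):
    """Return (put, close): `put` enqueues one output row; background
    workers drain the queue into `sink`; `close` flushes and shuts down."""
    q = queue.Queue()

    def worker():
        while True:
            row = q.get()
            if row is None:          # sentinel: shut this worker down
                q.task_done()
                break
            sink.append(row)         # in PatchDB this would be an INSERT
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()

    def close():
        for _ in threads:
            q.put(None)
        for t in threads:
            t.join()

    return q.put, close
```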
APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING MODELS
Directory of Open Access Journals (Sweden)
T. I. Aliev
2013-03-01
Full Text Available For probability distributions with a coefficient of variation not equal to unity, mathematical dependences for approximating distributions on the basis of the first two moments are derived by making use of multiexponential distributions. It is proposed to approximate distributions with a coefficient of variation less than unity by using the hypoexponential distribution, which makes it possible to generate random variables with a coefficient of variation taking any value in the range (0; 1), as opposed to the Erlang distribution, which has only discrete values of the coefficient of variation.
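A two-phase hypoexponential fit to the first two moments has a simple closed form (this is a standard construction; the paper's exact dependences may differ). Choosing phase means a and b so that a + b matches the mean m and a² + b² matches the variance gives a = m(1 + √(2c² − 1))/2 and b = m(1 − √(2c² − 1))/2, valid for 1/2 ≤ c² < 1:

```python
import math

def fit_hypoexponential(mean, cv):
    """Return phase rates (mu1, mu2) of a two-phase hypoexponential
    (sum of two independent exponentials) matching the given mean and
    coefficient of variation. Requires 0.5 <= cv**2 < 1."""
    c2 = cv * cv
    if not 0.5 <= c2 < 1.0:
        raise ValueError("two-phase hypoexponential needs 0.5 <= cv^2 < 1")
    d = math.sqrt(2.0 * c2 - 1.0)
    a = mean * (1.0 + d) / 2.0  # mean of phase 1
    b = mean * (1.0 - d) / 2.0  # mean of phase 2
    return 1.0 / a, 1.0 / b
```

Fitting mean 1 and cv 0.8, then recomputing the moments from the returned rates, recovers both exactly.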
Tradeable CO{sub 2} emission permits: initial distribution as a justice problem
Energy Technology Data Exchange (ETDEWEB)
Kverndokk, S. [Stiftelsen for Samfunns- og Naeringslivsforskning, Oslo (Norway)
1992-11-01
Tradeable emission permits are one of the most discussed policy instruments to implement international agreements on CO{sub 2} emission reductions. One characteristic of this instrument is that it separates the questions of efficiency and justice; in an idealised world, efficiency is achieved no matter how the permits are distributed. By assuming separability of inter- and intragenerational justice, the author can discuss the initial distribution of permits as an intragenerational distributive justice problem. In contrast to efficiency, where Pareto Optimality is an overall accepted principle, there is no consensus on a "best" equity principle. Different principles lead to different rules for distribution. The framework is to consider what the author believes to be metaprinciples of theories of justice; ethical individualism and presentism, as well as a generally accepted principle of avoiding morally arbitrary components as standards for distribution. Using these principles in an exclusionary way, working with a list of alternative allocation rules, a distribution proportional to population is recommended. Arguments against this rule are discussed, and special attention is paid to political feasibility. Justice and political feasibility may contrast, so also in this case. Even if a distribution based only on population may be politically unacceptable, there may be prospects to use this criterion in combination with other rules, as well as to put more weight on it in the future. 26 refs.
Models for the discrete berth allocation problem: A computational comparison
DEFF Research Database (Denmark)
Buhrkal, Katja Frederik; Zuglian, Sara; Røpke, Stefan
2011-01-01
In this paper we consider the problem of allocating arriving ships to discrete berth locations at container terminals. This problem is recognized as one of the most important processes for any container terminal. We review and describe three main models of the discrete dynamic berth allocation...
Problem Resolution through Electronic Mail: A Five-Step Model.
Grandgenett, Neal; Grandgenett, Don
2001-01-01
Discusses the use of electronic mail within the general resolution and management of administrative problems and emphasizes the need for careful attention to problem definition and clarity of language. Presents a research-based five-step model for the effective use of electronic mail based on experiences at the University of Nebraska at Omaha.…
A descriptive model of information problem solving while using internet
Brand-Gruwel, Saskia; Wopereis, Iwan; Walraven, Amber
2009-01-01
This paper presents the IPS-I-model: a model that describes the process of information problem solving (IPS) in which the Internet (I) is used to search information. The IPS-I-model is based on three studies, in which students in secondary and (post) higher education were asked to solve information
Centrifuge Modelling of Two Civil-Environmental Problems
National Research Council Canada - National Science Library
Goodings, Deborah
2001-01-01
Research Problem 1: Frost heave and thaw induced settlement in silt and silty clay developing over a year have been modelled correctly using a geotechnical centrifuge with tests requiring less than a day...
Spreadsheet-Enhanced Problem Solving in Context as Modeling
Directory of Open Access Journals (Sweden)
Sergei Abramovich
2003-07-01
development through situated mathematical problem solving. Modeling activities described in this paper support the epistemological position regarding the interplay that exists between the development of mathematical concepts and available methods of calculation. The spreadsheet used is Microsoft Excel 2001
International Nuclear Information System (INIS)
Grscic, Z.
1989-01-01
Models for solving transport and dispersion problems of radioactive pollutants through the atmosphere are briefly presented. These models are the basis for solving some special problems such as: estimating the effective and physical heights of radioactive sources, computing the radioactive concentration distribution from multiple sources, etc. (author)
Solution of the strong CP problem in models with scalars
International Nuclear Information System (INIS)
Dimopoulos, S.
1978-01-01
A possible solution to the strong CP problem is pointed out within the context of a Weinberg-Salam model with two Higgs fields coupled in a Peccei-Quinn symmetric fashion. This is done by extending the colour group to a bigger simple group which is broken at some very high energy. The model contains a heavy axion. No old or new U(1) problem re-emerges. 31 references
An analog computer method for solving flux distribution problems in multi region nuclear reactors
Energy Technology Data Exchange (ETDEWEB)
Radanovic, L; Bingulac, S; Lazarevic, B; Matausek, M [Boris Kidric Institute of Nuclear Sciences Vinca, Beograd (Yugoslavia)
1963-04-15
The paper describes a method developed for determining criticality conditions and plotting flux distribution curves in multi-region nuclear reactors on a standard analog computer. The method, which is based on the one-dimensional two-group treatment, avoids iterative procedures normally used for boundary value problems and is practically insensitive to errors in initial conditions. The amount of analog equipment required is reduced to a minimum and is independent of the number of core regions and reflectors. (author)
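For reference, the one-dimensional two-group treatment the method is based on takes the standard textbook form (notation assumed here, not taken from the paper): fast flux $\phi_1$, thermal flux $\phi_2$, with the fission source feeding group 1 and slowing-down feeding group 2,

```latex
-D_1 \frac{d^2\phi_1}{dx^2} + \left(\Sigma_{a1} + \Sigma_{1\to 2}\right)\phi_1
  = \frac{1}{k}\left(\nu\Sigma_{f1}\,\phi_1 + \nu\Sigma_{f2}\,\phi_2\right),
\qquad
-D_2 \frac{d^2\phi_2}{dx^2} + \Sigma_{a2}\,\phi_2 = \Sigma_{1\to 2}\,\phi_1 ,
```

and criticality corresponds to the value of $k$ for which these equations admit a nonnegative flux satisfying the interface and boundary conditions of the multi-region geometry.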
The Consensus String Problem and the Complexity of Comparing Hidden Markov Models
DEFF Research Database (Denmark)
Lyngsø, Rune Bang; Pedersen, Christian Nørgaard Storm
2002-01-01
The basic theory of hidden Markov models was developed and applied to problems in speech recognition in the late 1960s, and has since then been applied to numerous problems, e.g. biological sequence analysis. Most applications of hidden Markov models are based on efficient algorithms for computing......-norms. We discuss the applicability of the technique used for proving the hardness of comparing two hidden Markov models under the L1-norm to other measures of distance between probability distributions. In particular, we show that it cannot be used for proving NP-hardness of determining the Kullback...
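The efficient likelihood computation the abstract refers to is the forward algorithm, which evaluates P(observations | model) in time linear in the sequence length. A minimal sketch follows; the state and emission matrices in the usage check are hypothetical examples, not from the paper.

```python
def forward_likelihood(pi, A, B, obs):
    """P(obs | HMM) by the forward algorithm.
    pi[i]: initial probability of state i, A[i][j]: transition i -> j,
    B[i][o]: probability that state i emits symbol o."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [B[j][o] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
    return sum(alpha)
```

Summing the likelihood over every possible observation sequence of a fixed length returns 1, a quick sanity check on any implementation.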
Brevoort, Maurice J.
1937-01-01
In the design of a cowling a certain pressure drop across the cylinders of a radial air-cooled engine is made available. Baffles are designed to make use of this available pressure drop for cooling. The problem of cooling an air-cooled engine cylinder has been treated, for the most part, from considerations of a large heat-transfer coefficient. The knowledge of the precise cylinder characteristics that give a maximum heat-transfer coefficient should be the first consideration. The next problem is to distribute this ability to cool so that the cylinder cools uniformly. This report takes up the problem of the design of a baffle for a model cylinder. A study has been made of the important principles involved in the operation of a baffle for an engine cylinder and shows that the cooling can be improved 20% by using a correctly designed baffle. Such a gain is as effective in cooling the cylinder with the improved baffle as a 65% increase in pressure drop across the standard baffle and fin tips.
Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution
Energy Technology Data Exchange (ETDEWEB)
Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhao, Changhong [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zamzam, Admed S. [University of Minnesota; Sidiropoulos, Nicholas D. [University of Minnesota; Taylor, Josh A. [University of Toronto
2018-01-12
This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically-constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.
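The alternating direction method of multipliers (ADMM) mentioned above coordinates agents that each keep a private objective while agreeing on shared variables. A minimal consensus sketch with scalar quadratic objectives, a toy stand-in for the water and power operators' subproblems rather than the paper's formulation:

```python
def admm_consensus(a, rho=1.0, iters=100):
    """Each agent i minimizes (x_i - a_i)^2 / 2 subject to x_i = z.
    Scaled-form ADMM; the consensus variable z converges to mean(a)."""
    n = len(a)
    x = [0.0] * n          # local variables
    u = [0.0] * n          # scaled dual variables
    z = 0.0                # consensus variable
    for _ in range(iters):
        x = [(a[i] + rho * (z - u[i])) / (1.0 + rho) for i in range(n)]
        z = sum(x[i] + u[i] for i in range(n)) / n
        u = [u[i] + x[i] - z for i in range(n)]
    return z
```

Each x-update uses only the agent's own data a_i plus the shared z and its dual variable, which is what lets each operator solve its own subproblem privately.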
Zhang, Langwen; Xie, Wei; Wang, Jingcheng
2017-11-01
In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, are designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation optimisation problem with linear matrix inequality constraints. An iterative online algorithm with adjustable maximum iteration is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.
Technology of solving multi-objective problems of control of systems with distributed parameters
Rapoport, E. Ya.; Pleshivtseva, Yu. E.
2017-07-01
A constructive technology of multi-objective optimization of control of distributed parameter plants is proposed. The technology is based on a single-criterion version in the form of the minimax convolution of normalized performance criteria. The approach under development is based on the transition to an equivalent form of the variational problem with constraints, with the problem solution being a priori Pareto-effective. Further procedures of preliminary parameterization of control actions and subsequent reduction to a special problem of semi-infinite programming make it possible to find the sought extremals with the use of their Chebyshev properties and fundamental laws of the subject domain. An example of multi-objective optimization of operation modes of an engineering thermophysics object is presented, which is of independent interest.
Frausto-Solis, Juan; Liñán-García, Ernesto; Sánchez-Hernández, Juan Paulo; González-Barbosa, J Javier; González-Flores, Carlos; Castilla-Valdez, Guadalupe
2016-01-01
A new hybrid Multiphase Simulated Annealing Algorithm using Boltzmann and Bose-Einstein distributions (MPSABBE) is proposed. MPSABBE was designed for solving the Protein Folding Problem (PFP) instances. This new approach has four phases: (i) Multiquenching Phase (MQP), (ii) Boltzmann Annealing Phase (BAP), (iii) Bose-Einstein Annealing Phase (BEAP), and (iv) Dynamical Equilibrium Phase (DEP). BAP and BEAP are simulated annealing searching procedures based on Boltzmann and Bose-Einstein distributions, respectively. DEP is also a simulated annealing search procedure, which is applied at the final temperature of the fourth phase, which can be seen as a second Bose-Einstein phase. MQP is a search process that ranges from extremely high to high temperatures, applying a very fast cooling process, and is not very restrictive to accept new solutions. However, BAP and BEAP range from high to low and from low to very low temperatures, respectively. They are more restrictive for accepting new solutions. DEP uses a particular heuristic to detect the stochastic equilibrium by applying a least squares method during its execution. MPSABBE parameters are tuned with an analytical method, which considers the maximal and minimal deterioration of problem instances. MPSABBE was tested with several instances of PFP, showing that the use of both distributions is better than using only the Boltzmann distribution on the classical SA.
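The Boltzmann phases (BAP) rest on the classical Metropolis acceptance rule exp(−Δ/T). A bare-bones single-phase annealer on a toy 1-D objective is sketched below; the multiquenching, Bose-Einstein, and equilibrium-detection phases of MPSABBE are omitted, and all parameters are illustrative.

```python
import math
import random

def simulated_annealing(f, x0, t0=5.0, t_end=1e-3, cooling=0.95,
                        moves=50, seed=7):
    """Minimize f over the reals with Boltzmann (Metropolis) acceptance
    and geometric cooling. Returns the best point and value seen."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    while t > t_end:
        for _ in range(moves):
            cand = x + rng.uniform(-1.0, 1.0)      # local perturbation
            delta = f(cand) - fx
            # accept improvements always, deteriorations with prob exp(-d/T)
            if delta < 0 or rng.random() < math.exp(-delta / t):
                x, fx = cand, fx + delta
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                               # geometric cooling schedule
    return best, fbest
```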
Modeling and Solving the Liner Shipping Service Selection Problem
DEFF Research Database (Denmark)
Karsten, Christian Vad; Balakrishnan, Anant
We address a tactical planning problem, the Liner Shipping Service Selection Problem (LSSSP), facing container shipping companies. Given estimated demand between various ports, the LSSSP entails selecting the best subset of non-simple cyclic sailing routes from a given pool of candidate routes...... to accurately model transshipment costs and incorporate routing policies such as maximum transit time, maritime cabotage rules, and operational alliances. Our hop-indexed arc flow model is smaller and easier to solve than path flow models. We outline a preprocessing procedure that exploits both the routing...... requirements and the hop limits to reduce problem size, and describe techniques to accelerate the solution procedure. We present computational results for realistic problem instances from the benchmark suite LINER-LIB....
Menshikh, V.; Samorokovskiy, A.; Avsentev, O.
2018-03-01
A mathematical model for optimizing the allocation of resources to reduce the time for management decisions is presented, together with algorithms to solve the general problem of resource allocation. The optimization problem of choosing resources in organizational systems so as to reduce the total execution time of a job is solved. This is a complex three-level combinatorial problem, whose solution requires solving several specific subproblems: estimating the duration of each action as a function of the number of performers in the group that performs it; estimating the total execution time of all actions depending on the quantitative composition of the groups of performers; and finding a distribution of the available pool of performers among groups that minimizes the total execution time of all actions. In addition, algorithms to solve the general problem of resource allocation are proposed.
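Under the simplifying assumption that each action's duration shrinks inversely with the size of the group performing it (d_i / x_i), the third subproblem becomes a separable convex allocation, for which the classical greedy rule "give the next performer to the action with the largest marginal gain" is optimal. A hedged sketch of this standard incremental algorithm (not necessarily the authors' exact method):

```python
import heapq

def allocate_performers(durations, total):
    """Distribute `total` performers (at least one per action) to minimize
    sum(d_i / x_i), the total execution time of sequentially run actions."""
    n = len(durations)
    if total < n:
        raise ValueError("need at least one performer per action")
    x = [1] * n
    # max-heap of marginal gains d/x - d/(x+1), via negated keys
    heap = [(-(d - d / 2.0), i) for i, d in enumerate(durations)]
    heapq.heapify(heap)
    for _ in range(total - n):
        _, i = heapq.heappop(heap)
        x[i] += 1
        d = durations[i]
        heapq.heappush(heap, (-(d / x[i] - d / (x[i] + 1)), i))
    return x
```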
Towards an Information Model of Consistency Maintenance in Distributed Interactive Applications
Directory of Open Access Journals (Sweden)
Xin Zhang
2008-01-01
Full Text Available A novel framework to model and explore predictive contract mechanisms in distributed interactive applications (DIAs using information theory is proposed. In our model, the entity state update scheme is modelled as an information generation, encoding, and reconstruction process. Such a perspective facilitates a quantitative measurement of state fidelity loss as a result of the distribution protocol. Results from an experimental study on a first-person shooter game are used to illustrate the utility of this measurement process. We contend that our proposed model is a starting point to reframe and analyse consistency maintenance in DIAs as a problem in distributed interactive media compression.
DEFF Research Database (Denmark)
Ding, Tao; Li, Cheng; Huang, Can
2018-01-01
In order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master......–slave structure and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost...... optimality. Numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods....
Dynamical Models For Prices With Distributed Delays
Directory of Open Access Journals (Sweden)
Mircea Gabriela
2015-06-01
Full Text Available In the present paper we study some models for the price dynamics of a single-commodity market. The quantities supplied and demanded are regarded as functions of time. Nonlinearities in both the supply and demand functions are considered. The inventory and the level of inventory are taken into consideration. Because consumer behavior affects commodity demand, and this behavior is influenced not only by the instantaneous price but also by weighted past prices, a distributed time delay is introduced. The following kernels are taken into consideration: the demand price weak kernel and the demand price Dirac kernel. Only one positive equilibrium point is found and its stability analysis is presented. When the demand price kernel is weak, under some conditions on the parameters, the equilibrium point is locally asymptotically stable. When the demand price kernel is Dirac, the existence of local oscillations is investigated. A change in the local stability of the equilibrium point, from stable to unstable, implies a Hopf bifurcation. A family of periodic orbits bifurcates from the positive equilibrium point when the time delay passes through a critical value. The last part contains some numerical simulations to illustrate the effectiveness of our results and conclusions.
Directory of Open Access Journals (Sweden)
V. V. Egorov
2011-01-01
Full Text Available The paper contains an analysis of specific features pertaining to the activity of operators dealing with automatic control systems of gas-distribution stations. The professional operator's activity is presented in the form of a developed data model. Possible conceptual approaches to the research are analyzed in the paper. The paper describes the author's approach to studying the problem of risk reduction in operator activity on the basis of the analytical research results. The technology for obtaining the research results is also described in the paper.
A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.
Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio
2017-11-01
Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
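The generative idea behind such a model can be sketched in a few lines: each sample is zero-mean Gaussian, with a variance that is itself drawn from an inverse gamma distribution (mean β/(α−1)), which makes the marginal signal distribution heavier-tailed than a Gaussian. The parameter values below are hypothetical:

```python
import random

# Sketch of the generative idea: EMG-like samples are zero-mean Gaussian
# whose variance follows an inverse gamma distribution. Parameters are
# hypothetical, for illustration only.

random.seed(0)
alpha, beta = 5.0, 8.0          # inverse gamma shape / scale (mean = 2.0)
n = 200_000

variances, signal = [], []
for _ in range(n):
    # If G ~ Gamma(alpha, scale=1/beta), then 1/G ~ InvGamma(alpha, beta).
    v = 1.0 / random.gammavariate(alpha, 1.0 / beta)
    variances.append(v)
    signal.append(random.gauss(0.0, v ** 0.5))

mean_var = sum(variances) / n          # theory: beta / (alpha - 1) = 2.0
m2 = sum(x * x for x in signal) / n
m4 = sum(x ** 4 for x in signal) / n
excess_kurt = m4 / m2 ** 2 - 3.0       # > 0: heavier tails than Gaussian
print(round(mean_var, 2), round(excess_kurt, 2))
```

The positive excess kurtosis is the signature of the variance mixture: mixing Gaussian variances over an inverse gamma yields a leptokurtic (scaled Student-t) marginal.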
The Model of Problem Based Learning in Practice: Evidence from Aalborg University
DEFF Research Database (Denmark)
Turcan, Romeo V.
The aim of this paper is to share an experience from Aalborg University on the application of the Problem Based Learning (PBL) model, with a specific example from bachelor studies. The PBL model has now been acknowledged worldwide as a powerful tool that allows students, faculty members and industry practitioners to engage in multi-disciplinary, collaborative and geographically distributed activities. The key word in the model is ‘problem’ – a problem that is correctly formulated eventually affects the process of learning. It is also linked to the intended outcome of the PBL based teaching, whereby students … into practice when they go through solving such problems. At the end of the day, the PBL based teaching is assessed based on the success of the problems solved, e.g., in the form of solution(s) provided, their creativity, innovation and applicability. Moreover, PBL-based teaching can identify theoretical gaps……
Various forms of indexing HDMR for modelling multivariate classification problems
Energy Technology Data Exchange (ETDEWEB)
Aksu, Çağrı [Bahçeşehir University, Information Technologies Master Program, Beşiktaş, 34349 İstanbul (Turkey); Tunga, M. Alper [Bahçeşehir University, Software Engineering Department, Beşiktaş, 34349 İstanbul (Turkey)
2014-12-10
The Indexing HDMR method was recently developed for modelling multivariate interpolation problems. The method uses the Plain HDMR philosophy in partitioning the given multivariate data set into less variate data sets and then constructing an analytical structure through these partitioned data sets to represent the given multidimensional problem. Indexing HDMR makes HDMR applicable to classification problems with real-world data. Mostly, we do not know all possible class values in the domain of the given problem, that is, we have a non-orthogonal data structure. However, Plain HDMR needs an orthogonal data structure in the problem to be modelled. In this sense, the main idea of this work is to offer various forms of Indexing HDMR to successfully model these real-life classification problems. To test these different forms, several well-known multivariate classification problems from the UCI Machine Learning Repository were used, and it was observed that the accuracy results lie between 80% and 95%, which is very satisfactory.
Developing a Model for Solving the Flight Perturbation Problem
Directory of Open Access Journals (Sweden)
Amirreza Nickkar
2015-02-01
Full Text Available Purpose: In the aviation and airline industry, crew costs are the second largest direct operating cost next to fuel costs. But unlike fuel costs, a considerable portion of crew costs can be saved through optimized utilization of the internal resources of an airline company. Therefore, solving the flight perturbation scheduling problem, in order to provide an optimized schedule in a comprehensive manner that covers all problem dimensions simultaneously, is very important. In this paper, we defined an integrated recovery model that is able to recover the aircraft and crew dimensions simultaneously, in order to produce more economical solutions and create fewer incompatibilities between the decisions. Design/methodology/approach: The current research is based on the development of one of the flight rescheduling models with a disruption management approach, wherein two solution strategies for the flight perturbation problem are presented: Dantzig-Wolfe decomposition and a Lagrangian heuristic. Findings: According to the results of this research, the Lagrangian heuristic approach for the DW-MP solved the problem optimally in all known cases. Also, the strategy based on the Dantzig-Wolfe decomposition managed to produce a solution within an acceptable time (under 1 sec). Originality/value: This model will support the decisions of the flight controllers in the operation centers of the airlines. When the flight network faces a disruption, the flight controllers obtain a set of ranked solutions using this model; thus, applying the crew's conditions in the proposed model brings it closer to actual operating conditions.
Application of the distributed genetic algorithm for loading pattern optimization problems
International Nuclear Information System (INIS)
Hashimoto, Hiroshi; Yamamoto, Akio
2000-01-01
The distributed genetic algorithm (DGA) is applied to loading pattern optimization problems of pressurized water reactors (PWR). Due to the stiff nature of loading pattern optimizations (e.g. multi-modality and non-linearity), stochastic methods such as simulated annealing or the genetic algorithm (GA) are widely applied to these problems. The basic concept of DGA is based on that of GA. However, DGA equally distributes candidate solutions (i.e. loading patterns) to several independent 'islands' and evolves them in each island. Migrations of some candidates are performed among islands with a certain period. Since candidate solutions evolve independently in each island while accepting different genes from migrants from other islands, the premature convergence seen in the traditional GA can be prevented. Because many candidate loading patterns must be evaluated in one generation of GA or DGA, parallelization of these calculations works efficiently. Parallel efficiency was measured using our optimization code, and good load balance was attained even in a heterogeneous cluster environment due to dynamic distribution of the calculation load. The optimization code is based on a client/server architecture with native TCP/IP sockets, in which the client (optimization module) and the calculation server modules exchange loading pattern objects with each other. Through a sensitivity study on the optimization parameters of DGA, a suitable set of parameters for a test problem was identified. Finally, the optimization capability of DGA and the traditional GA was compared on the test problem, and DGA provided better optimization results than the traditional GA. (author)
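The island structure with periodic migration can be sketched on a toy problem. The OneMax fitness below stands in for loading-pattern evaluation, and the island count, population size and migration period are hypothetical choices for illustration only:

```python
import random

# Toy island-model (distributed) GA: several sub-populations evolve
# independently and periodically exchange migrants on a ring. OneMax
# (count of 1-bits) stands in for the expensive loading-pattern fitness.
# All parameters are hypothetical.

random.seed(1)
N_BITS, N_ISLANDS, POP, GENS, MIG_EVERY = 30, 4, 20, 120, 10

def fitness(ind):
    return sum(ind)

def evolve(pop):
    """One generation: tournament selection, 1-point crossover, bit-flip."""
    new = []
    while len(new) < len(pop):
        p1 = max(random.sample(pop, 3), key=fitness)
        p2 = max(random.sample(pop, 3), key=fitness)
        cut = random.randrange(1, N_BITS)
        child = [b ^ (random.random() < 1.0 / N_BITS)
                 for b in p1[:cut] + p2[cut:]]
        new.append(child)
    return new

islands = [[[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
           for _ in range(N_ISLANDS)]
best_ever = max(fitness(i) for pop in islands for i in pop)

for gen in range(1, GENS + 1):
    islands = [evolve(pop) for pop in islands]
    if gen % MIG_EVERY == 0:          # ring migration: best -> next island
        bests = [max(pop, key=fitness) for pop in islands]
        for i, pop in enumerate(islands):
            pop.sort(key=fitness)     # ascending: pop[0] is the worst
            pop[0] = bests[(i - 1) % N_ISLANDS][:]
    best_ever = max(best_ever,
                    max(fitness(i) for pop in islands for i in pop))

print(best_ever)
```

Because each island accepts only occasional migrants, the sub-populations explore different regions before converging, which is the premature-convergence safeguard the abstract describes.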
Data-Driven Model Order Reduction for Bayesian Inverse Problems
Cui, Tiangang
2014-01-06
One of the major challenges in using MCMC for the solution of inverse problems is the repeated evaluation of computationally expensive numerical models. We develop a data-driven projection-based model order reduction technique to reduce the computational cost of numerical PDE evaluations in this context.
Impedance model for quantum-mechanical barrier problems
International Nuclear Information System (INIS)
Nelin, Evgenii A
2007-01-01
Application of the impedance model to typical quantum-mechanical barrier problems, including those for structures with resonant electron tunneling, is discussed. The efficiency of the approach is illustrated. The physical transparency and compactness of the model and its potential as a teaching and learning tool are discussed. (methodological notes)
Problems associated with modelling future biomass use in developing countries
International Nuclear Information System (INIS)
Turkson, J.; Fenhann, J.
1997-01-01
One of the main objectives of modelling biomass consumption is to obtain an accurate assessment of current and future biomass supply and demand patterns. Some problems associated with biomass modelling in developing countries are discussed, with the focus on Africa. Wood fuel and charcoal consumption in households is investigated. Differences between rural and urban areas are pointed out. (K.A.)
Evaluating to Solve Educational Problems: An Alternative Model.
Friedman, Myles I.; Anderson, Lorin W.
1979-01-01
A 19-step general evaluation model is described through its four stages: identifying problems, prescribing program solutions, evaluating the operation of the program, and evaluating the effectiveness of the model. The role of the evaluator in decision making is also explored. (RAO)
Correlation Structures of Correlated Binomial Models and Implied Default Distribution
S. Mori; K. Kitsukawa; M. Hisakado
2006-01-01
We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution h...
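The beta binomial ingredient of such comparisons can be sketched directly: with an exchangeable default probability p ~ Beta(a, b) shared by n obligors, the default count K is beta-binomial with mean n·a/(a+b). The portfolio size and Beta parameters below are hypothetical:

```python
import math

# Beta-binomial default-count distribution for n exchangeable obligors
# whose common default probability is Beta(a, b) distributed.
# Portfolio size and parameters are hypothetical.

def log_beta(x, y):
    return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)

def beta_binom_pmf(k, n, a, b):
    # P(K = k) = C(n, k) * B(k + a, n - k + b) / B(a, b)
    return math.comb(n, k) * math.exp(log_beta(k + a, n - k + b)
                                      - log_beta(a, b))

n, a, b = 50, 1.0, 9.0                 # mean default probability a/(a+b) = 0.1
pmf = [beta_binom_pmf(k, n, a, b) for k in range(n + 1)]
mean = sum(k * p for k, p in enumerate(pmf))   # theory: n * a/(a+b) = 5.0
print(round(sum(pmf), 6), round(mean, 3))
```

Varying a and b while holding a/(a+b) fixed changes the implied correlation and hence the tail of the loss distribution, which is the profile difference the abstract analyzes across models.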
A hybrid genetic algorithm for the distributed permutation flowshop scheduling problem
Directory of Open Access Journals (Sweden)
Jian Gao
2011-08-01
Full Text Available The Distributed Permutation Flowshop Scheduling Problem (DPFSP) is a newly proposed scheduling problem, which is a generalization of the classical permutation flow shop scheduling problem. The DPFSP is NP-hard in general, and studies on algorithms for solving it are still at an early stage. In this paper, we propose a GA-based algorithm, denoted GA_LS, for solving this problem with the objective of minimizing the maximum completion time. In the proposed GA_LS, the crossover and mutation operators are designed to suit the representation of DPFSP solutions, where a set of partial job sequences is employed. Furthermore, GA_LS utilizes an efficient local search method to explore neighboring solutions. The local search method uses three proposed rules that move jobs within a factory or between two factories. Intensive experiments on the benchmark instances, extended from the Taillard instances, are carried out. The results indicate that the proposed hybrid genetic algorithm obtains better solutions than all existing algorithms for the DPFSP, since it achieves a better relative percentage deviation, and the differences in the results are statistically significant. It is also seen that the best-known solutions for most instances are updated by our algorithm. Moreover, we show the efficiency of GA_LS by comparing it with similar genetic algorithms using the existing local search methods.
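The solution representation (one partial job sequence per factory, each factory an identical permutation flow shop) and the max-completion-time objective can be sketched as follows. The processing times and the illustrated job-shift move are made up for illustration; they are not the paper's benchmark data:

```python
# DPFSP sketch: a solution is one job permutation per factory; the
# objective is the maximum factory makespan. Data are made up.

def makespan(seq, proc):
    """Completion time of the last job on the last machine in one factory.
    proc[j][m] = processing time of job j on machine m."""
    if not seq:
        return 0
    n_mach = len(proc[0])
    finish = [0] * n_mach
    for j in seq:
        for m in range(n_mach):
            start = max(finish[m], finish[m - 1] if m else 0)
            finish[m] = start + proc[j][m]
    return finish[-1]

def dpfsp_makespan(factories, proc):
    return max(makespan(seq, proc) for seq in factories)

proc = [[3, 2], [2, 4], [4, 1], [1, 3]]   # 4 jobs, 2 machines
sol = [[0, 1, 2], [3]]                    # jobs split over 2 factories
before = dpfsp_makespan(sol, proc)

# One local-search style move: shift a job out of the critical factory
# into the other one, and keep the move if the max makespan improves.
sol2 = [[0, 1], [3, 2]]
after = dpfsp_makespan(sol2, proc)
print(before, after)
```

Here the shift of job 2 lowers the critical factory's makespan from 10 to 9, which is exactly the kind of between-factory move the local search rules exploit.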
Application of a Mathematical Model to an Advertisement Reservation Problem
Directory of Open Access Journals (Sweden)
Ozlem COSGUN
2013-01-01
Full Text Available Television networks provide TV programs free of charge to the public. However, they acquire their revenue by telecasting advertisements in the midst of continuing programs or shows. A key problem faced by the TV networks in Turkey is how to accept and televise the advertisements reserved by a client in a specified advertisement break, which we call the “Advertisement Reservation Problem” (ARP). The problem is complicated by limited time inventory, different rating points for different target groups, competition avoidance, and the relationship between TV networks and clients. In this study we have developed a mathematical model for the advertisement reservation problem and extended this model for some cases encountered in real business life. We have also discussed how these cases affect the decisions of a TV network. A mixed integer linear programming approach is proposed to solve these problems. This approach has been applied to a case taken from one of the biggest TV networks of Turkey.
Modelling human problem solving with data from an online game.
Rach, Tim; Kirsch, Alexandra
2016-11-01
Since the beginning of cognitive science, researchers have tried to understand human strategies in order to develop efficient and adequate computational methods. In the domain of problem solving, the travelling salesperson problem has been used for the investigation and modelling of human solutions. We propose to extend this effort with an online game, in which instances of the travelling salesperson problem have to be solved in the context of a game experience. We report on our effort to design and run such a game, present the data contained in the resulting openly available data set and provide an outlook on the use of games in general for cognitive science research. In addition, we present three geometrical models mapping the starting point preferences in the problems presented in the game as the result of an evaluation of the data set.
International Nuclear Information System (INIS)
Iskandar, Ismed; Gondokaryono, Yudi Satria
2016-01-01
In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that systems are described and explained as simply functioning or failed. In many real situations, failures may arise from many causes, depending upon the age and the environment of the system and its components. Another problem in reliability theory is estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation analyses allow us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation analyses to competing-risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using Bayesian and maximum likelihood analyses. The simulation results show that changing the true value of one parameter relative to another changes the standard deviation in the opposite direction. Given perfect information on the prior distribution, the estimation methods of the Bayesian analyses are better than those of the maximum likelihood. The sensitivity analyses show some amount of sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range
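A minimal competing-risks simulation in this spirit draws two independent Weibull failure times and records which cause fails first. With shape parameter 1 (the exponential special case) the probability that cause 1 strikes first is λ₁/(λ₁+λ₂) analytically, which the simulation should reproduce. All parameter values are hypothetical:

```python
import random

# Competing risks with two independent Weibull causes: the system fails at
# the minimum of the two times, and we record the winning cause. With
# shape = 1 (exponential case) P(cause 1 first) = lam1 / (lam1 + lam2).
# Parameters are hypothetical.

random.seed(2)
shape, scale1, scale2 = 1.0, 2.0, 4.0     # rates lam1 = 1/2, lam2 = 1/4
n = 100_000

cause1_first = 0
for _ in range(n):
    # random.weibullvariate(alpha, beta): alpha = scale, beta = shape
    t1 = random.weibullvariate(scale1, shape)
    t2 = random.weibullvariate(scale2, shape)
    if t1 < t2:
        cause1_first += 1

frac = cause1_first / n
theory = (1 / scale1) / (1 / scale1 + 1 / scale2)   # = 2/3
print(round(frac, 3), round(theory, 3))
```

Estimating the Weibull parameters from such simulated (possibly censored) cause-specific data is where the Bayesian versus maximum-likelihood comparison of the paper takes place.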
Flash flood modeling with the MARINE hydrological distributed model
Estupina-Borrell, V.; Dartus, D.; Ababou, R.
2006-11-01
Flash floods are characterized by their violence and the rapidity of their occurrence. Because these events are rare and unpredictable, but also fast and intense, anticipating them with sufficient lead time for warning and broadcasting is a primary subject of research. Because of the heterogeneities of the rain and of the behavior of the surface, spatially distributed hydrological models can lead to a better understanding of the processes, and so they can contribute to better forecasting of flash floods. Our main goal here is to develop an operational and robust methodology for flash flood forecasting. This methodology should provide relevant data (information) about flood evolution on short time scales, and should be applicable even in locations where direct observations are sparse (e.g. absence of historical and modern rainfalls and streamflows in small mountainous watersheds). The flash flood forecast is obtained by the physically based, space-time distributed hydrological model "MARINE'' (Model of Anticipation of Runoff and INondations for Extreme events). This model is presented and tested in this paper for a real flash flood event. The model consists of two components: the first is a "basin'' flood module which generates flood runoff in the upstream part of the watershed, and the second is the "stream network'' module, which propagates the flood in the main river and its tributaries. The basin flash flood generation model is a rainfall-runoff model that can integrate remotely sensed data. Surface hydraulics equations are solved with enough simplifying hypotheses to allow real-time exploitation. The minimum data required by the model are: (i) the Digital Elevation Model, used to calculate the slopes that generate runoff; it can be derived from satellite imagery (SPOT) or from the French Geographical Institute (IGN); (ii) the rainfall data from meteorological radar, observed or anticipated by the French Meteorological Service (M
Deterministic Properties of Serially Connected Distributed Lag Models
Directory of Open Access Journals (Sweden)
Piotr Nowak
2013-01-01
Full Text Available Distributed lag models are an important tool in modeling dynamic systems in economics. In the analysis of composite forms of such models, the component models are ordered in parallel (with the same independent variable) and/or in series (where the independent variable is also the dependent variable in the preceding model). This paper presents an analysis of certain deterministic properties of composite distributed lag models composed of component distributed lag models arranged in sequence, and their asymptotic properties in particular. The models considered are in discrete form. Even though the paper focuses on deterministic properties of distributed lag models, the derivations are based on analytical tools commonly used in probability theory, such as probability distributions and the central limit theorem. (original abstract)
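The serial composition has a simple deterministic core: when one discrete distributed lag model feeds another, the composite lag distribution is the convolution of the component lag distributions, so the mean lags add (and, by the central-limit argument the paper invokes, long chains tend toward a normal-shaped lag profile). A sketch with made-up weights:

```python
# Serial composition of two discrete distributed lag models: the composite
# lag weights are the convolution of the component weights, so lag means
# add. The weights below are made up for illustration.

def convolve(w1, w2):
    out = [0.0] * (len(w1) + len(w2) - 1)
    for i, a in enumerate(w1):
        for j, b in enumerate(w2):
            out[i + j] += a * b
    return out

def lag_mean(w):
    return sum(i * p for i, p in enumerate(w))

w1 = [0.5, 0.3, 0.2]          # lag weights of model 1 (sum to 1)
w2 = [0.6, 0.4]               # lag weights of model 2 (sum to 1)
w = convolve(w1, w2)          # composite serial model

print(round(sum(w), 6),
      round(lag_mean(w), 6),
      round(lag_mean(w1) + lag_mean(w2), 6))
```

Because the weights behave exactly like probability mass functions under convolution, probabilistic tools apply even though the model itself is deterministic, which is the methodological point of the abstract.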
Identification of Chemistry Learning Problems Viewed From Conceptual Change Model
Redhana, I. W; Sudria, I. B. N; Hidayat, I; Merta, L. M
2017-01-01
This study aimed at describing and explaining chemistry learning problems viewed from conceptual change model and misconceptions of students. The study was qualitative research of case study type conducted in one class of SMAN 1 Singaraja. Subjects of the study were a chemistry teacher and students. Data were obtained through classroom observation, interviews, and conception tests. The chemistry learning problems were grouped based on aspects of necessity, intelligibility, plausibility, and f...
Mathematical model of heat transfer to predict distribution of hardness through the Jominy bar
International Nuclear Information System (INIS)
Lopez, E.; Hernandez, J. B.; Solorio, G.; Vergara, H. J.; Vazquez, O.; Garnica, F.
2013-01-01
The heat transfer coefficient at the bottom (quenched) surface of a Jominy end-quench specimen was estimated by solving the inverse heat conduction problem. A mathematical model based on the finite-difference method was developed to predict thermal paths and the volume fraction of transformed phases. The mathematical model was codified in the commercial package Microsoft Visual Basic v. 6. The calculated thermal paths and final phase distribution were used to evaluate the hardness distribution along the AISI 4140 Jominy bar. (Author)
Managing problem employees: a model program and practical guide.
Miller, Laurence
2010-01-01
This article presents a model program for managing problem employees that includes a description of the basic types of problem employees and employee problems, as well as practical recommendations for: (1) selection and screening, (2) education and training, (3) coaching and counseling, (4) discipline, (5) psychological fitness-for-duty evaluations, (6) mental health services, (7) termination, and (8) leadership and administrative strategies. Throughout, the emphasis is on balancing the need for order and productivity in the workplace with fairness and concern for employee health and well-being.
Angular momentum dependence of the distribution of shell model eigenenergies
International Nuclear Information System (INIS)
Yen, M.K.
1974-01-01
In the conventional shell model calculation, the many-particle energy matrices are constructed and diagonalized for definite angular momentum and parity. However, the resulting set of eigenvalues possesses near-normal behavior, and hence a simple statistical description is possible. Usually one needs only about four parameters to capture the average level densities if the size of the set is not too small. The parameters are essentially moments of the distribution. But the difficulty lies in the as yet unsolved problem of calculating moments in the fixed angular momentum subspace. We have derived a formula to approximate the angular momentum projection dependence of any operator averaged in a shell model basis. This approximate formula, which is a truncated series in Hermite polynomials, has proved very accurate numerically and has been justified analytically for large systems. Applying this formula to seven physical cases, we have found that the fixed angular momentum projection energy centroid, width, and higher central moments can be obtained accurately, provided that for even-even nuclei the even and odd angular momentum projections are treated separately. Using this information one can construct the energy distribution for fixed angular momentum projection assuming normal behavior. The fixed angular momentum level densities are then deduced and spectra are extracted. Results are in reasonably good agreement with the exact values, although not as good as those obtained using exact fixed angular momentum moments. (Diss. Abstr. Int., B)
Modelling of temperature distribution and pulsations in fast reactor units
International Nuclear Information System (INIS)
Ushakov, P.A.; Sorokin, A.P.
1994-01-01
Reasons for the occurrence of thermal stresses in reactor units have been analyzed. The main reasons considered in this analysis are: temperature non-uniformity at the outlet of the reactor core and breeder, and the ensuing temperature pulsations; temperature pulsations due to mixing of sodium jets of different temperatures; temperature non-uniformity and pulsations resulting from partial unplugging of loops (circuits); and temperature non-uniformity and fluctuations during transients, accidental reactor shutdown, or transfer to cooling by natural circulation. The thermal-hydraulic characteristics are obtained by modelling the processes mentioned above. The analysis carried out allows the main lines of investigation to be defined, and conclusions can be drawn regarding the problem of temperature distribution and fluctuation in fast reactor units.
Exacerbating the Cosmological Constant Problem with Interacting Dark Energy Models.
Marsh, M C David
2017-01-06
Future cosmological surveys will probe the expansion history of the Universe and constrain phenomenological models of dark energy. Such models do not address the fine-tuning problem of the vacuum energy, i.e., the cosmological constant problem (CCP), but can make it spectacularly worse. We show that this is the case for "interacting dark energy" models in which the masses of the dark matter states depend on the dark energy sector. If realized in nature, these models have far-reaching implications for proposed solutions to the CCP that require the number of vacua to exceed the fine-tuning of the vacuum energy density. We show that current estimates of the number of flux vacua in string theory, N_{vac}∼O(10^{272 000}), are far too small to realize certain simple models of interacting dark energy and solve the cosmological constant problem anthropically. These models admit distinctive observational signatures that can be targeted by future gamma-ray observatories, hence making it possible to observationally rule out the anthropic solution to the cosmological constant problem in theories with a finite number of vacua.
Multiplicity distributions in the dual parton model
International Nuclear Information System (INIS)
Batunin, A.V.; Tolstenkov, A.N.
1985-01-01
Multiplicity distributions are calculated by means of a new mechanism of production of hadrons in a string, which was proposed previously by the authors and takes into account explicitly the valence character of the ends of the string. It is shown that allowance for this greatly improves the description of the low-energy multiplicity distributions. At superhigh energies, the contribution of the ends of the strings becomes negligibly small, but in this case multi-Pomeron contributions must be taken into account
Electricity distribution management Smart Grid system model
Directory of Open Access Journals (Sweden)
Wiesław Nowak
2012-06-01
Full Text Available This paper presents issues concerning the implementation of Smart Grid solutions in a real distribution network. The main components suitable for quick implementation are presented. Realization of these ideas should bring tangible benefits to both customers and distribution system operators. Moreover, the paper shows selected research results which examine the proposed solutions in the areas of improving supply reliability and reducing energy losses in the analysed network.
Zhou, Lin; Baldacci, Roberto; Vigo, Daniele; Wang, Xu
2018-01-01
In this paper, we introduce a new city logistics problem arising in the last mile distribution of e-commerce. The problem involves two levels of routing problems. The first requires a design of the routes for a vehicle fleet located at the depots to transport the customer demands to a subset of the
Two efficient heuristics to solve the integrated load distribution and production planning problem
International Nuclear Information System (INIS)
Gajpal, Yuvraj; Nourelfath, Mustapha
2015-01-01
This paper considers a multi-period production system where a set of machines is arranged in parallel. The machines are unreliable, and the failure rate of a machine depends on the load assigned to it. The expected production rate of the system is considered to be a non-monotonic function of its load. Because of the machine failure rate, the total production output depends on the combination of loads assigned to different machines. We consider the integration of load distribution decisions with production planning decisions. The product demands are considered to be known in advance. The objective is to minimize the sum of holding costs, backorder costs, production costs, setup costs, capacity change costs and unused capacity costs while satisfying the demand over a specified time horizon. The constraint is not to exceed the available repair resources required to repair machine breakdowns. The paper develops two heuristics to solve the integrated load distribution and production planning problem. The first heuristic consists of a three-phase approach, while the second one is based on the tabu search metaheuristic. The efficiency of the proposed heuristics is tested on randomly generated problem instances. - Highlights: • The expected performance of the system is a non-monotonic function of its load. • We consider the integration of load distribution and production planning decisions. • The paper proposes three-phase and tabu search based heuristics to solve the problem. • A lower bound has been developed for checking the effectiveness of the heuristics. • The efficiency of the heuristics is tested on randomly generated instances.
A model for the distribution channels planning process
Neves, M.F.; Zuurbier, P.; Campomar, M.C.
2001-01-01
A review of the existing literature reveals some models (sequences of steps) for companies that want to plan distribution channels. None of these models uses strong contributions from transaction cost economics, bringing a possibility to elaborate on a "distribution channels planning model", with these
On the problem of finding a suitable distribution of students to universities in Germany
Schneider, Johannes J.; Hirtreiter, Christian; Morgenstern, Ingo
2009-10-01
For many years, the problem of how to distribute students to the various universities in Germany according to the preferences of the students has remained unsolved. Various approaches, like the centralized method to let a central agency organize the distribution to the various universities or the decentralized method to let the students apply directly at their preferred universities, turned out to lead to a significant fraction of frustrated students ending up at universities not being on their preference list or even not having a place to study at all. With our centralized approach, we are able to decrease the fraction of frustrated students as well as the bureaucratic expenses for applicants and universities drastically.
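A toy version of the centralized approach can be sketched as a capacity-constrained greedy assignment that counts "frustrated" students (those placed outside their preference list). This is only an illustration of the setting; it is not the authors' optimization algorithm, and the names and capacities are made up:

```python
# Toy centralized student-university assignment with capacities.
# Greedy in a fixed order, falling back to any free slot; "frustrated"
# counts students placed outside their preference list. Illustration
# only, not the authors' method.

def assign(prefs, capacity):
    """prefs[s] = ordered list of preferred universities;
    capacity = dict of free places per university."""
    cap = dict(capacity)
    result, frustrated = {}, 0
    for s, wishlist in prefs.items():
        choice = next((u for u in wishlist if cap[u] > 0), None)
        if choice is None:                 # none of the wishes has a slot
            frustrated += 1
            choice = next((u for u in cap if cap[u] > 0), None)
        if choice is not None:
            cap[choice] -= 1
            result[s] = choice
    return result, frustrated

prefs = {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["A", "C"]}
result, frustrated = assign(prefs, {"A": 1, "B": 1, "C": 1})
print(result, frustrated)
```

Even this toy shows the coordination effect: because the center sees all capacities at once, s3 is routed to its second wish C instead of competing for the full university A, so no student ends up frustrated in this instance.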
Effect of PLISSIT Model on Solution of Sexual Problems
Directory of Open Access Journals (Sweden)
Esra Uslu
2016-03-01
Full Text Available This systematic review study aims to determine the effect of the PLISSIT model (permission, limited information, specific suggestions, intensive therapy) in the care of individuals having sexual problems. Two of the studies included in the systematic review were carried out in Iran and one of them in Turkey. These studies were limited to patients with a stoma and women having sexual problems. The results showed that care via the PLISSIT model improves sexual function, reduces sexual stress, and increases sexual desire, sexual arousal, lubrication, orgasm, sexual satisfaction and the frequency of sexual activity. [Psikiyatride Guncel Yaklasimlar - Current Approaches in Psychiatry 2016; 8(1): 52-63]
Use of model analysis to analyse Thai students’ attitudes and approaches to physics problem solving
Rakkapao, S.; Prasitpong, S.
2018-03-01
This study applies the model analysis technique to explore the distribution of Thai students’ attitudes and approaches to physics problem solving and how those attitudes and approaches change as a result of different experiences in physics learning. We administered the Attitudes and Approaches to Problem Solving (AAPS) survey to over 700 Thai university students from five different levels, namely students entering science, first-year science students, and second-, third- and fourth-year physics students. We found that their inferred mental states were generally mixed. The largest gap between physics experts and all levels of the students was about the role of equations and formulas in physics problem solving, and in views towards difficult problems. Most participants of all levels believed that being able to handle the mathematics is the most important part of physics problem solving. Most students’ views did not change even though they gained experiences in physics learning.
Personality Disorder Models and their Coverage of Interpersonal Problems
Williams, Trevor F.; Simms, Leonard J.
2015-01-01
Interpersonal dysfunction is a defining feature of personality disorders (PDs) and can serve as a criterion for comparing PD models. In this study, the interpersonal coverage of four competing PD models was examined using a sample of 628 current or recent psychiatric patients who completed the NEO Personality Inventory-3 First Half (NEO-PI-3FH; McCrae & Costa, 2007), Personality Inventory for the DSM-5 (PID-5; Krueger et al., 2012), Computerized Adaptive Test of Personality Disorder-Static Form (CAT-PD-SF; Simms et al., 2011), and Structured Clinical Interview for DSM-IV Personality Questionnaire (SCID-II PQ; First, Spitzer, Gibbon, & Williams, 1995). Participants also completed the Inventory of Interpersonal Problems-Short Circumplex (IIP-SC; Soldz, Budman, Demby, & Merry, 1995) to assess interpersonal dysfunction. Analyses compared the severity and style of interpersonal problems that characterize PD models. Previous research with DSM-5 Section II and III models was generally replicated. Extraversion and Agreeableness facets related to the most well defined interpersonal problems across normal-range and pathological traits. Pathological trait models provided more coverage of dominance problems, whereas normal-range traits covered nonassertiveness better. These results suggest that more work may be needed to reconcile descriptions of personality pathology at the level of specific constructs. PMID:26168406
The κ-generalized distribution: A new descriptive model for the size distribution of incomes
Clementi, F.; Di Matteo, T.; Gallegati, M.; Kaniadakis, G.
2008-05-01
This paper proposes the κ-generalized distribution as a model for describing the distribution and dispersion of income within a population. Formulas for the shape, moments and standard tools for inequality measurement-such as the Lorenz curve and the Gini coefficient-are given. A method for parameter estimation is also discussed. The model is shown to fit extremely well the data on personal income distribution in Australia and in the United States.
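As an illustrative aside, the standard inequality tools named in this abstract (the Lorenz curve and the Gini coefficient) can be computed directly from an income sample. This is a minimal sketch of the generic sample-based versions, not the paper's closed-form κ-generalized expressions:

```python
# Minimal sketch: empirical Lorenz curve and Gini coefficient from a sample
# of incomes. Illustrative only; the k-generalized closed forms from the
# paper are not reproduced here.

def lorenz_curve(incomes):
    """Return (population share, income share) points of the Lorenz curve."""
    xs = sorted(incomes)
    total = sum(xs)
    n = len(xs)
    cum = 0.0
    points = [(0.0, 0.0)]
    for i, x in enumerate(xs, start=1):
        cum += x
        points.append((i / n, cum / total))
    return points

def gini(incomes):
    """Gini coefficient via the trapezoidal area under the Lorenz curve."""
    pts = lorenz_curve(incomes)
    area = 0.0
    for (p0, l0), (p1, l1) in zip(pts, pts[1:]):
        area += (p1 - p0) * (l0 + l1) / 2.0
    return 1.0 - 2.0 * area

print(round(gini([1, 1, 1, 1]), 3))   # perfectly equal incomes -> 0.0
print(round(gini([0, 0, 0, 10]), 3))  # one person holds everything -> 0.75
```

The same `gini` function can then be applied to any fitted or observed income sample for a quick sanity check against reported inequality measures.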
Size distribution of dust grains: A problem of self-similarity
International Nuclear Information System (INIS)
Henning, TH.; Dorschner, J.; Guertler, J.
1989-01-01
Distribution functions describing the results of natural processes frequently show the shape of power laws. It is an open question whether this behavior comes about simply through the chosen mathematical representation of the observational data or reflects a deep-seated principle of nature. The authors suppose the latter to be the case. Using a dust model consisting of silicate and graphite grains, Mathis et al. (1977) showed that the interstellar extinction curve can be represented by taking as a basis a grain radii distribution of power-law type n(a) ∝ a^(-p) with 3.3 ≤ p ≤ 3.6 (example 1). A different approach to understanding power laws like that in example 1 becomes possible through the theory of self-similar processes (scale invariance). The beta model of turbulence (Frisch et al., 1978) leads in an elementary way to the concept of the self-similarity dimension D, a special case of Mandelbrot's (1977) fractal dimension. In the framework of this beta model, it is supposed that at each stage of a cascade the system decays into N clumps and that only the portion βN remains active further on. An important feature of this model is that the active eddies become less and less space-filling. In the following, the authors assume that grain-grain collisions are such a scale-invariant process and that the remaining grains are the inactive (frozen) clumps of the cascade. In this way, a size distribution n(a) da ∝ a^(-(D+1)) da (example 2) results. It seems highly probable that the power-law character of the size distribution of interstellar dust grains is the result of a self-similar process. We cannot, however, exclude that the process leading to the interstellar grain size distribution is not fragmentation at all.
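As an aside to this record, a truncated power-law size distribution of the MRN type quoted above, n(a) ∝ a^(-p), can be sampled by inverse-transform sampling. A minimal Python sketch follows; the size bounds a_min and a_max and the choice p = 3.5 are illustrative assumptions, not values taken from the paper:

```python
# Minimal sketch: inverse-transform sampling of a truncated power law
# n(a) ~ a^(-p) on [a_min, a_max]. The bounds (in micrometres) and p = 3.5
# are illustrative assumptions.
import random

def sample_grain_radius(p=3.5, a_min=0.005, a_max=0.25, rng=random):
    """Draw one grain radius from n(a) ~ a^(-p) restricted to [a_min, a_max]."""
    u = rng.random()
    # The CDF is linear in a^(1-p), so invert that transform directly.
    lo, hi = a_min ** (1.0 - p), a_max ** (1.0 - p)
    return (lo + u * (hi - lo)) ** (1.0 / (1.0 - p))

random.seed(0)
radii = [sample_grain_radius() for _ in range(100_000)]
# A steep power law is dominated by the smallest grains:
frac_small = sum(r < 0.01 for r in radii) / len(radii)
print(f"fraction of grains below 0.01 um: {frac_small:.2f}")
```

The analytic check is straightforward: for these bounds the CDF at 0.01 μm evaluates to roughly 0.82, so the great majority of sampled grains sit near the lower cutoff, as the beta-model cascade picture suggests.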
Analysis and Comparison of Typical Models within Distribution Network Design
DEFF Research Database (Denmark)
Jørgensen, Hans Jacob; Larsen, Allan; Madsen, Oli B.G.
This paper investigates the characteristics of typical optimisation models within Distribution Network Design. During the paper fourteen models known from the literature will be thoroughly analysed. Through this analysis a schematic approach to categorisation of distribution network design models...... for educational purposes. Furthermore, the paper can be seen as a practical introduction to network design modelling as well as a manual or recipe for constructing such a model....
Guidance for modeling causes and effects in environmental problem solving
Armour, Carl L.; Williamson, Samuel C.
1988-01-01
Environmental problems are difficult to solve because their causes and effects are not easily understood. When attempts are made to analyze causes and effects, the principal challenge is organization of information into a framework that is logical, technically defensible, and easy to understand and communicate. When decision makers attempt to solve complex problems before an adequate cause and effect analysis is performed, there are serious risks. These risks include: greater reliance on subjective reasoning, lessened chance for scoping an effective problem solving approach, impaired recognition of the need for supplemental information to attain understanding, increased chance of making unsound decisions, and lessened chance of gaining approval and financial support for a program. Cause and effect relationships can be modeled. This type of modeling has been applied to various environmental problems, including cumulative impact assessment (Dames and Moore 1981; Meehan and Weber 1985; Williamson et al. 1987; Raley et al. 1988) and evaluation of effects of quarrying (Sheate 1986). This guidance for field users was written because of the current interest in documenting cause-effect logic as a part of ecological problem solving. Principal literature sources relating to the modeling approach are: Riggs and Inouye (1975a, b), Erickson (1981), and United States Office of Personnel Management (1986).
Distributed Model Predictive Control for Smart Energy Systems
DEFF Research Database (Denmark)
Halvgaard, Rasmus Fogtmann; Vandenberghe, Lieven; Poulsen, Niels Kjølstad
2016-01-01
Integration of a large number of flexible consumers in a smart grid requires a scalable power balancing strategy. We formulate the control problem as an optimization problem to be solved repeatedly by the aggregator in a model predictive control framework. To solve the large-scale control problem...
Modeling the economics and market adoption of distributed power generation
International Nuclear Information System (INIS)
Maribu, Karl Magnus
2006-01-01
significant value in postponing investment until larger projects are profitable. In the second paper, Combined Heat and Power in Commercial Buildings: Investment and Risk Analysis, a Monte Carlo simulation program to find the value and risk characteristics of combined heat and power units is presented. Using historical price data to estimate price process parameters, it is shown that uncertain prices should not be a barrier to investment, since on-site generators can adapt to uncertain prices and reduce the total energy cost risks. In the third paper, Optimizing Distributed Generation Systems for Commercial Buildings, which uses a mixed-integer linear program, distributed generation portfolios that maximize profitability are tailored to a building's energy load. Distributed generation with heat recovery and thermally activated cooling is found profitable in an office and a health care building, using current generator data and energy tariffs from California. With the fourth paper, Distributed Energy Resources Market Diffusion Model, the analysis is taken a step further to predict distributed generation market diffusion. Market penetration is assumed to depend on economic attractiveness and on knowledge and trust in the technologies. A case study based on the U.S. commercial sector depicts a large market for reciprocating engines and micro turbines, with the West and Northeast regions driving market diffusion. Technology research and outreach programs can speed up and change the path of capacity expansion. The thesis presents three different models for analyzing investments in distributed generation, all of which have benefits and disadvantages. Choice of model depends on the specific application, but the different approaches can be used on the same problem to analyze it from different viewpoints. The cases in the thesis indicate that distributed generation can reduce expected energy costs while at the same time improving cost predictability.
Further, the thesis identifies several important
A review of mathematical models in economic environmental problems
DEFF Research Database (Denmark)
Nahorski, Z.; Ravn, H.F.
2000-01-01
The paper presents a review of mathematical models used in economic analysis of environmental problems. This area of research combines macroeconomic models of growth, as dependent on capital, labour, resources, etc., with environmental models describing such phenomena as natural resources...... exhaustion or pollution accumulation and degradation. In simpler cases the models can be treated analytically and the utility function can be optimized using, e.g., such tools as the maximum principle. In more complicated cases calculation of the optimal environmental policies requires a computer solution....
A Compromise Programming Model for Highway Maintenance Resources Allocation Problem
Directory of Open Access Journals (Sweden)
Hui Xiong
2012-01-01
Full Text Available This paper formulates a bilevel compromise programming model for allocating resources between pavement and bridge deck maintenances. The first level of the model aims to solve the resource allocation problems for pavement management and bridge deck maintenance, without considering resource sharing between them. At the second level, the model uses the results from the first step as an input and generates the final solution to the resource-sharing problem. To solve the model, the paper applies genetic algorithms to search for the optimal solution. We use a combination of two digits to represent different maintenance types. Results of numerical examples show that the conditions of both pavements and bridge decks are improved significantly by applying compromise programming, rather than conventional methods. Resources are also utilized more efficiently when the proposed method is applied.
Deterioration and optimal rehabilitation modelling for urban water distribution systems
Zhou, Y.
2018-01-01
Pipe failures in water distribution systems can have a serious impact and hence it’s important to maintain the condition and integrity of the distribution system. This book presents a whole-life cost optimisation model for the rehabilitation of water distribution systems. It combines a pipe breakage
Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution
Directory of Open Access Journals (Sweden)
Emmanuel Kidando
2017-01-01
Full Text Available Multistate models, that is, models with more than two distributions, are preferred over single-state probability models in modeling the distribution of travel time. The literature review indicated that finite multistate modeling of travel time using the lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of estimating the travel time distribution to the unbounded lognormal distribution. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with a stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. Then, the Markov Chain Monte Carlo (MCMC) sampling technique was employed to estimate the parameters' posterior distribution. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrated that this model offers significant flexibility in modeling to account for complex mixture distributions of the travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of onset and offset of congestion periods.
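As an aside, the stick-breaking representation mentioned in this abstract has a compact generic form: component weights w_k = v_k · ∏_{j<k}(1 − v_j) with v_k ~ Beta(1, α). A minimal Python sketch of the truncated construction follows; the six-component truncation mirrors the abstract's setup, while the concentration parameter α = 1 and the seed are illustrative assumptions:

```python
# Minimal sketch of truncated stick-breaking for a Dirichlet process:
# w_k = v_k * prod_{j<k}(1 - v_j), with v_k ~ Beta(1, alpha).
# alpha and the seed are illustrative assumptions.
import random

def stick_breaking_weights(alpha, max_components, rng):
    """Return max_components mixture weights from a truncated stick-breaking process."""
    weights, remaining = [], 1.0
    for _ in range(max_components - 1):
        v = rng.betavariate(1.0, alpha)
        weights.append(v * remaining)
        remaining *= 1.0 - v
    weights.append(remaining)  # assign the leftover stick to the last component
    return weights

rng = random.Random(42)
w = stick_breaking_weights(alpha=1.0, max_components=6, rng=rng)
print([round(x, 3) for x in w], "sum =", round(sum(w), 6))
```

In a full DPMM each weight would be paired with lognormal component parameters drawn from a base measure; here only the weight construction is shown.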
A Study of Four Textbook Distribution Models
Graydon, Benjamin; Urbach-Buholz, Blake; Kohen, Cheryl
2011-01-01
Textbooks too often hinder rather than help students because of their prohibitively expensive prices. Colleges and universities facing intense pressure to lower education expenses while increasing access, retention, and achievement now find addressing the textbook problem more and more urgent. Used textbook sales have grown dramatically over the…
The problem of predicting the size distribution of sediment supplied by hillslopes to rivers
Sklar, Leonard S.; Riebe, Clifford S.; Marshall, Jill A.; Genetti, Jennifer; Leclere, Shirin; Lukens, Claire L.; Merces, Viviane
2017-01-01
Sediments link hillslopes to river channels. The size of sediments entering channels is a key control on river morphodynamics across a range of scales, from channel response to human land use to landscape response to changes in tectonic and climatic forcing. However, very little is known about what controls the size distribution of particles eroded from bedrock on hillslopes, and how particle sizes evolve before sediments are delivered to channels. Here we take the first steps toward building a geomorphic transport law to predict the size distribution of particles produced on hillslopes and supplied to channels. We begin by identifying independent variables that can be used to quantify the influence of five key boundary conditions: lithology, climate, life, erosion rate, and topography, which together determine the suite of geomorphic processes that produce and transport sediments on hillslopes. We then consider the physical and chemical mechanisms that determine the initial size distribution of rock fragments supplied to the hillslope weathering system, and the duration and intensity of weathering experienced by particles on their journey from bedrock to the channel. We propose a simple modeling framework with two components. First, the initial rock fragment sizes are set by the distribution of spacing between fractures in unweathered rock, which is influenced by stresses encountered by rock during exhumation and by rock resistance to fracture propagation. That initial size distribution is then transformed by a weathering function that captures the influence of climate and mineralogy on chemical weathering potential, and the influence of erosion rate and soil depth on residence time and the extent of particle size reduction. Model applications illustrate how spatial variation in weathering regime can lead to bimodal size distributions and downstream fining of channel sediment by down-valley fining of hillslope sediment supply, two examples of hillslope control on
Radar meteors range distribution model. I. Theory
Czech Academy of Sciences Publication Activity Database
Pecinová, Drahomíra; Pecina, Petr
2007-01-01
Roč. 37, č. 2 (2007), s. 83-106 ISSN 1335-1842 R&D Projects: GA ČR GA205/03/1405 Institutional research plan: CEZ:AV0Z10030501 Keywords : physics of meteors * radar meteors * range distribution Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics
An examination of the developmental propensity model of conduct problems.
Rhee, Soo Hyun; Friedman, Naomi P; Corley, Robin P; Hewitt, John K; Hink, Laura K; Johnson, Daniel P; Smith Watts, Ashley K; Young, Susan E; Robinson, JoAnn; Waldman, Irwin D; Zahn-Waxler, Carolyn
2016-05-01
The present study tested specific hypotheses advanced by the developmental propensity model of the etiology of conduct problems in the Colorado Longitudinal Twin Study, a prospective, longitudinal, genetically informative sample. High negative emotionality, low behavioral inhibition, low concern and high disregard for others, and low cognitive ability assessed during toddlerhood (age 14 to 36 months) were examined as predictors of conduct problems in later childhood and adolescence (age 4 to 17 years). Each hypothesized antisocial propensity dimension predicted conduct problems, but some predictions may be context specific or due to method covariance. The most robust predictors were observed disregard for others (i.e., responding to others' distress with active, negative responses such as anger and hostility), general cognitive ability, and language ability, which were associated with conduct problems reported by parents, teachers, and adolescents, and change in observed negative emotionality (i.e., frustration tolerance), which was associated with conduct problems reported by teachers and adolescents. Furthermore, associations between the most robust early predictors and later conduct problems were influenced by the shared environment rather than genes. We conclude that shared environmental influences that promote disregard for others and detract from cognitive and language development during toddlerhood also predispose individuals to conduct problems in later childhood and adolescence. The identification of those shared environmental influences common to early antisocial propensity and later conduct problems is an important future direction, and additional developmental behavior genetic studies examining the interaction between children's characteristics and socializing influences on conduct problems are needed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Psychological profile: the problem of modeling the unknown criminal personality
Directory of Open Access Journals (Sweden)
Г. М. Гетьман
2013-10-01
Full Text Available The article investigates the problem of modeling an unknown person in the preparation of a criminal psychological profile. Some approaches to the concepts of "psychological profile" and "psychological portrait" are considered, in particular a proposed delineation of these terms. We consider the system of steps in the development of the psychological profile of an unknown perpetrator.
On the problem of model reduction in the gap metric
Mutsaers, M.E.C.; Weiland, S.
2010-01-01
This paper deals with the model reduction problem where, for a given linear time-invariant dynamical system of complexity n, a simpler system of complexity r
Stieltjes electrostatic model interpretation for bound state problems
Indian Academy of Sciences (India)
In this paper, it is shown that Stieltjes electrostatic model and quantum Hamilton Jacobi formalism are analogous to each other. This analogy allows the bound state problem to mimic as unit moving imaginary charges i ℏ , which are placed in between the two fixed imaginary charges arising due to the classical turning ...
Directory of Open Access Journals (Sweden)
Eckhard Limpert
Full Text Available BACKGROUND: The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by mean ± SD, or with the standard error of the mean, mean ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. METHODOLOGY/PRINCIPAL FINDINGS: Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are in general far more important than additive ones, and benefit from a multiplicative (or log-normal) approach. Fortunately, quite similarly to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of both a new sign, x/ (times-divide), and the corresponding notation. Analogous to mean ± SD, it connects the multiplicative (or geometric) mean, mean*, and the multiplicative standard deviation, s*, in the form mean* x/ s*, which is advantageous and recommended. CONCLUSIONS/SIGNIFICANCE: The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life.
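The multiplicative summary advocated in this abstract, mean* x/ s*, is simple to compute: take the mean and standard deviation of the logarithms and exponentiate both. A minimal Python sketch with purely illustrative data:

```python
# Minimal sketch: geometric mean (mean*) and multiplicative standard
# deviation (s*) for positive data, so that roughly 2/3 of log-normal
# data fall in [mean*/s*, mean* x s*]. The data below are illustrative.
import math

def multiplicative_summary(data):
    """Return (mean*, s*): geometric mean and multiplicative SD of positive data."""
    logs = [math.log(x) for x in data]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((v - mu) ** 2 for v in logs) / (n - 1)  # sample variance of the logs
    return math.exp(mu), math.exp(math.sqrt(var))

data = [1.0, 2.0, 4.0, 8.0, 16.0]  # logs are equally spaced multiples of ln 2
gmean, s_star = multiplicative_summary(data)
print("mean* =", round(gmean, 3), " s* =", round(s_star, 3))
```

For this example the geometric mean is exactly 4.0, the midpoint of the data on the multiplicative scale, whereas the arithmetic mean (6.2) is pulled up by the skew, which is precisely the abstract's point.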
Modelling the distribution of pig production and diseases in Thailand
Thanapongtharm, Weerapong
2015-01-01
This thesis, entitled “Modelling the distribution of pig production and diseases in Thailand”, presents many aspects of pig production in Thailand including the characteristics of pig farming system, distribution of pig population and pig farms, spatio-temporal distribution and risk of most important diseases in pig at present, and the suitability area for pig farming. Spatial distribution and characteristics of pig farming in Thailand were studied using time-series pig population data to des...
Calculations of dose distributions using a neural network model
International Nuclear Information System (INIS)
Mathieu, R; Martin, E; Gschwind, R; Makovicka, L; Contassot-Vivier, S; Bahi, J
2005-01-01
The main goal of external beam radiotherapy is the treatment of tumours, while sparing, as much as possible, surrounding healthy tissues. In order to master and optimize the dose distribution within the patient, dosimetric planning has to be carried out. Thus, for determining the most accurate dose distribution during treatment planning, a compromise must be found between the precision and the speed of calculation. Current techniques, using analytic methods, models and databases, are rapid but lack precision. Enhanced precision can be achieved by using calculation codes based, for example, on Monte Carlo methods. However, in spite of all efforts to optimize speed (methods and computer improvements), Monte Carlo based methods remain painfully slow. A newer way to handle these problems is to employ neural networks in dosimetric calculation. Neural networks (Wu and Zhu 2000 Phys. Med. Biol. 45 913-22) provide the advantages of the various approaches above while avoiding their main inconvenience, i.e., time-consuming calculations. This permits us to obtain quick and accurate results during clinical treatment planning. Currently, results obtained for a single depth-dose calculation using a Monte Carlo based code (such as BEAM (Rogers et al 2003 NRCC Report PIRS-0509(A) rev G)) require hours of computing. By contrast, the practical use of neural networks (Mathieu et al 2003 Proceedings Journees Scientifiques Francophones, SFRP) provides almost instant results and quite low errors (less than 2%) for a two-dimensional dosimetric map
Improving permafrost distribution modelling using feature selection algorithms
Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail
2016-04-01
The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Applying ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the set of factors required and improves the knowledge of the adopted features and their relation with the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as permafrost training data. The FS algorithms used indicated which variables appeared less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is an ML algorithm that performs FS as part of its
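As an aside, the Information Gain filter described in this abstract has a compact definition, IG(feature) = H(class) − H(class | feature). A minimal Python sketch with a toy presence/absence example (the feature names and data are purely illustrative, not the study's dataset):

```python
# Minimal sketch of the Information Gain filter for a binary
# presence/absence label and a discretised feature.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Reduction in label entropy after splitting on a discrete feature."""
    n = len(labels)
    cond = 0.0
    for value in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == value]
        cond += (len(subset) / n) * entropy(subset)
    return entropy(labels) - cond

# Toy data: in this fabricated example, 'high' altitude perfectly
# predicts permafrost presence, so the gain equals the full label entropy.
altitude = ["high", "high", "low", "low"]
presence = [1, 1, 0, 0]
print(information_gain(altitude, presence))  # perfectly informative -> 1.0
```

Ranking every candidate variable by this score and discarding the low-gain ones is exactly the filter step the abstract describes for IG.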
Modeling and optimization of an electric power distribution network ...
African Journals Online (AJOL)
Modeling and optimization of an electric power distribution network planning system using ... of the network was modelled with non-linear mathematical expressions. ... given feasible locations, re-conductoring of existing feeders in the network, ...
Bilinear reduced order approximate model of parabolic distributed solar collectors
Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem
2015-01-01
This paper proposes a novel, low dimensional and accurate approximate model for the distributed parabolic solar collector, by means of a modified gaussian interpolation along the spatial domain. The proposed reduced model, taking the form of a low
International Nuclear Information System (INIS)
Yuan Yuan; Yi Hongliang; Shuai Yong; Wang Fuqiang; Tan Heping
2010-01-01
As part of resolving optical properties in atmospheric radiative transfer calculations, this paper focuses on obtaining aerosol optical thicknesses (AOTs) in the visible and near-infrared wave band through an indirect method, by retrieving the values of aerosol particle size distribution parameters. Although various inverse techniques have been applied to obtain values for these parameters, we choose a stochastic particle swarm optimization (SPSO) algorithm to perform the inverse calculation. The computational performance of different inverse methods is investigated, and the influence of swarm size on the inverse computation is examined. Next, the computational efficiency for various particle size distributions and the influence of measurement errors on computational accuracy are compared. Finally, we recover particle size distributions for atmospheric aerosols over Beijing using the measured AOT data (at wavelengths λ=0.400, 0.690, 0.870, and 1.020 μm) obtained from AERONET at different times, and then calculate other AOT values for this band based on the inverse results. With calculations agreeing with measured data, the SPSO algorithm shows good practicability.
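As an aside, the particle swarm machinery underlying the SPSO variant named in this abstract can be sketched generically: each particle's velocity is pulled toward its personal best and the swarm's global best. The sphere test function and all hyperparameters below are illustrative assumptions, not the paper's stochastic variant:

```python
# Minimal sketch of a plain particle swarm optimiser (not the paper's
# SPSO variant). Hyperparameters and the sphere objective are illustrative.
import random

def pso(objective, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimise objective over [lo, hi]^dim; return (best position, best value)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimise a simple sphere function as a stand-in for the AOT inverse problem.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
print("best objective value:", round(best_val, 6))
```

In an actual AOT retrieval, the objective would measure the misfit between modelled and measured optical thicknesses as a function of the size distribution parameters.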
A Monte Carlo estimation of the marginal distributions in a problem of probabilistic dynamics
International Nuclear Information System (INIS)
Labeau, P.E.
1996-01-01
Modelling the effect of the dynamic behaviour of a system on its PSA study leads, in a Markovian framework, to a first-order development of the Chapman-Kolmogorov equation, whose solutions are the probability densities of the problem. Because of its size, there is no hope of solving these equations directly in realistic circumstances. We present in this paper a biased simulation giving the marginals and compare different ways of speeding up the integration of the equations of the dynamics
Designing the Distributed Model Integration Framework – DMIF
Belete, Getachew F.; Voinov, Alexey; Morales, Javier
2017-01-01
We describe and discuss the design and prototype of the Distributed Model Integration Framework (DMIF) that links models deployed on different hardware and software platforms. We used distributed computing and service-oriented development approaches to address the different aspects of
A Modeling Framework for Schedulability Analysis of Distributed Avionics Systems
DEFF Research Database (Denmark)
Han, Pujie; Zhai, Zhengjun; Nielsen, Brian
2018-01-01
This paper presents a modeling framework for schedulability analysis of distributed integrated modular avionics (DIMA) systems that consist of spatially distributed ARINC-653 modules connected by a unified AFDX network. We model a DIMA system as a set of stopwatch automata (SWA) in UPPAAL...
A Proposed Benchmark Problem for Scatter Calculations in Radiographic Modelling
Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.
2009-03-01
Code validation is a permanent concern in computer modelling, and has been addressed repeatedly in eddy current and ultrasonic modelling. A good benchmark problem is sufficiently simple to be taken into account by various codes without strong requirements on geometry representation capabilities, focuses on few or even a single aspect of the problem at hand to facilitate interpretation and to avoid compound errors compensating one another, yields a quantitative result, and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radiographic modelling: the prediction of scattered radiation. Many NDT applications cannot neglect scattered radiation, and the scatter calculation is thus important for faithfully simulating the inspection situation. Our benchmark problem covers the wall thickness range of 10 to 50 mm for single-wall inspections, with energies ranging from 100 to 500 keV in the first stage, and up to 1 MeV with wall thicknesses up to 70 mm in the extended stage. A simple plate geometry is sufficient for this purpose, and the scatter data are compared on a photon level, without a film model, which allows for comparisons with reference codes like MCNP. We compare results of three Monte Carlo codes (McRay, Sindbad and Moderato) as well as an analytical first-order scattering code (VXI), and confront them with results obtained with MCNP. The comparison with an analytical scatter model provides insight into the application domain where this kind of approach can successfully replace Monte Carlo calculations.
A finite element model for the quench front evolution problem
International Nuclear Information System (INIS)
Folescu, J.; Galeao, A.C.N.R.; Carmo, E.G.D. do.
1985-01-01
A model for the rewetting problem associated with the loss of coolant accident in a PWR reactor is proposed. A variational formulation for the time-dependent heat conduction problem on fuel rod cladding is used, and appropriate boundary conditions are assumed in order to simulate the thermal interaction between the fuel rod cladding and the fluid. A numerical procedure which uses the finite element method for the spatial discretization and a Crank-Nicolson-like method for the step-by-step integration is developed. Some numerical results are presented showing the quench front evolution and its stationary profile. (Author)
A generalized statistical model for the size distribution of wealth
International Nuclear Information System (INIS)
Clementi, F; Gallegati, M; Kaniadakis, G
2012-01-01
In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and conclude an excellent agreement with the data that is superior to any other model already known in the literature. (paper)
A generalized statistical model for the size distribution of wealth
Clementi, F.; Gallegati, M.; Kaniadakis, G.
2012-12-01
In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and conclude an excellent agreement with the data that is superior to any other model already known in the literature.
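The κ-generalized survival function has a simple closed form. As an illustrative sketch (the formula S(x) = exp_κ(−βx^α) with exp_κ(u) = (√(1+κ²u²) + κu)^(1/κ) is quoted from the κ-statistics literature, not from this abstract, and the parameter values below are arbitrary):

```python
import math

def kappa_exp(u, kappa):
    """kappa-exponential exp_k(u) = (sqrt(1 + k^2 u^2) + k u)^(1/k);
    reduces to the ordinary exponential as kappa -> 0."""
    if kappa == 0:
        return math.exp(u)
    return (math.sqrt(1.0 + kappa**2 * u**2) + kappa * u) ** (1.0 / kappa)

def survival(x, alpha=2.0, beta=1.0, kappa=0.75):
    """P(X > x) = exp_k(-beta * x**alpha): exponential-like body
    with a heavy power-law tail, as is typical for wealth data."""
    return kappa_exp(-beta * x**alpha, kappa)
```

The appeal of the family is precisely this interpolation: near the origin it behaves like a stretched exponential, while for large x the survival function decays as a power law, matching the Pareto tail observed in income and wealth distributions.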
Robustness of a Distributed Knowledge Management Model
DEFF Research Database (Denmark)
Pedersen, Mogens Kühn; Larsen, Michael Holm
1999-01-01
Knowledge management based on symmetric incentives is rarely found in literature. A knowledge exchange model relies upon a double loop knowledge conversion with symmetric incentives in a network. The model merges specific knowledge with knowledge from other actors into a decision support system...
Karmeshu; Gupta, Varun; Kadambari, K V
2011-06-01
A single neuronal model incorporating distributed delay (memory)is proposed. The stochastic model has been formulated as a Stochastic Integro-Differential Equation (SIDE) which results in the underlying process being non-Markovian. A detailed analysis of the model when the distributed delay kernel has exponential form (weak delay) has been carried out. The selection of exponential kernel has enabled the transformation of the non-Markovian model to a Markovian model in an extended state space. For the study of First Passage Time (FPT) with exponential delay kernel, the model has been transformed to a system of coupled Stochastic Differential Equations (SDEs) in two-dimensional state space. Simulation studies of the SDEs provide insight into the effect of weak delay kernel on the Inter-Spike Interval(ISI) distribution. A measure based on Jensen-Shannon divergence is proposed which can be used to make a choice between two competing models viz. distributed delay model vis-á-vis LIF model. An interesting feature of the model is that the behavior of (CV(t))((ISI)) (Coefficient of Variation) of the ISI distribution with respect to memory kernel time constant parameter η reveals that neuron can switch from a bursting state to non-bursting state as the noise intensity parameter changes. The membrane potential exhibits decaying auto-correlation structure with or without damped oscillatory behavior depending on the choice of parameters. This behavior is in agreement with empirically observed pattern of spike count in a fixed time window. The power spectral density derived from the auto-correlation function is found to exhibit single and double peaks. The model is also examined for the case of strong delay with memory kernel having the form of Gamma distribution. In contrast to fast decay of damped oscillations of the ISI distribution for the model with weak delay kernel, the decay of damped oscillations is found to be slower for the model with strong delay kernel.
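The extended-state-space trick described above, replacing the exponential memory kernel by an auxiliary variable so that the pair (v, y) is jointly Markovian, can be sketched with a generic Euler-Maruyama loop. All equations and parameter values below are illustrative stand-ins (a leaky membrane driven by an Ornstein-Uhlenbeck current), not the authors' exact model:

```python
import math, random

def simulate_isi(tau=0.02, eta=0.01, mu=1.2, sigma=0.5,
                 v_th=1.0, v_reset=0.0, dt=1e-4, t_max=20.0, seed=1):
    """Euler-Maruyama simulation of a membrane potential v relaxing toward
    an Ornstein-Uhlenbeck current y; y is the auxiliary state that makes
    an exponential delay kernel Markovian.  Spikes are threshold
    crossings of v; the function returns the inter-spike intervals."""
    rng = random.Random(seed)
    v, y, t = v_reset, mu, 0.0
    last_spike, isis = 0.0, []
    noise_scale = sigma * math.sqrt(dt)
    while t < t_max:
        v += ((y - v) / tau) * dt                      # membrane relaxes toward y
        y += (-(y - mu) / eta) * dt + noise_scale * rng.gauss(0.0, 1.0)
        t += dt
        if v >= v_th:                                  # spike, then reset
            isis.append(t - last_spike)
            last_spike, v = t, v_reset
    return isis

def cv(xs):
    """Coefficient of variation of a sample."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs)) / m
```

Statistics such as the CV of the ISI distribution can then be read off the simulated intervals, which is the kind of quantity the abstract tracks against the memory time constant η.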
A distributed computing model for telemetry data processing
Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.
1994-05-01
We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
A distributed computing model for telemetry data processing
Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.
1994-01-01
We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
Bayesian models based on test statistics for multiple hypothesis testing problems.
Ji, Yuan; Lu, Yiling; Mills, Gordon B
2008-04-01
We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
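The Bayesian FDR step can be illustrated with the standard rule of rejecting hypotheses in order of increasing posterior null probability while the running mean of those probabilities (an estimate of the FDR among the rejected set) stays at or below the target level. This is a generic sketch of that rule, not the authors' exact algorithm:

```python
def bayesian_fdr_reject(post_null, alpha=0.10):
    """Reject hypotheses in ascending order of posterior null probability
    while the running mean of those probabilities (the estimated FDR of
    the rejected set) stays <= alpha.  Returns indices of rejections."""
    order = sorted(range(len(post_null)), key=lambda i: post_null[i])
    rejected, total = [], 0.0
    for i in order:
        total += post_null[i]
        if total / (len(rejected) + 1) <= alpha:
            rejected.append(i)
        else:
            break
    return rejected
```

In a differential-expression setting, `post_null[i]` would be the posterior probability that gene i is null, computed from the modeled test-statistic distributions.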
Tempered stable distributions stochastic models for multiscale processes
Grabchak, Michael
2015-01-01
This brief is concerned with tempered stable distributions and their associated Lévy processes. It is a good text for researchers interested in learning about tempered stable distributions. A tempered stable distribution is one which takes a stable distribution and modifies its tails to make them lighter. The motivation for this class comes from the fact that infinite-variance stable distributions appear to provide a good fit to data in a variety of situations, but the extremely heavy tails of these models are not realistic for most real-world applications. The idea of using distributions that modify the tails of stable models to make them lighter seems to have originated in the influential paper of Mantegna and Stanley (1994). Since then, these distributions have been extended and generalized in a variety of ways. They have been applied to a wide variety of areas including mathematical finance, biostatistics, computer science, and physics.
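Tempering the tail of a positive stable law can be sketched by exponential tilting: propose from the stable law and accept with probability e^(−λx), which leaves the body of the distribution largely intact while making the tail light. The Chambers-Mallows-Stuck proposal formula below is the standard sampler for a totally skewed stable variate, quoted from memory and used here only up to an unimportant scale factor:

```python
import math, random

def positive_stable(alpha, rng):
    """Chambers-Mallows-Stuck sampler for a totally skewed positive
    alpha-stable variate, 0 < alpha < 1 (up to a constant scale)."""
    V = rng.uniform(-math.pi / 2, math.pi / 2)
    E = rng.expovariate(1.0)
    t = alpha * (V + math.pi / 2)
    return (math.sin(t) / math.cos(V) ** (1.0 / alpha)
            * (math.cos(V - t) / E) ** ((1.0 - alpha) / alpha))

def tempered_stable(alpha, lam, rng):
    """Exponential tilting by rejection: accept a stable proposal x with
    probability exp(-lam * x), which lightens the heavy right tail."""
    while True:
        x = positive_stable(alpha, rng)
        if rng.random() < math.exp(-lam * x):
            return x
```

A larger tempering parameter λ cuts the tail more aggressively, so the resulting sample mean decreases as λ grows; with λ = 0 the stable law (and its infinite mean for α < 1) is recovered.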
A model for the inverse 1-median problem on trees under uncertain costs
Directory of Open Access Journals (Sweden)
Kien Trung Nguyen
2016-01-01
Full Text Available We consider the problem of modifying the vertex weights of a tree under uncertain costs so that a prespecified vertex becomes optimal and the total cost is optimal under the uncertainty scenario. We propose a model which delivers information about the optimal cost with respect to each confidence level \(\alpha \in [0,1]\). To obtain this goal, we first define an uncertain variable with respect to the minimum cost at each confidence level. If all costs are independently linearly distributed, we present the inverse distribution function of this uncertain variable in \(O(n^{2}\log n)\) time, where \(n\) is the number of vertices in the tree.
A model for solving the prescribed burn planning problem.
Rachmawati, Ramya; Ozlen, Melih; Reinke, Karin J; Hearne, John W
2015-01-01
The increasing frequency of destructive wildfires, with a consequent loss of life and property, has led to fire and land management agencies initiating extensive fuel management programs. This involves long-term planning of fuel reduction activities such as prescribed burning or mechanical clearing. In this paper, we propose a mixed integer programming (MIP) model that determines when and where fuel reduction activities should take place. The model takes into account multiple vegetation types in the landscape and their tolerance to the frequency of fire events, and keeps track of the age of each vegetation class in each treatment unit. The objective is to minimise fuel load over the planning horizon. The complexity of scheduling fuel reduction activities has led to the introduction of sophisticated mathematical optimisation methods. While these approaches can provide optimum solutions, they can be computationally expensive, particularly for fuel management planning which extends across the landscape and spans long-term planning horizons. This raises the question of how much better exact modelling approaches are compared with simpler heuristic approaches. To answer this question, the proposed model is run using an exact MIP (using a commercial MIP solver) and two heuristic approaches that decompose the problem into multiple single-period sub-problems. The first heuristic approach solves the single-period sub-problems as Knapsack Problems (KP), using an exact MIP approach. The second heuristic approach solves the single-period sub-problems using a greedy heuristic. The three methods are compared in terms of model tractability, computational time and objective values. The model was tested using randomised data from 711 treatment units in the Barwon-Otway district of Victoria, Australia. Solutions for the exact MIP could be obtained for up to a 15-year planning horizon only, using a standard implementation of CPLEX. Both heuristic approaches can solve
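A greedy single-period heuristic of the kind compared above can be as simple as ranking treatment units by fuel-load reduction per unit treatment cost and treating them until the period budget runs out. The unit names, gains and costs in the usage below are made up for illustration:

```python
def greedy_burn_plan(units, budget):
    """Greedy single-period heuristic: treat units in descending order of
    fuel-load reduction per unit cost while the budget allows.
    `units` is a list of (name, fuel_reduction, cost) tuples."""
    plan, spent = [], 0.0
    for name, gain, cost in sorted(units, key=lambda u: u[1] / u[2],
                                   reverse=True):
        if spent + cost <= budget:
            plan.append(name)
            spent += cost
    return plan, spent
```

Unlike the exact MIP, such a heuristic ignores inter-period coupling (vegetation ageing, fire-frequency tolerance), which is exactly the gap the paper's comparison quantifies.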
Photovoltaic subsystem marketing and distribution model: programming manual. Final report
Energy Technology Data Exchange (ETDEWEB)
1982-07-01
Complete documentation of the marketing and distribution (M and D) computer model is provided. The purpose is to estimate the costs of selling and transporting photovoltaic solar energy products from the manufacturer to the final customer. The model adjusts for the inflation and regional differences in marketing and distribution costs. The model consists of three major components: the marketing submodel, the distribution submodel, and the financial submodel. The computer program is explained including the input requirements, output reports, subprograms and operating environment. The program specifications discuss maintaining the validity of the data and potential improvements. An example for a photovoltaic concentrator collector demonstrates the application of the model.
Correlation Structures of Correlated Binomial Models and Implied Default Distribution
Mori, Shintaro; Kitsukawa, Kenji; Hisakado, Masato
2008-11-01
We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution has singular correlation structures, reflecting the credit market implications. We point out two possible origins of the singular behavior.
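The Beta-binomial model mentioned above has a closed-form distribution for the number of defaults, obtained by mixing a binomial over a Beta(a, b) default probability. A minimal sketch (parameter values arbitrary, log-gamma used for numerical stability):

```python
import math

def beta_binomial_pmf(k, n, a, b):
    """P(k defaults among n names) when the default indicators are
    exchangeable Bernoulli with probability drawn from Beta(a, b):
    C(n,k) * B(a+k, b+n-k) / B(a, b)."""
    log_pmf = (math.lgamma(a + k) + math.lgamma(b + n - k)
               + math.lgamma(a + b)
               - math.lgamma(a) - math.lgamma(b)
               - math.lgamma(a + b + n))
    return math.comb(n, k) * math.exp(log_pmf)
```

The exchangeable-Bernoulli correlation structure is particularly transparent here: the pairwise default correlation of this mixture is 1/(a + b + 1), so small a + b produces the strongly correlated, fat-tailed loss distributions relevant for senior tranches.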
Species Distribution modeling as a tool to unravel determinants of palm distribution in Thailand
DEFF Research Database (Denmark)
Tovaranonte, Jantrararuk; Barfod, Anders S.; Balslev, Henrik
2011-01-01
As a consequence of the decimation of the forest cover in Thailand from 50% to ca. 20% since the 1950s, it is difficult to gain insight into the drivers behind past, present and future distribution ranges of plant species. Species distribution modeling allows visualization of potential species...... distribution under specific sets of assumptions. In this study we used maximum entropy to map potential distributions of 103 species of palms for which more than 5 herbarium records exist. Palms constitute a keystone plant group from ecological, economic and conservation perspectives. The models were......) and the Area Under the Curve (AUC). All models performed well with AUC scores above 0.95. The predicted distribution ranges showed high suitability for palms in the southern region of Thailand. It also shows that spatial predictor variables are important in cases where historical processes may explain extant...
Directory of Open Access Journals (Sweden)
Lin Zhou
2016-08-01
Full Text Available With the increasing interest in online shopping, Last Mile delivery is regarded as one of the most expensive and polluting, and yet least efficient, stages of the e-commerce supply chain. To address this challenge, a novel location-routing problem with simultaneous home delivery and customer pickup is proposed. This problem aims to build a more effective Last Mile distribution system by providing two kinds of service options when delivering packages to customers. To solve this specific problem, a hybrid evolutionary search algorithm combining a genetic algorithm (GA) and local search (LS) is presented. In this approach, a diverse population generation algorithm along with a two-phase solution initialization heuristic is first proposed to give a high-quality initial population. Then, advanced solution representation, individual evaluation, crossover and mutation operations are designed to enhance the evolution and search efficiency. Computational experiments based on a large family of instances are conducted, and the results obtained indicate the validity of the proposed model and method.
Distributed Prognostics Based on Structural Model Decomposition
National Aeronautics and Space Administration — Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based...
Alloy design as an inverse problem of cluster expansion models
DEFF Research Database (Denmark)
Larsen, Peter Mahler; Kalidindi, Arvind R.; Schmidt, Søren
2017-01-01
Central to a lattice model of an alloy system is the description of the energy of a given atomic configuration, which can be conveniently developed through a cluster expansion. Given a specific cluster expansion, the ground state of the lattice model at 0 K can be solved by finding the configuration... the inverse problem in terms of energetically distinct configurations, using a constraint satisfaction model to identify constructible configurations, and show that a convex hull can be used to identify ground states. To demonstrate the approach, we solve for all ground states for a binary alloy in a 2D...
A new mathematical modeling for pure parsimony haplotyping problem.
Feizabadi, R; Bagherian, M; Vaziri, H R; Salahi, M
2016-11-01
Pure parsimony haplotyping (PPH) is an important problem in bioinformatics because rational haplotype inference plays an important role in the analysis of genetic data and in mapping complex genetic diseases such as Alzheimer's disease, heart disorders, etc. Haplotypes and genotypes are m-length sequences. Although several integer programming models have already been presented for the PPH problem, its NP-hardness has resulted in the ineffectiveness of those models on real instances, especially instances with many heterozygous sites. In this paper, we assign a corresponding number to each haplotype and genotype and, based on those numbers, we set up a mixed integer programming model. Using numbers instead of sequences leads to less complexity of the new model in comparison with previous models, in that there are neither constraints nor variables corresponding to heterozygous nucleotide sites in it. Experimental results confirm the efficiency of the new model in producing better solutions in comparison with two state-of-the-art haplotyping approaches. Copyright © 2016 Elsevier Inc. All rights reserved.
Everyday ethical problems in dementia care: a teleological model.
Bolmsjö, Ingrid Agren; Edberg, Anna-Karin; Sandman, Lars
2006-07-01
In this article, a teleological model for analysis of everyday ethical situations in dementia care is used to analyse and clarify perennial ethical problems in nursing home care for persons with dementia. This is done with the aim of describing how such a model could be useful in a concrete care context. The model was developed by Sandman and is based on four aspects: the goal; ethical side-constraints to what can be done to realize such a goal; structural constraints; and nurses' ethical competency. The model contains the following main steps: identifying and describing the normative situation; identifying and describing the different possible alternatives; assessing and evaluating the different alternatives; and deciding on, implementing and evaluating the chosen alternative. Three ethically difficult situations from dementia care were used for the application of the model. The model proved useful for the analysis of nurses' everyday ethical dilemmas and will be further explored to evaluate how well it can serve as a tool to identify and handle problems that arise in nursing care.
A Contextualized Model of Headquarters-subsidiary Agency Problems
DEFF Research Database (Denmark)
Kostova, Tatiana; Nell, Phillip Christopher; Hoenen, Anne Kristin
This paper proposes an agency model for headquarters-subsidiary relationships in multinational organizations with headquarters as the principal and the subsidiary as the agent. As a departure from classical agency theory, our model is developed for the unit level of analysis and considers two root causes of the agency problem – self-interest and bounded rationality. We argue that one cannot assume absolute self-interest and perfect rationality of agents but should allow them to vary. We explain subsidiary-level variation through a set of internal organizational and external social conditions... in which the headquarters-subsidiary dyad is embedded. We then discuss several agency scenarios that lead to different manifestations of the agency problem. The framework informs more relevant applications of agency theory in organizational studies and motivates future research.
Building a generalized distributed system model
Mukkamala, R.
1993-01-01
The key elements in the 1992-93 period of the project are the following: (1) extensive use of the simulator to implement and test - concurrency control algorithms, interactive user interface, and replica control algorithms; and (2) investigations into the applicability of data and process replication in real-time systems. In the 1993-94 period of the project, we intend to accomplish the following: (1) concentrate on efforts to investigate the effects of data and process replication on hard and soft real-time systems - especially we will concentrate on the impact of semantic-based consistency control schemes on a distributed real-time system in terms of improved reliability, improved availability, better resource utilization, and reduced missed task deadlines; and (2) use the prototype to verify the theoretically predicted performance of locking protocols, etc.
Irving, J.; Koepke, C.; Elsheikh, A. H.
2017-12-01
Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion
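The projection step, separating the model-error component of the residual using a local basis built from the K nearest dictionary entries, might look roughly as follows. The dictionary layout, distance metric and Gram-Schmidt construction are assumptions for illustration, not the authors' implementation:

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def remove_model_error(residual, theta, dictionary, k=2):
    """Project the residual onto a local basis spanned by the error
    realizations of the k dictionary entries (theta_i, error_i) whose
    parameters are nearest to theta, and subtract that component."""
    nearest = sorted(dictionary, key=lambda te: math.dist(te[0], theta))[:k]
    # Gram-Schmidt orthonormalisation of the k nearest error vectors
    basis = []
    for _, err in nearest:
        v = list(err)
        for b in basis:
            c = _dot(v, b)
            v = [vi - c * bi for vi, bi in zip(v, b)]
        norm = math.sqrt(_dot(v, v))
        if norm > 1e-12:              # skip linearly dependent vectors
            basis.append([vi / norm for vi in v])
    # Remove the projection of the residual onto that basis
    out = list(residual)
    for b in basis:
        c = _dot(out, b)
        out = [oi - c * bi for oi, bi in zip(out, b)]
    return out
```

Whatever survives the projection is attributed to observational noise when computing the likelihood, so the approximate forward model's systematic error no longer biases the MCMC acceptance step.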
Environmental problems and economic development in an endogenous fertility model
Frank Joest; Martin Quaas; Johannes Schiller
2006-01-01
Population growth is often viewed as one of the most pressing global problems with respect to environmental deterioration, but the relationships between population development, economic dynamics and environmental pollution are complex due to various feedback mechanisms. We analyze society's economic decisions on birth rates, investment into human and physical capital, and polluting emissions within an optimal control model of the coupled demographic-economic-environmental system. We show that a long-
Finite element approximation to a model problem of transonic flow
International Nuclear Information System (INIS)
Tangmanee, S.
1986-12-01
A model problem of transonic flow, the Tricomi equation, posed in a domain Ω ⊂ ℝ² bounded by a rectangular-curved boundary, is written in the form of symmetric positive differential equations. The finite element method is then applied. When the triangulation of Ω̄ is made of quadrilaterals and the approximation space is the Lagrange polynomial, we get the error estimates. 14 refs, 1 fig
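For reference, the Tricomi equation in its standard form (elliptic for y > 0, hyperbolic for y < 0, changing type across the sonic line y = 0, which is what makes it a model for transonic flow) is:

```latex
% Tricomi equation: mixed elliptic-hyperbolic type
y \, \frac{\partial^{2} u}{\partial x^{2}}
  + \frac{\partial^{2} u}{\partial y^{2}} = 0
```

Sign conventions vary between authors; this is the most common form.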
Real-time distributed economic model predictive control for complete vehicle energy management
Romijn, Constantijn; Donkers, Tijs; Kessels, John; Weiland, Siep
2017-01-01
In this paper, a real-time distributed economic model predictive control approach for complete vehicle energy management (CVEM) is presented using a receding control horizon in combination with a dual decomposition. The dual decomposition allows the CVEM optimization problem to be solved by solving
Ecology and equity in global fisheries: Modelling policy options using theoretical distributions
Rammelt, C.F.; van Schie, Maarten
2016-01-01
Global fisheries present a typical case of political ecology or environmental injustice, i.e. a problem of distribution of resources within ecological limits. We built a stock-flow model to visualize this challenge and its dynamics, with both an ecological and a social dimension. We incorporated
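A stock-flow model of the kind described can be sketched as a logistic fish stock with a constant harvest outflow; the parameter values below are illustrative, not taken from the paper:

```python
def simulate_fishery(r=0.4, K=100.0, harvest=8.0, s0=50.0,
                     dt=0.1, steps=1000):
    """Euler integration of a logistic stock with a constant harvest
    flow: ds/dt = r*s*(1 - s/K) - harvest, floored at zero."""
    s, path = s0, []
    for _ in range(steps):
        s = max(0.0, s + (r * s * (1.0 - s / K) - harvest) * dt)
        path.append(s)
    return path
```

The ecological limit shows up directly: with these parameters the maximum sustainable yield is rK/4 = 10, so a harvest of 8 settles at a stable stock (about 72 here), while any constant harvest above 10 drives the stock to collapse, which is the distribution-within-limits tension the paper explores.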
Santos, S.F.; Paterakis, N.G.; Catalao, J.P.S.; Camarinha-Matos, L.M.; Baldissera, T.A.; Di Orio, G.; Marques, F.
2015-01-01
The distribution systems (DS) reconfiguration problem is formulated in this paper as a multi-objective mixed-integer linear programming (MILP) multiperiod problem, enforcing that the obtained topology is radial in order to exploit several advantages those configurations offer. The effects of
DEFF Research Database (Denmark)
Khoshfetrat Pakazad, Sina; Hansson, Anders; Andersen, Martin S.
2017-01-01
In this paper, we propose a distributed algorithm for solving coupled problems with chordal sparsity or an inherent tree structure which relies on primal–dual interior-point methods. We achieve this by distributing the computations at each iteration, using message-passing. In comparison to existi...
Robust Optimization Model for Production Planning Problem under Uncertainty
Directory of Open Access Journals (Sweden)
Pembe GÜÇLÜ
2017-01-01
Full Text Available Conditions of businesses change very quickly. Taking into account the uncertainty engendered by these changes has become almost a rule in planning. Robust optimization techniques, which are methods of handling uncertainty, produce results that are less sensitive to changing conditions. Production planning is, in its most basic definition, deciding which product to produce, when, and how much. The modeling and solution of production planning problems change depending on the structure of the production processes, parameters and variables. In this paper, the aim is to generate and apply a scenario-based robust optimization model for a capacitated two-stage multi-product production planning problem under parameter and demand uncertainty. With this purpose, the production planning problem of a textile company that operates in İzmir has been modeled and solved, and the results of the deterministic scenarios and of the robust method have been compared. The robust method provides a production plan that has higher cost but remains close to feasible and optimal for most of the different future scenarios.
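Scenario-based robustness can be illustrated on a toy single-product problem: choose the production quantity whose worst-case cost over all demand scenarios is smallest. The cost coefficients and demand scenarios below are invented for illustration and bear no relation to the paper's data:

```python
def robust_plan(candidates, scenarios,
                c_prod=2.0, c_short=8.0, c_hold=1.0):
    """Scenario-based robust choice: pick the candidate production
    quantity minimising the worst-case cost across demand scenarios.
    Cost = production + shortage penalty + holding cost."""
    def cost(x, demand):
        return (c_prod * x
                + c_short * max(0.0, demand - x)    # unmet demand
                + c_hold * max(0.0, x - demand))    # leftover stock
    return min(candidates,
               key=lambda x: max(cost(x, d) for d in scenarios))
```

Note how the robust plan over-produces relative to the plan that is optimal for the nominal demand alone: it pays a known holding cost to hedge against the expensive shortage scenario, which is exactly the "higher cost but less sensitive" trade-off the abstract reports.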
Spreadsheet Modeling of Electron Distributions in Solids
Glassy, Wingfield V.
2006-01-01
A series of spreadsheet modeling exercises constructed as part of a new upper-level elective course on solid state materials and surface chemistry is described. The spreadsheet exercises are developed to provide students with the opportunity to interact with the conceptual framework where the role of the density of states and the Fermi-Dirac…
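The central quantity in such exercises, the Fermi-Dirac occupation probability of a single-particle state, is straightforward to compute; here the spreadsheet cell formula is replaced by a small function (energies in eV, kT in eV):

```python
import math

def fermi_dirac(E, mu, kT):
    """Fermi-Dirac occupation probability 1 / (exp((E - mu)/kT) + 1)
    of a state at energy E, for chemical potential mu."""
    return 1.0 / (math.exp((E - mu) / kT) + 1.0)
```

Tabulating this function against a density of states over a grid of energies reproduces the kind of electron-distribution plots the exercises build: a sharp step at the chemical potential for small kT that smears out at higher temperature.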
Distributed Model Predictive Control via Dual Decomposition
DEFF Research Database (Denmark)
Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle
2014-01-01
This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...
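The price-coordination idea behind dual decomposition can be sketched on subsystems sharing one capacity constraint: each subsystem minimizes a local cost plus a price on its resource usage, and the central entity raises the price while the coupling constraint is violated. The quadratic costs and parameters below are illustrative, not the chapter's MPC formulation:

```python
def dual_decomposition(r, cap, step=0.2, iters=200):
    """Subgradient price coordination for subsystems coupled by
    sum(u) <= cap.  Subsystem i locally minimizes
    (u_i - r_i)^2 + price * u_i, whose minimizer is
    u_i = max(0, r_i - price/2); the coordinator then adjusts the
    price along the coupling-constraint violation."""
    price, u = 0.0, list(r)
    for _ in range(iters):
        u = [max(0.0, ri - price / 2.0) for ri in r]       # local solves
        price = max(0.0, price + step * (sum(u) - cap))    # price update
    return u, price
```

For r = [3, 2] and cap = 4 the iteration converges to the projection of the unconstrained optima onto the capacity constraint, u = (2.5, 1.5), with the price settling at the constraint's Lagrange multiplier, here 1.0.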
IMPORTANCE OF PROBLEM SETTING BEFORE DEVELOPING A BUSINESS MODEL CANVAS
Bekhradi , Alborz; Yannou , Bernard; Cluzel , François
2016-01-01
International audience; In this paper, the importance of problem setting in the front end of innovation, prior to the use of the Business Model Canvas (BMC), is emphasized for radical innovation. After discussing the context of Business Model Canvas usage, the reasons for failure of a premature use (in early design stages) of the BMC tool are discussed through some real examples of innovative startups in the Paris area. This paper ends with the proposition of three main rules to follow when one wants to use the Business Model C...
Business model and problem about the radioactive wastes management
International Nuclear Information System (INIS)
Yoshida, Norimasa; Torii, Hiroyuki
2007-01-01
The PFI (Private Finance Initiative) is a new method to construct, maintain and manage public facilities by using private capital, management skills, and technical abilities. This article describes business models and related problems in making use of PFI for the management of low-level radioactive wastes produced at reactors and nuclear fuel facilities of research institutes, universities and others. These service projects could provide public services with higher quality while reducing the business costs to the country and the local authority. Social impacts, business models and risks of the projects have been assessed. (T. Tanaka)
Garcia, Tanya P; Ma, Yanyuan
2017-10-01
We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.
Modelling Difficulties and Their Overcoming Strategies in the Solution of a Modelling Problem
Dede, Ayse Tekin
2016-01-01
The purpose of the study is to reveal the elementary mathematics student teachers' difficulties encountered in the solution of a modelling problem, the strategies to overcome those difficulties and whether the strategies worked or not. Nineteen student teachers solved the modelling problem in their four or five-person groups, and the video records…
Testing and Modeling of Contact Problems in Resistance Welding
DEFF Research Database (Denmark)
Song, Quanfeng
As a part of the efforts towards a professional and reliable numerical tool for resistance welding engineers, this Ph.D. project is dedicated to refining the numerical models related to the interface behavior. An FE algorithm for the contact problems in resistance welding has been developed ... for the formulation, and the interfaces are treated in a symmetric pattern. The frictional sliding contact is also solved employing the constant friction model. The algorithm is incorporated into the finite element code. Verification is carried out in some numerical tests as well as experiments such as upsetting together two or three cylindrical parts as well as disc-ring pairs of dissimilar metals. The tests have demonstrated the effectiveness of the model. A theoretical and experimental study is performed on the contact resistance aiming at a more reliable model for numerical simulation of resistance welding ...
Charge distribution in a two-chain dual model
International Nuclear Information System (INIS)
Fialkowski, K.; Kotanski, A.
1983-01-01
Charge distributions in the multiple production processes are analysed using the dual chain model. A parametrisation of charge distributions for single dual chains based on the νp and anti-νp data is proposed. The rapidity charge distributions are then calculated for pp and anti-pp collisions and compared with the previous calculations based on the recursive cascade model of single chains. The results differ at the SPS collider energies and in the energy dependence of the net forward charge, supplying useful tests of the dual chain model. (orig.)
An Optimal Design Model for New Water Distribution Networks in ...
African Journals Online (AJOL)
The mathematical formulation is a Linear Programming Problem (LPP) which involves the design of a new network of water distribution considering the cost in the form of unit price of pipes, the hydraulic gradient and the loss of pressure. The objective function minimizes the cost of the network which is computed as the sum ...
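The LPP itself is not reproduced in the abstract; as a toy stand-in for the cost-versus-head-loss trade-off it encodes (not the paper's formulation), one can enumerate candidate commercial diameters for a single link and keep the cheapest one meeting a head-loss budget. All coefficients, prices, and flows below are invented for illustration:

```python
import math

# Toy pipe selection: cheapest commercial diameter whose Hazen-Williams
# head loss over the link stays within the allowed budget.
# All numbers are illustrative, not from the paper.
def head_loss(q, d, length, c=130.0):
    # Hazen-Williams head loss (SI units): q [m^3/s], d [m], length [m]
    return 10.67 * length * q**1.852 / (c**1.852 * d**4.87)

def cheapest_pipe(q, length, budget, candidates):
    # candidates: list of (diameter_m, unit_price_per_m)
    feasible = [(price * length, d) for d, price in candidates
                if head_loss(q, d, length) <= budget]
    return min(feasible) if feasible else None

candidates = [(0.10, 20.0), (0.15, 35.0), (0.20, 55.0), (0.30, 95.0)]
cost, d = cheapest_pipe(q=0.02, length=500.0, budget=5.0,
                        candidates=candidates)
print(d, cost)  # -> 0.15 17500.0
```

A real network LP couples many such links through node continuity and pressure constraints; this sketch only shows the per-link economics the objective function trades off.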
Energy Technology Data Exchange (ETDEWEB)
Marcondes, Eduardo; Goldbarg, Elizabeth; Goldbarg, Marco; Cunha, Thatiana [Universidade Federal do Rio Grande do Norte (UFRN), Natal, RN (Brazil)
2008-07-01
A major problem in refinery production planning is the determination of what should be done in each stage of production over a horizon of time. Among such problems, the distribution of oil products through networks of pipelines is a very significant one because of its economic importance. In this work, a problem of distribution of oil through a network of pipelines is modeled. The network studied is a simplification of a real network. There are several restrictions to be met, such as storage limits, limits on transmission or receipt, and transport limitations. A bi-objective model is adopted in which one wants to minimize fragmentation and transmission time, given the restrictions of demand and storage capacity. Since the occupancy rate of the networks is increasingly high, it is of great importance to optimize their use. In this work, the particle swarm optimization technique is applied to the problem of distribution of oil products through networks of pipelines. (author)
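The particle swarm ("cloud of particles") method mentioned above is not detailed in the abstract; a generic, minimal particle swarm optimizer on a stand-in objective (a sphere function, not the authors' bi-objective pipeline model; all parameters invented) might look like:

```python
import random

# Minimal particle swarm optimization (PSO) on a stand-in objective.
# The real problem is bi-objective (fragmentation and transmission time);
# here a simple sphere function is minimized for illustration.
def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]           # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(v * v for v in x))
print(best_val)
```

Handling the paper's actual constraints (storage, transmission limits) would require a constraint-handling scheme such as penalties, which this sketch omits.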
DEFF Research Database (Denmark)
Soares, Tiago; Pereira, Fábio; Morais, Hugo
2015-01-01
The high penetration of distributed energy resources (DER) in distribution networks and the competitive environment of electricity markets impose the use of new approaches in several domains. The network cost allocation, traditionally used in transmission networks, should be adapted and used in the distribution networks considering the specifications of the connected resources. The main goal is to develop a fairer methodology trying to distribute the distribution network use costs to all players which are using the network in each period. In this paper, a model considering different types of costs (fixed, losses, and congestion costs) is proposed comprising the use of a large set of DER, namely distributed generation (DG), demand response (DR) of direct load control type, energy storage systems (ESS), and electric vehicles with capability of discharging energy to the network, which is known as vehicle...
Linear Power-Flow Models in Multiphase Distribution Networks: Preprint
Energy Technology Data Exchange (ETDEWEB)
Bernstein, Andrey; Dall' Anese, Emiliano
2017-05-26
This paper considers multiphase unbalanced distribution systems and develops approximate power-flow models where bus-voltages, line-currents, and powers at the point of common coupling are linearly related to the nodal net power injections. The linearization approach is grounded on a fixed-point interpretation of the AC power-flow equations, and it is applicable to distribution systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and, (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. The proposed linear models can facilitate the development of computationally-affordable optimization and control applications -- from advanced distribution management systems settings to online and distributed optimization routines. Performance of the proposed models is evaluated on different test feeders.
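The fixed-point interpretation mentioned above can be illustrated on a trivial single-phase, two-bus case (far simpler than the paper's multiphase model): with source voltage V0, line impedance z, and a constant-power load S, the exact voltage satisfies V = V0 − z·conj(S/V), and iterating this map converges under light loading. All per-unit values below are hypothetical:

```python
# Fixed-point iteration for a two-bus AC power flow (per-unit values).
# V0: substation voltage, z: line impedance, S: complex load power.
V0 = 1.0 + 0.0j
z = 0.01 + 0.02j
S = 0.5 + 0.2j  # load consumes P + jQ

V = V0  # flat start
for _ in range(50):
    # load current is conj(S/V); KVL gives the bus voltage
    V = V0 - z * (S / V).conjugate()

# residual of the power-flow equation at the computed solution
residual = abs(V - (V0 - z * (S / V).conjugate()))
print(abs(V), residual < 1e-12)
```

The paper's linear models come from truncating such a fixed-point map after one step around a nominal voltage; this sketch only shows the underlying exact map.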
Distributed modelling of shallow landslides triggered by intense rainfall
Directory of Open Access Journals (Sweden)
G. B. Crosta
2003-01-01
Full Text Available Hazard assessment of shallow landslides represents an important aspect of land management in mountainous areas. Among all the methods proposed in the literature, physically based methods are the only ones that explicitly include the dynamic factors that control landslide triggering (rainfall pattern, land-use). For this reason, they allow forecasting both the temporal and the spatial distribution of shallow landslides. Physically based methods for shallow landslides are based on the coupling of the infinite slope stability analysis with hydrological models. Three different grid-based distributed hydrological models are presented in this paper: a steady state model, a transient "piston-flow" wetting front model, and a transient diffusive model. A comparative test of these models was performed to simulate landslides that occurred during a rainfall event (27–28 June 1997) that triggered hundreds of shallow landslides within Lecco province (central Southern Alps, Italy). In order to test the potential of a completely distributed model for rainfall-triggered landslides, radar-detected rainfall intensity has been used. A new procedure for quantitative evaluation of distributed model performance is presented and used in this paper. The diffusive model proves to be the best model for the simulation of shallow landslide triggering after a rainfall event like the one analysed. Finally, the radar data available for the June 1997 event permitted a great improvement of the simulation. In particular, radar data made it possible to explain the non-uniform distribution of landslides within the study area.
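The infinite slope stability analysis that such hydrological models are coupled to has a standard closed form; as a hedged sketch (soil parameters invented, not the paper's calibration), the factor of safety as a function of the water-table height h supplied by a hydrological model:

```python
import math

# Infinite-slope factor of safety: the stability half of the coupled
# model; a hydrological model supplies the water-table height h.
# Parameter values below are illustrative only.
def factor_of_safety(h, z=2.0, beta=math.radians(30.0),
                     c=5.0, phi=math.radians(32.0),
                     gamma=19.0, gamma_w=9.81):
    # c [kPa], z and h [m], unit weights [kN/m^3], slope angle beta
    num = c + (gamma * z - gamma_w * h) * math.cos(beta)**2 * math.tan(phi)
    den = gamma * z * math.sin(beta) * math.cos(beta)
    return num / den

fs_dry = factor_of_safety(h=0.0)   # dry slope
fs_wet = factor_of_safety(h=2.0)   # fully saturated column
print(round(fs_dry, 2), round(fs_wet, 2))  # -> 1.39 0.83
```

A rising water table drives the factor of safety below 1, which is exactly the triggering condition the three hydrological models above are competing to predict in space and time.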
Distributed MAP in the SpinJa Model Checker
Directory of Open Access Journals (Sweden)
Stefan Vijzelaar
2011-10-01
Full Text Available Spin in Java (SpinJa) is an explicit-state model checker for the Promela modelling language, also used by the SPIN model checker. Designed to be extensible and reusable, the implementation of SpinJa follows a layered approach in which each new layer extends the functionality of the previous one. While SpinJa has preliminary support for shared-memory model checking, it did not yet support distributed-memory model checking. This tool paper presents a distributed implementation of a maximal accepting predecessors (MAP) search algorithm on top of SpinJa.
Modeling and analysis of solar distributed generation
Ortiz Rivera, Eduardo Ivan
Recent changes in the global economy are creating a big impact on our daily life. The price of oil is increasing and the reserves are fewer every day. Also, dramatic demographic changes are impacting the viability of the electric infrastructure and ultimately the economic future of the industry. These are some of the reasons that many countries are looking to alternative energy to produce electric energy. The most common form of green energy in our daily life is solar energy. Converting solar energy into electrical energy requires solar panels, dc-dc converters, power control, sensors, and inverters. In this work, a photovoltaic module (PVM) model using the electrical characteristics provided by the manufacturer data sheet is presented for power system applications. Experimental results from testing are shown, verifying the proposed PVM model. Also in this work, three maximum power point tracker (MPPT) algorithms are presented to obtain the maximum power from a PVM. The first MPPT algorithm is a method based on Rolle's and Lagrange's theorems and can provide at least an approximate answer to a family of transcendental functions that cannot be solved using differential calculus. The second MPPT algorithm is based on the approximation of the proposed PVM model using fractional polynomials, where the shape, boundary conditions and performance of the proposed PVM model are satisfied. The third MPPT algorithm is based on the determination of the optimal duty cycle for a dc-dc converter and previous knowledge of the load or load matching conditions. Also, four algorithms to calculate the effective irradiance level and temperature over a photovoltaic module are presented in this work. The main reasons to develop these algorithms are the monitoring of climate conditions, the elimination of temperature and solar irradiance sensors, cost reductions for a photovoltaic inverter system, and the development of new algorithms to be integrated with maximum
A proposed centralised distribution model for the South African automotive component industry
Directory of Open Access Journals (Sweden)
Micheline J. Naude
2009-12-01
Full Text Available Purpose: This article explores the possibility of developing a distribution model, similar to the model developed and implemented by the South African pharmaceutical industry, which could be implemented by automotive component manufacturers for supply to independent retailers. Problem Investigated: The South African automotive components distribution chain is extensive, with a number of players of varying sizes, from the larger spares distribution groups to a number of independent retailers. Distributing to the smaller independent retailers is costly for the automotive component manufacturers. Methodology: This study is based on a preliminary study of an explorative nature. Interviews were conducted with a senior staff member from a leading automotive component manufacturer in KwaZulu-Natal and nine participants at a senior management level at five of their main customers (aftermarket retailers). Findings: The findings from the empirical study suggest that the aftermarket component industry is mature, with the role players well established. The distribution chain to the independent retailer is expensive in terms of transaction and distribution costs for the automotive component manufacturer. A proposed centralised distribution model for supply to independent retailers has been developed which should reduce distribution costs for the automotive component manufacturer in terms of (1) the lowest possible freight rate; (2) timely and controlled delivery; and (3) reduced congestion at the customer's receiving dock. Originality: This research is original in that it explores the possibility of implementing a centralised distribution model for independent retailers in the automotive component industry. Furthermore, there is a dearth of published research on the South African automotive component industry, particularly addressing distribution issues. Conclusion: The distribution model as suggested is a practical one and should deliver added value to automotive
Idealized models of the joint probability distribution of wind speeds
Monahan, Adam H.
2018-05-01
The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull distribution arises from power law transformations of wind speeds corresponding to vector components with Gaussian, isotropic, mean-zero variability. Maximum likelihood estimates of these distributions are compared using wind speed data from the mid-troposphere, from different altitudes at the Cabauw tower in the Netherlands, and from scatterometer observations over the sea surface. While the bivariate Rice distribution is more flexible and can represent a broader class of dependence structures, the bivariate Weibull distribution is mathematically simpler and may be more convenient in many applications. The complexity of the mathematical expressions obtained for the joint distributions suggests that the development of explicit functional forms for multivariate speed distributions from distributions of the components will not be practical for more complicated dependence structure or more than two speed variables.
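The component-based construction described above can be mimicked by Monte Carlo simulation: draw correlated, isotropic, mean-zero Gaussian velocity components at two sites and take their magnitudes (the Rayleigh/Weibull-type case; the Rice case would add a nonzero mean vector). A stdlib sketch with an invented component correlation and sample size:

```python
import math
import random

# Speeds from correlated, isotropic, mean-zero Gaussian velocity
# components at two sites: the empirical analogue of the construction
# behind the bivariate speed distributions discussed above.
rng = random.Random(42)
rho, n = 0.8, 20000
s1, s2 = [], []
for _ in range(n):
    u1, v1 = rng.gauss(0, 1), rng.gauss(0, 1)
    # components at site 2 are correlated with those at site 1
    u2 = rho * u1 + math.sqrt(1 - rho**2) * rng.gauss(0, 1)
    v2 = rho * v1 + math.sqrt(1 - rho**2) * rng.gauss(0, 1)
    s1.append(math.hypot(u1, v1))
    s2.append(math.hypot(u2, v2))

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

# a component correlation rho induces a weaker speed correlation (~rho^2)
print(round(corr(s1, s2), 2))
```

The closed-form bivariate distributions in the paper describe exactly the dependence structure such sampled speed pairs exhibit, without needing simulation.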
A Parallel Computational Model for Multichannel Phase Unwrapping Problem
Imperatore, Pasquale; Pepe, Antonio; Lanari, Riccardo
2015-05-01
In this paper, a parallel model for the solution of the computationally intensive multichannel phase unwrapping (MCh-PhU) problem is proposed. Firstly, the Extended Minimum Cost Flow (EMCF) algorithm for solving the MCh-PhU problem is revised within the rigorous mathematical framework of discrete calculus, thus making it possible to capture its topological structure in terms of meaningful discrete differential operators. Secondly, emphasis is placed on those methodological and practical aspects which lead to a parallel reformulation of the EMCF algorithm. Thus, a novel dual-level parallel computational model, in which the parallelism is hierarchically implemented at two different (i.e., process and thread) levels, is presented. The validity of our approach has been demonstrated through a series of experiments that have revealed a significant speedup. Therefore, the attained high-performance prototype is suitable for the solution of large-scale phase unwrapping problems in reasonable time frames, with a significant impact on the systematic exploitation of the existing, and rapidly growing, large archives of SAR data.
Model Checking Geographically Distributed Interlocking Systems Using UMC
DEFF Research Database (Denmark)
Fantechi, Alessandro; Haxthausen, Anne Elisabeth; Nielsen, Michel Bøje Randahl
2017-01-01
the relevant distributed protocols. By doing that we obey the safety guidelines of the railway signalling domain, that require formal methods to support the certification of such products. We also show how formal modelling can help designing alternative distributed solutions, while maintaining adherence...
Modelling aspects of distributed processing in telecommunication networks
Tomasgard, A; Audestad, JA; Dye, S; Stougie, L; van der Vlerk, MH; Wallace, SW
1998-01-01
The purpose of this paper is to formally describe new optimization models for telecommunication networks with distributed processing. Modern distributed networks put more focus on the processing of information and less on the actual transportation of data than we are traditionally used to in
Modified Normal Demand Distributions in (R,S)-Inventory Models
Strijbosch, L.W.G.; Moors, J.J.A.
2003-01-01
To model demand, the normal distribution is by far the most popular; the disadvantage that it takes negative values is taken for granted. This paper proposes two modifications of the normal distribution, both taking non-negative values only. Safety factors and order-up-to-levels for the familiar (R,
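The paper's two modifications are not spelled out in the abstract; as an illustrative stand-in, the simplest non-negative variant, a normal left-truncated at zero, already shifts the demand mean upward, which feeds into the order-up-to level. All numbers below are hypothetical:

```python
import math

# Mean of a normal(mu, sigma) left-truncated at zero, versus the plain
# normal mean used in textbook (R,S) order-up-to calculations.
# (Only the mean shift is shown; the truncated variance also shrinks.)
def std_norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def std_norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncated_mean(mu, sigma):
    a = -mu / sigma  # truncation point zero in standard units
    return mu + sigma * std_norm_pdf(a) / (1.0 - std_norm_cdf(a))

mu, sigma = 10.0, 8.0   # high coefficient of variation: effect visible
k = 1.65                # illustrative safety factor
S_plain = mu + k * sigma
S_trunc = truncated_mean(mu, sigma) + k * sigma
print(round(S_plain, 2), round(S_trunc, 2))  # -> 23.2 24.83
```

When sigma is small relative to mu the truncation is negligible, which is why the normal approximation usually goes unquestioned; the effect only bites for highly variable demand.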
Income Distribution Over Educational Levels: A Simple Model.
Tinbergen, Jan
An econometric model is formulated that explains income per person in various compartments of the labor market defined by three main levels of education and by education required. The model enables an estimation of the effect of increased access to education on that distribution. The model is based on a production function for the economy as a whole; a…
Modeling of unified power quality conditioner (UPQC) in distribution systems load flow
International Nuclear Information System (INIS)
Hosseini, M.; Shayanfar, H.A.; Fotuhi-Firuzabad, M.
2009-01-01
This paper presents modeling of the unified power quality conditioner (UPQC) in load flow calculations for steady-state voltage compensation. An accurate model of this device is derived for use in load flow calculations. The rating of this device, as well as the direction of reactive power injection required to compensate the voltage to the desired value (1 p.u.), is derived and discussed analytically and mathematically using the phasor diagram method. Since the performance of the compensator varies when it reaches its maximum capacity, modeling of the UPQC at its maximum rating of reactive power injection is derived. The validity of the proposed model is examined using two standard distribution systems consisting of 33 and 69 nodes, respectively. The best location of the UPQC for under-voltage problem mitigation in the distribution network is determined. The results show the validity of the proposed model for the UPQC in large distribution systems.
Distributed Generation Market Demand Model (dGen): Documentation
Energy Technology Data Exchange (ETDEWEB)
Sigrin, Benjamin [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gleason, Michael [National Renewable Energy Lab. (NREL), Golden, CO (United States); Preus, Robert [National Renewable Energy Lab. (NREL), Golden, CO (United States); Baring-Gould, Ian [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, Robert [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2016-02-01
The Distributed Generation Market Demand model (dGen) is a geospatially rich, bottom-up, market-penetration model that simulates the potential adoption of distributed energy resources (DERs) for residential, commercial, and industrial entities in the continental United States through 2050. The National Renewable Energy Laboratory (NREL) developed dGen to analyze the key factors that will affect future market demand for distributed solar, wind, storage, and other DER technologies in the United States. The new model builds off, extends, and replaces NREL's SolarDS model (Denholm et al. 2009a), which simulates the market penetration of distributed PV only. Unlike the SolarDS model, dGen can model various DER technologies under one platform--it currently can simulate the adoption of distributed solar (the dSolar module) and distributed wind (the dWind module) and link with the ReEDS capacity expansion model (Appendix C). The underlying algorithms and datasets in dGen, which improve the representation of customer decision making as well as the spatial resolution of analyses (Figure ES-1), also are improvements over SolarDS.
Development of vortex model with realistic axial velocity distribution
International Nuclear Information System (INIS)
Ito, Kei; Ezure, Toshiki; Ohshima, Hiroyuki
2014-01-01
A vortex is considered one of the significant phenomena which may cause gas entrainment (GE) and/or vortex cavitation in sodium-cooled fast reactors. In our past studies, the vortex was assumed to be approximated by the well-known Burgers vortex model. However, the Burgers vortex model makes the simple but unrealistic assumption that the axial velocity component is horizontally constant, while in reality the free-surface vortex has an axial velocity distribution with a large gradient in the radial direction near the vortex center. In this study, a new vortex model with a realistic axial velocity distribution is proposed. This model is derived from the steady axisymmetric Navier-Stokes equation, as is the Burgers vortex model, but the realistic axial velocity distribution in the radial direction is considered, defined to be zero at the vortex center and to approach zero asymptotically at infinity. As verification, the new vortex model is applied to the evaluation of a simple vortex experiment and shows good agreement with the experimental data in terms of the circumferential velocity distribution and the free surface shape. In addition, it is confirmed that the Burgers vortex model fails to calculate an accurate velocity distribution under the assumption of uniform axial velocity. However, the calculation accuracy of the Burgers vortex model can be brought close to that of the new vortex model by considering the effective axial velocity, calculated as the average value only in the vicinity of the vortex center. (author)
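For reference, the Burgers vortex used as the baseline above has a closed-form circumferential velocity; a small sketch (circulation and core radius hypothetical; the new model's axial profile is not reproduced here) showing the solid-body core and the free-vortex far field:

```python
import math

# Circumferential velocity of a Burgers vortex:
#   v_theta(r) = Gamma / (2 pi r) * (1 - exp(-r^2 / rc^2))
# Gamma (circulation) and rc (core radius) are illustrative values.
def v_theta(r, gamma_c=1.0, rc=0.05):
    if r == 0.0:
        return 0.0  # regular at the axis
    return gamma_c / (2.0 * math.pi * r) * (1.0 - math.exp(-(r / rc) ** 2))

# near the axis the core rotates like a solid body (v ~ r);
# far away the profile tends to the free vortex Gamma / (2 pi r)
far = v_theta(1.0) * (2.0 * math.pi * 1.0)  # ~ recovers Gamma
print(round(far, 3))  # -> 1.0
```

The paper's criticism concerns the axial component paired with this profile, not the circumferential profile itself, which both models share in form.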
Regularized multivariate regression models with skew-t error distributions
Chen, Lianfu; Pourahmadi, Mohsen; Maadooliat, Mehdi
2014-01-01
We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both
Modelling distributed energy resources in energy service networks
Acha, Salvador
2013-01-01
Focuses on modelling two key infrastructures (natural gas and electrical) in urban energy systems with embedded technologies (cogeneration and electric vehicles) to optimise the operation of natural gas and electrical infrastructures under the presence of distributed energy resources
A phenomenological retention tank model using settling velocity distributions.
Maruejouls, T; Vanrolleghem, P A; Pelletier, G; Lessard, P
2012-12-15
Many authors have observed the influence of the settling velocity distribution on the sedimentation process in retention tanks. However, the pollutants' behaviour in such tanks is not well characterized, especially with respect to their settling velocity distribution. This paper presents a phenomenological modelling study dealing with the way the settling velocity distribution of particles in combined sewage changes between entering and leaving an off-line retention tank. The work starts from a previously published model (Lessard and Beck, 1991), which is first implemented in wastewater management modelling software and then tested with full-scale field data for the first time. Next, its performance is improved by integrating the particle settling velocity distribution and adding a description of the resuspension due to pumping for emptying the tank. Finally, the potential of the improved model is demonstrated by comparing the results for one more rain event.
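The role a settling velocity distribution plays can be illustrated with ideal (Hazen) settling, which is not the paper's phenomenological model but shows why the distribution matters: each velocity class is removed in proportion to its settling velocity over the tank's surface overflow rate. Velocity classes and mass fractions below are invented:

```python
# Ideal (Hazen) settling over a discrete settling-velocity distribution:
# a class with settling velocity vs is fully removed if vs >= the
# surface overflow rate, otherwise removed in proportion vs / rate.
# Classes and mass fractions are invented for illustration.
classes = [  # (settling velocity m/h, mass fraction)
    (0.1, 0.30),
    (0.5, 0.30),
    (2.0, 0.25),
    (8.0, 0.15),
]

def removal_efficiency(classes, overflow_rate):
    assert abs(sum(f for _, f in classes) - 1.0) < 1e-9
    return sum(f * min(1.0, vs / overflow_rate) for vs, f in classes)

eta = removal_efficiency(classes, overflow_rate=2.0)  # rate in m/h
print(round(eta, 3))  # -> 0.49
```

Two sewages with the same total solids but different velocity distributions give very different removal efficiencies, which is the effect the improved tank model above tracks between inlet and outlet.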
Modeling of Drift Effects on Solar Tower Concentrated Flux Distributions
Directory of Open Access Journals (Sweden)
Luis O. Lara-Cerecedo
2016-01-01
Full Text Available A novel modeling tool for calculation of central receiver concentrated flux distributions is presented, which takes into account drift effects. This tool is based on a drift model that includes different geometrical error sources in a rigorous manner and on a simple analytic approximation for the individual flux distribution of a heliostat. The model is applied to a group of heliostats of a real field to obtain the resulting flux distribution and its variation along the day. The distributions differ strongly from those obtained assuming the ideal case without drift or a case with a Gaussian tracking error function. The time evolution of peak flux is also calculated to demonstrate the capabilities of the model. The evolution of this parameter also shows strong differences in comparison to the case without drift.
A Framework for Modeling and Analyzing Complex Distributed Systems
National Research Council Canada - National Science Library
Lynch, Nancy A; Shvartsman, Alex Allister
2005-01-01
Report developed under STTR contract for topic AF04-T023. This Phase I project developed a modeling language and laid a foundation for computational support tools for specifying, analyzing, and verifying complex distributed system designs...
Beauregard, Frieda; de Blois, Sylvie
2014-01-01
Both climatic and edaphic conditions determine plant distribution; however, many species distribution models do not include edaphic variables, especially over large geographical extents. Using an exceptional database of vegetation plots (n = 4839) covering an extent of ∼55,000 km2, we tested whether the inclusion of fine-scale edaphic variables would improve model predictions of plant distribution compared to models using only climate predictors. We also tested how well these edaphic variables could predict distribution on their own, to evaluate the assumption that at large extents, distribution is governed largely by climate. We also hypothesized that the relative contribution of edaphic and climatic data would vary among species depending on their growth forms and biogeographical attributes within the study area. We modelled 128 native plant species from diverse taxa using four statistical model types and three sets of abiotic predictors: climate, edaphic, and edaphic-climate. Model predictive accuracy and variable importance were compared among these models and for species' characteristics describing growth form, range boundaries within the study area, and prevalence. For many species, both the climate-only and edaphic-only models performed well; however, the edaphic-climate models generally performed best. The three sets of predictors differed in the spatial information provided about habitat suitability, with climate models able to distinguish range edges, but edaphic models able to better distinguish within-range variation. Model predictive accuracy was generally lower for species without a range boundary within the study area and for common species, but these effects were buffered by including both edaphic and climatic predictors. The relative importance of edaphic and climatic variables varied with growth forms, with trees being more related to climate whereas lower growth forms were more related to edaphic conditions. Our study identifies the potential
Geometric modeling in the problem of ball bearing accuracy
Glukhov, V. I.; Pushkarev, V. V.; Khomchenko, V. G.
2017-06-01
The manufacturing quality of ball bearings is an urgent problem for the machine-building industry. The aim of the research is to improve the accuracy of the geometric specifications of bearings based on an evidence-based systematic approach and a method for adequately modeling the size, location and form deviations of the rings and assembled ball bearings. The present work addresses the problem of identifying bearing geometric specifications and the study of these specifications. The deviation from the plane of symmetry of the rings and of the bearing assembly, and the mounting width, are among these specifications. A systematic approach to normalizing the values and tolerances of geometric specifications of ball bearings in coordinate systems will improve the quality of bearings by optimizing and minimizing the number of specifications. The introduction of this systematic approach into the international standards on rolling bearings is a guarantee of a significant increase in the accuracy of bearings and the quality of products where they are applied.
Modeling of problems of projection: A non-countercyclic approach
Directory of Open Access Journals (Sweden)
Jason Ginsburg
2016-06-01
Full Text Available This paper describes a computational implementation of the recent Problems of Projection (POP) approach to the study of language (Chomsky 2013; 2015). While adopting the basic proposals of POP, notably with respect to how labeling occurs, we (a) attempt to formalize the basic proposals of POP, and (b) develop new proposals that overcome some problems with POP that arise with respect to cyclicity, labeling, and wh-movement operations. We show how this approach accounts for simple declarative sentences, ECM constructions, and constructions that involve long-distance movement of a wh-phrase (including the that-trace effect). We implemented these proposals with a computer model that automatically constructs step-by-step derivations of target sentences, thus making it possible to verify that these proposals work.
The Inverse Problem of Identification of Hydrogen Permeability Model
Directory of Open Access Journals (Sweden)
Yury V. Zaika
2018-01-01
Full Text Available One of the technological challenges for hydrogen materials science is the currently active search for structural materials with important applications (including the ITER project and gas-separation plants). One has to estimate the parameters of diffusion and sorption in order to numerically model the different scenarios and experimental conditions of the material usage (including extreme ones). The article presents boundary value problems of hydrogen permeability and thermal desorption with dynamical boundary conditions. A numerical method is developed for TDS spectrum simulation, where only integration of a nonlinear system of low-order ordinary differential equations is required. The main final output of the article is a noise-resistant algorithm for solving the inverse problem of parametric identification for the aggregated experiment, where desorption and diffusion are dynamically interrelated (without the artificial division of studies into the diffusion-limited regime (DLR) and the surface-limited regime (SLR)).
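To make the forward problem concrete: a TDS spectrum can be simulated, in miniature, by integrating a low-order ODE for surface coverage under linear heating. This is only a first-order Polanyi-Wigner sketch with invented parameter values, not the authors' model:

```python
# Minimal sketch (not the authors' model): thermal desorption with a
# first-order Polanyi-Wigner rate and linear heating, integrated as a
# low-order ODE in the spirit of the paper's reduced TDS simulation.
# All parameter values below are illustrative assumptions.
import math

E = 1.2e5      # activation energy, J/mol (assumed)
R = 8.314      # gas constant, J/(mol K)
nu = 1e13      # attempt frequency, 1/s (assumed)
beta = 2.0     # heating rate, K/s
T0, T1 = 300.0, 900.0
dt = 1e-3      # time step, s

theta = 1.0    # surface coverage
peak_T, peak_flux = T0, 0.0
T = T0
while T < T1:
    rate = nu * math.exp(-E / (R * T)) * theta   # desorption flux
    if rate > peak_flux:
        peak_flux, peak_T = rate, T
    theta -= rate * dt
    T += beta * dt

print(round(peak_T, 1))  # temperature of the simulated TDS peak, K
```

The inverse problem discussed in the abstract would then consist of recovering E and nu from a measured flux curve.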
Fractional and multivariable calculus model building and optimization problems
Mathai, A M
2017-01-01
This textbook presents a rigorous approach to multivariable calculus in the context of model building and optimization problems. This comprehensive overview is based on lectures given at five SERC Schools from 2008 to 2012 and covers a broad range of topics that will enable readers to understand and create deterministic and nondeterministic models. Researchers, advanced undergraduate, and graduate students in mathematics, statistics, physics, engineering, and biological sciences will find this book to be a valuable resource for finding appropriate models to describe real-life situations. The first chapter begins with an introduction to fractional calculus moving on to discuss fractional integrals, fractional derivatives, fractional differential equations and their solutions. Multivariable calculus is covered in the second chapter and introduces the fundamentals of multivariable calculus (multivariable functions, limits and continuity, differentiability, directional derivatives and expansions of multivariable ...
Neutronic modelling of the Harwell MTR's: some recent problems
International Nuclear Information System (INIS)
Taylor, N.P.
1984-01-01
Use of the Harwell Materials Testing Reactors for the irradiation of experimental rigs gives rise to a number of requirements for calculations of neutron fluxes. In addition, photon fluxes are required for estimates of nuclear heating rates. A range of calculational methods is employed, from simple cell to whole-reactor models, and the latter have been extended for preliminary design studies for the next generation of MTR to replace DIDO and PLUTO. The techniques used for these various models are described in this note, with emphasis on the areas in which modelling problems are encountered. The applications divide into three distinct areas: calculations concerning rigs irradiated within the reactor core, those for rigs positioned in the D2O reflector surrounding the core, and design studies for a replacement reactor. (Auth.)
Teaching Problem Solving without Modeling through "Thinking Aloud Pair Problem Solving."
Pestel, Beverly C.
1993-01-01
Reviews research relevant to the problem of unsatisfactory student problem-solving abilities and suggests a teaching strategy that addresses the issue. Author explains how she uses teaching aloud problem solving (TAPS) in college chemistry and presents evaluation data. Among the findings are that the TAPS class got fewer problems completely right,…
Directory of Open Access Journals (Sweden)
José M. Sánchez Vázquez
2006-12-01
Full Text Available As part of the supply chain, manufacturing firms are increasingly placing greater emphasis on the management of their outsourced distribution channels (DCs). However, the role that inter-organizational Management Control Systems (MCS) can play in managing DC problems is still not clearly understood. Through an exploratory case study, we show how intra-organizational control problems persist in an inter-organizational context, rooted in informational asymmetries and conflicts of interest and aggravated by interdependencies. Likewise, the case study illustrates the way in which MCS assists the manufacturing firm to communicate to its representatives what the organization wants from them, motivating them and transferring capabilities. Thus, MCS can help to complement and re-orientate inter-firm agreements and constitutes a key tool for managing DCs in a flexible way.
A Complex Network Approach to Distributional Semantic Models.
Directory of Open Access Journals (Sweden)
Akira Utsumi
Full Text Available A number of studies on network analysis have focused on language networks based on free word association, which reflects human lexical knowledge, and have demonstrated the small-world and scale-free properties in the word association network. Nevertheless, there have been very few attempts at applying network analysis to distributional semantic models, despite the fact that these models have been studied extensively as computational or cognitive models of human lexical knowledge. In this paper, we analyze three network properties, namely, small-world, scale-free, and hierarchical properties, of semantic networks created by distributional semantic models. We demonstrate that the created networks generally exhibit the same properties as word association networks. In particular, we show that the distribution of the number of connections in these networks follows the truncated power law, which is also observed in an association network. This indicates that distributional semantic models can provide a plausible model of lexical knowledge. Additionally, the observed differences in the network properties of various implementations of distributional semantic models are consistently explained or predicted by considering the intrinsic semantic features of a word-context matrix and the functions of matrix weighting and smoothing. Furthermore, to simulate a semantic network with the observed network properties, we propose a new growing network model based on the model of Steyvers and Tenenbaum. The idea underlying the proposed model is that both preferential and random attachments are required to reflect different types of semantic relations in the network growth process. We demonstrate that this model provides a better explanation of network behaviors generated by distributional semantic models.
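The construction the abstract describes — a semantic network whose nodes are words and whose edges link distributionally similar words — can be sketched in a few lines. The vectors below are toy co-occurrence counts, not a trained distributional model, and the threshold is an assumption:

```python
# Hedged sketch of the paper's setup: build a semantic network by linking
# words whose distributional vectors are sufficiently similar, then inspect
# degrees and local clustering. Vectors are invented toy counts.
import math
from itertools import combinations

vectors = {                      # word -> toy context-count vector (assumed)
    "dog":   [4, 3, 0, 1],
    "cat":   [4, 2, 0, 1],
    "puppy": [3, 3, 0, 0],
    "car":   [0, 1, 4, 3],
    "truck": [0, 0, 4, 4],
}

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Link word pairs above a similarity threshold (0.8 is an arbitrary choice).
edges = {frozenset(p) for p in combinations(vectors, 2)
         if cos(vectors[p[0]], vectors[p[1]]) > 0.8}
degree = {w: sum(w in e for e in edges) for w in vectors}

def clustering(w):
    """Fraction of a node's neighbour pairs that are themselves linked."""
    nbrs = [u for u in vectors if frozenset((w, u)) in edges]
    if len(nbrs) < 2:
        return 0.0
    links = sum(frozenset((a, b)) in edges for a, b in combinations(nbrs, 2))
    return links / (len(nbrs) * (len(nbrs) - 1) / 2)

print(degree, {w: clustering(w) for w in vectors})
```

The network statistics the paper studies (degree distributions, clustering, hierarchy) are computed over exactly such graphs, only at vastly larger scale.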
Two problems from the theory of semiotic control models. I. Representations of semiotic models
Energy Technology Data Exchange (ETDEWEB)
Osipov, G S
1981-11-01
Two problems from the theory of semiotic control models are stated, in particular the representation of models and their semantic analysis. Algebraic representation of semiotic models, coverings of representations, and their reduction and equivalence are discussed. The interrelations between functional and structural characteristics of semiotic models are investigated. 20 references.
Numerical solution of a model for a superconductor field problem
International Nuclear Information System (INIS)
Alsop, L.E.; Goodman, A.S.; Gustavson, F.G.; Miranker, W.L.
1979-01-01
A model of a magnetic field problem occurring in connection with Josephson junction devices is derived, and numerical solutions are obtained. The model is of mathematical interest, because the magnetic vector potential satisfies inhomogeneous Helmholtz equations in part of the region, i.e., the superconductors, and the Laplace equation elsewhere. Moreover, the inhomogeneities are the gauge constants for the potential, which are different for each superconductor, and their magnitudes are proportional to the currents flowing in the superconductors. These constants are directly related to the self and mutual inductances of the superconducting elements in the device. The numerical solution is obtained by the iterative use of a fast Poisson solver. Chebyshev acceleration is used to reduce the number of iterations required to obtain a solution. A typical problem involves solving 100,000 simultaneous equations, which the algorithm used with this model does in 20 iterations, requiring three minutes of CPU time on an IBM VM/370/168. Excellent agreement is obtained between calculated and observed values for the inductances.
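The iterative solution strategy described above can be shown in miniature. This is only the bare idea on a 1D Laplace problem with plain Jacobi sweeps (no fast solver, no Chebyshev acceleration, no superconductor geometry); the exact solution is linear between the boundary values:

```python
# Illustrative sketch only: the paper iterates a fast Poisson solver with
# Chebyshev acceleration; here we show the underlying fixed-point idea on
# a 1D Laplace problem u'' = 0 solved by unaccelerated Jacobi iteration.
n = 11                       # grid points; u[0] = 0 and u[n-1] = 1 are fixed
u = [0.0] * n
u[-1] = 1.0

for _ in range(2000):        # Jacobi sweeps
    # each interior point becomes the average of its old neighbours
    u = [u[0]] + [(u[i - 1] + u[i + 1]) / 2 for i in range(1, n - 1)] + [u[-1]]

exact = [i / (n - 1) for i in range(n)]          # linear ramp
err = max(abs(a - b) for a, b in zip(u, exact))
print(err)  # tiny; acceleration schemes exist to cut the sweep count
```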
Performance modeling of parallel algorithms for solving neutron diffusion problems
International Nuclear Information System (INIS)
Azmy, Y.Y.; Kirk, B.L.
1995-01-01
Neutron diffusion calculations are the most common computational methods used in the design, analysis, and operation of nuclear reactors and related activities. Here, mathematical performance models are developed for the parallel algorithm used to solve the neutron diffusion equation on message passing and shared memory multiprocessors represented by the Intel iPSC/860 and the Sequent Balance 8000, respectively. The performance models are validated through several test problems, and these models are used to estimate the performance of each of the two considered architectures in situations typical of practical applications, such as fine meshes and a large number of participating processors. While message passing computers are capable of producing speedup, the parallel efficiency deteriorates rapidly as the number of processors increases. Furthermore, the speedup fails to improve appreciably for massively parallel computers, so that only small- to medium-sized message passing multiprocessors offer a reasonable platform for this algorithm. In contrast, the performance model for the shared memory architecture predicts very high efficiency over a wide range of processor counts reasonable for this architecture. Furthermore, the model efficiency of the Sequent remains superior to that of the hypercube even if its model parameters are adjusted to make its processors as fast as those of the iPSC/860. It is concluded that shared memory computers are better suited for this parallel algorithm than message passing computers.
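A toy version of the kind of performance model described makes the conclusion tangible: computation scales as W/p while communication overhead grows with the processor count p for the message-passing machine but stays roughly flat for the shared-memory one. The coefficients below are invented for illustration, not the paper's calibrated parameters:

```python
# Toy parallel performance model: efficiency = T1 / (p * Tp) with
# Tp = W/p + overhead(p). Overhead shapes and coefficients are assumptions.

W = 1000.0          # total work (arbitrary units)

def efficiency(p, overhead):
    t_parallel = W / p + overhead(p)
    return (W / p) / t_parallel          # equals T1 / (p * Tp)

msg_passing = lambda p: 2.0 * p          # message cost grows with p
shared_mem  = lambda p: 5.0              # roughly constant contention cost

for p in (4, 16, 64):
    print(p, round(efficiency(p, msg_passing), 3),
             round(efficiency(p, shared_mem), 3))
```

Under these assumed overheads, message-passing efficiency collapses as p grows while shared-memory efficiency degrades far more gently — the qualitative behaviour the abstract reports.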
Flightdeck Automation Problems (FLAP) Model for Safety Technology Portfolio Assessment
Ancel, Ersin; Shih, Ann T.
2014-01-01
NASA's Aviation Safety Program (AvSP) develops and advances methodologies and technologies to improve air transportation safety. The Safety Analysis and Integration Team (SAIT) conducts a safety technology portfolio assessment (PA) to analyze the program content, to examine the benefits and risks of products with respect to program goals, and to support programmatic decision making. The PA process includes systematic identification of current and future safety risks as well as tracking several quantitative and qualitative metrics to ensure the program goals are addressing prominent safety risks accurately and effectively. One of the metrics within the PA process involves using quantitative aviation safety models to gauge the impact of the safety products. This paper demonstrates the role of aviation safety modeling by providing model outputs and evaluating a sample of portfolio elements using the Flightdeck Automation Problems (FLAP) model. The model enables not only ranking of the quantitative relative risk reduction impact of all portfolio elements, but also highlighting the areas with high potential impact via sensitivity and gap analyses in support of the program office. Although the model outputs are preliminary and products are notional, the process shown in this paper is essential to a comprehensive PA of NASA's safety products in the current program and future programs/projects.
Software reliability growth models with normal failure time distributions
International Nuclear Information System (INIS)
Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji
2013-01-01
This paper proposes software reliability growth models (SRGM) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and has sufficient ability to fit software failure data. In particular, we consider the parameter estimation algorithm for the SRGM with normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. Numerical experiments investigate the fitting ability of the SRGMs with normal distribution through 16 failure time data sets collected in real software projects.
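The model class can be sketched directly: the expected cumulative number of failures by time t is m(t) = N * Phi((t - mu)/sigma), with Phi the standard normal CDF. Below, mu and sigma are estimated by simple sample moments as a crude stand-in for the paper's EM algorithm, and the failure times are invented:

```python
# Rough sketch of the normal-distribution SRGM (not the authors' EM
# estimator): m(t) = N * Phi((t - mu) / sigma), with moment estimates.
import math

failure_times = [12, 25, 31, 44, 52, 58, 63, 70, 74, 80]  # invented data

n = len(failure_times)
mu = sum(failure_times) / n
sigma = math.sqrt(sum((t - mu) ** 2 for t in failure_times) / n)

def phi(x):                       # standard normal CDF
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def m(t, N=n):                    # expected cumulative failures by time t
    return N * phi((t - mu) / sigma)

print(round(m(mu), 2))            # half of the N failures expected by t = mu
```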
Maxent modelling for predicting the potential distribution of Thai Palms
DEFF Research Database (Denmark)
Tovaranonte, Jantrararuk; Barfod, Anders S.; Overgaard, Anne Blach
2011-01-01
on presence data. The aim was to identify potential hot spot areas, assess the determinants of palm distribution ranges, and provide a firmer knowledge base for future conservation actions. We focused on a relatively small number of climatic, environmental and spatial variables in order to avoid...... overprediction of species distribution ranges. The models with the best predictive power were found by calculating the area under the curve (AUC) of receiver-operating characteristic (ROC). Here, we provide examples of contrasting predicted species distribution ranges as well as a map of modeled palm diversity...
Application of oil spill model to marine pollution and risk control problems
Aseev, Nikita; Agoshkov, Valery; Sheloput, Tatyana
2017-04-01
Oil transportation by sea induces challenging problems of environmental control. Millions of tonnes of oil are yearly released during routine ship operations, not to mention vast spills due to different accidents (e.g. tanker collisions, grounding, etc.). Oil pollution is dangerous to marine organisms such as plants, fish and mammals, leading to widespread damage to our planet. In turn, fishery and travel agencies can lose money and clients, and ship operators are obliged to pay huge penalties for environmental pollution. In this work we present a method of assessing oil pollution of the marine environment using a recently developed oil spill model. The model describes the basic processes of oil slick evolution: oil transport due to currents, drift under the action of wind, spreading on the surface, evaporation, emulsification and dispersion. Such parameters as slick location, mass, density of oil, water content, viscosity and density of "water-in-oil" emulsion can be calculated. We demonstrate how to apply the model to damage calculation problems using a concept of average damage to a particular marine area. We also formulate the problem of oil spill risk control, when some accident parameters are not known but their probability distribution is given. We propose a new algorithm to solve such problems and show results of our model simulations. The work should be of interest to the broader environmental, physics and mathematics communities. The work is supported by Russian Foundation for Basic Research grant 16-31-00510.
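Two of the listed processes — transport by current plus wind drift, and mass loss to evaporation — can be illustrated with a centroid-level sketch. This is not the authors' model; the current, wind, and evaporation rate are invented, and the ~3% wind-drift factor is a commonly used rule of thumb rather than a value taken from the paper:

```python
# Minimal illustration (not the authors' model): transport of the slick
# centroid under a constant current plus a ~3% wind-drift factor, with
# exponential mass loss standing in for evaporation. All values assumed.
import math

current = (0.30, 0.10)     # current velocity, m/s (assumed)
wind = (8.0, 0.0)          # wind velocity, m/s (assumed)
k_evap = 1e-5              # evaporation rate, 1/s (assumed)
drift_factor = 0.03        # conventional wind-drift fraction

x, y, mass = 0.0, 0.0, 1000.0   # start position (m) and slick mass (kg)
dt, steps = 60.0, 60            # one hour in 1-minute Euler steps

for _ in range(steps):
    u = current[0] + drift_factor * wind[0]
    v = current[1] + drift_factor * wind[1]
    x += u * dt
    y += v * dt
    mass *= math.exp(-k_evap * dt)

print(round(x, 1), round(y, 1), round(mass, 1))
```

A risk-control formulation like the abstract's would then average such trajectories over the probability distribution of the unknown accident parameters.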
International Nuclear Information System (INIS)
Pouransari, Nasibeh; Maréchal, Francois
2015-01-01
Highlights: • Synthesizing industrial size heat recovery network with match reduction approach. • Targeting TSI with minimum exchange between process subsystems. • Generating a feasible close-to-optimum network. • Reducing tremendously the HLD computational time and complexity. • Generating realistic network with respect to the plant layout. - Abstract: This paper presents a targeting strategy to design a heat recovery network for an industrial plant by dividing the system into subsystems while considering the heat transfer opportunities between them. The methodology is based on a sequential approach. The heat recovery opportunity between process units and the optimal flow rates of utilities are first identified using a Mixed Integer Linear Programming (MILP) model. The site is then divided into a number of subsystems where the overall interaction is summarized by a pair of virtual hot and cold streams per subsystem, reconstructed by solving the heat cascade inside each subsystem. The Heat Load Distribution (HLD) problem is then solved between those packed subsystems in a sequential procedure where each time one of the subsystems is unpacked by switching from the virtual stream pair back into the original ones. The main advantages are to minimize the number of connections between process subsystems, to alleviate the computational complexity of the HLD problem, and to generate a feasible network which is compatible with the minimum energy consumption objective. The application of the proposed methodology is illustrated through a number of case studies, discussed and compared with the relevant results from the literature.
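The heat cascade mentioned above is the standard problem-table computation of pinch analysis, shown here in miniature. The stream data and DTmin are invented for illustration; a real targeting stage would run this (or an MILP equivalent) on the full plant stream table:

```python
# Sketch of a problem-table heat cascade yielding minimum hot/cold utility
# targets before any match/network design. Stream data are assumptions.
DTMIN = 10.0
# (kind, T_in, T_out, CP); temperatures in C, CP in kW/K (assumed values)
streams = [("hot", 170, 90, 2.0), ("cold", 60, 160, 1.5), ("cold", 150, 170, 1.0)]

def shifted(s):
    """Shift hot streams down and cold streams up by DTMIN/2."""
    kind, tin, tout, cp = s
    d = -DTMIN / 2 if kind == "hot" else DTMIN / 2
    return kind, tin + d, tout + d, cp

sh = [shifted(s) for s in streams]
bounds = sorted({t for _, a, b, _ in sh for t in (a, b)}, reverse=True)

cascade, q = [], 0.0
for hi, lo in zip(bounds, bounds[1:]):
    # streams fully span an interval or miss it, since bounds are endpoints
    net_cp = sum(cp if k == "hot" else -cp
                 for k, a, b, cp in sh if min(a, b) <= lo and max(a, b) >= hi)
    q += net_cp * (hi - lo)
    cascade.append(q)

q_hot = max(0.0, -min(cascade))        # minimum hot utility, kW
q_cold = cascade[-1] + q_hot           # minimum cold utility, kW
print(q_hot, q_cold)
```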
Modeling, robust and distributed model predictive control for freeway networks
Liu, S.
2016-01-01
In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of
The Distributed Geothermal Market Demand Model (dGeo): Documentation
Energy Technology Data Exchange (ETDEWEB)
McCabe, Kevin [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Mooney, Meghan E [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sigrin, Benjamin O [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gleason, Michael [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Liu, Xiaobing [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2017-11-06
The National Renewable Energy Laboratory (NREL) developed the Distributed Geothermal Market Demand Model (dGeo) as a tool to explore the potential role of geothermal distributed energy resources (DERs) in meeting thermal energy demands in the United States. The dGeo model simulates the potential for deployment of geothermal DERs in the residential and commercial sectors of the continental United States for two specific technologies: ground-source heat pumps (GHP) and geothermal direct use (DU) for district heating. To quantify the opportunity space for these technologies, dGeo leverages a highly resolved geospatial database and robust bottom-up, agent-based modeling framework. This design is consistent with others in the family of Distributed Generation Market Demand models (dGen; Sigrin et al. 2016), including the Distributed Solar Market Demand (dSolar) and Distributed Wind Market Demand (dWind) models. dGeo is intended to serve as a long-term scenario-modeling tool. It has the capability to simulate the technical potential, economic potential, market potential, and technology deployment of GHP and DU through the year 2050 under a variety of user-defined input scenarios. Through these capabilities, dGeo can provide substantial analytical value to various stakeholders interested in exploring the effects of various techno-economic, macroeconomic, financial, and policy factors related to the opportunity for GHP and DU in the United States. This report documents the dGeo modeling design, methodology, assumptions, and capabilities.
Working toward integrated models of alpine plant distribution.
Carlson, Bradley Z; Randin, Christophe F; Boulangeat, Isabelle; Lavergne, Sébastien; Thuiller, Wilfried; Choler, Philippe
2013-10-01
Species distribution models (SDMs) have been frequently employed to forecast the response of alpine plants to global changes. Efforts to model alpine plant distribution have thus far been primarily based on a correlative approach, in which ecological processes are implicitly addressed through a statistical relationship between observed species occurrences and environmental predictors. Recent evidence, however, highlights the shortcomings of correlative SDMs, especially in alpine landscapes where plant species tend to be decoupled from atmospheric conditions in micro-topographic habitats and are particularly exposed to geomorphic disturbances. While alpine plants respond to the same limiting factors as plants found at lower elevations, alpine environments impose a particular set of scale-dependent and hierarchical drivers that shape the realized niche of species and that require explicit consideration in a modelling context. Several recent studies in the European Alps have successfully integrated both correlative and process-based elements into distribution models of alpine plants, but for the time being a single integrative modelling framework that includes all key drivers remains elusive. As a first step in working toward a comprehensive integrated model applicable to alpine plant communities, we propose a conceptual framework that structures the primary mechanisms affecting alpine plant distributions. We group processes into four categories, including multi-scalar abiotic drivers, gradient dependent species interactions, dispersal and spatial-temporal plant responses to disturbance. Finally, we propose a methodological framework aimed at developing an integrated model to better predict alpine plant distribution.
Orbital angular momentum parton distributions in quark models
International Nuclear Information System (INIS)
Scopetta, S.; Vento, V.
2000-01-01
At the low energy, hadronic, scale we calculate Orbital Angular Momentum (OAM) twist-two parton distributions for the relativistic MIT bag model and for nonrelativistic quark models. We reach the scale of the data by leading order evolution in perturbative QCD. We confirm that the contribution of quarks and gluons OAM to the nucleon spin grows with Q^2, and it can be relevant at the experimental scale, even if it is negligible at the hadronic scale, irrespective of the model used. The sign and shape of the quark OAM distribution at high Q^2 may depend strongly on the relative size of the OAM and spin distributions at the hadronic scale. Sizeable quark OAM distributions at the hadronic scale, as proposed by several authors, can produce the dominant contribution to the nucleon spin at high Q^2. (author)
Sustainable energy from biomass: Biomethane manufacturing plant location and distribution problem
International Nuclear Information System (INIS)
Wu, Bingqing; Sarker, Bhaba R.; Paudel, Krishna P.
2015-01-01
Highlights: • Optimal strategy to locate biogas reactor and allocating feedstock. • Nonlinear mixed integer programming problem structure. • Real world supply chain of biogas production system. • Considers construction cost, transportation and labor costs. • Novel heuristic improves efficiency to obtain optimal solution. - Abstract: As an environment-friendly and renewable energy source, biomethane plays a significant role in the supply of sustainable energy. To facilitate the decision-making process of where to build a biomethane production system (BMPS) and how to allocate the resources for the BMPS, this paper develops an analytical method to find the solutions to location and allocation problems by minimizing the supply chain cost of the BMPS. The BMPS consists of the local farms for providing feedstock, the hubs for collecting and storing feedstock from farms, and the reactors for producing biomethane from feedstock. A mixed integer nonlinear programming (MINLP) is introduced to model the supply chain by considering building, transportation, and labor costs. An alternative heuristic is proposed to obtain an optimal/sub-optimal solution from the MINLP. The validity of the proposed heuristic is proven by numerical examples that are abstracted from practical scenarios.
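The location-allocation question at the heart of the MINLP can be shown in a brute-force miniature: pick the reactor site that minimizes construction cost plus feedstock haulage. All locations, tonnages, and costs below are invented, and this enumeration is a stand-in for the paper's heuristic, not a reproduction of it:

```python
# Not the paper's MINLP: a brute-force miniature of the same
# location-allocation question. All numbers are invented for illustration.
import math

farms = [((0, 0), 120), ((4, 1), 80), ((1, 5), 60)]        # (location, tons/yr)
candidates = {"A": ((1, 1), 500.0), "B": ((4, 4), 350.0)}  # site -> (loc, build cost)
rate = 2.0                                                 # $ per ton-km

def total_cost(loc, build):
    haul = sum(rate * tons * math.dist(loc, f) for f, tons in farms)
    return build + haul

best = min(candidates, key=lambda s: total_cost(*candidates[s]))
print(best, round(total_cost(*candidates[best]), 1))
```

Real instances add hubs, capacity limits, and nonlinear production costs, which is what pushes the problem into MINLP territory and motivates the heuristic.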
Modelling and analysis of distributed simulation protocols with distributed graph transformation
Lara, Juan de; Taentzer, Gabriele
2005-01-01
J. de Lara, and G. Taentzer, "Modelling and analysis of distributed simulation protocols with distributed graph transformation...
Directory of Open Access Journals (Sweden)
Christian Vögeli
2016-12-01
Full Text Available Accurate knowledge on snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or managing water supply and hydro-power. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is typically driven by spatial interpolations of observations from automatic weather stations (AWS), leading to errors in the spatial distribution of atmospheric forcing. With recent advances in remote sensing techniques, maps of snow depth can be acquired with high spatial resolution and accuracy. In this work, maps of the snow depth distribution, calculated from summer and winter digital surface models based on Airborne Digital Sensors (ADS), are used to scale precipitation input data, with the aim to improve the accuracy of simulation of the spatial distribution of snow with Alpine3D. A simple method to scale and redistribute precipitation is presented and the performance is analysed. The scaling method is only applied if it is snowing. For rainfall the precipitation is distributed by interpolation, with a simple air temperature threshold used for the determination of the precipitation phase. It was found that the accuracy of spatial snow distribution could be improved significantly for the simulated domain. The standard deviation of absolute snow depth error is reduced up to a factor 3.4 to less than 20 cm. The mean absolute error in snow distribution was reduced when using representative input sources for the simulation domain. For inter-annual scaling, the model performance could also be improved, even when using a remote sensing dataset from a different winter. In conclusion, using remote sensing data to process precipitation input, complex processes such as preferential snow deposition and snow relocation due to wind or avalanches can be substituted and modelling performance of spatial snow distribution is improved.
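The scaling idea described above can be sketched simply: when the air temperature indicates snowfall, station precipitation is redistributed over the grid in proportion to the remote-sensing snow-depth map; rain is left uniform (standing in for plain interpolation). The grid values and temperature threshold are illustrative assumptions, not the paper's configuration:

```python
# Simplified sketch of snow-depth-based precipitation scaling. The depth
# map, threshold, and inputs are invented for illustration.
T_THRESHOLD = 1.0        # deg C; at or above this, precipitation is rain

snow_depth_map = [0.5, 1.5, 3.0, 1.0]   # ADS-derived depths per grid cell, m

def scale_precip(p_station, t_air):
    if t_air >= T_THRESHOLD:             # rain: no snow-depth scaling
        return [p_station] * len(snow_depth_map)
    # snow: redistribute in proportion to observed depth, preserving the mean
    mean_d = sum(snow_depth_map) / len(snow_depth_map)
    return [p_station * d / mean_d for d in snow_depth_map]

print(scale_precip(10.0, -2.0))   # snowfall: redistributed, mean preserved
print(scale_precip(10.0, 4.0))    # rainfall: uniform
```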
The G-dwarf problem and the closed-box models of Galactic evolution
International Nuclear Information System (INIS)
Francois, P.; Vangioni-Flam, E.; Audouze, J.
1990-01-01
The paucity of very iron-poor stars in the Galactic disk with respect to the predictions of the simple model of Galactic chemical evolution (the notorious G-dwarf problem) is one of the most fundamental constraints of Galactic evolutionary models. This paper tests recently proposed models, with bimodal and varying star formation rates, against the G-dwarf metallicity distribution, the gas/total mass ratio in the solar vicinity, the age-metallicity relation, and the abundances of deuterium, O-16, Mg-24, Si-28, and Fe-56 at the birth of the sun. It is shown that none of these models agree entirely with the data, but that it is possible to find a combination of the two models leading to reasonable results. 35 refs
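The "simple model" the abstract tests against has a closed-form stellar metallicity distribution: with yield p, the gas metallicity grows as Z = p ln(1/mu) while the gas fraction mu declines, so the cumulative fraction of long-lived stars below metallicity Z is F(<Z) = (1 - exp(-Z/p)) / (1 - mu_f). The parameter values below are illustrative, not fitted:

```python
# Closed-box ("simple") model sketch of the G-dwarf metallicity
# distribution. Yield and gas fraction values are assumed for illustration.
import math

p = 0.01        # yield in metallicity units (assumed)
mu_f = 0.1      # present-day gas fraction (assumed)

def frac_stars_below(z):
    """Cumulative fraction of stars formed with metallicity below z."""
    return (1 - math.exp(-z / p)) / (1 - mu_f)

# Fraction of disk stars the closed box predicts below a quarter of the
# final metallicity: the excess of such metal-poor stars relative to the
# observed counts is precisely the G-dwarf problem.
z_final = p * math.log(1 / mu_f)
print(round(frac_stars_below(z_final / 4), 3))
```

The closed box puts nearly half its stars below a quarter of the final metallicity, far more than observed in the solar vicinity, which is why modified star formation histories like those tested in the paper are needed.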
International Nuclear Information System (INIS)
Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim
2014-01-01
A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines, Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs and the simulator is then used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems
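The core nested sampling loop can be illustrated in one dimension. This bare-bones sketch omits the paper's HMC constrained-sampling step and stochastic-ensemble gradients, using naive rejection sampling from the prior instead; the problem (Gaussian likelihood, uniform prior on [-5, 5]) is chosen so the evidence is known analytically:

```python
# Bare-bones nested sampling (no HMC step, unlike the paper): live points
# from a uniform prior, repeatedly replace the worst point by a prior draw
# above its likelihood, accumulating the evidence Z.
import math, random

random.seed(1)
L = lambda x: math.exp(-x * x / 2)          # unnormalized Gaussian likelihood
n_live, n_iter = 100, 800
live = [random.uniform(-5, 5) for _ in range(n_live)]  # prior is U(-5, 5)

Z, X_prev = 0.0, 1.0
for i in range(n_iter):
    worst = min(live, key=L)
    X = math.exp(-(i + 1) / n_live)         # deterministic prior-volume shrinkage
    Z += L(worst) * (X_prev - X)
    X_prev = X
    while True:                             # naive constrained sampling step
        x = random.uniform(-5, 5)
        if L(x) > L(worst):
            live[live.index(worst)] = x
            break

Z += X_prev * sum(L(x) for x in live) / n_live  # remaining live points
print(round(Z, 3))  # analytic value is sqrt(2*pi)/10, about 0.251
```

In higher dimensions the rejection step becomes hopeless, which is exactly why the paper replaces it with HMC moves driven by stochastic gradient estimates.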
Renewable Distributed Generation Models in Three-Phase Load Flow Analysis for Smart Grid
Directory of Open Access Journals (Sweden)
K. M. Nor
2013-11-01
Full Text Available The paper presents renewable distributed generation (RDG) models as three-phase resources in load flow computation and analyzes their effect when they are connected in composite networks. The RDG models considered comprise photovoltaic (PV) and wind turbine generation (WTG). The voltage-controlled node and the complex power injection node are used in the models. These improved models are suitable for smart grid power system analysis. The combination of IEEE transmission data and IEEE test feeders is used to test the algorithm for balanced and unbalanced multi-phase distribution system problems. The simulation results show that increasing the number and size of RDG units improves the voltage profile and reduces system losses.
Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics
DEFF Research Database (Denmark)
Khanmohammadi, Mahdieh
transmission electron microscopy is used to acquire images from two experimental groups of rats: 1) rats subjected to a behavioral model of stress and 2) rats subjected to sham stress as the control group. The synaptic vesicle distribution and interactions are modeled by employing a point process approach......This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of this thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section...... on differences of statistical measures in section and the same measures in between sections. Three-dimensional (3D) datasets are reconstructed by using image registration techniques and estimated thicknesses. We distinguish the effect of stress by estimating the synaptic vesicle densities and modeling...
Modeling highway travel time distribution with conditional probability models
Energy Technology Data Exchange (ETDEWEB)
Oliveira Neto, Francisco Moraes [ORNL; Chin, Shih-Miao [ORNL; Hwang, Ho-Ling [ORNL; Han, Lee [University of Tennessee, Knoxville (UTK)
2014-01-01
Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits data dumps to ATRI on a regular basis. Each data transmission contains the truck location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying upstream-downstream speed distributions at different locations, as well as at different times of the day and days of the week. This research focuses on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions of route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions between successive segments can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating the route travel time distribution as new data or information are added. This methodology can be useful for estimating performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
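The link-to-link conditional-probability idea can be illustrated on a toy joint frequency table: the route travel time distribution is assembled as P(t1 + t2) = sum over t1 of P(t1) * P(t2 | t1). All counts below are hypothetical, not FPM data.

```python
from collections import defaultdict

# Hypothetical joint counts of (upstream, downstream) link travel times in
# minutes, mimicking correlated congestion on contiguous segments.
joint = {(10, 12): 30, (10, 15): 10, (14, 12): 5, (14, 15): 25}
total = sum(joint.values())

# Marginal P(t1) and conditional P(t2 | t1) from the joint frequencies.
marg1 = defaultdict(float)
for (t1, t2), n in joint.items():
    marg1[t1] += n / total
cond = {(t1, t2): (n / total) / marg1[t1] for (t1, t2), n in joint.items()}

# Route travel time distribution P(t1 + t2) = sum_t1 P(t1) * P(t2 | t1),
# i.e. a convolution that respects the upstream-downstream correlation.
route = defaultdict(float)
for (t1, t2), p in cond.items():
    route[t1 + t2] += marg1[t1] * p

print(dict(route))
```

With independence assumed instead of the conditional structure, the same marginals would spread probability differently over the route times, which is exactly the effect the study's conditional approach is designed to capture.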
Modelling the potential distribution of Betula utilis in the Himalaya
Directory of Open Access Journals (Sweden)
Maria Bobrowski
2017-07-01
Full Text Available Developing sustainable adaptation pathways under climate change conditions in mountain regions requires accurate predictions of treeline shifts and future distribution ranges of treeline species. Here, we model for the first time the potential distribution of Betula utilis, a principal Himalayan treeline species, to provide a basis for the analysis of future range shifts. Our target species Betula utilis is widespread at alpine treelines in the Himalayan mountains, the distribution range extends across the Himalayan mountain range. Our objective is to model the potential distribution of B. utilis in relation to current climate conditions. We generated a dataset of 590 occurrence records and used 24 variables for ecological niche modelling. We calibrated Generalized Linear Models using the Akaike Information Criterion (AIC) and evaluated model performance using threshold-independent (AUC, Area Under the Curve) and threshold-dependent (TSS, True Skill Statistics) characteristics as well as visual assessments of projected distribution maps. We found two temperature-related (Mean Temperature of the Wettest Quarter, Temperature Annual Range) and three precipitation-related variables (Precipitation of the Coldest Quarter, Average Precipitation of March, April and May and Precipitation Seasonality) to be useful for predicting the potential distribution of B. utilis. All models had high predictive power (AUC ≥ 0.98 and TSS ≥ 0.89). The projected suitable area in the Himalayan mountains varies considerably, with most extensive distribution in the western and central Himalayan region. A substantial difference between potential and real distribution in the eastern Himalaya points to decreasing competitiveness of B. utilis under more oceanic conditions in the eastern part of the mountain system. A comparison between the vegetation map of Schweinfurth (1957) and our current predictions suggests that B. utilis does not reach the upper elevational limit in
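The two reported evaluation characteristics can be computed directly from presence/absence observations and model scores. The observation and score vectors below are synthetic; the study's actual GLM is not reproduced.

```python
# Synthetic presence (1) / absence (0) records and hypothetical model scores.
obs  = [1, 1, 1, 0, 0, 1, 0, 0, 0, 1]
pred = [0.9, 0.8, 0.7, 0.6, 0.4, 0.75, 0.2, 0.3, 0.1, 0.55]

def auc(obs, pred):
    """Threshold-independent AUC: probability that a random presence scores
    above a random absence (Mann-Whitney form of the Area Under the Curve)."""
    pres = [p for o, p in zip(obs, pred) if o == 1]
    abst = [p for o, p in zip(obs, pred) if o == 0]
    wins = sum((p > a) + 0.5 * (p == a) for p in pres for a in abst)
    return wins / (len(pres) * len(abst))

def tss(obs, pred, thr=0.5):
    """Threshold-dependent True Skill Statistic = sensitivity + specificity - 1."""
    tp = sum(o == 1 and p >= thr for o, p in zip(obs, pred))
    fn = sum(o == 1 and p < thr for o, p in zip(obs, pred))
    tn = sum(o == 0 and p < thr for o, p in zip(obs, pred))
    fp = sum(o == 0 and p >= thr for o, p in zip(obs, pred))
    return tp / (tp + fn) + tn / (tn + fp) - 1

print(auc(obs, pred), tss(obs, pred))
```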
Distribution of radon and radium in the ocean and its bearing on some oceanographic problems
International Nuclear Information System (INIS)
Miyake, Y.; Sugimura, Y.; Saruhashi, K.
1980-01-01
Radon and radium contents in seawater near the ocean floor and in the surface layer of the ocean were studied. The results showed a fairly large excess of radon over radium (1520 to 315%) near the ocean floor. The vertical eddy-diffusion coefficient, D, near the seabed was calculated from the vertical distribution of the excess radon. In the surface layer of the ocean, a remarkable deficiency of radon with respect to radium (50 to 70%) was observed. The mass balance of radium in the mixed layer was considered using a box model. The results showed that the residence time of radon in the mixed layer was about 8 days.
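The eddy-diffusivity calculation can be sketched as follows: in steady state the excess radon decays away from the seabed as C(z) = C0 * exp(-z * sqrt(lam/D)), so a log-linear fit of the profile yields D. The profile values below are hypothetical (roughly a 50 m e-folding length); the study's measured numbers are not reproduced.

```python
import math

# Rn-222 decay constant in 1/day (half-life about 3.82 days).
lam = math.log(2.0) / 3.82

# Hypothetical excess-radon profile above the seabed.
z = [0.0, 20.0, 40.0, 60.0]        # height above bottom, m
c = [100.0, 67.0, 44.9, 30.1]      # excess Rn, arbitrary units

# Least-squares slope of ln(c) against z; the slope magnitude is sqrt(lam/D).
n = len(z)
zbar = sum(z) / n
lbar = sum(math.log(v) for v in c) / n
slope = sum((zi - zbar) * (math.log(ci) - lbar) for zi, ci in zip(z, c)) \
        / sum((zi - zbar) ** 2 for zi in z)

D = lam / slope ** 2               # vertical eddy diffusivity, m^2/day
print(round(D, 1))
```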
Graph Modeling for Quadratic Assignment Problems Associated with the Hypercube
International Nuclear Information System (INIS)
Mittelmann, Hans; Peng Jiming; Wu Xiaolin
2009-01-01
In the paper we consider the quadratic assignment problem arising from channel coding in communications, where one coefficient matrix is the adjacency matrix of a hypercube in a finite dimensional space. By using the geometric structure of the hypercube, we first show that there exist at least n different optimal solutions to the underlying QAPs. Moreover, the inherent symmetries in the associated hypercube allow us to obtain partial information regarding the optimal solutions and thus shrink the search space and improve all the existing QAP solvers for the underlying QAPs. Secondly, we use a graph modeling technique to derive a new integer linear programming (ILP) model for the underlying QAPs. The new ILP model has n(n-1) binary variables and O(n^3 log(n)) linear constraints. This yields the smallest known number of binary variables for the ILP reformulation of QAPs. Various relaxations of the new ILP model are obtained based on the graphical characterization of the hypercube, and the lower bounds provided by the LP relaxations of the new model are analyzed and compared with those provided by several classical LP relaxations of QAPs in the literature.
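A brute-force check on the smallest nontrivial case illustrates both the QAP objective with a hypercube adjacency matrix and the multiplicity of optimal solutions. The second coefficient matrix B below is a made-up distance matrix, not the paper's coding data.

```python
from itertools import permutations

def hypercube_adj(d):
    """Adjacency matrix of the d-dimensional hypercube (2**d nodes):
    two nodes are adjacent iff their indices differ in exactly one bit."""
    n = 2 ** d
    return [[1 if bin(i ^ j).count("1") == 1 else 0 for j in range(n)]
            for i in range(n)]

A = hypercube_adj(2)               # the 2-cube (a 4-cycle)

# Hypothetical symmetric "distance" matrix playing the role of the second
# QAP coefficient matrix.
B = [[0, 1, 2, 3],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [3, 2, 1, 0]]

def qap_cost(p):
    n = len(A)
    return sum(A[i][j] * B[p[i]][p[j]] for i in range(n) for j in range(n))

best = min(qap_cost(p) for p in permutations(range(4)))
opt = [p for p in permutations(range(4)) if qap_cost(p) == best]
print(best, len(opt))
```

Even in this tiny instance the hypercube's symmetries produce many tied optima, consistent with the paper's observation that at least n optimal solutions exist.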
An inverse problem for a mathematical model of aquaponic agriculture
Bobak, Carly; Kunze, Herb
2017-01-01
Aquaponic agriculture is a sustainable ecosystem that relies on a symbiotic relationship between fish and macrophytes. While the practice has been growing in popularity, relatively few mathematical models exist that aim to study the system processes. In this paper, we present a system of ODEs which aims to mathematically model the population and concentration dynamics present in an aquaponic environment. Values of the parameters in the system are estimated from the literature so that simulated results can be presented to illustrate the nature of the solutions to the system. In addition, a brief sensitivity analysis is performed in order to identify redundant parameters and highlight those which may need more reliable estimates. Specifically, an inverse problem with manufactured data for fish and plants is presented to demonstrate the ability of the collage theorem to recover parameter estimates.
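An inverse problem with manufactured data can be illustrated with a toy ODE system. The equations and coefficients below are hypothetical, not the paper's model, and a coarse grid search stands in for the collage-theorem machinery.

```python
# Toy aquaponics-style system: fish biomass f feeds nutrient n, which
# plants p absorb at an unknown uptake rate a. All coefficients illustrative.
def simulate(a, steps=200, dt=0.05):
    f, n, p = 1.0, 0.5, 0.2
    traj = []
    for _ in range(steps):
        df = 0.1 * f * (1 - f / 2.0)          # logistic fish growth
        dn = 0.1 * f - a * n * p              # excretion minus plant uptake
        dp = 0.2 * a * n * p - 0.05 * p       # growth on absorbed nutrient
        f, n, p = f + dt * df, n + dt * dn, p + dt * dp
        traj.append(p)                        # record plant biomass
    return traj

# "Manufactured" plant data generated with a known uptake rate a = 0.8,
# then recovered by minimizing the squared misfit over a coarse grid.
data = simulate(0.8)
best_a = min((sum((x - y) ** 2 for x, y in zip(simulate(a), data)), a)
             for a in [0.2, 0.4, 0.6, 0.8, 1.0])[1]
print(best_a)
```

Because the data are noise-free, the misfit vanishes only at the true parameter, so the search recovers a = 0.8 exactly; with noisy data the recovered value would only approximate it.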
Two models of the capacitated vehicle routing problem
Directory of Open Access Journals (Sweden)
Zuzana Borčinova
2017-01-01
Full Text Available The aim of the Capacitated Vehicle Routing Problem (CVRP) is to find a set of routes of minimum total cost for a fleet of capacitated vehicles based at a single depot, to serve a set of customers. There exist various integer linear programming models of the CVRP. One of the main differences lies in the way sub-tours are eliminated, i.e. cycles that do not go through the depot. In this paper, we describe a well-known flow formulation of the CVRP, where the number of sub-tour elimination constraints grows exponentially with the number of customers. We then present a mixed linear programming formulation with a polynomial number of sub-tour elimination constraints. Both models were implemented and compared on several benchmarks.
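The contrast between the two formulations is largely a matter of constraint counts, which can be tallied directly. The sketch assumes one constraint per customer subset of size at least 2 for the exponential flow formulation, and one constraint per ordered customer pair for a Miller-Tucker-Zemlin-style polynomial formulation; whether the paper's polynomial model is exactly MTZ is an assumption.

```python
from math import comb

def n_subtour_constraints_exponential(n):
    """One sub-tour elimination constraint per customer subset S, |S| >= 2."""
    return sum(comb(n, k) for k in range(2, n + 1))

def n_subtour_constraints_mtz(n):
    """MTZ-style: one constraint per ordered pair of distinct customers."""
    return n * (n - 1)

for n in (10, 20, 40):
    print(n, n_subtour_constraints_exponential(n), n_subtour_constraints_mtz(n))
```

Already at 40 customers the subset-based count is astronomically large while the pairwise count stays at 1560, which is why polynomial formulations can be handed to a solver directly while exponential ones require constraint generation.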
Solving seismological problems using sgraph program: II-waveform modeling
International Nuclear Information System (INIS)
Abdelwahed, Mohamed F.
2012-01-01
One of the seismological programs for manipulating seismic data is the SGRAPH program. It consists of integrated tools to perform advanced seismological techniques. SGRAPH is a system for maintaining and analyzing seismic waveform data in a stand-alone, Windows-based application that manipulates a wide range of data formats. SGRAPH was described in detail in the first part of this paper. In this part, I discuss the advanced techniques included in the program and its applications in seismology. Because of the numerous tools included in the program, SGRAPH alone is sufficient to perform basic waveform analysis and to solve advanced seismological problems. The first part of this paper covered the application of source parameter estimation and hypocentral location. Here, I discuss the SGRAPH waveform modeling tools. This paper exhibits examples of how to apply the SGRAPH tools to perform waveform modeling for estimating the focal mechanisms and crustal structure of local earthquakes.
Modelling grid losses and the geographic distribution of electricity generation
DEFF Research Database (Denmark)
Østergaard, Poul Alberg
2005-01-01
In Denmark more than 40% of the electricity consumption is covered by geographically scattered electricity sources namely wind power and local CHP (cogeneration of heat and power) plants. This causes problems in regard to load balancing and possible grid overloads. The potential grid problems...... and methods for solving these are analysed in this article on the basis of energy systems analyses, geographic distribution of consumption and production and grid load-flow analyses. It is concluded that by introducing scattered load balancing using local CHP plants actively and using interruptible loads...
Directory of Open Access Journals (Sweden)
2009-03-01
Full Text Available We define a special case of the vehicle routing problem with stochastic demands (SC-VRPSD), where customer demands are normally distributed. We propose a new linear model for computing the expected length of a tour in the SC-VRPSD. The proposed model is based on the integration of the “Traveling Salesman Problem” (TSP) and the Assignment Problem. For large-scale problems, we also use an Iterated Local Search (ILS) algorithm in order to reach an effective solution.
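A common way to make the expected tour length concrete under normally distributed demands is to add, for each customer, the probability of a stock-out times the cost of a restocking round trip to the depot. The sketch below uses this single-failure approximation with hypothetical data; it is not the paper's TSP/assignment integration.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function (stdlib only).
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Toy tour: customers served in fixed order by a vehicle of capacity Q,
# with normally distributed demands (mean, std). A failure at a customer
# forces a return trip to the depot of length 2 * d[i]. All data invented.
Q = 30.0
means = [8.0, 10.0, 7.0, 9.0]
stds  = [2.0, 3.0, 2.0, 2.5]
d     = [5.0, 7.0, 6.0, 4.0]        # depot distance of each customer

planned = 25.0                      # fixed length of the planned tour
expected = planned
cum_m, cum_v = 0.0, 0.0
for m, s, dist in zip(means, stds, d):
    cum_m += m
    cum_v += s * s
    # P(cumulative demand up to this customer exceeds the capacity);
    # sums of independent normals are normal, so this is one CDF call.
    p_fail = 1.0 - norm_cdf((Q - cum_m) / math.sqrt(cum_v))
    expected += p_fail * 2.0 * dist
print(round(expected, 2))
```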
Directory of Open Access Journals (Sweden)
Tian-tian Feng
2017-06-01
Full Text Available The development of distributed energy systems in China is one of the important measures to promote the revolution in energy production and utilization patterns. First, we analyze the present application status of China's distributed generation for the three major types: natural gas, photovoltaic, and distributed wind. Secondly, based on an analysis of the project overview, project scale, and project effect in the three patterns of distributed generation, we summarize the policy deficiencies and development obstacles. Finally, aiming to promote the development of distributed energy in China, we propose relevant policy countermeasures to the problems existing in the development of China's distributed generation from natural gas, photovoltaics, and wind power.
Assigning probability distributions to input parameters of performance assessment models
Energy Technology Data Exchange (ETDEWEB)
Mishra, Srikanta [INTERA Inc., Austin, TX (United States)
2002-02-01
This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
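Fitting a continuous distribution by the method of moments, one of the techniques listed above, can be sketched for lognormal data. The data here are synthetic, not from the Yucca Mountain study.

```python
import math
import random

random.seed(1)

# Synthetic "data": 1000 draws from a lognormal with mu=1.0, sigma=0.5,
# standing in for a performance-assessment input parameter.
data = [random.lognormvariate(1.0, 0.5) for _ in range(1000)]

# Method of moments for the lognormal: match the sample mean m and variance v
# to exp(mu + s^2/2) and (exp(s^2) - 1) * exp(2*mu + s^2), then invert.
n = len(data)
m = sum(data) / n
v = sum((x - m) ** 2 for x in data) / (n - 1)
s2 = math.log(1.0 + v / m**2)      # recovered sigma^2
mu = math.log(m) - 0.5 * s2        # recovered mu
print(round(mu, 2), round(math.sqrt(s2), 2))
```

Maximum likelihood for the lognormal would instead average the log-data directly; for well-behaved samples the two estimates are close, while heavy tails make the moment estimates noisier.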
Assigning probability distributions to input parameters of performance assessment models
International Nuclear Information System (INIS)
Mishra, Srikanta
2002-02-01
This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
Calibration process of highly parameterized semi-distributed hydrological model
Vidmar, Andrej; Brilly, Mitja
2017-04-01
Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, modelling them is a complex process that has not been researched enough. Calibration is a procedure for determining those parameters of a model that are not known well enough. Input and output variables and the mathematical model expressions are known, while only some parameters are unknown; these are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms, but without the possibility for the modeller to manage the process, and the results are often not the best. We therefore develop a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command line interface, and couple it with PEST, a parameter estimation tool that is widely used in groundwater modelling and can also be applied to surface waters. A calibration process managed directly by the expert, in proportion to the expert's knowledge, affects the outcome of the inversion procedure and achieves better results than if the procedure had been left to the selected optimization algorithm alone. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial and forest areas. This step requires the geological, meteorological, hydraulic and hydrological knowledge of the modeller. The second step is to set the initial parameter values to their preferred values based on expert knowledge. In this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. Each sub-catchment in the model has its own observation group
Scholten, H.
2008-01-01
Mathematical models are more and more used to support to solve multidisciplinary, real world problems of increasing complexity. They are often plagued by obstacles such as miscommunication between modellers with different disciplinary backgrounds leading to a non-transparent modelling process. Other
Modeling the probability distribution of peak discharge for infiltrating hillslopes
Baiamonte, Giorgio; Singh, Vijay P.
2017-07-01
Hillslope response plays a fundamental role in the prediction of peak discharge at the basin outlet. The peak discharge for the critical duration of rainfall and its probability distribution are needed for designing urban infrastructure facilities. This study derives the probability distribution, denoted as the GABS model, by coupling three models: (1) the Green-Ampt model for computing infiltration, (2) the kinematic wave model for computing the discharge hydrograph from the hillslope, and (3) the intensity-duration-frequency (IDF) model for computing design rainfall intensity. The Hortonian mechanism for runoff generation is employed for computing the surface runoff hydrograph. Since the antecedent soil moisture condition (ASMC) significantly affects the rate of infiltration, its effect on the probability distribution of peak discharge is investigated. Application to a watershed in Sicily, Italy, shows that as the probability increases, the expected effect of ASMC on increasing the maximum discharge diminishes. Only for low probabilities is the critical duration of rainfall influenced by ASMC, whereas its effect on the peak discharge appears small for any probability. For a set of parameters, the derived probability distribution of peak discharge is fitted well by the gamma distribution. Finally, an application to a small watershed, with the aim of testing whether rational runoff coefficient tables for the rational method can be prepared in advance, and a comparison between peak discharges obtained by the GABS model and those measured in an experimental flume for a loamy-sand soil, were carried out.
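The Green-Ampt component can be sketched on its own: after ponding, cumulative infiltration F solves an implicit equation that a fixed-point iteration handles well, since the iteration map is a contraction for F > 0. The parameter values below are illustrative loamy-sand-like numbers, not the Sicilian watershed's.

```python
import math

def green_ampt_F(t, K, psi_dtheta, tol=1e-10):
    """Cumulative infiltration F(t) from the implicit Green-Ampt relation
    F = K*t + psi_dtheta * ln(1 + F/psi_dtheta), valid once ponding has
    occurred, solved by fixed-point iteration."""
    F = max(K * t, 1e-6)
    while True:
        F_new = K * t + psi_dtheta * math.log(1.0 + F / psi_dtheta)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new

# Hypothetical parameters: K in cm/h, psi * delta-theta in cm.
K, pd = 1.0, 5.0
F2 = green_ampt_F(2.0, K, pd)          # cumulative infiltration at t = 2 h
rate = K * (1.0 + pd / F2)             # infiltration rate f = K*(1 + pd/F)
print(round(F2, 3), round(rate, 3))
```

The infiltration rate decays toward K as F grows, which is the behaviour the GABS coupling relies on when converting design rainfall into Hortonian runoff.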
Stand diameter distribution modelling and prediction based on Richards function.
Directory of Open Access Journals (Sweden)
Ai-guo Duan
Full Text Available The objective of this study was to introduce the application of the Richards equation to modelling and prediction of stand diameter distributions. The long-term repeated measurement data sets, consisting of 309 diameter frequency distributions from Chinese fir (Cunninghamia lanceolata) plantations in southern China, were used: 150 stands served as fitting data and the other 159 stands were used for testing. The nonlinear regression method (NRM) or the maximum likelihood estimation method (MLEM) was applied to estimate the parameters of the models, and the parameter prediction method (PPM) and parameter recovery method (PRM) were used to predict the diameter distributions of unknown stands. Four main conclusions were obtained: (1) the R distribution gave a more accurate simulation than the three-parameter Weibull function; (2) the parameters p, q and r of the R distribution proved to be its scale, location and shape parameters, and are closely related to stand characteristics, which means the parameters of the R distribution have a good theoretical interpretation; (3) the ordinate of the inflection point of the R distribution is significantly related to its skewness and kurtosis, and the fitted main distribution range for the cumulative diameter distribution of Chinese fir plantations was 0.4∼0.6; (4) the goodness-of-fit test showed that diameter distributions of unknown stands can be well estimated by applying the R distribution based on PRM, or the combination of PPM and PRM, under the condition that only the quadratic mean DBH, or this plus stand age, is known; the non-rejection rates were near 80%, higher than the 72.33% non-rejection rate of the three-parameter Weibull function based on the combination of PPM and PRM.
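One common Richards-type (generalized logistic) parameterization makes the inflection-ordinate observation in conclusion (3) concrete: with F(x) = (1 + exp(-(x - q)/p))^(-1/r), the inflection point occurs where exp(-(x - q)/p) = r, so its ordinate (1 + r)^(-1/r) depends only on the shape parameter. Whether this is exactly the paper's R distribution is an assumption.

```python
import math

def richards_cdf(x, p, q, r):
    """Generalized-logistic Richards form used as a cumulative diameter
    distribution: p acts as scale, q as location, r as shape."""
    return (1.0 + math.exp(-(x - q) / p)) ** (-1.0 / r)

def inflection_ordinate(r):
    """F at the inflection point; a function of the shape parameter only."""
    return (1.0 + r) ** (-1.0 / r)

# For shape values around 1 the inflection ordinate sits near 0.5, which is
# consistent with the reported 0.4-0.6 range for Chinese fir stands.
for r in (0.5, 1.0, 2.0):
    print(r, round(inflection_ordinate(r), 3))
```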
Predictive Distribution of the Dirichlet Mixture Model by the Local Variational Inference Method
DEFF Research Database (Denmark)
Ma, Zhanyu; Leijon, Arne; Tan, Zheng-Hua
2014-01-01
the predictive likelihood of the new upcoming data, especially when the amount of training data is small. The Bayesian estimation of a Dirichlet mixture model (DMM) is, in general, not analytically tractable. In our previous work, we have proposed a global variational inference-based method for approximately...... calculating the posterior distributions of the parameters in the DMM analytically. In this paper, we extend our previous study for the DMM and propose an algorithm to calculate the predictive distribution of the DMM with the local variational inference (LVI) method. The true predictive distribution of the DMM...... is analytically intractable. By considering the concave property of the multivariate inverse beta function, we introduce an upper-bound to the true predictive distribution. As the global minimum of this upper-bound exists, the problem is reduced to seek an approximation to the true predictive distribution...
A Discrete Model for HIV Infection with Distributed Delay
Directory of Open Access Journals (Sweden)
Brahim EL Boukari
2014-01-01
Full Text Available We give a consistent discretization of a continuous model of HIV infection, with distributed time delays to express the lag between the times when the virus enters a cell and when the cell becomes infected. The global stability of the steady states of the model is determined and numerical simulations are presented to illustrate our theoretical results.
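A minimal discrete simulation shows the structure of such a model: new infections at each step depend on a kernel-weighted sum over past states, expressing the distributed lag between viral entry and productive infection. The compartments, kernel weights and rates below are all illustrative, not the paper's scheme.

```python
# Target cells T, infected cells I and free virus V; infections at step n
# draw on the past m steps through the delay kernel w (sums to 1).
m = 5
w = [0.1, 0.2, 0.4, 0.2, 0.1]
lam, d, beta, a, k, u = 10.0, 0.01, 0.0001, 0.5, 20.0, 3.0   # toy rates
dt = 0.1

T, I, V = [1000.0], [0.0], [1.0]
for n in range(2000):
    # Distributed-delay infection term: kernel-weighted past contact events.
    delayed = sum(w[j] * beta * T[max(n - j, 0)] * V[max(n - j, 0)]
                  for j in range(m))
    T.append(T[n] + dt * (lam - d * T[n] - beta * T[n] * V[n]))
    I.append(I[n] + dt * (delayed - a * I[n]))
    V.append(V[n] + dt * (k * I[n] - u * V[n]))
print(round(T[-1], 1), round(V[-1], 1))
```

With these rates the basic reproduction number exceeds one, so the infection persists and the trajectories approach a positive steady state, mirroring the global-stability dichotomy the paper proves for its discretization.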
Optimal dimensioning model of water distribution systems | Gomes ...
African Journals Online (AJOL)
This study is aimed at developing a pipe-sizing model for a water distribution system. The optimal solution minimises the system's total cost, which comprises the hydraulic network capital cost, plus the capitalised cost of pumping energy. The developed model, called Lenhsnet, may also be used for economical design when ...
Five (or so) challenges for species distribution modelling
DEFF Research Database (Denmark)
Bastos Araujo, Miguel; Guisan, Antoine
2006-01-01
Species distribution modelling is central to both fundamental and applied research in biogeography. Despite widespread use of models, there are still important conceptual ambiguities as well as biotic and algorithmic uncertainties that need to be investigated in order to increase confidence in mo...
Degree distribution of a new model for evolving networks
Indian Academy of Sciences (India)
on intuitive but realistic consideration that nodes are added to the network with both preferential and random attachments. The degree distribution of the model is between a power-law and an exponential decay. Motivated by the features of network evolution, we introduce a new model of evolving networks, incorporating the ...
Diffusion approximation for modeling of 3-D radiation distributions
International Nuclear Information System (INIS)
Zardecki, A.; Gerstl, S.A.W.; De Kinder, R.E. Jr.
1985-01-01
A three-dimensional transport code, DIF3D, based on the diffusion approximation, is used to model the spatial distribution of radiation energy arising from volumetric isotropic sources. Future work will be concerned with the determination of irradiances and the modeling of realistic scenarios relevant to battlefield conditions. 8 refs., 4 figs
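The diffusion approximation can be illustrated in one dimension (DIF3D itself is three-dimensional): the steady equation -D u'' + sa*u = S on a slab with vacuum-like boundaries reduces to a tridiagonal linear system. All parameter values below are illustrative.

```python
# Solve -D u'' + sa * u = S on [0, L] with u(0) = u(L) = 0 by central
# finite differences and the Thomas tridiagonal algorithm (stdlib only).
D, sa, S, L, n = 1.0, 0.5, 1.0, 10.0, 99
h = L / (n + 1)
a = [-D / h**2] * n                 # sub-diagonal
b = [2 * D / h**2 + sa] * n         # main diagonal
c = [-D / h**2] * n                 # super-diagonal
rhs = [S] * n                       # uniform isotropic volumetric source

# Forward elimination.
for i in range(1, n):
    wgt = a[i] / b[i - 1]
    b[i] -= wgt * c[i - 1]
    rhs[i] -= wgt * rhs[i - 1]
# Back substitution.
u = [0.0] * n
u[-1] = rhs[-1] / b[-1]
for i in range(n - 2, -1, -1):
    u[i] = (rhs[i] - c[i] * u[i + 1]) / b[i]

print(round(max(u), 3))   # peaks mid-slab, below the infinite-medium limit S/sa
```

The analytic solution peaks at S/sa * (1 - 1/cosh(kappa*L/2)) with kappa = sqrt(sa/D), about 1.883 here, so the discrete maximum provides a direct accuracy check.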
Occam factors and model independent Bayesian learning of continuous distributions
International Nuclear Information System (INIS)
Nemenman, Ilya; Bialek, William
2002-01-01
Learning of a smooth but nonparametric probability density can be regularized using methods of quantum field theory. We implement a field theoretic prior numerically, test its efficacy, and show that the data and the phase space factors arising from the integration over the model space determine the free parameter of the theory ('smoothness scale') self-consistently. This persists even for distributions that are atypical in the prior and is a step towards a model independent theory for learning continuous distributions. Finally, we point out that a wrong parametrization of a model family may sometimes be advantageous for small data sets
Spatial distribution of emissions to air - the SPREAD model
Energy Technology Data Exchange (ETDEWEB)
Plejdrup, M S; Gyldenkaerne, S
2011-04-15
The National Environmental Research Institute (NERI), Aarhus University, completes the annual national emission inventories for greenhouse gases and air pollutants according to Denmark's obligations under international conventions, e.g. the climate convention, UNFCCC and the convention on long-range transboundary air pollution, CLRTAP. NERI has developed a model to distribute emissions from the national emission inventories on a 1x1 km grid covering the Danish land and sea territory. The new spatial high resolution distribution model for emissions to air (SPREAD) has been developed according to the requirements for reporting of gridded emissions to CLRTAP. Spatial emission data is e.g. used as input for air quality modelling, which again serves as input for assessment and evaluation of health effects. For these purposes distributions with higher spatial resolution have been requested. Previously, a distribution on the 17x17 km EMEP grid has been set up and used in research projects combined with detailed distributions for a few sectors or sub-sectors e.g. a distribution for emissions from road traffic on 1x1 km resolution. SPREAD is developed to generate improved spatial emission data for e.g. air quality modelling in exposure studies. SPREAD includes emission distributions for each sector in the Danish inventory system; stationary combustion, mobile sources, fugitive emissions from fuels, industrial processes, solvents and other product use, agriculture and waste. This model enables generation of distributions for single sectors and for a number of sub-sectors and single sources as well. This report documents the methodologies in this first version of SPREAD and presents selected results. Further, a number of potential improvements for later versions of SPREAD are addressed and discussed. (Author)
Spatial distribution of emissions to air - the SPREAD model
Energy Technology Data Exchange (ETDEWEB)
Plejdrup, M.S.; Gyldenkaerne, S.
2011-04-15
The National Environmental Research Institute (NERI), Aarhus University, completes the annual national emission inventories for greenhouse gases and air pollutants according to Denmark's obligations under international conventions, e.g. the climate convention, UNFCCC and the convention on long-range transboundary air pollution, CLRTAP. NERI has developed a model to distribute emissions from the national emission inventories on a 1x1 km grid covering the Danish land and sea territory. The new spatial high resolution distribution model for emissions to air (SPREAD) has been developed according to the requirements for reporting of gridded emissions to CLRTAP. Spatial emission data is e.g. used as input for air quality modelling, which again serves as input for assessment and evaluation of health effects. For these purposes distributions with higher spatial resolution have been requested. Previously, a distribution on the 17x17 km EMEP grid has been set up and used in research projects combined with detailed distributions for a few sectors or sub-sectors e.g. a distribution for emissions from road traffic on 1x1 km resolution. SPREAD is developed to generate improved spatial emission data for e.g. air quality modelling in exposure studies. SPREAD includes emission distributions for each sector in the Danish inventory system; stationary combustion, mobile sources, fugitive emissions from fuels, industrial processes, solvents and other product use, agriculture and waste. This model enables generation of distributions for single sectors and for a number of sub-sectors and single sources as well. This report documents the methodologies in this first version of SPREAD and presents selected results. Further, a number of potential improvements for later versions of SPREAD are addressed and discussed. (Author)
Smoluchowski coagulation models of sea ice thickness distribution dynamics
Godlovitch, D.; Illner, R.; Monahan, A.
2011-12-01
Sea ice thickness distributions display a ubiquitous exponential decrease with thickness. This tail characterizes the range of ice thickness produced by mechanical redistribution of ice through the processes of ridging, rafting, and shearing. We investigate how well the thickness distribution can be simulated by representing mechanical redistribution as a generalized stacking process. Such processes are naturally described by a well-studied class of models known as Smoluchowski Coagulation Models (SCMs), which describe the dynamics of a population of fixed-mass "particles" which combine in pairs to form a "particle" with the combined mass of the constituent pair, at a rate which depends on the mass of the interacting particles. Like observed sea ice thickness distributions, the mass distribution of the populations generated by SCMs has an exponential or quasi-exponential form. We use SCMs to model sea ice, identifying mass-increasing particle combinations with thickness-increasing ice redistribution processes. Our model couples an SCM component with a thermodynamic component and generates qualitatively accurate thickness distributions with a variety of rate kernels. Our results suggest that the exponential tail of the sea ice thickness distribution arises from the nature of the ridging process, rather than from specific physical properties of sea ice or the spatial arrangement of floes, and that the relative strengths of the dynamic and thermodynamic processes are key to accurately simulating the rate at which the tail of the thickness distribution drops off with thickness.
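The stacking picture described above can be illustrated with a toy Monte Carlo version of a Smoluchowski coagulation model with a constant rate kernel (a minimal sketch; the kernel choice, population size, and event count are illustrative assumptions, not the paper's configuration):

```python
import random
from collections import Counter

def smoluchowski_mc(n0=20000, merges=10000, seed=1):
    """Monte Carlo Smoluchowski coagulation with a constant rate kernel.

    Start from n0 unit-mass 'floes'; each event merges a uniformly random
    pair, mimicking ridging/rafting that stacks two ice categories into
    one thicker category."""
    random.seed(seed)
    masses = [1] * n0
    for _ in range(merges):
        i, j = random.sample(range(len(masses)), 2)
        masses[i] += masses[j]
        # Remove particle j by swapping in the last element (O(1) delete).
        masses[j] = masses[-1]
        masses.pop()
    return Counter(masses)

hist = smoluchowski_mc()
print(hist[1], hist[2], hist[3])
```

For the constant kernel and a monodisperse start, the resulting mass histogram is geometric, i.e. exponentially decaying in thickness category, which is the qualitative tail feature the authors reproduce.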
Calibrating corneal material model parameters using only inflation data: an ill-posed problem
CSIR Research Space (South Africa)
Kok, S
2014-08-01
Full Text Available is to perform numerical modelling using the finite element method, for which a calibrated material model is required. These material models are typically calibrated using experimental inflation data by solving an inverse problem. In the inverse problem...
Handbook of EOQ inventory problems stochastic and deterministic models and applications
Choi, Tsan-Ming
2013-01-01
This book explores deterministic and stochastic EOQ-model based problems and applications, presenting technical analyses of single-echelon EOQ model based inventory problems, and applications of the EOQ model for multi-echelon supply chain inventory analysis.
Problem with parton-model descriptions of neutrino data
International Nuclear Information System (INIS)
Barger, V.; Weiler, T.; Phillips, R.J.N.
1976-01-01
The present results from νN and ν̄N scattering experiments appear to place conflicting requirements on conventional quark-parton models. The strong rise with energy of the ν̄N cross section and of the ratio σ_T^(ν̄N)/σ_T^(νN) seems to require new-particle (charm) production from valence quarks in ν̄N interactions, whereas the x dependence of dimuon events and the dσ/dy anomaly suggest that ν̄N charm production comes from sea quarks. No single model gives a fully satisfactory explanation of all the present data. We draw attention to this problem, illustrate the conflicting requirements of the data with particular models, and discuss possible resolutions. The closest overall compromise with the present data is obtained with the six-quark model, using quark masses m_c = 1.5 GeV, m_b = 5 GeV, and a higher t-quark mass
Crowd Sourcing for Challenging Technical Problems and Business Model
Davis, Jeffrey R.; Richard, Elizabeth
2011-01-01
imaging, microbial detection and even the use of pharmaceuticals for radiation protection. The internal challenges through NASA@Work drew over 6000 participants across all NASA centers. Challenges conducted by each NASA center elicited ideas and solutions from several other NASA centers and demonstrated rapid and efficient participation from employees at multiple centers to contribute to problem solving. Finally, on January 19, 2011, the SLSD conducted a workshop on open collaboration and innovation strategies and best practices through the newly established NASA Human Health and Performance Center (NHHPC). Initial projects will be described leading to a new business model for SLSD.
Distributing Correlation Coefficients of Linear Structure-Activity/Property Models
Directory of Open Access Journals (Sweden)
Sorana D. BOLBOACA
2011-12-01
Full Text Available Quantitative structure-activity/property relationships are mathematical relationships linking chemical structure and activity/property in a quantitative manner. These in silico approaches are frequently used to reduce animal testing and risk-assessment, as well as to increase time- and cost-effectiveness in the characterization and identification of active compounds. The aim of our study was to investigate the pattern of the distribution of correlation coefficients associated with simple linear relationships linking the compounds' structure with their activities. A set of the most common ordnance compounds found at naval facilities, with a limited data set covering a range of toxicities on the aquatic ecosystem and a set of seven properties, was studied. Statistically significant models were selected and investigated. The probability density function of the correlation coefficients was investigated using a series of possible continuous distribution laws. Almost 48% of the correlation coefficients proved to fit the Beta distribution, 40% the Generalized Pareto distribution, and 12% the Pert distribution.
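The fitting step described above can be sketched with SciPy, here on synthetic correlation coefficients rather than the study's QSAR/QSPR data (the Beta parameters and sample size below are assumptions for illustration only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical sample of correlation coefficients r in (0, 1); in the
# study these would come from many simple linear structure-activity models.
r_values = rng.beta(a=5.0, b=2.0, size=500)

# Fit a Beta law with its support fixed to [0, 1], then check the fit
# with a one-sample Kolmogorov-Smirnov test.
a_hat, b_hat, _, _ = stats.beta.fit(r_values, floc=0, fscale=1)
ks = stats.kstest(r_values, 'beta', args=(a_hat, b_hat))
print(a_hat, b_hat, ks.pvalue)
```

A large KS p-value indicates the Beta hypothesis cannot be rejected, which is the kind of evidence behind the "48% fit Beta" statement.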
Kaon quark distribution functions in the chiral constituent quark model
Watanabe, Akira; Sawada, Takahiro; Kao, Chung Wen
2018-04-01
We investigate the valence u and s̄ quark distribution functions of the K⁺ meson, v_K^(u)(x, Q²) and v_K^(s̄)(x, Q²), in the framework of the chiral constituent quark model. We judiciously choose the bare distributions at the initial scale to generate the dressed distributions at the higher scale, considering the meson cloud effects and the QCD evolution, so that they agree with the phenomenologically satisfactory valence quark distribution of the pion and with the experimental data for the ratio v_K^(u)(x, Q²)/v_π^(u)(x, Q²). We show in detail how the meson cloud effects affect the bare distribution functions. We find a smaller SU(3) flavor symmetry breaking effect than reported in preceding studies based on other approaches.
THE CUSP/CORE PROBLEM AND THE SECONDARY INFALL MODEL
International Nuclear Information System (INIS)
Del Popolo, A.
2009-01-01
We study the cusp/core problem using a secondary infall model that takes into account the effects of ordered and random angular momentum, dynamical friction, and adiabatic contraction (AC) of baryons. The model is applied to structures on galactic scales (normal and dwarf spiral galaxies) and on galaxy cluster scales. Our analysis suggests that, on galactic scales, angular momentum and dynamical friction are able to overcome the competing effect of AC, eliminating the cusp. The slope of the density profile of inner halos flattens with decreasing halo mass, and the profile is well approximated by a Burkert profile. In order to obtain the Navarro-Frenk-White (NFW) profile, starting from the profiles obtained from our model, the magnitude of angular momentum and dynamical friction must be reduced with respect to the values predicted by the model itself. The rotation curves of four low surface brightness galaxies from Gentile et al. are compared to the rotation curves obtained by the model in the present paper, yielding a good fit to the observational data. The time evolution of the density profile of a galaxy of 10^8-10^9 M_sun shows that after a transient steepening, due to the AC, the density profile flattens to α ≅ 0. On cluster scales we observe a similar evolution of the dark matter (DM) density profile, but in this case the density profile slope flattens to α ≅ 0.6 for a cluster of ≅10^14 M_sun. The total mass profile, unlike that of the DM, shows a central cusp well fitted by an NFW model.
Bilinear reduced order approximate model of parabolic distributed solar collectors
Elmetennani, Shahrazed
2015-07-01
This paper proposes a novel, low dimensional and accurate approximate model for the distributed parabolic solar collector, by means of a modified gaussian interpolation along the spatial domain. The proposed reduced model, taking the form of a low dimensional bilinear state representation, enables the reproduction of the heat transfer dynamics along the collector tube for system analysis. Moreover, presented as a reduced order bilinear state space model, the well established control theory for this class of systems can be applied. The approximation efficiency has been proven by several simulation tests, which have been performed considering parameters of the Acurex field with real external working conditions. Model accuracy has been evaluated by comparison to the analytical solution of the hyperbolic distributed model and its semi discretized approximation highlighting the benefits of using the proposed numerical scheme. Furthermore, model sensitivity to the different parameters of the gaussian interpolation has been studied.
Actors: A Model of Concurrent Computation in Distributed Systems.
1985-06-01
Agha, Gul A. Actors: A Model of Concurrent Computation in Distributed Systems. MIT Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA (report AD-A157 917). This document has been approved for public release and sale; its distribution is unlimited.
DEFF Research Database (Denmark)
Han, Xue; Sandels, Claes; Zhu, Kun
2013-01-01
There has been a large body of statements claiming that the large-scale deployment of Distributed Energy Resources (DERs) could eventually reshape the future distribution grid operation in numerous ways. Thus, it is necessary to introduce a framework to measure to what extent the power system......, comprising distributed generation, active demand and electric vehicles. Subsequently, quantitative analysis was made on the basis of the current and envisioned DER deployment scenarios proposed for Sweden. Simulations are performed in two typical distribution network models for four seasons. The simulation...... results show that in general the DER deployment brings in the possibilities to reduce the power losses and voltage drops by compensating power from the local generation and optimizing the local load profiles....
Directory of Open Access Journals (Sweden)
M. I. Fursanov
2014-01-01
Full Text Available This article describes algorithms for finding economically effective replacements of consumer transformers in distribution electrical networks. Like any electrical equipment in power systems, power transformers have a limited service life, determined by natural degradation of materials as well as by unexpected wear under overload and overvoltage conditions. According to the standards adopted in the Republic of Belarus, the rated service life of a power transformer is 25 years, but situations arise in which it is economically efficient to replace a transformer before this term. The possibility of such replacement is considered as a way to increase the operating efficiency of an electrical network affected by physical wear and aging. The article discusses the shortcomings of earlier mathematical models of transformer replacement, in which removed transformers were not reused; in practice, a transformer replaced at one substation can be successfully used at another, which is especially important when financial resources are limited and replacement requires a more detailed technical and economic justification. The authors developed an efficient algorithm for determining the optimal location of transformers at substations of distribution electrical networks, based on a search for the best solution among all displacement sets in an oriented graph. The suggested algorithm considerably reduces the design time for optimal placement of transformers by using a set of simplifications. The result of the algorithm's work is a series of transformer displacements in the network, which yields a greater economic effect than replacement of a single transformer.
Reservoir theory, groundwater transit time distributions, and lumped parameter models
International Nuclear Information System (INIS)
Etcheverry, D.; Perrochet, P.
1999-01-01
The relation between groundwater residence times and transit times is given by the reservoir theory. It allows theoretical transit time distributions to be calculated in a deterministic way, analytically or on numerical models. Two analytical solutions validate the piston-flow and exponential models for simple conceptual flow systems. A numerical solution for a hypothetical regional groundwater flow shows that lumped parameter models could be applied in some cases to large-scale, heterogeneous aquifers. (author)
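The two lumped-parameter models named above can be written down directly; a minimal sketch (the mean residence time T is an illustrative value, not from the paper):

```python
import math

def exponential_model(t, T):
    """Transit-time pdf of the exponential (well-mixed reservoir) model."""
    return math.exp(-t / T) / T

def piston_flow_model(t, T, width=1e-6):
    """Piston-flow model: all water has the same transit time T
    (a delta pulse, approximated here by a narrow rectangle)."""
    return 1.0 / width if abs(t - T) < width / 2 else 0.0

# Numerically verify that the exponential model has mean transit time T.
T = 25.0   # years, hypothetical mean residence time
dt = 0.01
ts = [i * dt for i in range(int(2000 / dt))]
mean_t = sum(t * exponential_model(t, T) * dt for t in ts)
print(round(mean_t, 2))
```

The piston-flow model instead concentrates all mass at t = T, so its mean transit time is T by construction.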
Quantification Model for Estimating Temperature Field Distributions of Apple Fruit
Zhang , Min; Yang , Le; Zhao , Huizhong; Zhang , Leijie; Zhong , Zhiyou; Liu , Yanling; Chen , Jianhua
2009-01-01
International audience; A quantification model of transient heat conduction was provided to simulate apple fruit temperature distribution in the cooling process. The model was based on the energy variation of apple fruit of different points. It took into account, heat exchange of representative elemental volume, metabolism heat and external heat. The following conclusions could be obtained: first, the quantification model can satisfactorily describe the tendency of apple fruit temperature dis...
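A heavily simplified sketch of such a transient-conduction computation: a 1-D explicit finite-difference slab without the paper's metabolic and external heat terms (all material, geometry, and cooling parameters below are illustrative assumptions):

```python
# Simplified 1-D explicit finite-difference cooling of a slab of fruit
# flesh; the paper's model is volumetric and adds metabolic and external
# heat, which are omitted here.
alpha = 1.4e-7            # m^2/s, approximate thermal diffusivity of apple flesh
L, n = 0.04, 21           # half-thickness (m), number of grid points
dx = L / (n - 1)
dt = 0.2 * dx**2 / alpha  # well inside the stability limit dt <= dx^2/(2*alpha)

T_air, T0 = 2.0, 20.0     # cooling-air and initial fruit temperature (deg C)
T = [T0] * n
for _ in range(5000):
    Tn = T[:]
    for i in range(1, n - 1):
        Tn[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
    Tn[0] = Tn[1]         # symmetry (insulated) condition at the core
    Tn[-1] = T_air        # surface held at the cooling-air temperature
    T = Tn
print(round(T[0], 1), round(T[-1], 1))
```

After roughly eight simulated hours the core temperature approaches the cooling-air temperature, the qualitative behaviour such a quantification model is meant to reproduce.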
Distributional Language Learning: Mechanisms and Models of Category Formation.
Aslin, Richard N; Newport, Elissa L
2014-09-01
In the past 15 years, a substantial body of evidence has confirmed that a powerful distributional learning mechanism is present in infants, children, adults, and (at least to some degree) nonhuman animals as well. The present article briefly reviews this literature and then examines some of the fundamental questions that must be addressed for any distributional learning mechanism to operate effectively within the linguistic domain. In particular, how does a naive learner determine the number of categories that are present in a corpus of linguistic input, and what distributional cues enable the learner to assign individual lexical items to those categories? Contrary to the hypothesis that distributional learning and category (or rule) learning are separate mechanisms, the present article argues that these two seemingly different processes (acquiring specific structure from linguistic input and generalizing beyond that input to novel exemplars) actually represent a single mechanism. Evidence in support of this single-mechanism hypothesis comes from a series of artificial grammar-learning studies that not only demonstrate that adults can learn grammatical categories from distributional information alone, but that the specific patterning of distributional information among attested utterances in the learning corpus enables adults to generalize to novel utterances or to restrict generalization when unattested utterances are consistently absent from the learning corpus. Finally, a computational model of distributional learning that accounts for the presence or absence of generalization is reviewed, and the implications of this model for linguistic-category learning are summarized.
Using the Weibull distribution reliability, modeling and inference
McCool, John I
2012-01-01
Understand and utilize the latest developments in Weibull inferential methods While the Weibull distribution is widely used in science and engineering, most engineers do not have the necessary statistical training to implement the methodology effectively. Using the Weibull Distribution: Reliability, Modeling, and Inference fills a gap in the current literature on the topic, introducing a self-contained presentation of the probabilistic basis for the methodology while providing powerful techniques for extracting information from data. The author explains the use of the Weibull distribution
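A typical Weibull inference workflow of the kind the book covers can be sketched with SciPy (the lifetime data are synthetic and the shape/scale values are assumptions for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical bearing-life data drawn from a Weibull law with
# shape beta = 1.5 and scale eta = 1000 hours.
lifetimes = stats.weibull_min.rvs(1.5, scale=1000.0, size=400, random_state=rng)

# Maximum-likelihood fit with the location fixed at zero
# (the 2-parameter Weibull commonly used in reliability work).
shape_hat, _, scale_hat = stats.weibull_min.fit(lifetimes, floc=0)

# B10 life: the time by which 10% of units are expected to fail.
b10 = stats.weibull_min.ppf(0.10, shape_hat, scale=scale_hat)
print(shape_hat, scale_hat, b10)
```

Quantities such as the B10 life are exactly the kind of information-extraction the book's inferential methods formalize, including confidence bounds that this sketch omits.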
Shell model test of the Porter-Thomas distribution
International Nuclear Information System (INIS)
Grimes, S.M.; Bloom, S.D.
1981-01-01
Eigenvectors have been calculated for the A=18, 19, 20, 21, and 26 nuclei in an sd shell basis. The decomposition of these states into their shell model components shows, in agreement with other recent work, that this distribution is not a single Gaussian. We find that the largest amplitudes are distributed approximately in a Gaussian fashion. Thus, many experimental measurements should be consistent with the Porter-Thomas predictions. We argue that the non-Gaussian form of the complete distribution can be simply related to the structure of the Hamiltonian
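The Porter-Thomas law is the chi-square distribution with one degree of freedom for normalized widths y = x²/⟨x²⟩, so Gaussian-distributed amplitudes reproduce it; a quick numerical check (sample size and seed are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Gaussian amplitudes x give normalized widths y = x^2/<x^2> that
# follow the Porter-Thomas (chi-square, 1 d.o.f.) distribution.
amplitudes = rng.normal(size=5000)
widths = amplitudes**2 / np.mean(amplitudes**2)

# The fraction of widths below 1 should match chi2(1).cdf(1) ~ 0.683.
frac = float(np.mean(widths < 1.0))
print(frac, stats.chi2.cdf(1.0, df=1))
```

A shell-model component distribution that is not a single Gaussian, as the abstract reports, would show systematic deviations from this chi-square reference in such a comparison.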
Distributed model based control of multi unit evaporation systems
International Nuclear Information System (INIS)
Yudi Samyudia
2006-01-01
In this paper, we present a new approach to the analysis and design of distributed control systems for multi-unit plants. The approach is established after treating the effect of recycled dynamics as a gap metric uncertainty from which a distributed controller can be designed sequentially for each unit to tackle the uncertainty. We then use a single effect multi-unit evaporation system to illustrate how the proposed method is used to analyze different control strategies and to systematically achieve a better closed-loop performance using a distributed model-based controller
Transversity quark distributions in a covariant quark-diquark model
Energy Technology Data Exchange (ETDEWEB)
Cloet, I.C. [Physics Division, Argonne National Laboratory, Argonne, IL 60439-4843 (United States)], E-mail: icloet@anl.gov; Bentz, W. [Department of Physics, School of Science, Tokai University, Hiratsuka-shi, Kanagawa 259-1292 (Japan)], E-mail: bentz@keyaki.cc.u-tokai.ac.jp; Thomas, A.W. [Jefferson Lab, 12000 Jefferson Avenue, Newport News, VA 23606 (United States); College of William and Mary, Williamsburg, VA 23187 (United States)], E-mail: awthomas@jlab.org
2008-01-17
Transversity quark light-cone momentum distributions are calculated for the nucleon. We utilize a modified Nambu-Jona-Lasinio model in which confinement is simulated by eliminating unphysical thresholds for nucleon decay into quarks. The nucleon bound state is obtained by solving the relativistic Faddeev equation in the quark-diquark approximation, where both scalar and axial-vector diquark channels are included. Particular attention is paid to comparing our results with the recent experimental extraction of the transversity distributions by Anselmino et al. We also compare our transversity results with earlier spin-independent and helicity quark distributions calculated in the same approach.
Confronting species distribution model predictions with species functional traits.
Wittmann, Marion E; Barnes, Matthew A; Jerde, Christopher L; Jones, Lisa A; Lodge, David M
2016-02-01
Species distribution models are valuable tools in studies of biogeography, ecology, and climate change and have been used to inform conservation and ecosystem management. However, species distribution models typically incorporate only climatic variables and species presence data. Model development or validation rarely considers functional components of species traits or other types of biological data. We implemented a species distribution model (Maxent) to predict global climate habitat suitability for Grass Carp (Ctenopharyngodon idella). We then tested the relationship between the degree of climate habitat suitability predicted by Maxent and the individual growth rates of both wild (N = 17) and stocked (N = 51) Grass Carp populations using correlation analysis. The Grass Carp Maxent model accurately reflected the global occurrence data (AUC = 0.904). Observations of Grass Carp growth rate covered six continents and ranged from 0.19 to 20.1 g day⁻¹. Species distribution model predictions were correlated (r = 0.5, 95% CI (0.03, 0.79)) with observed growth rates for wild Grass Carp populations but were not correlated (r = -0.26, 95% CI (-0.5, 0.012)) with stocked populations. Further, a review of the literature indicates that the few studies for other species that have previously assessed the relationship between the degree of predicted climate habitat suitability and species functional traits have also discovered significant relationships. Thus, species distribution models may provide inferences beyond just where a species may occur, providing a useful tool to understand the linkage between species distributions and underlying biological mechanisms.
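The reported interval for the wild populations can be reproduced with the standard Fisher z-transform confidence interval for a correlation coefficient (a pure-Python sketch of the textbook formula, not the authors' code):

```python
import math

def pearson_ci(r, n, zc=1.959963984540054):
    """95% CI for a Pearson correlation via the Fisher z-transform.

    zc is the two-sided 95% standard-normal quantile."""
    z = 0.5 * math.log((1 + r) / (1 - r))   # Fisher z
    se = 1.0 / math.sqrt(n - 3)             # standard error of z
    return math.tanh(z - zc * se), math.tanh(z + zc * se)

# Wild populations reported in the abstract: r = 0.5 with N = 17.
lo, hi = pearson_ci(0.5, 17)
print(round(lo, 2), round(hi, 2))  # reproduces the abstract's (0.03, 0.79)
```

The wide interval reflects the small sample (N = 17): the correlation is significant but only barely excludes zero.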
Modeling of non-linear CHP efficiency curves in distributed energy systems
DEFF Research Database (Denmark)
Milan, Christian; Stadler, Michael; Cardoso, Gonçalo
2015-01-01
Distributed energy resources gain an increased importance in commercial and industrial building design. Combined heat and power (CHP) units are considered as one of the key technologies for cost and emission reduction in buildings. In order to make optimal decisions on investment and operation...... for these technologies, detailed system models are needed. These models are often formulated as linear programming problems to keep computational costs and complexity in a reasonable range. However, CHP systems involve variations of the efficiency for large nameplate capacity ranges and in case of part load operation......, which can be even of non-linear nature. Since considering these characteristics would turn the models into non-linear problems, in most cases only constant efficiencies are assumed. This paper proposes possible solutions to address this issue. For a mixed integer linear programming problem two...
A distributed dynamic model of a monolith hydrogen membrane reactor
International Nuclear Information System (INIS)
Michelsen, Finn Are; Wilhelmsen, Øivind; Zhao, Lei; Aasen, Knut Ingvar
2013-01-01
Highlights: ► We model a rigorous distributed dynamic model for a HMR unit. ► The model includes enough complexity for steady-state and dynamic analysis. ► Simulations show that the model is non-linear within the normal operating range. ► The model is useful for studying and handling disturbances such as inlet changes and membrane leakage. - Abstract: This paper describes a distributed mechanistic dynamic model of a hydrogen membrane reformer unit (HMR) used for methane steam reforming. The model is based on a square channel monolith structure concept, where air flows adjacent to a mix of natural gas and water distributed in a chess pattern of channels. Combustion of hydrogen gives energy to the endothermic steam reforming reactions. The model is used for both steady state and dynamic analyses. It therefore needs to be computationally attractive, but still include enough complexity to study the important steady state and dynamic features of the process. Steady-state analysis of the model gives optimum for the steam to carbon and steam to oxygen ratios, where the conversion of methane is 92% and the hydrogen used as energy for the endothermic reactions is 28% at the nominal optimum. The dynamic analysis shows that non-linear control schemes may be necessary for satisfactory control performance
A distribution-free newsvendor model with balking penalty and random yield
Directory of Open Access Journals (Sweden)
Chongfeng Lan
2015-05-01
Full Text Available Purpose: The purpose of this paper is to extend the analysis of the distribution-free newsvendor problem in an environment of customer balking, which occurs when customers are reluctant to buy a product if its available inventory falls below a threshold level. Design/methodology/approach: We provide a new tradeoff tool as a replacement of the traditional one to weigh the holding cost and the goodwill costs segment: in addition to the shortage penalty, we also introduce the balking penalty. Furthermore, we extend our model to the case of random yield. Findings: A model is presented for determining both an optimal order quantity and a lower bound on the profit under the worst possible distribution of the demand. We also study the effects of shortage penalty and the balking penalty on the optimal order quantity, which have been largely bypassed in the existing distribution free single period models with balking. Numerical examples are presented to illustrate the result. Originality/value: The incorporation of balking penalty and random yield represents an important improvement in inventory policy performance for distribution-free newsvendor problem when customer balking occurs and the distributional form of demand is unknown.
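For context, the classical distribution-free (max-min) order quantity that such models extend is Scarf's rule, which uses only the demand mean and standard deviation; a sketch with hypothetical numbers (the paper adds the balking penalty and random yield on top of this baseline, which are not modeled here):

```python
import math

def scarf_order_quantity(mu, sigma, price, cost):
    """Scarf's max-min order quantity for the distribution-free
    newsvendor: demand has known mean mu and std sigma but unknown
    distribution; no salvage value, balking, or random yield."""
    m = (price - cost) / cost  # profit-to-cost ratio
    return mu + 0.5 * sigma * (math.sqrt(m) - math.sqrt(1.0 / m))

# Hypothetical instance: demand mean 100, std 30, price 15, unit cost 5.
q = scarf_order_quantity(100, 30, 15, 5)
print(round(q, 1))  # about 110.6 under these assumptions
```

The rule orders above the mean when the margin is high and below it when the margin is low, and it maximizes the worst-case expected profit over all demand distributions with the given mean and variance.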
Joint Distributed Surf Zone Environmental Model: FY96 Modeling Procedure
National Research Council Canada - National Science Library
Allard, Richard
1997-01-01
... to the modeling and simulation community. To test this proof of concept, a suite of models were identified and tested for Camp Pendelton, CA, during two 7 day periods in January and August 1995, in which data from the Coupled Ocean...
A model problem concerning ionic transport in microstructured solid electrolytes
Curto Sillamoni, Ignacio J.; Idiart, Martín I.
2015-11-01
We consider ionic transport by diffusion and migration through microstructured solid electrolytes. The assumed constitutive relations for the constituent phases follow from convex energy and dissipation potentials which guarantee thermodynamic consistency. The effective response is determined by homogenizing the relevant field equations via the notion of multi-scale convergence. The resulting homogenized response involves several effective tensors, but they all require the solution of just one standard conductivity problem over the representative volume element. A multi-scale model for semicrystalline polymer electrolytes with spherulitic morphologies is derived by applying the theory to a specific class of two-dimensional microgeometries for which the effective response can be computed exactly. An enriched model accounting for a random dispersion of filler particles with interphases is also derived. In both cases, explicit expressions for the effective material parameters are provided. The models are used to explore the effect of crystallinity and filler content on the overall response. Predictions support recent experimental observations on doped poly-ethylene-oxide systems which suggest that the anisotropic crystalline phase can actually support faster ion transport than the amorphous phase along certain directions dictated by the morphology of the polymeric chains. Predictions also support the viewpoint that ceramic fillers improve ionic conductivity and cation transport number via interphasial effects.
Modeling risk and uncertainty in designing reverse logistics problem
Directory of Open Access Journals (Sweden)
Aida Nazari Gooran
2018-01-01
Full Text Available Increasing attention to environmental problems and social responsibility has led to the emergence of reverse logistics (RL) issues in supply chain design, which have recently received considerable attention from both academicians and practitioners. In this paper, a multi-product reverse logistics network design model is developed; a hybrid method combining chance-constrained programming, a genetic algorithm, and Monte Carlo simulation is then proposed to solve the developed model. The proposed model is solved for risk-averse and risk-seeking decision makers using conditional value at risk and the sum of the expected value and standard deviation, respectively. Comparison of the results shows that minimizing costs has no direct relation to the type of decision maker; however, in most cases, risk-seeking decision makers recovered more returned products than risk-averse ones. Clearly, by increasing the products returned to the chain, the production costs of new products and materials are reduced, and environmental benefits are created as well.
Simple standard problem for the Preisach moving model
International Nuclear Information System (INIS)
Morentin, F.J.; Alejos, O.; Francisco, C. de; Munoz, J.M.; Hernandez-Gomez, P.; Torres, C.
2004-01-01
The present work proposes a simple magnetic system as a candidate for a Standard Problem for Preisach-based models. The system consists in a regular square array of magnetic particles totally oriented along the direction of application of an external magnetic field. The behavior of such system was numerically simulated for different values of the interaction between particles and of the standard deviation of the critical fields of the particles. The characteristic parameters of the Preisach moving model were worked out during simulations, i.e., the mean value and the standard deviation of the interaction field. For this system, results reveal that the mean interaction field depends linearly on the system magnetization, as the Preisach moving model predicts. Nevertheless, the standard deviation cannot be considered as independent of the magnetization. In fact, the standard deviation shows a maximum at demagnetization and two minima at magnetization saturation. Furthermore, not all the demagnetization states are equivalent. The plot of standard deviation vs. magnetization is a multi-valued curve when the system undergoes an AC demagnetization procedure. In this way, the standard deviation increases as the system goes from coercivity to the AC demagnetized state
Cooling problems of thermal power plants. Physical model studies
International Nuclear Information System (INIS)
Neale, L.C.
1975-01-01
The Alden Research Laboratories of Worcester Polytechnic Institute has for many years conducted physical model studies, which are normally classified as river or structural hydraulic studies. Since 1952 one aspect of these studies has involved the heated discharge from steam power plants. The early studies on such problems concentrated on improving the thermal efficiency of the system. This was accomplished by minimizing recirculation and by assuring full use of available cold water supplies. With the growing awareness of the impact of thermal power generation on the environment, attention has been redirected to reducing the effect of heated discharges on the biology of the receiving body of water. More specifically, the efforts of designers and operators of power plants are aimed at meeting or complying with standards established by various governmental agencies. Thus the studies involve developing means of minimizing surface temperatures at an outfall or establishing a local area of higher temperature with limits specified in terms of areas or distances. The physical models used for these studies have varied widely in scope, size, and operating features. These models have covered large areas with both distorted geometric scales and uniform dimensions. Instrumentation has also varied from simple mercury thermometers to computer control and processing of hundreds of thermocouple indicators
Energy Technology Data Exchange (ETDEWEB)
NONE
1995-02-17
The Natural Gas Transmission and Distribution Model (NGTDM) is the component of the National Energy Modeling System (NEMS) that is used to represent the domestic natural gas transmission and distribution system. NEMS was developed in the Office of Integrated Analysis and Forecasting of the Energy Information Administration (EIA). NEMS is the third in a series of computer-based, midterm energy modeling systems used since 1974 by the EIA and its predecessor, the Federal Energy Administration, to analyze domestic energy-economy markets and develop projections. The NGTDM is the model within NEMS that represents the transmission, distribution, and pricing of natural gas. The model also includes representations of the end-use demand for natural gas, the production of domestic natural gas, and the availability of natural gas traded on the international market, based on information received from other NEMS models. The NGTDM determines the flow of natural gas in an aggregate, domestic pipeline network, connecting domestic and foreign supply regions with 12 demand regions. The methodology employed allows the analysis of impacts of regional capacity constraints in the interstate natural gas pipeline network and the identification of pipeline capacity expansion requirements. There is an explicit representation of core and noncore markets for natural gas transmission and distribution services, and the key components of pipeline tariffs are represented in a pricing algorithm. Natural gas pricing and flow patterns are derived by obtaining a market equilibrium across the three main elements of the natural gas market: the supply element, the demand element, and the transmission and distribution network that links them. The NGTDM consists of four modules: the Annual Flow Module, the Capacity Expansion Module, the Pipeline Tariff Module, and the Distributor Tariff Module. A model abstract is provided in Appendix A.
International Nuclear Information System (INIS)
1995-01-01
The Natural Gas Transmission and Distribution Model (NGTDM) is the component of the National Energy Modeling System (NEMS) that is used to represent the domestic natural gas transmission and distribution system. NEMS was developed in the Office of Integrated Analysis and Forecasting of the Energy Information Administration (EIA). NEMS is the third in a series of computer-based, midterm energy modeling systems used since 1974 by the EIA and its predecessor, the Federal Energy Administration, to analyze domestic energy-economy markets and develop projections. The NGTDM is the model within the NEMS that represents the transmission, distribution, and pricing of natural gas. The model also includes representations of the end-use demand for natural gas, the production of domestic natural gas, and the availability of natural gas traded on the international market based on information received from other NEMS models. The NGTDM determines the flow of natural gas in an aggregate, domestic pipeline network, connecting domestic and foreign supply regions with 12 demand regions. The methodology employed allows the analysis of impacts of regional capacity constraints in the interstate natural gas pipeline network and the identification of pipeline capacity expansion requirements. There is an explicit representation of core and noncore markets for natural gas transmission and distribution services, and the key components of pipeline tariffs are represented in a pricing algorithm. Natural gas pricing and flow patterns are derived by obtaining a market equilibrium across the three main elements of the natural gas market: the supply element, the demand element, and the transmission and distribution network that links them. The NGTDM consists of four modules: the Annual Flow Module, the Capacity Expansion Module, the Pipeline Tariff Module, and the Distributor Tariff Module. A model abstract is provided in Appendix A
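The market-equilibrium idea the NGTDM abstract describes, a price at which supply, demand, and the network that links them balance, can be sketched numerically. The linear supply and demand curves below are purely illustrative, not EIA's, and a single scalar price stands in for the model's regional network; the sketch only shows the clearing mechanism.

```python
# Hypothetical sketch: find the market-clearing price by bisection on excess
# demand, in the spirit of the NGTDM's supply/demand balancing.
# The curves are illustrative, not the EIA's.

def supply(p):
    """Quantity producers offer at price p (illustrative linear curve)."""
    return 10.0 + 2.0 * p

def demand(p):
    """Quantity consumers take at price p (illustrative linear curve)."""
    return 50.0 - 3.0 * p

def clearing_price(lo=0.0, hi=100.0, tol=1e-9):
    """Bisect on excess demand demand(p) - supply(p), which falls as p rises."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if demand(mid) > supply(mid):
            lo = mid          # excess demand: price must rise
        else:
            hi = mid          # excess supply: price must fall
    return 0.5 * (lo + hi)

p_star = clearing_price()     # here 10 + 2p = 50 - 3p gives p* = 8
```

In the real model this balance is struck simultaneously across 12 demand regions subject to pipeline capacity constraints, which turns the scalar bisection into a network equilibrium problem.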
Solving the Standard Model Problems in Softened Gravity
Salvio, Alberto
2016-11-16
The Higgs naturalness problem is solved if the growth of Einstein's gravitational interaction is softened at an energy $ \\lesssim 10^{11}\\,$GeV (softened gravity). We work here within an explicit realization where the Einstein-Hilbert Lagrangian is extended to include terms quadratic in the curvature and a non-minimal coupling with the Higgs. We show that this solution is preserved by adding three right-handed neutrinos with masses below the electroweak scale, accounting for neutrino oscillations, dark matter and the baryon asymmetry. The smallness of the right-handed neutrino masses (compared to the Planck scale) and the QCD $\\theta$-term are also shown to be natural. We prove that a possible gravitational source of CP violation cannot spoil the model, thanks to the presence of right-handed neutrinos. Starobinsky inflation can occur in this context, even if we live in a metastable vacuum.
The hierarchy problem of the electroweak standard model revisited
International Nuclear Information System (INIS)
Jegerlehner, Fred
2013-05-01
A careful renormalization group analysis of the electroweak Standard Model reveals that there is no hierarchy problem in the SM. In the broken phase a light Higgs turns out to be natural as it is self-protected and self-tuned by the Higgs mechanism. It means that the scalar Higgs need not be protected by any extra symmetry, specifically supersymmetry, in order not to be much heavier than the other SM particles, which are protected by gauge or chiral symmetry. Thus the existence of quadratic cutoff effects in the SM cannot motivate the need for supersymmetric extensions of the SM, but in contrast plays an important role in triggering the electroweak phase transition and in shaping the Higgs potential in the early universe to drive inflation, as supported by observation.
Modeling and Identification of Harmonic Instability Problems In Wind Farms
DEFF Research Database (Denmark)
Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei
2016-01-01
In power electronics based power systems like wind farms, the interactions between the inner control systems of the power converters and the passive components may lead to high frequency oscillations, which can be called harmonic instability. In this paper, a simple methodology is presented...... to identify harmonic instability problems in wind farms, where many wind turbines, cables, transformers, capacitor banks, shunt reactors, etc., are typically located. This methodology introduces the wind farm as a Multi-Input Multi-Output (MIMO) control system, where the linearized models of fast inner control....../EMTDC software environment for a 400-MW wind farm. The proposed analytical method and time-domain simulation results show that both the dynamics of the power electronic converters and the parameters of the passive components can affect the wind farm stability....
The hierarchy problem of the electroweak standard model revisited
Energy Technology Data Exchange (ETDEWEB)
Jegerlehner, Fred [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)
2013-05-15
A careful renormalization group analysis of the electroweak Standard Model reveals that there is no hierarchy problem in the SM. In the broken phase a light Higgs turns out to be natural as it is self-protected and self-tuned by the Higgs mechanism. It means that the scalar Higgs need not be protected by any extra symmetry, specifically supersymmetry, in order not to be much heavier than the other SM particles, which are protected by gauge or chiral symmetry. Thus the existence of quadratic cutoff effects in the SM cannot motivate the need for supersymmetric extensions of the SM, but in contrast plays an important role in triggering the electroweak phase transition and in shaping the Higgs potential in the early universe to drive inflation, as supported by observation.
Mathematical modelling and numerical simulation of oil pollution problems
2015-01-01
Written by outstanding experts in the fields of marine engineering, atmospheric physics and chemistry, fluid dynamics and applied mathematics, the contributions in this book cover a wide range of subjects, from pure mathematics to real-world applications in the oil spill engineering business. Offering a truly interdisciplinary approach, the authors present both mathematical models and state-of-the-art numerical methods for adequately solving the partial differential equations involved, as well as highly practical experiments involving actual cases of ocean oil pollution. It is indispensable that different disciplines of mathematics, like analysis and numerics, together with physics, biology, fluid dynamics, environmental engineering and marine science, join forces to solve today’s oil pollution problems. The book will be of great interest to researchers and graduate students in the environmental sciences, mathematics and physics, showing the broad range of techniques needed in order to solve these poll...
Automatic generation of 3D statistical shape models with optimal landmark distributions.
Heimann, T; Wolf, I; Meinzer, H-P
2007-01-01
To point out the problem of non-uniform landmark placement in statistical shape modeling, to present an improved method for generating landmarks in the 3D case and to propose an unbiased evaluation metric to determine model quality. Our approach minimizes a cost function based on the minimum description length (MDL) of the shape model to optimize landmark correspondences over the training set. In addition to the standard technique, we employ an extended remeshing method to change the landmark distribution without losing correspondences, thus ensuring a uniform distribution over all training samples. To break the dependency of the established evaluation measures generalization and specificity from the landmark distribution, we change the internal metric from landmark distance to volumetric overlap. Redistributing landmarks to an equally spaced distribution during the model construction phase improves the quality of the resulting models significantly if the shapes feature prominent bulges or other complex geometry. The distribution of landmarks on the training shapes is -- beyond the correspondence issue -- a crucial point in model construction.
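The evaluation change the abstract describes, switching the internal metric from landmark distance to volumetric overlap, can be illustrated with the standard Dice coefficient on voxel sets; the toy shapes below are assumptions for illustration, not the paper's data.

```python
# Minimal sketch of a volumetric-overlap metric (Dice coefficient) of the kind
# used to evaluate shape models independently of landmark placement.
# Shapes are represented as sets of occupied voxel coordinates.

def dice(a, b):
    """Dice overlap: 2|A ∩ B| / (|A| + |B|), 1.0 for identical shapes."""
    a, b = set(a), set(b)
    return 2.0 * len(a & b) / (len(a) + len(b))

# Two illustrative 4x4 voxel slabs shifted by two voxels: 8 shared voxels.
shape_a = {(x, y, 0) for x in range(0, 4) for y in range(4)}
shape_b = {(x, y, 0) for x in range(2, 6) for y in range(4)}
overlap = dice(shape_a, shape_b)   # 2 * 8 / (16 + 16) = 0.5
```

Because Dice is computed on filled volumes rather than on corresponding points, it does not reward or punish a model for where its landmarks happen to sit, which is exactly the bias the authors set out to remove.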
A two-stage stochastic programming model for the optimal design of distributed energy systems
International Nuclear Information System (INIS)
Zhou, Zhe; Zhang, Jianyun; Liu, Pei; Li, Zheng; Georgiadis, Michael C.; Pistikopoulos, Efstratios N.
2013-01-01
Highlights: ► The optimal design of distributed energy systems under uncertainty is studied. ► A stochastic model is developed using genetic algorithm and Monte Carlo method. ► The proposed system possesses inherent robustness under uncertainty. ► The inherent robustness is due to energy storage facilities and grid connection. -- Abstract: A distributed energy system is a multi-input and multi-output energy system with substantial energy, economic and environmental benefits. The optimal design of such a complex system under energy demand and supply uncertainty poses significant challenges in terms of both modelling and corresponding solution strategies. This paper proposes a two-stage stochastic programming model for the optimal design of distributed energy systems. A two-stage decomposition based solution strategy is used to solve the optimization problem with genetic algorithm performing the search on the first stage variables and a Monte Carlo method dealing with uncertainty in the second stage. The model is applied to the planning of a distributed energy system in a hotel. Detailed computational results are presented and compared with those generated by a deterministic model. The impacts of demand and supply uncertainty on the optimal design of distributed energy systems are systematically investigated using proposed modelling framework and solution approach.
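The two-stage structure described above can be sketched compactly. For brevity the genetic algorithm of the first stage is replaced here by a plain candidate search, and the hotel energy system by a single capacity decision with uncertain demand met either by the design or by expensive grid import; all cost numbers and the demand distribution are illustrative assumptions, not the paper's.

```python
import random

# Hedged sketch of two-stage stochastic design: pick a first-stage capacity,
# then evaluate expected second-stage operating cost by Monte Carlo sampling
# of uncertain demand. (The paper uses a genetic algorithm for stage one;
# a simple candidate search stands in for it here.)

def expected_cost(capacity, rng, n=2000, capex=1.0, grid_price=5.0):
    """First-stage capital cost plus Monte Carlo estimate of import cost."""
    cost = 0.0
    for _ in range(n):
        demand = rng.uniform(0.0, 10.0)           # uncertain demand sample
        shortfall = max(0.0, demand - capacity)   # met by expensive grid import
        cost += grid_price * shortfall
    return capex * capacity + cost / n

def best_capacity(candidates, seed=42):
    # Common random numbers: every candidate is scored on the same demand
    # draws, so the comparison between designs is not blurred by noise.
    return min(candidates, key=lambda c: expected_cost(c, random.Random(seed)))

cap = best_capacity([0, 2, 4, 6, 8, 10])
```

Analytically, the expected cost here is c + (10 - c)^2 / 4, minimized near c = 8; the deterministic counterpart (demand fixed at its mean of 5) would pick a much smaller capacity, which mirrors the paper's point that uncertainty changes the optimal design.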
Gauthier, Benoit; And Others
1997-01-01
Identifies the more representative problem-solving models in environmental education. Suggests the addition of a strategy for defining a problem situation using Soft Systems Methodology to environmental education activities explicitly designed for the development of critical thinking. Contains 45 references. (JRH)
Jackson, C. E., Jr.
1977-01-01
A sample problem library containing 20 problems covering most facets of Nastran Thermal Analyzer modeling is presented. Areas discussed include radiative interchange, arbitrary nonlinear loads, transient temperature and steady-state structural plots, temperature-dependent conductivities, simulated multi-layer insulation, and constraint techniques. The use of the major control options and important DMAP alters is demonstrated.
On one model problem for the reaction-diffusion-advection equation
Davydova, M. A.; Zakharova, S. A.; Levashova, N. T.
2017-09-01
The asymptotic behavior of the solution with boundary layers in the time-independent mathematical model of reaction-diffusion-advection arising when describing the distribution of greenhouse gases in the surface atmospheric layer is studied. On the basis of the asymptotic method of differential inequalities, the existence of a boundary-layer solution and its asymptotic Lyapunov stability as a steady-state solution of the corresponding parabolic problem are proven. One of the results of this work is the determination of the local domain of attraction of a boundary-layer solution.
The Cauchy problem for the Bogolyubov hierarchy of equations. The BCS model
International Nuclear Information System (INIS)
Vidybida, A.K.
1975-01-01
A chain of Bogolyubov's kinetic equations for an infinite quantum system of particles distributed in space with the mean density 1/V and interacting with the BCS model operator is considered as a single abstract equation in some countable normalized space bsup(v) of sequences of integral operators. In this case a unique solution of the Cauchy problem has been obtained for arbitrary initial conditions from bsup(v), stationary solutions of the equation have been derived, and the class of initial conditions that approach the stationary ones is indicated
Directory of Open Access Journals (Sweden)
Penny Masuoka
2010-11-01
Full Text Available Over 35,000 cases of Japanese encephalitis (JE are reported worldwide each year. Culex tritaeniorhynchus is the primary vector of the JE virus, while wading birds are natural reservoirs and swine amplifying hosts. As part of a JE risk analysis, the ecological niche modeling programme, Maxent, was used to develop a predictive model for the distribution of Cx. tritaeniorhynchus in the Republic of Korea, using mosquito collection data, temperature, precipitation, elevation, land cover and the normalized difference vegetation index (NDVI. The resulting probability maps from the model were consistent with the known environmental limitations of the mosquito with low probabilities predicted for forest covered mountains. July minimum temperature and land cover were the most important variables in the model. Elevation, summer NDVI (July-September, precipitation in July, summer minimum temperature (May-August and maximum temperature for fall and winter months also contributed to the model. Comparison of the Cx. tritaeniorhynchus model to the distribution of JE cases in the Republic of Korea from 2001 to 2009 showed that cases among a highly vaccinated Korean population were located in high-probability areas for Cx. tritaeniorhynchus. No recent JE cases were reported from the eastern coastline, where higher probabilities of mosquitoes were predicted, but where only small numbers of pigs are raised. The geographical distribution of reported JE cases corresponded closely with the predicted high-probability areas for Cx. tritaeniorhynchus, making the map a useful tool for health risk analysis that could be used for planning preventive public health measures.
Energy Technology Data Exchange (ETDEWEB)
Chen, Y W [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Zhang, L F [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Huang, J P [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China)
2007-07-20
By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property.
International Nuclear Information System (INIS)
Chen, Y W; Zhang, L F; Huang, J P
2007-01-01
By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property
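The two quantities the model is judged on, characteristic path length and clustering coefficient, can be computed directly. The sketch below builds a plain Watts-Strogatz ring lattice (before rewiring or any degree-distribution extension) and evaluates both metrics in pure Python; the network size and neighbour count are illustrative.

```python
from collections import deque

# Small-world bookkeeping for a Watts-Strogatz ring lattice: n nodes, each
# joined to its k nearest neighbours (k even). This is the starting point the
# paper extends with an explicit degree distribution.

def ring_lattice(n, k):
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k // 2 + 1):
            adj[i].add((i + d) % n)
            adj[i].add((i - d) % n)
    return adj

def clustering(adj):
    """Average local clustering: realised links among a node's neighbours."""
    total = 0.0
    for v, nbrs in adj.items():
        nbrs = sorted(nbrs)
        links = sum(1 for x in range(len(nbrs)) for y in range(x + 1, len(nbrs))
                    if nbrs[y] in adj[nbrs[x]])
        total += links / (len(nbrs) * (len(nbrs) - 1) / 2)
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over all ordered node pairs, via BFS."""
    n, total = len(adj), 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
    return total / (n * (n - 1))

g = ring_lattice(20, 4)
# Rewiring a few edges at random (the Watts-Strogatz step) cuts the path
# length sharply while barely lowering the clustering; the paper's extension
# additionally controls how many neighbours each node gets.
```

For n = 20, k = 4 the lattice values are exactly C = 0.5 and L = 55/19 ≈ 2.89, which gives a deterministic baseline before the stochastic rewiring step.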
Guerrero, J.; Halldin, S.; Xu, C.; Lundin, L.
2011-12-01
Distributed hydrological models are important tools in water management as they account for the spatial variability of the hydrological data, as well as being able to produce spatially distributed outputs. They can directly incorporate and assess potential changes in the characteristics of our basins. A recognized problem for models in general is equifinality, which is only exacerbated for distributed models, which tend to have a large number of parameters. We need to deal with the fundamentally ill-posed nature of the problem that such models force us to face, i.e. a large number of parameters and very few variables that can be used to constrain them, often only the catchment discharge. There is a growing but still limited literature showing how the internal states of a distributed model can be used to calibrate/validate its predictions. In this paper, a distributed version of WASMOD, a conceptual rainfall-runoff model with only three parameters, combined with a routing algorithm based on the high-resolution HydroSHEDS data, was used to simulate the discharge in the Paso La Ceiba basin in Honduras. The parameter space was explored using Monte-Carlo simulations and the region of space containing the parameter sets that were considered behavioral according to two different criteria was delimited using the geometric concept of alpha-shapes. The discharge data from five internal sub-basins was used to aid in the calibration of the model and to answer the following questions: Can this information improve the simulations at the outlet of the catchment, or decrease their uncertainty? Also, after reducing the number of model parameters needing calibration through sensitivity analysis: Is it possible to relate them to basin characteristics? The analysis revealed that in most cases the internal discharge data can be used to reduce the uncertainty in the discharge at the outlet, albeit with little improvement in the overall simulation results.
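The Monte-Carlo exploration with "behavioral" parameter sets described above follows the GLUE pattern: sample parameters, run the model, and keep only the sets whose error against observed discharge stays below a threshold. The sketch below uses a toy two-parameter rainfall-runoff relation q = a * rain + b in place of WASMOD's three-parameter structure; data, threshold, and parameter ranges are all illustrative assumptions.

```python
import random

# GLUE-style behavioral sampling sketch. A toy model q = a * rain + b stands
# in for the distributed rainfall-runoff model; "observed" discharge below was
# generated with a = 0.5, b = 1.0, so behavioral sets should cluster there.

rain = [0.0, 2.0, 5.0, 1.0, 3.0]
obs  = [1.0, 2.0, 3.5, 1.5, 2.5]

def rmse(a, b):
    """Root-mean-square error of the toy model against observed discharge."""
    err = [(a * r + b - q) ** 2 for r, q in zip(rain, obs)]
    return (sum(err) / len(err)) ** 0.5

rng = random.Random(0)
samples = [(rng.uniform(0.0, 1.0), rng.uniform(0.0, 2.0)) for _ in range(5000)]
behavioral = [(a, b) for a, b in samples if rmse(a, b) < 0.1]
# Equifinality in miniature: many distinct (a, b) pairs pass the threshold,
# tracing out the region the paper delimits with alpha-shapes.
```

Adding the internal sub-basin discharge as a second criterion amounts to a second `rmse`-style filter, which shrinks the behavioral region and hence the predictive uncertainty at the outlet.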
Dynamical Analysis of SIR Epidemic Models with Distributed Delay
Directory of Open Access Journals (Sweden)
Wencai Zhao
2013-01-01
Full Text Available SIR epidemic models with distributed delay are proposed. Firstly, the dynamical behaviors of the model without vaccination are studied. Using the Jacobian matrix, the stability of the equilibrium points of the system without vaccination is analyzed. The basic reproduction number R is obtained. In order to study the important role of vaccination in preventing diseases, the model with distributed delay under impulsive vaccination is formulated. And the sufficient conditions of globally asymptotic stability of the “infection-free” periodic solution and the permanence of the model are obtained by using Floquet’s theorem, small-amplitude perturbation techniques, and the comparison theorem. Lastly, numerical simulation is presented to illustrate our main conclusions that vaccination has significant effects on the dynamical behaviors of the model. The results can provide an effective tactical basis for practical infectious disease prevention.
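The threshold role of the basic reproduction number can be seen in a minimal sketch. The code below integrates the plain SIR system (the distributed delay and impulsive vaccination of the paper are omitted) with forward Euler; parameter values are illustrative.

```python
# Minimal SIR sketch (delays and vaccination omitted): forward-Euler
# integration of dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
# dR/dt = gamma*I. The ratio R0 = beta/gamma decides whether an epidemic
# takes off from S ≈ 1.

def simulate(beta, gamma, s0=0.99, i0=0.01, dt=0.01, steps=10000):
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
    return s, i, r

s_hi, i_hi, r_hi = simulate(beta=0.5, gamma=0.1)    # R0 = 5: large outbreak
s_lo, i_lo, r_lo = simulate(beta=0.05, gamma=0.1)   # R0 = 0.5: dies out
```

Note that the three right-hand sides sum to zero, so S + I + R is conserved exactly even by the Euler scheme; vaccination schemes of the kind the paper analyzes work by pushing the effective reproduction number below one.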
Model for the angular distribution of sky radiance
Energy Technology Data Exchange (ETDEWEB)
Hooper, F C; Brunger, A P
1979-08-01
A flexible mathematical model is introduced which describes the radiance of the dome of the sky under various conditions. This three-component continuous distribution (TCCD) model is constructed by the superposition of three separate terms, the isotropic, circumsolar and horizon brightening terms, each representing the contribution of a particular sky characteristic. In use, a particular sky condition is characterized by the values of the coefficients of each of these three terms, defining the distribution of the total diffuse component. The TCCD model has been demonstrated to fit both the normalized clear sky data and the normalized overcast sky data with an RMS error of about ten percent of the mean overall sky radiance. By extension the model could describe variable or partly clouded sky conditions. The model can aid in improving the prediction of solar collector performance.
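The three-term superposition can be sketched as a function of sky position. The decay shapes and coefficient values below are assumptions chosen for illustration, not Hooper and Brunger's fitted forms; the sketch only shows how the isotropic, circumsolar, and horizon-brightening contributions combine.

```python
import math

# Illustrative three-term composition in the spirit of the TCCD model:
# an isotropic floor, a circumsolar term decaying with angular distance from
# the sun, and a horizon-brightening term growing with zenith angle.
# Coefficients a, b, c, d, k characterize a particular sky condition.

def sky_radiance(zenith_deg, angle_from_sun_deg,
                 a=1.0, b=3.0, c=0.1, d=0.5, k=0.05):
    isotropic = a
    circumsolar = b * math.exp(-c * angle_from_sun_deg)
    horizon = d * (1.0 - math.exp(-k * zenith_deg))  # strongest near horizon
    return isotropic + circumsolar + horizon
```

Fitting the model to measured data then amounts to estimating the few coefficients (here a, b, c, d, k) for each sky condition, which is what makes the distribution of the diffuse component compact to describe.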
UV Stellar Distribution Model for the Derivation of Payload
Directory of Open Access Journals (Sweden)
Young-Jun Choi
1999-12-01
Full Text Available We present the results of a model calculation of the stellar distribution in a UV band centered at 2175 Å, corresponding to the well-known bump in the interstellar extinction curve. The stellar distribution model used here is based on the Bahcall-Soneira galaxy model (1980). The source code for the model calculation was designed by Brosch (1991) and modified to investigate various design factors for a UV satellite payload. The model predicts UV stellar densities in different sky directions, and its results are compared with the TD-1 star counts for a number of sky regions. From this study, we can determine the field of view, size of optics, angular resolution, and number of stars in one orbit. These will provide the basic constraints for designing a satellite payload for UV observations.
Bjorck, Ulric
Students' use of distributed Problem-Based Learning (dPBL) in university courses in social economy was studied. A sociocultural framework was used to analyze the actions of students focusing on their mastery of dPBL. The main data material consisted of messages written in an asynchronous conferencing system by 50 Swedish college students in 2…
Koper, Rob
2003-01-01
Please refer to: Koper, R. (2004). Use of the Semantic Web to Solve Some Basic Problems in Education: Increase Flexible, Distributed Lifelong Learning, Decrease Teacher's Workload. Journal of Interactive Media in Education, 2004 (6). Special Issue on the Educational Semantic Web. ISSN:1365-893X
International Nuclear Information System (INIS)
Khalfin, L.A.
1975-01-01
On the basis of the strong energy-momentum conservation law, the induced singularities of mass distributions of unstable particles connected with cascade decay are investigated. A possible solution of the CP problem in the decay of the neutral kaon-antikaon (K0, anti-K0) mesons, based on the mechanism of the induced singularities, is proposed
The classical Stefan problem basic concepts, modelling and analysis
Gupta, SC
2003-01-01
This volume emphasises studies related to classical Stefan problems. The term "Stefan problem" is generally used for heat transfer problems with phase changes, such as from the liquid to the solid. Stefan problems have some characteristics that are typical of them, but certain problems arising in fields such as mathematical physics and engineering also exhibit characteristics similar to them. The term "classical" distinguishes the formulation of these problems from their weak formulation, in which the solution need not possess classical derivatives. Under suitable assumptions, a weak solution could be as good as a classical solution. In hyperbolic Stefan problems, the characteristic features of Stefan problems are present but, unlike in Stefan problems, discontinuous solutions are allowed because of the hyperbolic nature of the heat equation. The numerical solutions of inverse Stefan problems and the analysis of direct Stefan problems are so integrated that it is difficult to discuss one without referring to the other. So no...
Species distribution models of tropical deep-sea snappers.
Directory of Open Access Journals (Sweden)
Céline Gomez
Full Text Available Deep-sea fisheries provide an important source of protein to Pacific Island countries and territories that are highly dependent on fish for food security. However, spatial management of these deep-sea habitats is hindered by insufficient data. We developed species distribution models using spatially limited presence data for the main harvested species in the Western Central Pacific Ocean. We used bathymetric and water temperature data to develop presence-only species distribution models for the commercially exploited deep-sea snappers Etelis Cuvier 1828, Pristipomoides Valenciennes 1830, and Aphareus Cuvier 1830. We evaluated the performance of four different algorithms (CTA, GLM, MARS, and MAXENT within the BIOMOD framework to obtain an ensemble of predicted distributions. We projected these predictions across the Western Central Pacific Ocean to produce maps of potential deep-sea snapper distributions in 32 countries and territories. Depth was consistently the best predictor of presence for all species groups across all models. Bathymetric slope was consistently the poorest predictor. Temperature at depth was a good predictor of presence for GLM only. Model precision was highest for MAXENT and CTA. There were strong regional patterns in predicted distribution of suitable habitat, with the largest areas of suitable habitat (> 35% of the Exclusive Economic Zone) predicted in seven South Pacific countries and territories (Fiji, Matthew & Hunter, Nauru, New Caledonia, Tonga, Vanuatu and Wallis & Futuna. Predicted habitat also varied among species, with the proportion of predicted habitat highest for Aphareus and lowest for Etelis. Despite data paucity, the relationship between deep-sea snapper presence and their environments was sufficiently strong to predict their distribution across a large area of the Pacific Ocean. Our results therefore provide a strong baseline for designing monitoring programs that balance resource exploitation and
Real-time modeling and simulation of distribution feeder and distributed resources
Singh, Pawan
The analysis of the electrical system dates back to the days when analog network analyzers were used. With the advent of digital computers, many programs were written for power-flow and short-circuit analysis for the improvement of the electrical system. Real-time computer simulations can answer many what-if scenarios in the existing or the proposed power system. In this thesis, the standard IEEE 13-Node distribution feeder is developed and validated on the real-time platform OPAL-RT. The concept and the challenges of real-time simulation are studied and addressed. Distributed energy resources, including commonly used distributed generation and storage devices such as a diesel engine, a solar photovoltaic array, and a battery storage system, are modeled and simulated on a real-time platform. A microgrid encompasses a portion of an electric power distribution system located downstream of the distribution substation. Normally, the microgrid operates in parallel with the grid; however, scheduled or forced isolation can take place. In such conditions, the microgrid must have the ability to operate stably and autonomously. The microgrid can operate in grid-connected and islanded modes; both operating modes are studied in the last chapter. Towards the end, a simple microgrid controller for energy management and protection of the microgrid is developed, modeled, and simulated on the real-time platform.
Progress and problems in modelling HTR core dynamics
International Nuclear Information System (INIS)
Scherer, W.; Gerwin, H.
1991-01-01
In recent years greater effort has been made to establish theoretical models for HTR core dynamics. At KFA Juelich the TINTE (TIme dependent Neutronics and TEmperatures) code system has been developed, which is able to model the primary circuit of an HTR plant using modern numerical techniques and taking into account the mutual interference of the relevant physical variables. The HTR core is treated in 2-D R-Z geometry for both nucleonics and thermo-fluid-dynamics. 2-energy-group diffusion theory is used in the nuclear part, including 6 groups of delayed neutron precursors and 14 groups of decay heat producers. Local and non-local heat sources are incorporated, thus simulating gamma ray transport. The thermo-fluid-dynamics module accounts for heterogeneity effects due to the pebble bed structure. Pipes and other components of the primary loop are modelled in 1-D geometry. Forced convection may be treated as well as natural convection in case of blower breakdown accidents. Validation of TINTE has started using the results of a comprehensive experimental program that has been performed at the Arbeitsgemeinschaft Versuchsreaktor GmbH (AVR) high temperature pebble bed reactor at Juelich. In the frame of this program power transients were initiated by varying the helium blower rotational speed or by moving the control rods. In most cases good agreement between experiment and calculation was found. Problems in modelling the special AVR reactor geometry in two dimensions are described and suggestions for overcoming the uncertainties of experimentally determined control rod reactivities are given. The influence of different polynomial expansions of xenon cross sections on long-term transients is discussed together with effects of burnup during that time. Up to now the TINTE code has proven its general applicability to operational core transients of HTR. The effects of water ingress on reactivity, fuel element corrosion and cooling gas properties are now being
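TINTE couples 2-D two-group diffusion to thermo-fluid-dynamics, which is far beyond a short sketch; as a drastically reduced illustration of the delayed-neutron bookkeeping it shares, the code below integrates one-group point kinetics with a single effective precursor group instead of TINTE's six. All parameter values are generic illustrative numbers, not AVR or TINTE data.

```python
# Point-kinetics sketch (one effective delayed-neutron group), forward Euler:
#   dn/dt = ((rho - beta)/Lambda) * n + lam * c
#   dc/dt = (beta/Lambda) * n - lam * c
# Starting from the rho = 0 equilibrium, positive reactivity below beta gives
# the prompt jump followed by slow delayed-neutron-governed growth.

def point_kinetics(rho, n0=1.0, beta=0.0065, lam=0.08, Lambda=1e-3,
                   dt=1e-3, steps=1000):
    n = n0
    c = beta * n0 / (Lambda * lam)   # equilibrium precursor concentration
    for _ in range(steps):
        dn = ((rho - beta) / Lambda) * n + lam * c
        dc = (beta / Lambda) * n - lam * c
        n, c = n + dt * dn, c + dt * dc
    return n

n_flat = point_kinetics(0.0)      # stays at equilibrium
n_up = point_kinetics(0.001)      # sub-prompt-critical power rise
n_down = point_kinetics(-0.001)   # prompt drop, then slow decay
```

The control-rod transients in the AVR validation program probe exactly this dynamics, with the added complication that the reactivity itself comes from the spatially resolved rod and temperature model.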
Fitting the Probability Distribution Functions to Model Particulate Matter Concentrations
International Nuclear Information System (INIS)
El-Shanshoury, Gh.I.
2017-01-01
The main objective of this study is to identify the best probability distribution and the plotting position formula for modeling the concentrations of Total Suspended Particles (TSP) as well as the Particulate Matter with an aerodynamic diameter < 10 μm (PM10). The best distribution provides the estimated probabilities of exceeding the threshold limit given by the Egyptian Air Quality Limit Value (EAQLV), and the number of exceedance days is also estimated. The standard limits of the EAQLV for TSP and PM10 concentrations are 24-h averages of 230 μg/m3 and 70 μg/m3, respectively. Five frequency distribution functions with seven formulae of plotting positions (empirical cumulative distribution functions) are compared to fit the averages of daily TSP and PM10 concentrations in the year 2014 for Ain Sokhna city. The Quantile-Quantile plot (Q-Q plot) is used as a method for assessing how closely a data set fits a particular distribution. A proper probability distribution that represents the TSP and PM10 has been chosen based on the statistical performance indicator values. The results show that the Hosking and Wallis plotting position combined with the Frechet distribution gave the best fit for TSP and PM10 concentrations. The Burr distribution with the same plotting position follows the Frechet distribution. The exceedance probability and days over the EAQLV are predicted using the Frechet distribution. In 2014, the exceedance probability and days for TSP concentrations are 0.052 and 19 days, respectively. Furthermore, the PM10 concentration is found to exceed the threshold limit by 174 days
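Once a Fréchet distribution has been fitted, the exceedance calculation the abstract reports is a direct evaluation of the survival function. The sketch below uses the standard Fréchet (inverse Weibull) CDF with made-up parameter values chosen for illustration, not the paper's fitted ones.

```python
import math

# Exceedance sketch with a fitted Frechet (inverse Weibull) distribution:
#   F(x) = exp(-((x - m)/s)**(-alpha))  for x > m
# The probability of a daily mean exceeding a limit L is 1 - F(L), and the
# expected number of exceedance days in a year is 365 * (1 - F(L)).

def frechet_cdf(x, alpha, m=0.0, s=1.0):
    if x <= m:
        return 0.0
    return math.exp(-((x - m) / s) ** (-alpha))

def exceedance(limit, alpha, m, s, days=365):
    p = 1.0 - frechet_cdf(limit, alpha, m, s)
    return p, p * days

# Illustrative parameters, applied to the TSP limit of 230 μg/m3:
p, d = exceedance(limit=230.0, alpha=2.5, m=0.0, s=80.0)
```

The Q-Q plot and performance-indicator comparison in the paper decide which distribution and plotting position supply the parameters; the exceedance step itself is the same for any fitted CDF.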
STOCHASTIC MODEL OF THE SPIN DISTRIBUTION OF DARK MATTER HALOS
Energy Technology Data Exchange (ETDEWEB)
Kim, Juhan [Center for Advanced Computation, Korea Institute for Advanced Study, Heogiro 85, Seoul 130-722 (Korea, Republic of); Choi, Yun-Young [Department of Astronomy and Space Science, Kyung Hee University, Gyeonggi 446-701 (Korea, Republic of); Kim, Sungsoo S.; Lee, Jeong-Eun [School of Space Research, Kyung Hee University, Gyeonggi 446-701 (Korea, Republic of)
2015-09-15
We employ a stochastic approach to probing the origin of the log-normal distributions of halo spin in N-body simulations. After analyzing spin evolution in halo merging trees, it was found that a spin change can be characterized by a stochastic random walk of angular momentum. Also, spin distributions generated by random walks are fairly consistent with those directly obtained from N-body simulations. We derived a stochastic differential equation from a widely used spin definition and measured the probability distributions of the derived angular momentum change from a massive set of halo merging trees. The roles of major merging and accretion are also statistically analyzed in evolving spin distributions. Several factors (local environment, halo mass, merging mass ratio, and redshift) are found to influence the angular momentum change. The spin distributions generated in the mean-field or void regions tend to shift slightly to a higher spin value compared with simulated spin distributions, which seems to be caused by the correlated random walks. We verified the assumption of randomness in the angular momentum change observed in the N-body simulation and detected several degrees of correlation between walks, which may provide a clue for the discrepancies between the simulated and generated spin distributions in the voids. However, the generated spin distributions in the group and cluster regions successfully match the simulated spin distribution. We also demonstrated that the log-normality of the spin distribution is a natural consequence of the stochastic differential equation of the halo spin, which is well described by the Geometric Brownian Motion model.
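The paper's central claim, that log-normal spin distributions follow naturally from a random walk in angular momentum, can be reproduced in miniature: if each merger or accretion event multiplies the spin by a small random factor, the log of the spin performs an ordinary random walk and the spin itself becomes log-normal. Step counts and kick sizes below are illustrative assumptions.

```python
import math
import random

# Toy version of the stochastic spin picture: multiplicative kicks, i.e. a
# random walk in log spin (geometric Brownian motion without drift).

rng = random.Random(1)

def evolve_spin(steps=400, sigma=0.05):
    """Apply `steps` multiplicative merger/accretion kicks to unit spin."""
    log_j = 0.0
    for _ in range(steps):
        log_j += rng.gauss(0.0, sigma)   # one kick in log space
    return math.exp(log_j)

spins = [evolve_spin() for _ in range(2000)]
logs = [math.log(s) for s in spins]
mean = sum(logs) / len(logs)
var = sum((x - mean) ** 2 for x in logs) / len(logs)
# log spin ~ Normal(0, steps * sigma**2), so var should be close to 1.0:
# the spin distribution is log-normal by construction.
```

The correlated walks the authors detect in void regions would violate the independence assumed here, which is exactly why their generated distributions shift relative to the simulated ones there.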
Integrated modeling of natural and human systems - problems and initiatives
Kessler, H.; Giles, J.; Gunnink, J.; Hughes, A.; Moore, R. V.; Peach, D.
2009-12-01
's system, e.g. the flow of groundwater to an abstraction borehole or the availability of water for irrigation. Particular problems arise when model data from two or more disciplines are incompatible in terms of data formats, scientific concepts or language. Other barriers include the cultural segregation within and between science disciplines as well as impediments to data exchange due to ownership and copyright restrictions. OpenMI and GeoSciML are initiatives that are trying to overcome these barriers by building international communities that share vocabularies and data formats. This paper will give examples of the successful merging of geological and hydrological models from the UK and the Netherlands and will introduce the vision of an open Environmental Modelling Platform which aims to link data, knowledge and concepts seamlessly to numerical process models. Last but not least there is an urgent need to create a Subsurface Management System akin to a Geographic Information System in which all results of subsurface modelling can be visualised and analysed in an integrated manner.
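The model-linking idea behind initiatives such as OpenMI can be sketched as a pull-based component graph (a simplification inspired by the OpenMI pattern; the class and method names here are illustrative, not OpenMI's actual API):

```python
class Component:
    """A model component that pulls its inputs from linked providers
    before advancing, so coupled models stay consistent on demand."""

    def __init__(self, name, step_fn):
        self.name = name
        self.step_fn = step_fn   # maps an input dict to an output dict
        self.providers = {}      # input name -> (component, output name)
        self.outputs = {}

    def link(self, input_name, provider, output_name):
        self.providers[input_name] = (provider, output_name)

    def update(self):
        # Pull each required input from its provider (recursively) first.
        inputs = {}
        for iname, (prov, oname) in self.providers.items():
            if oname not in prov.outputs:
                prov.update()
            inputs[iname] = prov.outputs[oname]
        self.outputs = self.step_fn(inputs)

# Hypothetical example: a geological model supplies transmissivity
# to a groundwater model computing an abstraction-borehole head.
geo = Component("geology", lambda inp: {"transmissivity": 5.0e-4})
gw = Component("groundwater",
               lambda inp: {"head": 12.0 - 1.0e3 * inp["transmissivity"]})
gw.link("transmissivity", geo, "transmissivity")
gw.update()
```

The point of the pattern is that neither model needs to know the other's internal formats, which is exactly the interoperability barrier OpenMI and GeoSciML address with shared interfaces and vocabularies.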
International Nuclear Information System (INIS)
1996-01-01
The Natural Gas Transmission and Distribution Model (NGTDM) of the National Energy Modeling System is developed and maintained by the Energy Information Administration (EIA), Office of Integrated Analysis and Forecasting. This report documents the archived version of the NGTDM that was used to produce the natural gas forecasts presented in the Annual Energy Outlook 1996 (DOE/EIA-0383(96)). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic approach, and provides detail on the methodology employed. Previously this report represented Volume I of a two-volume set. Volume II reported on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues.
Gauge hierarchy problem and a nonscaling SU(5) model
International Nuclear Information System (INIS)
Tajnov, Eh.A.
1987-01-01
It is shown that the gauge hierarchy and Higgs hierarchy problems have a common origin, and a combined solution is proposed in a no-scale supersymmetric SU(5) gauge model related to N=1 supergravity. The grand unification scale M_G arises through dimensional transmutation owing to quantum corrections to the classical potential. In this model the Higgs hierarchy is established automatically by means of the singlet mechanism, which does not require fine tuning of the superpotential parameters. The effective potential for the singlet field X has a minimum at ⟨X⟩ = M_G = 2.2×10^16 GeV. The scale parameter M_G does not depend on the gravitino mass or the initial values of the Yukawa coupling constants, but it does depend on the initial value of the gauge coupling, α^(-1)(0) = 22.7, and on the trilinear supergravity constant A = 1.84 at the scale M = M_p/√(8π) = 2.43×10^18 GeV.
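As an illustrative sketch (the one-loop coefficient c below merely parameterizes the beta function of the relevant sector and is not taken from the paper), dimensional transmutation generates the unification scale exponentially below the reduced Planck scale:

```latex
M_G \sim M \exp\!\left(-\frac{c}{\alpha(M)}\right),
\qquad M = \frac{M_p}{\sqrt{8\pi}} \approx 2.43\times 10^{18}\ \mathrm{GeV},
```

so that with α^(-1) ≈ 22.7 a coefficient c of order 0.2 already yields M_G ≈ 2×10^16 GeV, with no dimensionful input besides M itself; this is why the hierarchy between M_G and M requires no fine tuning.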
A simplified model of saltcake moisture distribution. Letter report
International Nuclear Information System (INIS)
Simmons, C.S.
1995-09-01
This letter report describes the formulation of a simplified model for finding the moisture distribution in a saltcake waste profile that has been stabilized by pumping out the drainable interstitial liquid. The model assumes that capillarity mainly governs the distribution of moisture in the porous saltcake waste. A steady upward flow of moisture, driven by evaporation from the waste surface, is conceptualized for isothermal conditions. To obtain hydraulic parameters for unsaturated conditions, the model is calibrated to the relative saturation distribution measured by neutron probe scans. The model is demonstrated on Tanks 104-BY and 105-TX as examples. A strength of the model is that it identifies the key physical parameters controlling the surface moisture content in a waste profile. Moreover, it can be used to estimate the brine application rate at the waste surface that would raise the moisture content there to a safe level, and so can help in designing a strategy for correcting the moisture conditions in a saltcake waste tank.
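A capillarity-governed steady evaporative profile of this kind can be sketched numerically (the Gardner exponential conductivity model and all parameter values below are illustrative assumptions, not the report's actual parameterization): integrating the steady Richards equation upward from the drained liquid level gives the matric potential, and hence the relative saturation, at each height.

```python
import math

def steady_moisture_profile(q, Ks, alpha, z_max, dz=0.001):
    """Integrate the steady Richards equation upward from the liquid level.

    Gardner model (assumed): K(psi) = Ks * exp(alpha * psi), psi <= 0 [m].
    For a steady upward evaporative flux q [m/s],
        dpsi/dz = -1 - q / K(psi).
    Returns lists of heights z and matric potentials psi.
    """
    z, psi = 0.0, 0.0            # saturated (psi = 0) at the liquid level
    zs, psis = [z], [psi]
    while z < z_max:
        K = Ks * math.exp(alpha * psi)
        psi += (-1.0 - q / K) * dz   # forward Euler step
        z += dz
        zs.append(z)
        psis.append(psi)
    return zs, psis

def relative_saturation(psi, alpha):
    # Gardner exponential retention model: S_e = exp(alpha * psi).
    return math.exp(alpha * psi)
```

Raising the assumed evaporative flux q dries the upper profile further, which is the mechanism that lets a calibrated model of this type back out the brine application rate needed to keep the surface moisture at a safe level.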
Nucleon parton distributions in a light-front quark model
International Nuclear Information System (INIS)
Gutsche, Thomas; Lyubovitskij, Valery E.; Schmidt, Ivan
2017-01-01
Continuing our analysis of parton distributions in the nucleon, we extend our light-front quark model in order to obtain both the helicity-independent and the helicity-dependent parton distributions, analytically matching the results of global fits at the initial scale μ ∼ 1 GeV; they also contain the correct Dokshitzer-Gribov-Lipatov-Altarelli-Parisi evolution. We also calculate the transverse parton, Wigner, and Husimi distributions from a unified point of view, using our light-front wave functions and expressing them in terms of the parton distributions q_v(x) and δq_v(x). Our results are very relevant for the current and future program of the COMPASS experiment at the SPS (CERN). (orig.)
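The kind of valence distribution such global fits are matched to can be illustrated with a standard toy parameterization (the functional form and exponents below are generic illustrations, not the paper's light-front wave-function results): q_v(x) = N x^a (1-x)^b, with N fixed by the valence-number sum rule ∫₀¹ q_v(x) dx = n_q.

```python
from math import gamma

def beta_fn(a, b):
    """Euler Beta function B(a, b) = Γ(a)Γ(b)/Γ(a+b)."""
    return gamma(a) * gamma(b) / gamma(a + b)

def valence_pdf(x, a, b, n_q):
    """Toy valence distribution q_v(x) = N x^a (1-x)^b, normalized so
    that its integral over [0, 1] equals n_q valence quarks, since
    ∫ x^a (1-x)^b dx = B(a+1, b+1)."""
    N = n_q / beta_fn(a + 1.0, b + 1.0)
    return N * x ** a * (1.0 - x) ** b
```

Checking the sum rule numerically (e.g. by a midpoint sum) confirms the normalization; realistic fits then evolve such an input distribution to higher scales with the DGLAP equations.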