WorldWideScience

Sample records for modeling distribution problems

  1. Using Model Checking for Analyzing Distributed Power Control Problems

    DEFF Research Database (Denmark)

    Brihaye, Thomas; Jungers, Marc; Lasaulce, Samson

    2010-01-01

    Model checking (MC) is a formal verification technique that has enjoyed, and continues to enjoy, resounding success in the computer science community. Realizing that the distributed power control (PC) problem can be modeled as a timed game between a given transmitter and its environment, the authors...

  2. A new model for solution of complex distributed constrained problems

    CERN Document Server

    Al-Maqtari, Sami; Babkin, Eduard

    2010-01-01

    In this paper we describe an original computational model for solving different types of Distributed Constraint Satisfaction Problems (DCSP). The proposed model is called Controller-Agents for Constraints Solving (CACS). It is intended for use in distributed constraint solving, a field that has emerged from the integration of two paradigms of different natures: Multi-Agent Systems (MAS) and the Constraint Satisfaction Problem (CSP) paradigm, in which all constraints are traditionally treated centrally as a black box. The model allows constraints to be grouped into a subset that is treated together as a local problem inside a controller. It also handles non-binary constraints easily and directly, so that no translation of constraints into binary ones is needed. This paper presents the implementation outline of a prototype DCSP solver, its usage methodology, and an overview of the application of CACS to timetabling problems.

  3. Using Model Checking for Analyzing Distributed Power Control Problems

    Directory of Open Access Journals (Sweden)

    Thomas Brihaye

    2010-01-01

    Full Text Available Model checking (MC) is a formal verification technique that has enjoyed, and continues to enjoy, resounding success in the computer science community. Realizing that the distributed power control (PC) problem can be modeled as a timed game between a given transmitter and its environment, the authors wanted to know whether this approach can be applied to distributed PC. It turns out that it can be applied successfully and allows one to analyze realistic scenarios, including the case of discrete transmit powers and games with incomplete information. The proposed methodology is as follows. We state some objectives a transmitter-receiver pair would like to reach. The network is modeled by a game in which transmitters are considered as timed automata interacting with each other. The objectives are then translated into timed alternating-time temporal logic formulae, and MC is exploited to determine whether the desired properties are verified and to derive a winning strategy.

  4. Predicting weed problems in maize cropping by species distribution modelling

    Directory of Open Access Journals (Sweden)

    Bürger, Jana

    2014-02-01

    Full Text Available Increasing maize cultivation and changed cropping practices promote the selection of typical maize weeds that may also profit strongly from climate change. Predicting potential weed problems is of high interest for plant production. Within the project KLIFF, experiments were combined with species distribution modelling for this task in the region of Lower Saxony, Germany. For our study, we modelled the ecological and damage niches of nine weed species that are significant and widespread in maize cropping in a number of European countries. Species distribution models describe the ecological niche of a species, i.e. the environmental conditions under which a species can maintain a vital population. It is also possible to estimate a damage niche, i.e. the conditions under which a species causes damage in agricultural crops. For this, we combined occurrence data from European national databases with high-resolution climate, soil and land use data. Models were also projected to simulated climate conditions for the time horizon 2070-2100 in order to estimate climate change effects. Modelling results indicate favourable conditions for typical maize weed occurrence virtually all over the study region, but only a few species are important in maize cropping. This is in good accordance with the findings of an earlier maize weed monitoring. Reactions to changing climate conditions are species-specific: neutral for some species (E. crus-galli), while other species may gain (Polygonum persicaria) or lose (Viola arvensis) large areas of suitable habitat. All species with damage potential under present conditions will remain important in maize cropping, and some more species will gain regional importance (Calystegia sepium, Setaria viridis).

  5. A New Algebraic Modelling Approach to Distributed Problem-Solving in MAS

    Institute of Scientific and Technical Information of China (English)

    帅典勋; 邓志东

    2002-01-01

    This paper is devoted to a new algebraic modelling approach to distributed problem-solving in multi-agent systems (MAS), which is featured by a unified framework for describing and treating social behaviors, social dynamics and social intelligence. A conceptual architecture of algebraic modelling is presented. The algebraic modelling of typical social behaviors, social situations and social dynamics is discussed in the context of distributed problem-solving in MAS. Comparisons and simulations on distributed task allocation and resource assignment in MAS show that the algebraic approach has more advantages than other conventional methods.

  6. The production-distribution problem with order acceptance and package delivery: models and algorithm

    Directory of Open Access Journals (Sweden)

    Khalili Majid

    2016-01-01

    Full Text Available Production planning and distribution are among the most important decisions in the supply chain. Classically, in this problem it is assumed that all orders have to be produced and delivered separately, while in practice an order may be rejected if the cost it brings to the supply chain exceeds its revenue. Moreover, orders can be delivered in a batch to reduce the related costs. This paper considers the production planning and distribution problem with order acceptance and package delivery in order to maximize profit. First, a new mathematical model based on mixed integer linear programming is developed. Using commercial optimization software, the model can optimally solve small and even medium-sized instances. For large instances, a solution method based on the imperialist competitive algorithm is also proposed. The proposed model and algorithm are evaluated using numerical experiments.
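
    The order-acceptance and batch-delivery idea summarized above can be illustrated with a small mixed-integer sketch. This is a minimal illustration with hypothetical data and variable names, not the authors' formulation, and it assumes the open-source PuLP modelling library:

```python
# Hedged sketch of an order-acceptance / batch-delivery MILP (not the paper's exact model).
# Assumes the open-source PuLP library; all data below are hypothetical.
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum

orders = ["o1", "o2", "o3", "o4"]
revenue = {"o1": 120, "o2": 80, "o3": 150, "o4": 60}   # revenue if accepted
prod_cost = {"o1": 50, "o2": 40, "o3": 90, "o4": 45}   # production cost
batches = ["b1", "b2"]                                  # possible delivery batches
batch_cost = {"b1": 30, "b2": 30}                       # fixed cost per dispatched batch
batch_cap = 3                                           # max orders per batch

m = LpProblem("order_acceptance", LpMaximize)
accept = LpVariable.dicts("accept", orders, cat=LpBinary)
assign = LpVariable.dicts("assign", [(o, b) for o in orders for b in batches], cat=LpBinary)
use = LpVariable.dicts("use", batches, cat=LpBinary)

# Profit = revenue of accepted orders - production costs - fixed batch costs
m += lpSum((revenue[o] - prod_cost[o]) * accept[o] for o in orders) \
     - lpSum(batch_cost[b] * use[b] for b in batches)

for o in orders:
    # an accepted order is delivered in exactly one batch
    m += lpSum(assign[(o, b)] for b in batches) == accept[o]
for b in batches:
    # batch capacity, and a batch is paid for only if it carries something
    m += lpSum(assign[(o, b)] for o in orders) <= batch_cap * use[b]

m.solve()
print({o: int(accept[o].value()) for o in orders})
```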

  7. A Data Flow Model to Solve the Data Distribution Changing Problem in Machine Learning

    Directory of Open Access Journals (Sweden)

    Shang Bo-Wen

    2016-01-01

    Full Text Available Continuous prediction is widely used in communities ranging from social to business applications, and machine learning is an important method for this problem. When we use machine learning to make predictions, we use the data in the training set to fit the model and to estimate the distribution of the data in the test set. But when machine learning is used for continuous prediction, new data arrive as time goes by and are used to predict future data, which raises a problem: as the size of the data set increases over time, the distribution changes and much garbage data accumulates in the training set. The garbage data should be removed because they reduce the accuracy of the prediction. The main contribution of this article is to use new data to detect the timeliness of historical data and to remove the garbage data. We build a data flow model that describes how data flow among the test set, training set, validation set and garbage set, and thereby improve the accuracy of prediction. As the data set changes, the best machine learning model also changes. We design a hybrid voting algorithm to fit the data set better: it uses seven machine learning models to predict the same problem and uses the validation set to put different weights on the learning models, giving better models more weight. Experimental results show that, when the distribution of the data set changes over time, our data flow model can remove most of the garbage data and obtain a better result than the traditional method of adding all data to the data set, and that our hybrid voting algorithm has a better prediction result than the average accuracy of the other prediction models.
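
    The hybrid-voting step described above (weighting several learners by their validation accuracy) can be sketched as follows; the models, data and weighting rule are placeholders for illustration, not the authors' seven-model implementation:

```python
# Minimal sketch of validation-weighted hard voting across several classifiers.
# Hypothetical data and models; assumes scikit-learn and numpy are available.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

models = [LogisticRegression(max_iter=1000),
          DecisionTreeClassifier(max_depth=5),
          KNeighborsClassifier(n_neighbors=7)]

weights = []
for mdl in models:
    mdl.fit(X_tr, y_tr)
    # validation accuracy becomes the model's voting weight
    weights.append(mdl.score(X_val, y_val))
weights = np.array(weights) / sum(weights)

def weighted_vote(X_new):
    votes = np.stack([mdl.predict(X_new) for mdl in models])   # (n_models, n_samples)
    # weight each model's 0/1 vote and threshold at 0.5
    return (weights @ votes > 0.5).astype(int)

print("test accuracy:", (weighted_vote(X_te) == y_te).mean())
```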

  8. A heterogeneous fleet vehicle routing model for solving the LPG distribution problem: A case study

    Science.gov (United States)

    Onut, S.; Kamber, M. R.; Altay, G.

    2014-03-01

    The Vehicle Routing Problem (VRP) is an important management problem in the field of distribution and logistics. In VRPs, routes from a distribution point to geographically distributed points are designed at minimum cost while considering customer demands. All points should be visited only once and by one vehicle in one route. The total demand in one route should not exceed the capacity of the vehicle assigned to that route. VRPs vary according to real-life constraints related to vehicle types, number of depots, transportation conditions, time periods, etc. The heterogeneous fleet vehicle routing problem is a kind of VRP in which vehicles have different capacities and costs. There are two types of vehicles in our problem. In this study, real-world data obtained from a company operating in the LPG sector in Turkey are used. An optimization model is established for planning daily routes and the vehicles assigned to them. The model is solved with GAMS and the optimal solution is found in a reasonable time.
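
    For intuition on the kind of capacity-constrained routing described here, a toy nearest-neighbour construction for a two-type fleet is sketched below. All coordinates, demands and capacities are hypothetical; the study itself solves an exact optimization model in GAMS rather than a heuristic:

```python
# Toy nearest-neighbour route construction for a heterogeneous-fleet VRP.
# All data are hypothetical; the cited study uses an exact GAMS model instead.
import math

depot = (0.0, 0.0)
customers = {1: ((2, 3), 4), 2: ((5, 1), 6), 3: ((6, 6), 3),
             4: ((1, 7), 5), 5: ((8, 3), 7)}          # id: (xy, demand)
fleet = [("small", 8), ("small", 8), ("large", 15)]    # (vehicle type, capacity)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

unserved = set(customers)
routes = []
for vtype, cap in fleet:
    route, load, pos = [], 0, depot
    while True:
        # nearest unserved customer that still fits in the vehicle
        feasible = [c for c in unserved if load + customers[c][1] <= cap]
        if not feasible:
            break
        nxt = min(feasible, key=lambda c: dist(pos, customers[c][0]))
        route.append(nxt)
        load += customers[nxt][1]
        pos = customers[nxt][0]
        unserved.remove(nxt)
    routes.append((vtype, route))

print(routes, "unserved:", unserved)
```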

  9. Distributed Storage Allocation Problems

    OpenAIRE

    Leong, Derek; Dimakis, Alexandros G.; Ho, Tracey

    2009-01-01

    We investigate the problem of using several storage nodes to store a data object, subject to an aggregate storage budget or redundancy constraint. It is challenging to find the optimal allocation that maximizes the probability of successful recovery by the data collector because of the large space of possible symmetric and nonsymmetric allocations, and the nonconvexity of the problem. For the special case of probability-1 recovery, we show that the optimal allocation...
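
    For intuition, the recovery probability of a symmetric allocation can be computed directly; the sketch below assumes a unit-size data object, a budget T spread evenly over n nodes, and nodes that are independently accessible with probability p (illustrative parameters, not the paper's setting):

```python
# Recovery probability of a symmetric storage allocation: budget T spread
# equally over n nodes, each node reachable independently with probability p.
# Recovery succeeds if the reachable amount is at least 1 (the object size).
# Hypothetical parameters for illustration only.
import math
from scipy.stats import binom

def symmetric_recovery_prob(T, n, p):
    per_node = T / n
    k_needed = math.ceil(1.0 / per_node)      # nodes needed to reach total >= 1
    return binom.sf(k_needed - 1, n, p)       # P[# reachable nodes >= k_needed]

for n in (2, 3, 4, 6):
    print(n, round(symmetric_recovery_prob(T=2.0, n=n, p=0.6), 4))
```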

  10. Presenting a Bi-objective Integrated Model for Production-Distribution Problem in a Multi-level Supply Chain Network

    Directory of Open Access Journals (Sweden)

    Abolfazl Kazemi

    2015-02-01

    Full Text Available In this study, a bi-objective model for integrated production-distribution planning in a multi-level supply chain network with multiple product types and multiple time periods is presented. A supply chain network including manufacturers, distribution centers, retailers and final customers is considered. The proposed model minimizes the total supply chain costs and the transfer time of products to customers in the chain. The proposed model belongs to the class of integer linear programming problems. The complexity of the problem is large, and in the literature this problem has been shown to be NP-hard. Therefore, two multi-objective meta-heuristic approaches based on the Pareto method, the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) and the Non-dominated Ranking Genetic Algorithm (NRGA), are suggested for solving it. Since the output of meta-heuristic algorithms is highly dependent on the input parameters of the algorithm, the Taguchi method is used to tune the parameters. Finally, in order to evaluate the performance of the proposed solution methods, test problems with different dimensions are produced and the performance of the proposed algorithms on these test problems is analyzed.

  11. The distributed wireless gathering problem

    NARCIS (Netherlands)

    Bonifaci, V.; Korteweg, P.; Spaccamela, A. Marchetti; Stougie, L.

    2011-01-01

    We address the problem of data gathering in a wireless network using multi-hop communication; our main goal is the analysis of simple algorithms suitable for implementation in realistic scenarios. We study the performance of distributed algorithms, which do not use any form of local coordination,

  12. ConceptModeller: a Problem-Oriented Visual SDK for Globally Distributed Enterprise Systems

    OpenAIRE

    Zykov, Sergey V.

    2006-01-01

    The paper describes a problem-oriented approach to software development. The approach is part of an original integrated methodology for the design and implementation of enterprise Internet-based software. All aspects of software development, from theory to implementation, are covered.

  13. Modeling mass transport in aquifers: The distributed-source problem. Research report, July 1988-June 1990

    Energy Technology Data Exchange (ETDEWEB)

    Serrano, S.E.

    1990-08-01

    A new methodology is presented to model the time and space evolution of groundwater variables in a system of aquifers when certain components of the model, such as the geohydrologic information, the boundary conditions, or the magnitude and variability of the sources or physical parameters, are uncertain and defined in stochastic terms. This facilitates a more realistic statistical representation of groundwater flow and groundwater pollution forecasting for either the saturated or the unsaturated zone. The method is based on applications of modern mathematics to the solution of the resulting stochastic transport equations. The procedure exhibits considerable advantages over existing stochastic modeling techniques.

  14. Elucidating the sign problem through noise distributions

    CERN Document Server

    Nicholson, Amy N; Kaplan, David B

    2012-01-01

    Due to the presence of light pions in the theory, lattice QCD at finite densities suffers from issues with noise in both grand canonical and canonical formulations. We study two different formulations of the Nambu-Jona-Lasinio model reduced to 2+1 dimensions at large N, where N is the number of flavors. At finite chemical potential one formulation has a severe sign problem and a fermion correlator which displays a broad probability distribution with small mean. In the other we find no sign problem and a distribution amenable to the cumulant expansion techniques developed in earlier work.

  15. NETWORK MODEL AND ALGORITHM FOR SOLVING PROBLEM PERTAINING TO OPTIMUM DISTRIBUTION OF CAPITAL INVESTMENT WHILE MODERNIZING ENTERPRISES OF HEATING SYSTEMS

    Directory of Open Access Journals (Sweden)

    V. A. Sednin

    2009-01-01

    Full Text Available The paper presents a problem statement, a developed mathematical model and a proposed algorithm for optimizing capital investments in the modernization (introduction of automatic control systems for thermal processes) of large centralized heat supply systems, based on the application of a network model. The formulated problem belongs to the class of combinatorial (discrete) optimization problems. Branch-and-bound or dynamic programming methods are currently applied for solving problems of this type. These methods are not considered universal because they depend heavily on the description of the feasible solution region; as a result, it is not possible to develop universal software for solving every assignment that can be formulated as a combinatorial optimization problem. The presented network model of the investigated problem does not have the above-mentioned disadvantages, and an algorithm that admits a simple programming implementation is proposed for solving the problem.

  16. A model for distribution centers location-routing problem on a multimodal transportation network with a meta-heuristic solving approach

    Science.gov (United States)

    Fazayeli, Saeed; Eydi, Alireza; Kamalabadi, Isa Nakhai

    2017-07-01

    Nowadays, organizations have to compete with different competitors at regional, national and international levels, so they have to improve their competitive capabilities to survive. Undertaking activities on a global scale requires a proper distribution system that can take advantage of different transportation modes. Accordingly, the present paper addresses a location-routing problem on a multimodal transportation network. The introduced problem pursues four objectives simultaneously, which form the main contribution of the paper: determining multimodal routes between the supplier and distribution centers, locating mode-changing facilities, locating distribution centers, and determining product delivery tours from the distribution centers to retailers. An integer linear programming model is presented for the problem, and a genetic algorithm with a new chromosome structure is proposed to solve it. The proposed chromosome structure consists of two different parts for the multimodal transportation and location-routing parts of the model. Based on published data in the literature, two numerical cases of different sizes were generated and solved. Different cost scenarios were also designed to better analyze model and algorithm performance. Results show that the algorithm can effectively solve large-size problems within a reasonable time, whereas GAMS failed to reach an optimal solution even within much longer times.

  17. Distributed Systems: The Hard Problems

    CERN Document Server

    CERN. Geneva

    2015-01-01

    **Nicholas Bellerophon** works as a client services engineer at Basho Technologies, helping customers set up and run distributed systems at scale in the wild. He has also worked in massively multiplayer games, and recently completed a live scalable simulation engine. He is an avid TED-watcher with interests in many areas of the arts, science, and engineering, including of course high-energy physics.

  18. A Framework for Distributed Problem Solving

    Science.gov (United States)

    Leone, Joseph; Shin, Don G.

    1989-03-01

    This work explores a distributed problem solving (DPS) approach, namely the AM/AG model, to cooperative memory recall. The AM/AG model is a hierarchic social system metaphor for DPS based on Mintzberg's model of organizations. At the core of the model are information flow mechanisms, named amplification and aggregation. Amplification is a process of expounding a given task, called an agenda, into a set of subtasks with a magnified degree of specificity and distributing them to multiple processing units downward in the hierarchy. Aggregation is a process of combining the results reported from multiple processing units into a unified view, called a resolution, and promoting the conclusion upward in the hierarchy. The combination of amplification and aggregation can account for a memory recall process which primarily relies on the ability of making associations between vast amounts of related concepts, sorting out the combined results, and promoting the most plausible ones. The amplification process is discussed in detail and an implementation of it is presented. The process is illustrated by an example.

  19. Logistics distribution centers location problem and algorithm under fuzzy environment

    Science.gov (United States)

    Yang, Lixing; Ji, Xiaoyu; Gao, Ziyou; Li, Keping

    2007-11-01

    The distribution centers location problem is concerned with how to select distribution centers from a potential set so that the total relevant cost is minimized. This paper mainly investigates this problem under a fuzzy environment. Consequently, a chance-constrained programming model for the problem is designed and some properties of the model are investigated. A tabu search algorithm, a genetic algorithm and a fuzzy simulation algorithm are integrated to seek an approximately best solution of the model. A numerical example is also given to show the application of the algorithm.

  20. Distributed memory compiler design for sparse problems

    Science.gov (United States)

    Wu, Janet; Saltz, Joel; Berryman, Harry; Hiranandani, Seema

    1991-01-01

    A compiler and runtime support mechanism is described and demonstrated. The methods presented are capable of solving a wide range of sparse and unstructured problems in scientific computing. The compiler takes as input a FORTRAN 77 program enhanced with specifications for distributing data, and the compiler outputs a message passing program that runs on a distributed memory computer. The runtime support for this compiler is a library of primitives designed to efficiently support irregular patterns of distributed array accesses and irregular distributed array partitions. A variety of Intel iPSC/860 performance results obtained through the use of this compiler are presented.

  1. Coordinating complex problem-solving among distributed intelligent agents

    Science.gov (United States)

    Adler, Richard M.

    1992-01-01

    A process-oriented control model is described for distributed problem solving. The model coordinates the transfer and manipulation of information across independent networked applications, both intelligent and conventional. The model was implemented using SOCIAL, a set of object-oriented tools for distributed computing. Complex sequences of distributed tasks are specified in terms of high-level scripts. Scripts are executed by SOCIAL objects called Manager Agents, which realize an intelligent coordination model that routes individual tasks to suitable server applications across the network. These tools are illustrated in a prototype distributed system for decision support of ground operations for NASA's Space Shuttle fleet.

  2. Problems in Cybersemiotic Modelling

    DEFF Research Database (Denmark)

    Brier, Søren

    2014-01-01

    the Peircean theory of the observer as the phaneroscopic foundation. 4. Cobley points out that both models, as they are combined in Cybersemiotics, fail to integrate a theory of interest and power. They are too consensual in their view of communication. This is a general problem in both theories. Still Luhmann... Going from an empirical to an informational paradigm of cognition and communication does not really help us to analyze how living systems manage to make a meaningful interpretation of their environment that is useful for their survival and procreation. Other models are needed. 1. There is von Uexküll's cybernetic-behavioral model, which has the problem of being placed in a Platonic, static worldview. The Umwelt of an animal is a construction limited by its functional realism of survival. It is connected to the species. 2. Thure von Uexküll and Søren Brier both realized that Maturana and Varela...

  3. NETWORK MODEL AND ALGORITHM FOR SOLVING PROBLEM OF OPTIMUM THERMAL LOAD DISTRIBUTION AMONG HEAT-SOURCES OF ENTERPRISE HEATING SYSTEM

    Directory of Open Access Journals (Sweden)

    V. A. Sednin

    2010-01-01

    Full Text Available The paper presents an algorithm for the optimization of thermal load distribution among heat sources in a centralized heat supply system. The algorithm can be used when elaborating plans for the development of heat supply systems in cities and settlements.

  4. Problems in distributions and partial differential equations

    CERN Document Server

    Zuily, C

    1988-01-01

    The aim of this book is to provide a comprehensive introduction to the theory of distributions, by the use of solved problems. Although written for mathematicians, it can also be used by a wider audience, including engineers and physicists. The first six chapters deal with the classical theory, with special emphasis on the concrete aspects. The reader will find many examples of distributions and learn how to work with them. At the beginning of each chapter the relevant theoretical material is briefly recalled. The last chapter is a short introduction to a very wide and important field in analysis.

  5. Integrating packing and distribution problems and optimization through mathematical programming

    Directory of Open Access Journals (Sweden)

    Fabio Miguel

    2016-06-01

    Full Text Available This paper analyzes the integration of two combinatorial problems that frequently arise in production and distribution systems. One is the Bin Packing Problem (BPP), which involves finding an ordering of objects of different volumes to be packed into the minimal number of containers of the same or different sizes. An optimal solution to this NP-hard problem can be approximated by means of meta-heuristic methods. On the other hand, we consider the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW), which is a variant of the Travelling Salesman Problem (again an NP-hard problem) with extra constraints. Here we model these two problems in a single framework and use an evolutionary meta-heuristic to solve them jointly. Furthermore, we use data from a real-world company as a test bed for the method introduced here.
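
    As a concrete example of the packing half of this integrated problem, the classical first-fit-decreasing heuristic for the BPP can be sketched as follows (hypothetical item volumes; the paper itself applies an evolutionary meta-heuristic to the joint packing and routing problem):

```python
# First-fit-decreasing heuristic for the Bin Packing Problem (BPP).
# Hypothetical item volumes and a single container size for illustration.
def first_fit_decreasing(items, capacity):
    bins = []                                   # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:       # first bin with enough room
                b.append(item)
                break
        else:
            bins.append([item])                 # open a new bin
    return bins

items = [0.6, 0.5, 0.5, 0.4, 0.3, 0.3, 0.2, 0.2]
print(first_fit_decreasing(items, capacity=1.0))
```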

  6. Distributed generation systems model

    Energy Technology Data Exchange (ETDEWEB)

    Barklund, C.R.

    1994-12-31

    A slide presentation is given on a distributed generation systems model developed at the Idaho National Engineering Laboratory, and its application to a situation within the Idaho Power Company's service territory. The objectives of the work were to develop a screening model for distributed generation alternatives, to develop a better understanding of distributed generation as a utility resource, and to further INEL's understanding of utility concerns in implementing technological change.

  7. Distributed Algorithms for Optimal Power Flow Problem

    CERN Document Server

    Lam, Albert Y S; Tse, David

    2011-01-01

    Optimal power flow (OPF) is an important problem for power generation and it is in general non-convex. With the employment of renewable energy, it will be desirable if OPF can be solved very efficiently so its solution can be used in real time. With some special network structure, e.g. trees, the problem has been shown to have a zero duality gap and the convex dual problem yields the optimal solution. In this paper, we propose a primal and a dual algorithm to coordinate the smaller subproblems decomposed from the convexified OPF. We can arrange the subproblems to be solved sequentially and cumulatively in a central node or solved in parallel in distributed nodes. We test the algorithms on IEEE radial distribution test feeders, some random tree-structured networks, and the IEEE transmission system benchmarks. Simulation results show that the computation time can be improved dramatically with our algorithms over the centralized approach of solving the problem without decomposition, especially in tree-structured...

  8. A coupled radiative transfer and diffusion approximation model for the solution of the forward problem and the a-priori fluorophore distribution estimation in fluorescence imaging

    Science.gov (United States)

    Gorpas, D.; Yova, D.; Politopoulos, K.

    2009-02-01

    Although fluorescence imaging has been applied to tumour diagnosis since the early 1990s, only in the last few years has it attracted increasing scientific interest, due to advances in the biophotonics field and the combined technological progress of acquisition and computational systems. In addition, there are expectations that fluorescence imaging will be further developed and applied to deep tumour diagnosis in the years to come. However, this evolving field of imaging science still has to overcome important challenges. Among them is the formulation of an accurate forward model for the solution of the reconstruction problem. The scope of this work is to introduce a three-dimensional coupled radiative transfer and diffusion approximation model applicable to fluorescence imaging. Furthermore, the solver incorporates super-ellipsoid models and sophisticated image processing algorithms to additionally provide an a-priori estimate of the fluorophore distribution, information that is very important for the solution of the inverse problem. Simulation experiments have shown that the proposed methodology preserves the accuracy of the radiative transfer equation and the time efficiency of the diffusion approximation, while at the same time showing considerable success in the registration between acquired and simulated images.

  9. Vehicle Routing Problem Models

    Directory of Open Access Journals (Sweden)

    Tonči Carić

    2004-01-01

    Full Text Available The Vehicle Routing Problem cannot always be solved exactly, so that in actual application this problem is solved heuristically. The work describes the concept of several concrete VRP models with simplified initial conditions (all vehicles are of equal capacity and start from a single warehouse), suitable to solve problems in cases with up to 50 users.

  10. A distributed approach to the OPF problem

    Science.gov (United States)

    Erseghe, Tomaso

    2015-12-01

    This paper presents a distributed approach to optimal power flow (OPF) in an electrical network, suitable for application in a future smart grid scenario where access to resource and control is decentralized. The non-convex OPF problem is solved by an augmented Lagrangian method, similar to the widely known ADMM algorithm, with the key distinction that penalty parameters are constantly increased. A (weak) assumption on local solver reliability is required to always ensure convergence. A certificate of convergence to a local optimum is available in the case of bounded penalty parameters. For moderate sized networks (up to 300 nodes, and even in the presence of a severe partition of the network), the approach guarantees a performance very close to the optimum, with an appreciably fast convergence speed. The generality of the approach makes it applicable to any (convex or non-convex) distributed optimization problem in networked form. In the comparison with the literature, mostly focused on convex SDP approximations, the chosen approach guarantees adherence to the reference problem, and it also requires a smaller local computational complexity effort.

  11. Cooperated Bayesian algorithm for distributed scheduling problem

    Institute of Scientific and Technical Information of China (English)

    QIANG Lei; XIAO Tian-yuan

    2006-01-01

    This paper presents a new distributed Bayesian optimization algorithm (BOA) to overcome the efficiency problem when solving NP scheduling problems. The proposed approach integrates BOA into a co-evolutionary schema, which builds up a concurrent computing environment. A new search strategy is also introduced for the local optimization process. It integrates the reinforcement learning (RL) mechanism into the BOA search processes, and then uses the mixed probability information from BOA (post-probability) and RL (pre-probability) to enhance the cooperation between different local controllers, which improves the optimization ability of the algorithm. The experiment shows that the new algorithm does better in both optimization (2.2%) and convergence (11.7%), compared with the classic BOA.

  12. An analytic-geometric model of the effect of spherically distributed injection errors for Galileo and Ulysses spacecraft - The multi-stage problem

    Science.gov (United States)

    Longuski, James M.; Mcronald, Angus D.

    1988-01-01

    In previous work the problem of injecting the Galileo and Ulysses spacecraft from low earth orbit into their respective interplanetary trajectories has been discussed for the single stage (Centaur) vehicle. The central issue, in the event of spherically distributed injection errors, is what happens to the vehicle? The difficulties addressed in this paper involve the multi-stage problem since both Galileo and Ulysses will be utilizing the two-stage IUS system. Ulysses will also include a third stage: the PAM-S. The solution is expressed in terms of probabilities for total percentage of escape, orbit decay and reentry trajectories. Analytic solutions are found for Hill's Equations of Relative Motion (more recently called Clohessy-Wiltshire Equations) for multi-stage injections. These solutions are interpreted geometrically on the injection sphere. The analytic-geometric models compare well with numerical solutions, provide insight into the behavior of trajectories mapped on the injection sphere and simplify the numerical two-dimensional search for trajectory families.

  13. Bounding species distribution models

    Directory of Open Access Journals (Sweden)

    Thomas J. STOHLGREN, Catherine S. JARNEVICH, Wayne E. ESAIAS, Jeffrey T. MORISETTE

    2011-10-01

    Full Text Available Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for “clamping” model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642–647, 2011].
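
    The bounding step described above amounts to clamping each environmental predictor to the range observed in the training data before projecting the model; a minimal sketch with hypothetical predictor arrays (not the authors' CART/Maxent workflow) follows:

```python
# Clamp environmental predictors to their training-data range before projecting
# a species distribution model; hypothetical arrays stand in for GIS layers.
import numpy as np

def bound_predictors(X_project, X_train):
    lo = X_train.min(axis=0)          # per-predictor minimum seen during fitting
    hi = X_train.max(axis=0)          # per-predictor maximum seen during fitting
    return np.clip(X_project, lo, hi)

X_train = np.array([[5.0, 200.0], [9.0, 350.0], [7.5, 280.0]])    # e.g. temperature, rainfall
X_project = np.array([[3.0, 500.0], [8.0, 300.0]])                # extrapolation region
print(bound_predictors(X_project, X_train))
# model.predict(bound_predictors(X_project, X_train)) would then replace
# an unbounded model.predict(X_project).
```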

  14. Bounding Species Distribution Models

    Science.gov (United States)

    Stohlgren, Thomas J.; Jarnevich, Catherine S.; Morisette, Jeffrey T.; Esaias, Wayne E.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].

  15. Algorithms and ordering heuristics for distributed constraint satisfaction problems

    CERN Document Server

    Wahbi, Mohamed

    2013-01-01

    DisCSP (Distributed Constraint Satisfaction Problem) is a general framework for solving distributed problems arising in Distributed Artificial Intelligence. A wide variety of problems in artificial intelligence are solved using the constraint satisfaction problem paradigm. However, there are several applications in multi-agent coordination that are of a distributed nature. In this type of application, the knowledge about the problem, that is, variables and constraints, may be logically or geographically distributed among physically distributed agents. This distribution is mainly due to p

  16. Modeled ground water age distributions

    Science.gov (United States)

    Woolfenden, Linda R.; Ginn, Timothy R.

    2009-01-01

    The age of ground water in any given sample is a distributed quantity representing distributed provenance (in space and time) of the water. Conventional analysis of tracers such as unstable isotopes or anthropogenic chemical species gives discrete or binary measures of the presence of water of a given age. Modeled ground water age distributions provide a continuous measure of contributions from different recharge sources to aquifers. A numerical solution of the ground water age equation of Ginn (1999) was tested both on a hypothetical simplified one-dimensional flow system and under real world conditions. Results from these simulations yield the first continuous distributions of ground water age using this model. Complete age distributions as a function of one and two space dimensions were obtained from both numerical experiments. Simulations in the test problem produced mean ages that were consistent with the expected value at the end of the model domain for all dispersivity values tested, although the mean ages for the two highest dispersivity values deviated slightly from the expected value. Mean ages in the dispersionless case also were consistent with the expected mean ages throughout the physical model domain. Simulations under real world conditions for three dispersivity values resulted in decreasing mean age with increasing dispersivity. This likely is a consequence of an edge effect. However, simulations for all three dispersivity values tested were mass balanced and stable demonstrating that the solution of the ground water age equation can provide estimates of water mass density distributions over age under real world conditions.

  17. A Generalized Cauchy Distribution Framework for Problems Requiring Robust Behavior

    Science.gov (United States)

    Carrillo, Rafael E.; Aysal, Tuncer C.; Barner, Kenneth E.

    2010-12-01

    Statistical modeling is at the heart of many engineering problems. The importance of statistical modeling emanates not only from the desire to accurately characterize stochastic events, but also from the fact that distributions are the central models utilized to derive sample processing theories and methods. The generalized Cauchy distribution (GCD) family has a closed-form pdf expression across the whole family as well as algebraic tails, which makes it suitable for modeling many real-life impulsive processes. This paper develops a GCD theory-based approach that allows challenging problems to be formulated in a robust fashion. Notably, the proposed framework subsumes generalized Gaussian distribution (GGD) family-based developments, thereby guaranteeing performance improvements over traditional GCD-based problem formulation techniques. This robust framework can be adapted to a variety of applications in signal processing. As examples, we formulate four practical applications under this framework: (1) filtering for power line communications, (2) estimation in sensor networks with noisy channels, (3) reconstruction methods for compressed sensing, and (4) fuzzy clustering.
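
    For reference, one commonly cited parameterization of the generalized Cauchy density with algebraic tails is given below; the notation here is an assumption and may differ from the paper's:

```latex
% One common parameterization of the generalized Cauchy density (GCD);
% for p = 2 it reduces to the ordinary Cauchy density, and the tails decay as |x|^{-2}.
f(x) \;=\; \frac{a\,\sigma}{\bigl(\sigma^{p} + |x-\mu|^{p}\bigr)^{2/p}},
\qquad
a \;=\; \frac{p\,\Gamma(2/p)}{2\,\Gamma(1/p)^{2}} .
```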

  18. A Generalized Cauchy Distribution Framework for Problems Requiring Robust Behavior

    Directory of Open Access Journals (Sweden)

    Carrillo, Rafael E.

    2010-01-01

    Full Text Available Statistical modeling is at the heart of many engineering problems. The importance of statistical modeling emanates not only from the desire to accurately characterize stochastic events, but also from the fact that distributions are the central models utilized to derive sample processing theories and methods. The generalized Cauchy distribution (GCD) family has a closed-form pdf expression across the whole family as well as algebraic tails, which makes it suitable for modeling many real-life impulsive processes. This paper develops a GCD theory-based approach that allows challenging problems to be formulated in a robust fashion. Notably, the proposed framework subsumes generalized Gaussian distribution (GGD) family-based developments, thereby guaranteeing performance improvements over traditional GCD-based problem formulation techniques. This robust framework can be adapted to a variety of applications in signal processing. As examples, we formulate four practical applications under this framework: (1) filtering for power line communications, (2) estimation in sensor networks with noisy channels, (3) reconstruction methods for compressed sensing, and (4) fuzzy clustering.

  19. Bounding species distribution models

    Institute of Scientific and Technical Information of China (English)

    Thomas J. STOHLGREN; Catherine S. JARNEVICH; Wayne E. ESAIAS; Jeffrey T. MORISETTE

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for “clamping” model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].

  20. Distributed Parameter Modelling Applications

    DEFF Research Database (Denmark)

    2011-01-01

    Here the issue of distributed parameter models is addressed. Spatial variations as well as time are considered important. Several applications for both steady state and dynamic applications are given. These relate to the processing of oil shale, the granulation of industrial fertilizers and the ... sands processing. The fertilizer granulation model considers the dynamics of MAP-DAP (mono and diammonium phosphates) production within an industrial granulator, which involves complex crystallisation, chemical reaction and particle growth, captured through population balances. A final example considers...

  1. Optimizing Distribution Problems using WinQSB Software

    Directory of Open Access Journals (Sweden)

    Daniel Mihai Amariei

    2015-07-01

    Full Text Available In the present paper we present a distribution problem solved using the Network Modeling module of the WinQSB software, in which 5 athletes must each be assigned the optimal event, as a function of the times obtained, so as to obtain the maximum output from the athletes. We also analyze the case of an injury to 2 athletes, where the coupling of the remaining 3 athletes with the 5 athletic events that yields the maximum output is found using the Hungarian algorithm.
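
    The athlete-to-event assignment described here is a standard linear assignment problem; a minimal sketch using SciPy's Hungarian-algorithm solver is shown below, with a hypothetical time matrix rather than the paper's data:

```python
# Assign 5 athletes to 5 events so that total time is minimized,
# using SciPy's Hungarian-algorithm solver. The time matrix is hypothetical.
import numpy as np
from scipy.optimize import linear_sum_assignment

# times[i, j] = recorded time of athlete i in event j (smaller is better)
times = np.array([[12.1, 11.8, 13.0, 12.5, 11.9],
                  [11.7, 12.4, 12.9, 12.2, 12.0],
                  [12.8, 12.6, 11.9, 12.1, 12.7],
                  [12.0, 11.9, 12.4, 11.8, 12.3],
                  [12.5, 12.2, 12.6, 12.9, 11.7]])

rows, cols = linear_sum_assignment(times)
for athlete, event in zip(rows, cols):
    print(f"athlete {athlete + 1} -> event {event + 1}")
print("total time:", times[rows, cols].sum())
```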

  2. Multiscale Modelling and Inverse Problems

    CERN Document Server

    Nolen, J; Stuart, A M

    2010-01-01

    The need to blend observational data and mathematical models arises in many applications and leads naturally to inverse problems. Parameters appearing in the model, such as constitutive tensors, initial conditions, boundary conditions, and forcing can be estimated on the basis of observed data. The resulting inverse problems are often ill-posed and some form of regularization is required. These notes discuss parameter estimation in situations where the unknown parameters vary across multiple scales. We illustrate the main ideas using a simple model for groundwater flow. We will highlight various approaches to regularization for inverse problems, including Tikhonov and Bayesian methods. We illustrate three ideas that arise when considering inverse problems in the multiscale context. The first idea is that the choice of space or set in which to seek the solution to the inverse problem is intimately related to whether a homogenized or full multiscale solution is required. This is a choice of regularization. The ...

  3. Models of distributive justice.

    Science.gov (United States)

    Wolff, Jonathan

    2007-01-01

    Philosophical disagreement about justice rages over at least two questions. The most immediate is a substantial question, concerning the conditions under which particular distributive arrangements can be said to be just or unjust. The second, deeper, question concerns the nature of justice itself. What is justice? Here we can distinguish three views. First, justice as mutual advantage sees justice as essentially a matter of the outcome of a bargain. There are times when two parties can both be better off by making some sort of agreement. Justice, on this view, concerns the distribution of the benefits and burdens of the agreement. Second, justice as reciprocity takes a different approach, looking not at bargaining but at the idea of a fair return or just price, attempting to capture the idea of justice as equal exchange. Finally justice as impartiality sees justice as 'taking the other person's point of view' asking 'how would you like it if it happened to you?' Each model has significantly different consequences for the question of when issues of justice arise and how they should be settled. It is interesting to consider whether any of these models of justice could regulate behaviour between non-human animals.

  4. Mathematical problems in meteorological modelling

    CERN Document Server

    Csomós, Petra; Faragó, István; Horányi, András; Szépszó, Gabriella

    2016-01-01

    This book deals with mathematical problems arising in the context of meteorological modelling. It gathers and presents some of the most interesting and important issues from the interaction of mathematics and meteorology. It is unique in that it features contributions on topics like data assimilation, ensemble prediction, numerical methods, and transport modelling, from both mathematical and meteorological perspectives. The derivation and solution of all kinds of numerical prediction models require the application of results from various mathematical fields. The present volume is divided into three parts, moving from mathematical and numerical problems through air quality modelling, to advanced applications in data assimilation and probabilistic forecasting. The book arose from the workshop “Mathematical Problems in Meteorological Modelling” held in Budapest in May 2014 and organized by the ECMI Special Interest Group on Numerical Weather Prediction. Its main objective is to highlight the beauty of the de...

  5. [Models and computation methods of EEG forward problem].

    Science.gov (United States)

    Zhang, Yinghcun; Zou, Ling; Zhu, Shanan

    2004-04-01

    The research of EEG is of great significance and clinical importance in studying the cognitive function and neural activity of the brain. There are two key problems in the field of EEG: the EEG forward problem and the EEG inverse problem. The EEG forward problem, which aims to obtain the distribution of the scalp potential due to a known current distribution in the brain, is the basis of the EEG inverse problem. Generally, the EEG inverse problem depends on the accuracy and efficiency of the computational method for the EEG forward problem. This paper reviews the head models and corresponding computational methods for the EEG forward problem studied in recent years.

  6. The Distributed Assembly Parallel Machine Scheduling Problem with eligibility constraints.

    Directory of Open Access Journals (Sweden)

    Sara Hatami

    2015-01-01

    Full Text Available In this paper we jointly consider realistic scheduling extensions. First, we study the distributed unrelated parallel machine problem, in which there is a set of identical factories with parallel machines in a production stage. Jobs have to be assigned to factories and to machines. Additionally, there is an assembly stage with a single assembly machine, where jobs finished at the manufacturing stage are assembled into final products. These two joint features are referred to as the Distributed Assembly Parallel Machine Scheduling Problem, or DAPMSP. The objective is to minimize the makespan in the assembly stage. Due to technological constraints, machines cannot be left empty and some jobs can be processed only in certain factories. We propose a mathematical model and two high-performing heuristics. The model is tested with two state-of-the-art solvers and, together with the heuristics, 2220 instances are solved in comprehensive computational experiments. Results show that the proposed model is able to solve moderately sized instances and that one of the heuristics is fast, giving close-to-optimal solutions in less than half a second in the worst case.

  7. Higher-order transformation and the distributed data problem.

    Energy Technology Data Exchange (ETDEWEB)

    Winter, Victor Lono (University of Nebraska at Omaha, Omaha, NE); Subramaniam, Mahadevan (University of Nebraska at Omaha, Omaha, NE)

    2003-12-01

    The distributed data problem is characterized by the desire to bring together semantically related data from syntactically unrelated portions of a term. Two strategic combinators, dynamic and transient, are introduced in the context of a classical strategic programming framework. The impact of the resulting system on instances of the distributed data problem is then explored.

  8. Graphical Models for Bandit Problems

    CERN Document Server

    Amin, Kareem; Syed, Umar

    2012-01-01

    We introduce a rich class of graphical models for multi-armed bandit problems that permit both the state or context space and the action space to be very large, yet succinctly specify the payoffs for any context-action pair. Our main result is an algorithm for such models whose regret is bounded by the number of parameters and whose running time depends only on the treewidth of the graph substructure induced by the action space.

  9. A model for routing problem in quay management problem

    Science.gov (United States)

    Zirour, Mourad; Oughalime, Ahmed; Liong, Choong-Yeun; Ismail, Wan Rosmanira; Omar, Khairuddin

    2014-06-01

    The Quadratic Assignment Problem (QAP), like the Vehicle Routing Problem, is one of those optimization problems that have interested many researchers over the last decades. The Quay Management Problem is a specific problem that can be presented as a QAP; it involves a double assignment of customers and products to loading positions using lifting trucks. This study focuses on the routing problem that arises while delivering the customers' demands. In this problem, lifting trucks route around the storage sections to collect the products and then deliver them to the customers, who are assigned to specific loading positions. The objective of minimizing the residence time of each customer is sought. This paper presents the problem and the proposed model.

  10. Distributed Prognostics based on Structural Model Decomposition

    Science.gov (United States)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.

    2014-01-01

    Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index Terms: model-based prognostics, distributed prognostics, structural model decomposition.

  11. Fastest Distributed Consensus Averaging Problem on Chain of Rhombus Networks

    CERN Document Server

    Jafarizadeh, Saber

    2010-01-01

    Distributed consensus has emerged as one of the most important and primary problems in the context of distributed computation, and it has received renewed interest in the field of sensor networks (due to recent advances in wireless communications), where solving the fastest distributed consensus averaging problem over networks with different topologies is one of the primary issues. In this work, an analytical solution for the problem of the fastest distributed consensus averaging algorithm over a chain of rhombus networks is provided. The solution procedure consists of stratification of the associated connectivity graph of the network and semidefinite programming, in particular solving the slackness conditions, where the optimal weights are obtained by inductive comparison of the characteristic polynomials initiated by the slackness conditions. The characteristic polynomial, together with its roots corresponding to the eigenvalues of the weight matrix, including the SLEM of the network, is also determined inductively. Moreover t...
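
    The underlying iteration that this line of work optimizes is distributed averaging, x(t+1) = W x(t), whose convergence rate is governed by the second-largest eigenvalue modulus (SLEM) of the weight matrix W. The sketch below builds a small chain graph with Metropolis weights for illustration; it is not the paper's chain-of-rhombus construction or its optimal weights:

```python
# Distributed consensus averaging x(t+1) = W x(t) on a small chain graph,
# with Metropolis weights; the SLEM of W controls the convergence rate.
# Illustrative example only (not the paper's chain-of-rhombus topology).
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]          # a 5-node chain
n = 5
deg = np.zeros(n)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0 / (1.0 + max(deg[i], deg[j]))   # Metropolis rule
np.fill_diagonal(W, 1.0 - W.sum(axis=1))                     # rows sum to one

x0 = np.array([3.0, 1.0, 4.0, 1.0, 5.0])          # initial node values
x = x0.copy()
for _ in range(200):
    x = W @ x                                     # one averaging round per iteration
print("consensus value:", x.round(4), "true average:", x0.mean())

eigs = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
print("SLEM:", round(eigs[1], 4))                 # second-largest eigenvalue modulus
```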

  12. Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution

    Energy Technology Data Exchange (ETDEWEB)

    Hamadameen, Abdulqader Othman [Optimization, Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia); Zainuddin, Zaitul Marlizawati [Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia)

    2014-06-19

    This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined through fuzzy assertions by ambiguous experts. The problem formulation is presented, and the two solution strategies are the fuzzy transformation via a ranking function and the stochastic transformation when the α-cut technique and linguistic hedges are used in the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.

  13. Numerical models for differential problems

    CERN Document Server

    Quarteroni, Alfio

    2014-01-01

    In this text, we introduce the basic concepts for the numerical modelling of partial differential equations. We consider the classical elliptic, parabolic and hyperbolic linear equations, but also the diffusion, transport, and Navier-Stokes equations, as well as equations representing conservation laws, saddle-point problems and optimal control problems. Furthermore, we provide numerous physical examples which underline such equations. We then analyze numerical solution methods based on finite elements, finite differences, finite volumes, spectral methods and domain decomposition methods, and reduced basis methods. In particular, we discuss the algorithmic and computer implementation aspects and provide a number of easy-to-use programs. The text does not require any previous advanced mathematical knowledge of partial differential equations: the absolutely essential concepts are reported in a preliminary chapter. It is therefore suitable for students of bachelor and master courses in scientific disciplines, an...

  14. An ant colony optimization heuristic for an integrated production and distribution scheduling problem

    Science.gov (United States)

    Chang, Yung-Chia; Li, Vincent C.; Chiang, Chia-Ju

    2014-04-01

    Make-to-order or direct-order business models that require close interaction between production and distribution activities have been adopted by many enterprises in order to be competitive in demanding markets. This article considers an integrated production and distribution scheduling problem in which jobs are first processed by one of the unrelated parallel machines and then distributed to corresponding customers by capacitated vehicles without intermediate inventory. The objective is to find a joint production and distribution schedule so that the weighted sum of total weighted job delivery time and the total distribution cost is minimized. This article presents a mathematical model for describing the problem and designs an algorithm using ant colony optimization. Computational experiments illustrate that the algorithm developed is capable of generating near-optimal solutions. The computational results also demonstrate the value of integrating production and distribution in the model for the studied problem.

  15. Distributed Interior-point Method for Loosely Coupled Problems

    DEFF Research Database (Denmark)

    Pakazad, Sina Khoshfetrat; Hansson, Anders; Andersen, Martin Skovgaard

    2014-01-01

    ’s method and utilizes proximal splitting to distribute the computations for calculating the Newton step at each iteration. A combination of this algorithm and the interior-point method is then used to introduce a distributed algorithm for solving constrained loosely coupled problems. We also provide...

  16. A possible solution to the solar neutrino problem: Relativistic corrections to the Maxwellian velocity distribution

    OpenAIRE

    2001-01-01

    Relativistic corrections to the Maxwellian velocity distribution are needed for standard solar models. A relativistic equilibrium velocity distribution, if adopted in standard solar models, will lower the solar neutrino fluxes and change the solar neutrino energy spectra while leaving the solar sound speeds unchanged. It is possibly a solution to the solar neutrino problem.

  17. Selecting radiotherapy dose distributions by means of constrained optimization problems.

    Science.gov (United States)

    Alfonso, J C L; Buttazzo, G; García-Archilla, B; Herrero, M A; Núñez, L

    2014-05-01

    The main steps in planning radiotherapy consist in selecting, for any patient diagnosed with a solid tumor, (i) a prescribed radiation dose on the tumor, (ii) bounds on the radiation side effects on nearby organs at risk and (iii) a fractionation scheme specifying the number and frequency of therapeutic sessions during treatment. The goal of any radiotherapy treatment is to deliver on the tumor a radiation dose as close as possible to that selected in (i), while at the same time conforming to the constraints prescribed in (ii). To this day, considerable uncertainties remain concerning the best manner in which such issues should be addressed. In particular, the choice of a prescription radiation dose is mostly based on clinical experience accumulated for the particular type of tumor considered, without any direct reference to quantitative radiobiological assessment. Interestingly, mathematical models for the effect of radiation on biological matter have existed for quite some time, and are widely acknowledged by clinicians. However, the difficulty of obtaining accurate in vivo measurements of the radiobiological parameters involved has severely restricted their direct application in current clinical practice. In this work, we first propose a mathematical model to select radiation dose distributions as solutions (minimizers) of suitable variational problems, under the assumption that the key radiobiological parameters for the tumors and organs at risk involved are known. Second, by analyzing the dependence of such solutions on the parameters involved, we then discuss the manner in which the use of those minimizers can improve current decision-making processes to select clinical dosimetries when (as is generally the case) only partial information on model radiosensitivity parameters is available. A comparison of the proposed radiation dose distributions with those actually delivered in a number of clinical cases strongly suggests that solutions of our mathematical model can be

  18. An Optimal Design Model for New Water Distribution Networks in ...

    African Journals Online (AJOL)

    An Optimal Design Model for New Water Distribution Networks in Kigali City. ... a Linear Programming Problem (LPP) which involves the design of a new network of water distribution considering the cost in the form of unit price ...

  19. Integrated Production-Distribution Scheduling Problem with Multiple Independent Manufacturers

    Directory of Open Access Journals (Sweden)

    Jianhong Hao

    2015-01-01

    Full Text Available We consider the nonstandard parts supply chain with a public service platform for machinery integration in China. The platform assigns orders placed by a machinery enterprise to multiple independent manufacturers who produce nonstandard parts, and makes a production schedule and a batch delivery schedule for each manufacturer in a coordinated manner. Each manufacturer has only one plant with parallel machines and is located far away from the other manufacturers. Orders are first processed at the plants and then directly shipped from the plants to the enterprise so as to be finished before a given deadline. We study the above integrated production-distribution scheduling problem with multiple manufacturers to maximize a weighted sum of the profits of the manufacturers, under the constraints that all orders are finished before the deadline and the profit of each manufacturer is not negative. Based on an analysis of the optimality conditions, we formulate the problem as a mixed integer programming model and use CPLEX to solve it.

  20. Hydronic distribution system computer model

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, J.W.; Strasser, J.J.

    1994-10-01

    A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley Laboratory (LBL). This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.

  1. A DISTRIBUTED HYPERMAP MODEL FOR INTERNET GIS

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The rapid development of Internet technology makes it possible to integrate GIS with the Internet, forming Internet GIS. Internet GIS is based on a distributed client/server architecture and TCP/IP & IIOP. When constructing and designing Internet GIS, we face the problem of how to express information units of Internet GIS. In order to solve this problem, this paper presents a distributed hypermap model for Internet GIS. This model provides a solution to organize and manage Internet GIS information units. It also illustrates relations between two information units and in an internal information unit, both on clients and servers. On the basis of this model, the paper contributes to the expressions of hypermap relations and hypermap operations. The usage of this model is shown in the implementation of a prototype system.

  2. The problem of distribution of resources of the insurance company as the problem of dynamic programming

    Directory of Open Access Journals (Sweden)

    Elena P. Rostova

    2011-05-01

    Full Text Available The article considers the problem of distributing the resources of an insurance company, together with its mathematical formulation. The resources are distributed among the insurance services over a given number of time intervals.

  3. Modelling the nuclear parton distributions

    CERN Document Server

    Kulagin, S A

    2016-01-01

    We review a semi-microscopic model of nuclear parton distributions, which takes into account a number of nuclear effects including Fermi motion and nuclear binding, nuclear meson-exchange currents and off-shell corrections to bound nucleon distributions as well as nuclear shadowing effect. We also discuss applications of the model to the lepton-nuclear deep-inelastic scattering, Drell-Yan process and neutrino total cross sections.

  4. Distribution system modeling and analysis

    CERN Document Server

    Kersting, William H

    2002-01-01

    For decades, distribution engineers did not have the sophisticated tools developed for analyzing transmission systems-often they had only their instincts. Things have changed, and we now have computer programs that allow engineers to simulate, analyze, and optimize distribution systems. Powerful as these programs are, however, without a real understanding of the operating characteristics of a distribution system, engineers using the programs can easily make serious errors in their designs and operating procedures.Distribution System Modeling and Analysis helps prevent those errors. It gives re

  5. A distribution-free newsvendor problem with nonlinear holding cost

    Science.gov (United States)

    Pal, Brojeswar; Sankar Sana, Shib; Chaudhuri, Kripasindhu

    2015-05-01

    In this paper, we analyse a single-period newsvendor model to determine the optimal order quantity when customer balking occurs. This scenario arises when customers are reluctant to buy a product for various reasons, for example because product quality is declining or the product is no longer as good as fresh once the inventory falls below a threshold level. The model is investigated by assuming that the holding cost function depends on the order quantity and that the inventory level at which customer balking occurs depends on the holding cost. The model allows partial backlogging and permits part of the backlogged shortages to turn into lost sales. We develop the model without assuming any specific distributional form of demand, using only the mean and the variance of the demand distribution. Finally, we illustrate the model with numerical examples and compare our distribution-free model with a specific distributional form of demand.
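
    For orientation, the classical distribution-free newsvendor baseline (Scarf's rule, as popularized by Gallego and Moon) orders against the worst-case demand distribution with given mean and variance; the sketch below implements only that baseline and omits the balking, nonlinear holding cost and partial backlogging features studied in the paper.

```python
import math

def scarf_order_quantity(mu, sigma, c, p):
    """Scarf's distribution-free order quantity for the classical newsvendor:
    unit purchase cost c, unit selling price p > c, demand with mean mu and
    standard deviation sigma (no balking, no holding cost, lost sales only)."""
    r = (p - c) / c                      # profit-to-cost ratio
    return mu + (sigma / 2.0) * (math.sqrt(r) - math.sqrt(1.0 / r))

# Example: mean demand 100, std 30, unit cost 4, unit price 10
print(scarf_order_quantity(100.0, 30.0, 4.0, 10.0))
```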

  6. Application of a general risk management model to portfolio optimization problems with elliptical distributed returns for risk neutral and risk averse decision makers.

    NARCIS (Netherlands)

    B. Kaynar; S.I. Birbil (Ilker); J.B.G. Frenk (Hans)

    2007-01-01

    We discuss a class of risk measures for portfolio optimization with linear loss functions, where the random returns of financial instruments have a multivariate elliptical distribution. Under this setting we pay special attention to two risk measures, Value-at-Risk and Conditional-Value-

  7. SAMICS marketing and distribution model

    Science.gov (United States)

    1978-01-01

    SAMICS (Solar Array Manufacturing Industry Costing Standards) was formulated as a computer simulation model. Given a proper description of the manufacturing technology as input, this model computes the manufacturing price of solar arrays for a broad range of production levels. This report presents a model for computing the associated marketing and distribution costs, the end point of the model being the loading dock of the final manufacturer.

  8. Organizational problems of Water Distribution in Khorezm, Uzbekistan

    NARCIS (Netherlands)

    Wegerich, K.

    2004-01-01

    The paper addresses problems of water resource management on the district and provincial level in the Khorezm province, Uzbekistan. The district water organizations are responsible for equitable water distribution to the agricultural users. These organizations do not have the necessary logistical

  9. Finite element modeling of stress distributions and problems for multi-slice longwall mining in Bangladesh, with special reference to the Barapukuria coal mine

    Energy Technology Data Exchange (ETDEWEB)

    Islam, Md. Rafiqul; Hayashi, Daigoro [Simulation Tectonics Laboratory, Department of Physics and Earth Sciences, University of the Ryukyus, Okinawa, 903-0213 (Japan); Kamruzzaman, A.B.M. [Geology Division, Barapukuria Coal Mining Company Limited, Chowhati, Parbatipur, Dinajpur (Bangladesh)

    2009-04-01

    This paper deals with current coal mining operations under a mega-aquifer in NW Bangladesh, and presents a case study of underground mining in Barapukuria. The study uses numerical analyses to evaluate stress redistribution, strata failure, and water inflow enhancements that result from these coal extraction operations. A total of three models (A, B, and C) are presented in this study. Two-dimensional numerical modeling was performed to analyze the deformation and failure behavior of rock elements for two different models (A and B). For model A, we used an elastic finite element software package considering a Mohr-Coulomb failure criterion. For model B, we used boundary element method (BEM). The first two models were applied to determine the stress patterns. Model A provides the tectonic stress pattern of the basin, whereas model B represents the mining-induced stress field. The third model is a schematic model. The results of model A show that tensional failure of rock elements is concentrated in the Gondwana coal sequences as well as within the Eastern Boundary Fault (EBF) and its surroundings. Failure occurs in the middle to lower part of the model, and the magnitude of tensional stress in the shallow part is much greater than in the deeper part. Contours of τ_max magnitudes are attributed to up-bending of the overburden, which would create numerous upward propagating fissures/fractures. The results of model B show that fracture propagation would be about 240 m upward for single-slice (height 3 m) mining extraction. From the contours of mean stress magnitudes, it is observed that the high range of fracture propagation increased upward for multi-slice extraction of coal. It is apparent from the fracture heights that large amounts of caving would occur towards the roof due to the multi-slice extraction of coal, and finally would be linked with the water-bearing Dupi Tila Formation. If this happened, it would ultimately cause a major water inflow hazard in

  10. Obtaining sparse distributions in 2D inverse problems

    Science.gov (United States)

    Reci, A.; Sederman, A. J.; Gladden, L. F.

    2017-08-01

    The mathematics of inverse problems has relevance across numerous estimation problems in science and engineering. L1 regularization has attracted recent attention for reconstructing system properties in the case of sparse inverse problems, i.e., when the true property sought is not adequately described by a continuous distribution, in particular in Compressed Sensing image reconstruction. In this work, we focus on the application of L1 regularization to a class of inverse problems: relaxation-relaxation (T1-T2) and diffusion-relaxation (D-T2) correlation experiments in NMR, which have found widespread application in a number of areas including probing surface interactions in catalysis and characterizing fluid composition and pore structures in rocks. We introduce a robust algorithm for solving the L1 regularization problem and provide a guide to implementing it, including the choice of the amount of regularization used and the assignment of error estimates. We then show experimentally that L1 regularization has significant advantages over both the Non-Negative Least Squares (NNLS) algorithm and Tikhonov regularization. It is shown that the L1 regularization algorithm stably recovers a distribution at signal-to-noise ratios where direct spectroscopic discrimination is impossible; hence measurement of chemical composition within porous media, such as catalysts or rocks, is possible while remaining stable to high levels of noise.
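
    As a rough illustration of the kind of optimization involved (not the authors' specific algorithm), the sketch below solves a generic non-negative L1-regularized least-squares inversion, min_f 0.5*||K f - s||^2 + lam*||f||_1 with f >= 0, by projected iterative soft-thresholding; the kernel matrix K, the choice of lam and the iteration count are placeholders.

```python
import numpy as np

def l1_nonneg_inversion(K, s, lam, n_iter=2000):
    """Minimize 0.5*||K f - s||_2^2 + lam*||f||_1 subject to f >= 0
    via projected ISTA (gradient step, soft-threshold, non-negativity)."""
    step = 1.0 / np.linalg.norm(K, 2) ** 2   # 1 / Lipschitz constant of the gradient
    f = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ f - s)             # gradient of the data-fidelity term
        f = np.maximum(f - step * grad - step * lam, 0.0)
    return f

# Toy usage with a random kernel and a sparse ground truth (illustrative only)
rng = np.random.default_rng(0)
K = rng.standard_normal((200, 100))
f_true = np.zeros(100); f_true[[10, 40, 70]] = [1.0, 2.0, 0.5]
s = K @ f_true + 0.01 * rng.standard_normal(200)
print(l1_nonneg_inversion(K, s, lam=0.1)[[10, 40, 70]])
```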

  11. Modeling a four-layer location-routing problem

    Directory of Open Access Journals (Sweden)

    Mohsen Hamidi

    2012-01-01

    Full Text Available Distribution is an indispensable component of logistics and supply chain management. Location-Routing Problem (LRP is an NP-hard problem that simultaneously takes into consideration location, allocation, and vehicle routing decisions to design an optimal distribution network. Multi-layer and multi-product LRP is even more complex as it deals with the decisions at multiple layers of a distribution network where multiple products are transported within and between layers of the network. This paper focuses on modeling a complicated four-layer and multi-product LRP which has not been tackled yet. The distribution network consists of plants, central depots, regional depots, and customers. In this study, the structure, assumptions, and limitations of the distribution network are defined and the mathematical optimization programming model that can be used to obtain the optimal solution is developed. Presented by a mixed-integer programming model, the LRP considers the location problem at two layers, the allocation problem at three layers, the vehicle routing problem at three layers, and a transshipment problem. The mathematical model locates central and regional depots, allocates customers to plants, central depots, and regional depots, constructs tours from each plant or open depot to customers, and constructs transshipment paths from plants to depots and from depots to other depots. Considering realistic assumptions and limitations such as producing multiple products, limited production capacity, limited depot and vehicle capacity, and limited traveling distances enables the user to capture the real world situations.

  12. An Application of the MP Method for Solving the Problem of Distribution

    Directory of Open Access Journals (Sweden)

    Tunjo Perić

    2015-02-01

    Full Text Available In this paper, we present an application of a method for solving the multi-objective programming problem (the MP method), which was introduced in [1]. This method is used to solve the problem of distribution (the problem of cost/profit allocation). The method is based on the principles of cooperative games and linear programming. In the paper, we consider the standard case (proportional distribution) and the generalized case, in which the basic ideas of coalitions have been incorporated. The presented theory is applied and explained on an investment model for economic recovery.

  13. Quasispecies distribution of Eigen model

    Institute of Scientific and Technical Information of China (English)

    Chen Jia; Li Sheng; Ma Hong-Ru

    2007-01-01

    We have studied sharp peak landscapes of the Eigen model from a new perspective about how the quasispecies are distributed in the sequence space. To analyse the distribution more carefully, we bring in two tools. One tool is the variance of Hamming distance of the sequences at a given generation. It not only offers us a different avenue for accurately locating the error threshold and illustrates how the configuration of the distribution varies with copying fidelity q in the sequence space, but also divides the copying fidelity into three distinct regimes. The other tool is the similarity network of a certain Hamming distance d0, by which we can gain a visual and in-depth result about how the sequences are distributed. We find that there are several local similarity optima around the centre (global similarity optimum) in the distribution of the sequences reproduced near the threshold. Furthermore, it is interesting that the distribution of clustering coefficient C(k) follows lognormal distribution and the curve of clustering coefficient C of the network versus d0 appears to be linear near the threshold.

  14. Quasispecies distribution of Eigen model

    Science.gov (United States)

    Chen, Jia; Li, Sheng; Ma, Hong-Ru

    2007-09-01

    We have studied sharp peak landscapes of the Eigen model from a new perspective about how the quasispecies are distributed in the sequence space. To analyse the distribution more carefully, we bring in two tools. One tool is the variance of Hamming distance of the sequences at a given generation. It not only offers us a different avenue for accurately locating the error threshold and illustrates how the configuration of the distribution varies with copying fidelity q in the sequence space, but also divides the copying fidelity into three distinct regimes. The other tool is the similarity network of a certain Hamming distance d0, by which we can gain a visual and in-depth result about how the sequences are distributed. We find that there are several local similarity optima around the centre (global similarity optimum) in the distribution of the sequences reproduced near the threshold. Furthermore, it is interesting that the distribution of clustering coefficient C(k) follows lognormal distribution and the curve of clustering coefficient C of the network versus d0 appears to be linear near the threshold.

  15. Galerkin approximation for inverse problems for nonautonomous nonlinear distributed systems

    Science.gov (United States)

    Banks, H. T.; Reich, Simeon; Rosen, I. G.

    1988-01-01

    An abstract framework and convergence theory is developed for Galerkin approximation for inverse problems involving the identification of nonautonomous nonlinear distributed parameter systems. A set of relatively easily verified conditions is provided which are sufficient to guarantee the existence of optimal solutions and their approximation by a sequence of solutions to a sequence of approximating finite dimensional identification problems. The approach is based on the theory of monotone operators in Banach spaces and is applicable to a reasonably broad class of nonlinear distributed systems. Operator theoretic and variational techniques are used to establish a fundamental convergence result. An example involving evolution systems with dynamics described by nonstationary quasilinear elliptic operators along with some applications are presented and discussed.

  16. Fastest Distributed Consensus Problem on Fusion of Two Star Networks

    CERN Document Server

    Jafarizadeh, Saber

    2010-01-01

    Finding optimal weights for the problem of Fastest Distributed Consensus on networks with different topologies has been an active area of research for a number of years. In this work we present an analytical solution for the problem of Fastest Distributed Consensus for a network formed from the fusion of two different symmetric star networks, in other words a network consisting of two different symmetric star networks which share the same central node. The solution procedure consists of stratification of the associated connectivity graph of the network and Semidefinite Programming (SDP), in particular solving the slackness conditions, where the optimal weights are obtained by inductive comparison of the characteristic polynomials initiated by the slackness conditions. Some numerical simulations are carried out to investigate the trade-off between the parameters of the two fused star networks, namely the length and number of branches.
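
    The analytical results can be cross-checked numerically: the fastest distributed consensus weight design is the Xiao-Boyd SDP of minimizing the spectral norm of W - (1/n)11^T over symmetric weight matrices respecting the graph's sparsity. The sketch below states that SDP with CVXPY, assuming cvxpy with an SDP-capable solver (such as SCS) is installed; the example graph is an arbitrary placeholder, not one of the star or rhombus topologies studied in these papers.

```python
import numpy as np
import cvxpy as cp

def fastest_consensus_weights(n, edges):
    """Solve the fastest-distributed-consensus SDP: minimize the SLEM of a
    symmetric weight matrix W with W*1 = 1 and support restricted to the graph."""
    J = np.ones((n, n)) / n
    W = cp.Variable((n, n), symmetric=True)
    allowed = set(edges) | {(j, i) for i, j in edges}
    cons = [W @ np.ones(n) == np.ones(n)]
    cons += [W[i, j] == 0 for i in range(n) for j in range(n)
             if i != j and (i, j) not in allowed]   # zero weight off the graph edges
    prob = cp.Problem(cp.Minimize(cp.sigma_max(W - J)), cons)
    prob.solve()
    return W.value, prob.value   # optimal weights and the optimal SLEM

# Placeholder example: a 4-node path graph
W_opt, slem = fastest_consensus_weights(4, [(0, 1), (1, 2), (2, 3)])
print(slem)
```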

  17. Stochastic inverse problems: Models and metrics

    Energy Technology Data Exchange (ETDEWEB)

    Sabbagh, Elias H.; Sabbagh, Harold A.; Murphy, R. Kim [Victor Technologies, LLC, Bloomington, IN 47407-7706 (United States); Aldrin, John C. [Computational Tools, Gurnee, IL 60031 (United States); Annis, Charles [Statistical Engineering, Palm Beach Gardens, FL 33418 (United States); Knopp, Jeremy S. [Air Force Research Laboratory (AFRL/RXCA), Wright Patterson AFB, OH 45433-7817 (United States)

    2015-03-31

    In past work, we introduced model-based inverse methods, and applied them to problems in which the anomaly could be reasonably modeled by simple canonical shapes, such as rectangular solids. In these cases the parameters to be inverted would be length, width and height, as well as the occasional probe lift-off or rotation. We are now developing a formulation that allows more flexibility in modeling complex flaws. The idea consists of expanding the flaw in a sequence of basis functions, and then solving for the expansion coefficients of this sequence, which are modeled as independent random variables, uniformly distributed over their range of values. There are a number of applications of such modeling: 1. Connected cracks and multiple half-moons, which we have noted in a POD set. Ideally we would like to distinguish connected cracks from one long shallow crack. 2. Cracks of irregular profile and shape which have appeared in cold work holes during bolt-hole eddy-current inspection. One side of such cracks is much deeper than the other. 3. L or C shaped crack profiles at the surface, examples of which have been seen in bolt-hole cracks. By formulating problems in a stochastic sense, we are able to leverage the stochastic global optimization algorithms in NLSE, which is resident in VIC-3D®, to answer questions of global minimization and to compute confidence bounds using the sensitivity coefficient that we get from NLSE. We will also address the issue of surrogate functions which are used during the inversion process, and how they contribute to the quality of the estimation of the bounds.

  18. Lattice Boltzmann modeling of water entry problems

    Science.gov (United States)

    Zarghami, A.; Falcucci, G.; Jannelli, E.; Succi, S.; Porfiri, M.; Ubertini, S.

    2014-12-01

    This paper deals with the simulation of water entry problems using the lattice Boltzmann method (LBM). The dynamics of the free surface is treated through the mass and momentum fluxes across the interface cells. A bounce-back boundary condition is utilized to model the contact between the fluid and the moving object. The method is implemented for the analysis of a two-dimensional flow physics produced by a symmetric wedge entering vertically a weakly-compressible fluid at a constant velocity. The method is used to predict the wetted length, the height of water pile-up, the pressure distribution and the overall force on the wedge. The accuracy of the numerical results is demonstrated through comparisons with data reported in the literature.

  19. Distributed Arithmetic Coding for the Asymmetric Slepian-Wolf problem

    CERN Document Server

    Grangetto, M; Olmo, G

    2007-01-01

    Distributed source coding schemes are typically based on the use of channel codes as source codes. In this paper we propose a new paradigm, termed "distributed arithmetic coding", which exploits the fact that arithmetic codes are good source as well as channel codes. In particular, we propose a distributed binary arithmetic coder for Slepian-Wolf coding with decoder side information, along with a soft joint decoder. The proposed scheme provides several advantages over existing Slepian-Wolf coders, especially its good performance at small block lengths, and the ability to incorporate arbitrary source models in the encoding process, e.g. context-based statistical models. We have compared the performance of distributed arithmetic coding with turbo codes and low-density parity-check codes, and found that the proposed approach has very competitive performance.

  20. Geriatric care and distributive justice: problems and prospects.

    Science.gov (United States)

    Gill, D G; Ingman, S R

    1986-01-01

    This paper introduces a series of 16 essays on cross-national perspectives in geriatric care and distributive justice. Gill and Ingman first provide an overview of the "broad parameters under which distributive justice decisions have been and are being taken in the American medical care system," with special reference to John Rawls' A Theory of Justice. They then briefly summarize the topics of the other essays, which are organized into three sections: I. The U.S.A.: Underdevelopment of the Welfare State and Limited Geriatric Care; II. Nursing Homes: Industry or Public Service?; and III. Geriatric Care in Other Selected Countries. The authors conclude that funding health care for the elderly in terms of distributive justice is creating a dilemma for all societies. They propose the decommodification of medical services as a solution to the problem in the United States.

  1. Applying Soft Arc Consistency to Distributed Constraint Optimization Problems

    Science.gov (United States)

    Matsui, Toshihiro; Silaghi, Marius C.; Hirayama, Katsutoshi; Yokoo, Makoto; Matsuo, Hiroshi

    The Distributed Constraint Optimization Problem (DCOP) is a fundamental framework of multi-agent systems. With DCOPs a multi-agent system is represented as a set of variables and a set of constraints/cost functions. Distributed task scheduling and distributed resource allocation can be formalized as DCOPs. In this paper, we propose an efficient method that applies directed soft arc consistency to a DCOP. In particular, we focus on DCOP solvers that employ pseudo-trees. A pseudo-tree is a graph structure for a constraint network that represents a partial ordering of variables. Some pseudo-tree-based search algorithms perform optimistic searches using explicit/implicit backtracking in parallel. However, for cost functions taking a wide range of cost values, such exact algorithms require many search iterations. Therefore additional improvements are necessary to reduce the number of search iterations. A previous study used a dynamic programming-based preprocessing technique that estimates the lower bound values of costs. However, there are opportunities for further improvements of efficiency. In addition, modifications of the search algorithm are necessary to use the estimated lower bounds. The proposed method applies soft arc consistency (soft AC) enforcement to DCOP. In the proposed method, directed soft AC is performed based on a pseudo-tree in a bottom up manner. Using the directed soft AC, the global lower bound value of cost functions is passed up to the root node of the pseudo-tree. It also totally reduces values of binary cost functions. As a result, the original problem is converted to an equivalent problem. The equivalent problem is efficiently solved using common search algorithms. Therefore, no major modifications are necessary in search algorithms. The performance of the proposed method is evaluated by experimentation. The results show that it is more efficient than previous methods.
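
    The dynamic-programming style preprocessing mentioned above can be pictured as passing minimum attainable subtree costs from the leaves of the pseudo-tree towards the root. The sketch below (an illustrative simplification, not the soft arc consistency procedure of the paper) computes such bottom-up cost-to-go messages for a tree-structured cost network; the data layout and function names are placeholders.

```python
def bottom_up_bounds(children, unary, binary, root):
    """For each value of the root variable, compute the minimum total cost of
    the whole tree (unary costs plus binary costs on tree edges).  On a
    pseudo-tree whose back-edges are ignored this gives a lower bound.
    children: dict node -> list of child nodes
    unary:    dict node -> {value: cost}
    binary:   dict (parent, child) -> {(parent_value, child_value): cost}"""
    def message(child, parent):
        # cost-to-go of child's subtree, as a function of the parent's value
        child_msgs = [message(g, child) for g in children.get(child, [])]
        return {pv: min(unary[child][cv]
                        + binary[(parent, child)][(pv, cv)]
                        + sum(m[cv] for m in child_msgs)
                        for cv in unary[child])
                for pv in unary[parent]}

    root_msgs = [message(c, root) for c in children.get(root, [])]
    return {rv: unary[root][rv] + sum(m[rv] for m in root_msgs)
            for rv in unary[root]}

# Tiny placeholder instance: root x0 with children x1, x2, binary domains {0, 1}
children = {"x0": ["x1", "x2"]}
unary = {v: {0: 0.0, 1: 1.0} for v in ("x0", "x1", "x2")}
binary = {("x0", c): {(a, b): float(a != b) for a in (0, 1) for b in (0, 1)}
          for c in ("x1", "x2")}
print(bottom_up_bounds(children, unary, binary, "x0"))
```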

  2. GREEDY NON-DOMINATED SORTING IN GENETIC ALGORITHM-II FOR VEHICLE ROUTING PROBLEM IN DISTRIBUTION

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Vehicle routing problem in distribution (VRPD) is a widely used type of vehicle routing problem (VRP), which has been proved to be NP-hard, and it is usually modeled as a single-objective optimization problem. For multi-objective optimization models, most research considers two objectives. A multi-objective mathematical model for VRP is proposed here, which considers the number of vehicles used, the length of the routes, and the arrival time at each client. Genetic algorithms are among the most widely used algorithms to solve VRP. As a type of genetic algorithm (GA), the non-dominated sorting genetic algorithm II (NSGA-II) also suffers from premature convergence and enclosure competition. In order to avoid these shortcomings, a greedy NSGA-II (GNSGA-II) is proposed for the VRP. The greedy algorithm is implemented in generating the initial population, crossover and mutation. All these procedures prevent NSGA-II from premature convergence and refine its performance at each step. On the distribution problem of a distribution center in Michigan, US, GNSGA-II is compared with NSGA-II. GNSGA-II turns out to be the more efficient algorithm and obtains the best solutions to the VRP. Moreover, with GNSGA-II premature convergence is better avoided and search efficiency is improved sharply.
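
    The non-dominated sorting step shared by NSGA-II and GNSGA-II can be sketched compactly; the version below is the standard fast non-dominated sort over minimization objectives (e.g. number of vehicles, route length, arrival times) and is not tied to the greedy operators described above.

```python
def fast_non_dominated_sort(objs):
    """Rank solutions into Pareto fronts; objs[i] is a tuple of objective
    values for solution i, all to be minimized.  Returns a list of fronts,
    each front being a list of solution indices."""
    n = len(objs)

    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    dominated_by = [[] for _ in range(n)]   # dominated_by[i]: solutions that i dominates
    counts = [0] * n                        # counts[i]: number of solutions dominating i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objs[i], objs[j]):
                dominated_by[i].append(j)
            elif dominates(objs[j], objs[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Placeholder objectives: (number of vehicles, total route length)
print(fast_non_dominated_sort([(3, 120.0), (2, 150.0), (3, 130.0), (2, 140.0)]))
```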

  3. Modeling diffuse pollution with a distributed approach.

    Science.gov (United States)

    León, L F; Soulis, E D; Kouwen, N; Farquhar, G J

    2002-01-01

    The transferability of parameters for non-point source pollution models to other watersheds, especially those in remote areas without enough data for calibration, is a major problem in diffuse pollution modeling. A water quality component was developed for WATFLOOD (a flood forecast hydrological model) to deal with sediment and nutrient transport. The model uses a distributed group response unit approach for water quantity and quality modeling. Runoff, sediment yield and soluble nutrient concentrations are calculated separately for each land cover class, weighted by area and then routed downstream. The distributed approach for the water quality model for diffuse pollution in agricultural watersheds is described in this paper. Integrating the model with data extracted using GIS technology (Geographical Information Systems) for a local watershed, the model is calibrated for the hydrologic response and validated for the water quality component. With the connection to GIS and the group response unit approach used in this paper, model portability increases substantially, which will improve non-point source modeling at the watershed scale level.

  4. Broadband model of the distribution network

    DEFF Research Database (Denmark)

    Jensen, Martin Høgdahl

    of the four-wire cable, but above and below the natural frequency there is good agreement between simulation and measurements. The problem with the natural frequency is not related specifically to the four-wire cable model, but is a general problem related to the distributed nature of transmission lines...... measurement and simulation, once the Phase model is used. No explanation is found for why the new material properties cause errors in the Phase model. At the Kyndby 10 kV test site a non-linear load is inserted on the secondary side of a normal distribution transformer and the phase voltage and current...... is measured. The measurements are performed with and without the four-wire cable inserted between the transformer and load. The 10 kV test site is modelled in EMTDC with standard components. Similarly, the non-linear load is modelled as a six-pulse diode bridge loaded with a resistor on the DC

  5. A biological solution to a fundamental distributed computing problem.

    Science.gov (United States)

    Afek, Yehuda; Alon, Noga; Barad, Omer; Hornstein, Eran; Barkai, Naama; Bar-Joseph, Ziv

    2011-01-14

    Computational and biological systems are often distributed so that processors (cells) jointly solve a task, without any of them receiving all inputs or observing all outputs. Maximal independent set (MIS) selection is a fundamental distributed computing procedure that seeks to elect a set of local leaders in a network. A variant of this problem is solved during the development of the fly's nervous system, when sensory organ precursor (SOP) cells are chosen. By studying SOP selection, we derived a fast algorithm for MIS selection that combines two attractive features. First, processors do not need to know their degree; second, it has an optimal message complexity while only using one-bit messages. Our findings suggest that simple and efficient algorithms can be developed on the basis of biologically derived insights.
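
    In spirit, the SOP-inspired procedure has every still-undecided cell repeatedly "fire" at random; a cell that fires while none of its undecided neighbours fires becomes a leader (joins the MIS) and silences its neighbours. The sketch below is a simplified round-based version with a fixed firing probability (the published algorithm ramps this probability up over phases so that nodes never need to know their degree); the graph encoding and parameters are placeholders.

```python
import random

def sop_inspired_mis(adj, p=0.2, max_rounds=10000, seed=0):
    """Randomized maximal independent set (MIS) selection: in each round every
    undecided node fires with probability p; a node that fires while no
    neighbour fires joins the MIS, and its neighbours drop out.
    adj: dict node -> set of neighbours (undirected graph)."""
    rng = random.Random(seed)
    undecided = set(adj)
    mis = set()
    for _ in range(max_rounds):
        if not undecided:
            break
        fired = {v for v in undecided if rng.random() < p}
        winners = {v for v in fired if not (fired & adj[v])}  # fired alone in their neighbourhood
        mis |= winners
        undecided -= winners
        for v in winners:                 # neighbours of new MIS members leave the protocol
            undecided -= adj[v]
    return mis

# Placeholder example: a 5-cycle
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(sop_inspired_mis(adj))
```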

  6. Topology optimization of mass distribution problems in Stokes flow

    DEFF Research Database (Denmark)

    Gersborg-Hansen, Allan; Berggren, Martin; Dammann, Bernd

    We consider topology optimization of mass distribution problems in 2D and 3D Stokes flow with the aim of designing devices that meet target outflow rates. For the purpose of validation, the designs have been post processed using the image processing tools available in FEMLAB. In turn, this has...... enabled an evaluation of the design with a body fitted mesh in a standard analysis software relevant in engineering practice prior to design manufacturing. This work investigates the proper choice of a maximum penalization value during the optimization process that ensures that the target outflow rates...

  7. Optimization of the imported air express cargo distribution problem

    Directory of Open Access Journals (Sweden)

    Hwang, T.L.

    2013-03-01

    Full Text Available This study examines the delivery network of imported air express cargo as an integrated multi-depot vehicle routing problem. The integrated multi-depot vehicle routing problem attempts to decide which service centers should be used and how much freight should be unloaded at each service center. The role of an exchange point, which allows the delivery vans and shuttles to exchange imported and exported goods, is also addressed. Test results demonstrate the feasibility of the four models, so these are highly promising for use in a diverse array of applications, such as home delivery and reverse logistics.

  8. Water Distribution and Removal Model

    Energy Technology Data Exchange (ETDEWEB)

    Y. Deng; N. Chipman; E.L. Hardin

    2005-08-26

    The design of the Yucca Mountain high level radioactive waste repository depends on the performance of the engineered barrier system (EBS). To support the total system performance assessment (TSPA), the Engineered Barrier System Degradation, Flow, and Transport Process Model Report (EBS PMR) is developed to describe the thermal, mechanical, chemical, hydrological, biological, and radionuclide transport processes within the emplacement drifts, which includes the following major analysis/model reports (AMRs): (1) EBS Water Distribution and Removal (WD&R) Model; (2) EBS Physical and Chemical Environment (P&CE) Model; (3) EBS Radionuclide Transport (EBS RNT) Model; and (4) EBS Multiscale Thermohydrologic (TH) Model. Technical information, including data, analyses, models, software, and supporting documents will be provided to defend the applicability of these models for their intended purpose of evaluating the postclosure performance of the Yucca Mountain repository system. The WD&R model AMR is important to the site recommendation. Water distribution and removal represents one component of the overall EBS. Under some conditions, liquid water will seep into emplacement drifts through fractures in the host rock and move generally downward, potentially contacting waste packages. After waste packages are breached by corrosion, some of this seepage water will contact the waste, dissolve or suspend radionuclides, and ultimately carry radionuclides through the EBS to the near-field host rock. Lateral diversion of liquid water within the drift will occur at the inner drift surface, and more significantly from the operation of engineered structures such as drip shields and the outer surface of waste packages. If most of the seepage flux can be diverted laterally and removed from the drifts before contacting the wastes, the release of radionuclides from the EBS can be controlled, resulting in a proportional reduction in dose release at the accessible environment. The purposes

  9. Inverse problems in spin models

    CERN Document Server

    Sessak, Vitor

    2010-01-01

    Several recent experiments in biology study systems composed of many interacting elements, for example neuron networks. Normally, measurements describe only the collective behavior of the system, even though in most cases we would like to characterize how its different parts interact. The goal of this thesis is to extract information about the microscopic interactions from the collective behavior in two different cases. First, we study a system described by a generalized Ising model, and find explicit formulas for the couplings as functions of the correlations and magnetizations. Second, we study a system described by a Hopfield model; in this case, we find not only explicit formulas for inferring the patterns, but also an analytical result that allows one to estimate how much data is necessary for a good inference.
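
    A standard baseline for this kind of inverse Ising problem (not the thesis's own expansion formulas) is the naive mean-field inversion, which reads the couplings off the inverse of the connected correlation matrix; the sketch below assumes ±1 spin samples are available as a NumPy array.

```python
import numpy as np

def naive_mean_field_inverse_ising(samples):
    """Infer Ising couplings J and fields h from +/-1 spin samples using the
    naive mean-field inversion: J = -(C^-1) off-diagonal, h_i = atanh(m_i) - sum_j J_ij m_j.
    samples: array of shape (n_samples, n_spins)."""
    m = samples.mean(axis=0)                  # magnetizations <s_i>
    C = np.cov(samples, rowvar=False)         # connected correlations C_ij
    J = -np.linalg.inv(C)
    np.fill_diagonal(J, 0.0)                  # no self-couplings
    h = np.arctanh(np.clip(m, -0.999, 0.999)) - J @ m
    return J, h

# Placeholder usage with random +/-1 data (real data would come from experiments)
samples = np.sign(np.random.default_rng(0).standard_normal((5000, 10)))
J, h = naive_mean_field_inverse_ising(samples)
print(J.shape, h.shape)
```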

  10. Ising model for distribution networks

    CERN Document Server

    Hooyberghs, H; Giuraniuc, C; Van Schaeybroeck, B; Indekeu, J O

    2012-01-01

    An elementary Ising spin model is proposed for demonstrating cascading failures (break-downs, blackouts, collapses, avalanches, ...) that can occur in realistic networks for distribution and delivery by suppliers to consumers. A ferromagnetic Hamiltonian with quenched random fields results from policies that maximize the gap between demand and delivery. Such policies can arise in a competitive market where firms artificially create new demand, or in a solidary environment where too high a demand cannot reasonably be met. Network failure in the context of a policy of solidarity is possible when an initially active state becomes metastable and decays to a stable inactive state. We explore the characteristics of the demand and delivery, as well as the topological properties, which make the distribution network susceptible of failure. An effective temperature is defined, which governs the strength of the activity fluctuations which can induce a collapse. Numerical results, obtained by Monte Carlo simulations of t...

  11. Distributed Object Medical Imaging Model

    CERN Document Server

    Noor, Ahmad Shukri Mohd

    2009-01-01

    Digital medical informatics and images are commonly used in hospitals today. Because of the interrelatedness of the radiology department and other departments, especially the intensive care unit and emergency department, the transmission and sharing of medical images has become a critical issue. Our research group has developed a Java-based Distributed Object Medical Imaging Model (DOMIM) to facilitate the rapid development and deployment of medical imaging applications in a distributed environment that can be shared and used by related departments and mobile physicians. DOMIM is a unique suite of multimedia telemedicine applications developed for use by medical related organizations. The applications support real-time exchange of patient data, image files, and audio and video diagnosis annotations. The DOMIM enables joint collaboration between radiologists and physicians while they are at distant geographical locations. The DOMIM environment consists of heterogeneous, autonomous, and legacy resources. The Common...

  12. Community problem-solving framed as a distributed information use environment: bridging research and practice

    Directory of Open Access Journals (Sweden)

    Joan C. Durrance

    2006-01-01

    Full Text Available Introduction. This article results from a qualitative study of (1) information behavior in community problem-solving framed as a distributed information use environment and (2) approaches used by a best-practice library to anticipate information needs associated with community problem solving. Method. Several approaches to data collection were used - focus groups, interviews, observation of community and library meetings, and analysis of supporting documents. We focused first on the information behaviour of community groups. Finding that the library supported these activities we sought to understand its approach. Analysis. Data were coded thematically for both information behaviour concepts and themes germane to problem-solving activity. A grounded theory approach was taken to capture aspects of the library staff's practice. Themes evolved from the data; supporting documentation - reports, articles and library communication - was also coded. Results. The study showed (1) how information use environment components (people, setting, problems, problem resolutions) combine in this distributed information use environment to determine specific information needs and uses; and (2) how the library contributed to the viability of this distributed information use environment. Conclusion. Community problem solving, here explicated as a distributed IUE, is likely to be seen in multiple communities. The library model presented demonstrates that by reshaping its information practice within the framework of an information use environment, a library can anticipate community information needs as they are generated and where they are most relevant.

  13. Oscillations in SIRS model with distributed delays

    Science.gov (United States)

    Gonçalves, S.; Abramson, G.; Gomes, M. F. C.

    2011-06-01

    The ubiquity of oscillations in epidemics presents a long-standing challenge for the formulation of epidemic models. Whether they are external and seasonally driven, or arise from the intrinsic dynamics, is an open problem. It is known that fixed time delays destabilize the steady-state solution of the standard SIRS model, giving rise to stable oscillations for certain parameter values. In this contribution, starting from the classical SIRS model, we make a general treatment of the recovery and loss-of-immunity terms. We present oscillation diagrams (amplitude and period) in terms of the parameters of the model, showing how oscillations can be destabilized by the shape of the distributions of the two characteristic (infectious and immune) times. The formulation is made in terms of delay equations which are both numerically integrated and linearized. Results from simulations are included, showing where they support the linear analysis and explaining why not where they do not. Considerations on and comparisons with real diseases are also presented.
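
    The limiting case of fixed (delta-distributed) infectious and immune periods can be integrated directly with a short history buffer; the sketch below is a plain Euler scheme for that case, with placeholder parameter values, and it treats the small initial infected cohort as never recovering, a simplification that only affects the transient.

```python
import numpy as np

def sirs_fixed_delays(beta=0.4, tau_i=5.0, tau_r=50.0, dt=0.01, t_max=1000.0, i0=0.01):
    """Euler integration of the SIRS model with fixed infectious (tau_i) and
    immune (tau_r) periods:
        dS/dt = -beta*S(t)*I(t) + beta*S(t-tau_i-tau_r)*I(t-tau_i-tau_r)
        dI/dt =  beta*S(t)*I(t) - beta*S(t-tau_i)*I(t-tau_i),   R = 1 - S - I,
    with zero incidence assumed before t = 0."""
    n = int(t_max / dt)
    d_i = int(tau_i / dt)             # recovery delay in steps
    d_0 = int((tau_i + tau_r) / dt)   # loss-of-immunity delay in steps
    S, I = np.empty(n), np.empty(n)
    S[0], I[0] = 1.0 - i0, i0
    inc = np.zeros(n)                 # incidence beta*S*I at each step
    for t in range(n - 1):
        inc[t] = beta * S[t] * I[t]
        recov = inc[t - d_i] if t >= d_i else 0.0   # infected tau_i ago now recover
        loss = inc[t - d_0] if t >= d_0 else 0.0    # immune since tau_i+tau_r ago become susceptible again
        S[t + 1] = S[t] + dt * (loss - inc[t])
        I[t + 1] = I[t] + dt * (inc[t] - recov)
    return S, I, 1.0 - S - I

S, I, R = sirs_fixed_delays()
print(I[-5:])   # late-time prevalence; oscillations appear for suitable parameters
```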

  14. The Aalborg Model and The Problem

    DEFF Research Database (Denmark)

    Qvist, Palle

    Knowing how a problem is defined has important implications for the possibility to identify and formulate the problem [1], the starting point of the learning process in the Aalborg Model [2, 3]. For certification it has been suggested that: A problem grows out of students' wondering within differ...... – a wondering – that something is different from what is expected, something novel and unexpected or inexplicable; astonishment mingled with perplexity or bewildered curiosity?...

  15. Probabilistic Fuzzy Goal Programming Problems Involving Pareto Distribution: Some Additive Approaches

    Directory of Open Access Journals (Sweden)

    S.K. Barik

    2015-06-01

    Full Text Available In many real-life decision making problems, probabilistic fuzzy goal programming problems are used where some of the input parameters of the problem are considered as random variables with fuzzy aspiration levels. In the present paper, a linearly constrained probabilistic fuzzy goal programming problem is presented where the right-hand-side parameters in some constraints follow a Pareto distribution with known mean and variance. The aspiration levels are considered as fuzzy. Further, simple, weighted, and preemptive additive approaches are discussed for the probabilistic fuzzy goal programming model. These additive approaches are employed to aggregate the membership values and form crisp equivalent deterministic models. The resulting models are then solved by using standard linear mathematical programming techniques. The developed methodology and solution procedures are illustrated with a numerical example.
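
    To illustrate the kind of deterministic equivalent involved (using the standard scale-shape parameterization of the Pareto law, which can be matched to a known mean and variance when the shape exceeds 2; this is a generic textbook conversion, not the paper's specific formulation), a chance constraint with a Pareto-distributed right-hand side reduces to a linear constraint:

```latex
% Chance constraint with Pareto-distributed right-hand side b_i ~ Pareto(scale \theta_i, shape \alpha_i)
\Pr\Big\{\textstyle\sum_j a_{ij} x_j \le b_i\Big\} \ge p_i,
\qquad \Pr\{b_i \ge t\} = \left(\tfrac{\theta_i}{t}\right)^{\alpha_i} \ (t \ge \theta_i)
\;\;\Longrightarrow\;\;
\sum_j a_{ij} x_j \le \theta_i\, p_i^{-1/\alpha_i}.
```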

  16. Contact of boundary-value problems and nonlocal problems in mathematical models of heat transfer

    Science.gov (United States)

    Lyashenko, V.; Kobilskaya, O.

    2015-10-01

    In this paper mathematical models in the form of nonlocal problems for the two-dimensional heat equation are considered. The relation between a nonlocal problem and a boundary value problem, both describing the same physical heating process, is investigated. These problems arise in the study of the temperature distribution during annealing of a moving wire and strip by permanently or periodically operating internal and external heat sources. The first and the second nonlocal problems in the moving domain are considered. Stability and convergence of numerical algorithms for the solution of a nonlocal problem with piecewise monotone functions in the equations and boundary conditions are investigated. Piecewise monotone functions characterize the heat sources and the heat transfer conditions at the boundaries of the domain under study. Numerous experiments are conducted and temperature distributions are plotted for the operation of internal and external heat sources. These experiments confirm the effectiveness of introducing nonlocal terms to describe the thermal processes. The expediency of applying nonlocal problems containing nonlocal conditions - thermal balance conditions - to such models is shown. This makes it possible to treat heat and mass transfer quantities, in particular the heat source and the concentration of the substance, as control parameters of the process.

  17. Transfer function modeling of damping mechanisms in distributed parameter models

    Science.gov (United States)

    Slater, J. C.; Inman, D. J.

    1994-01-01

    This work formulates a method for the modeling of material damping characteristics in distributed parameter models which may be easily applied to models such as rod, plate, and beam equations. The general linear boundary value vibration equation is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes. The governing characteristic equations are decoupled through separation of variables yielding solutions similar to those of undamped classical theory, allowing solution of the steady state as well as transient response. Example problems and solutions are provided demonstrating the similarity of the solutions to those of the classical theories and transient responses of nonviscous systems.

  18. Distributed Object Medical Imaging Model

    Directory of Open Access Journals (Sweden)

    Ahmad Shukri Mohd Noor

    2009-09-01

    Full Text Available Digital medical informatics and images are commonly used in hospitals today. Because of the interrelatedness of the radiology department and other departments, especially the intensive care unit and emergency department, the transmission and sharing of medical images has become a critical issue. Our research group has developed a Java-based Distributed Object Medical Imaging Model (DOMIM) to facilitate the rapid development and deployment of medical imaging applications in a distributed environment that can be shared and used by related departments and mobile physicians. DOMIM is a unique suite of multimedia telemedicine applications developed for use by medical related organizations. The applications support real-time exchange of patient data, image files, and audio and video diagnosis annotations. The DOMIM enables joint collaboration between radiologists and physicians while they are at distant geographical locations. The DOMIM environment consists of heterogeneous, autonomous, and legacy resources. The Common Object Request Broker Architecture (CORBA), Java Database Connectivity (JDBC), and the Java language provide the capability to combine the DOMIM resources into an integrated, interoperable, and scalable system. The underlying technology, including IDL, ORB, the Event Service, IIOP, JDBC/ODBC, legacy system wrapping and the Java implementation, is explored. This paper explores a distributed collaborative CORBA/JDBC based framework that will enhance medical information management requirements and development. It encompasses a new paradigm for the delivery of health services that requires process reengineering, cultural changes, as well as organizational changes.

  19. Distribution-valued weak solutions to a parabolic problem arising in financial mathematics

    Directory of Open Access Journals (Sweden)

    Michael Eydenberg

    2009-07-01

    Full Text Available We study distribution-valued solutions to a parabolic problem that arises from a model of the Black-Scholes equation in option pricing. We give a minor generalization of known existence and uniqueness results for solutions in bounded domains $\Omega \subset \mathbb{R}^{n+1}$ to give existence of solutions for certain classes of distributions $f \in \mathcal{D}'(\Omega)$. We also study growth conditions for smooth solutions of certain parabolic equations on $\mathbb{R}^n \times (0,T)$ that have initial values in the space of distributions.

  20. Modeling particle size distributions by the Weibull distribution function

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Zhigang (Rogers Tool Works, Rogers, AR (United States)); Patterson, B.R.; Turner, M.E. Jr (Univ. of Alabama, Birmingham, AL (United States))

    1993-10-01

    A method is proposed for modeling two- and three-dimensional particle size distributions using the Weibull distribution function. Experimental results show that, for tungsten particles in liquid phase sintered W-14Ni-6Fe, the experimental cumulative section size distributions were well fit by the Weibull probability function, which can also be used to compute the corresponding relative frequency distributions. Modeling the two-dimensional section size distributions facilitates the use of the Saltykov or other methods for unfolding three-dimensional (3-D) size distributions with minimal irregularities. Fitting the unfolded cumulative 3-D particle size distribution with the Weibull function enables computation of the statistical distribution parameters from the parameters of the fit Weibull function.
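
    The fitting step described above can be reproduced in a few lines by linearizing the two-parameter Weibull CDF, ln(-ln(1-F)) = k ln d - k ln λ, and regressing; the sketch below is a generic least-squares fit of that form (the article's own fitting procedure and parameter estimates may differ), with placeholder data.

```python
import numpy as np

def fit_weibull_cdf(sizes, cum_fractions):
    """Fit F(d) = 1 - exp(-(d/lam)**k) to cumulative size data by linear
    regression on ln(-ln(1-F)) versus ln(d)."""
    d = np.asarray(sizes, dtype=float)
    F = np.asarray(cum_fractions, dtype=float)
    y = np.log(-np.log(1.0 - F))
    x = np.log(d)
    k, intercept = np.polyfit(x, y, 1)     # slope = k, intercept = -k*ln(lam)
    lam = np.exp(-intercept / k)
    return k, lam

# Placeholder cumulative section-size data (sizes in micrometres, fractions in (0,1))
sizes = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0]
fractions = [0.05, 0.25, 0.55, 0.78, 0.91, 0.97]
print(fit_weibull_cdf(sizes, fractions))
```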

  1. Exploiting Linkage Information and Problem-Specific Knowledge in Evolutionary Distribution Network Expansion Planning.

    Science.gov (United States)

    Luong, Ngoc Hoang; Poutré, Han La; Bosman, Peter A N

    2017-04-07

    This article tackles the Distribution Network Expansion Planning (DNEP) problem that has to be solved by distribution network operators to decide which, where, and/or when enhancements to electricity networks should be introduced to satisfy the future power demands. Because of many real-world details involved, the structure of the problem is not exploited easily using mathematical programming techniques, for which reason we consider solving this problem with evolutionary algorithms (EAs). We compare three types of EAs for optimizing expansion plans: the classic genetic algorithm (GA), the estimation-of-distribution algorithm (EDA), and the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA). Not fully knowing the structure of the problem, we study the effect of linkage learning through the use of three linkage models: univariate, marginal product, and linkage tree. We furthermore experiment with the impact of incorporating different levels of problem-specific knowledge in the variation operators. Experiments show that the use of problem-specific variation operators is far more important for the classic GA to find high-quality solutions. In all EAs, the marginal product model and its linkage learning procedure have difficulty in capturing and exploiting the DNEP problem structure. GOMEA, especially when combined with the linkage tree structure, is found to have the most robust performance by far, even when an out-of-the-box variant is used that does not exploit problem-specific knowledge. Based on experiments, we suggest that when selecting optimization algorithms for power system expansion planning problems, EAs that have the ability to effectively model and efficiently exploit problem structures, such as GOMEA, should be given priority, especially in the case of black-box or grey-box optimization.
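
    To make the comparison concrete, the simplest of the model-based EAs mentioned above is the univariate EDA (UMDA-style), which repeatedly estimates independent per-bit frequencies from the selected individuals and samples the next population from them; the sketch below is that generic algorithm on binary strings, not the DNEP-specific encoding or the GOMEA variant used in the article.

```python
import numpy as np

def umda(fitness, n_bits, pop_size=100, n_gens=50, sel_frac=0.5, seed=0):
    """Minimal univariate estimation-of-distribution algorithm for maximizing
    `fitness` over binary strings of length n_bits."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                         # univariate model: P(bit_i = 1)
    best, best_fit = None, -np.inf
    for _ in range(n_gens):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        fits = np.array([fitness(ind) for ind in pop])
        order = np.argsort(fits)[::-1]
        if fits[order[0]] > best_fit:
            best, best_fit = pop[order[0]].copy(), fits[order[0]]
        elite = pop[order[: int(sel_frac * pop_size)]]
        p = np.clip(elite.mean(axis=0), 0.02, 0.98)  # re-estimate, keep some diversity
    return best, best_fit

# Toy objective: onemax (count of ones)
print(umda(lambda ind: ind.sum(), n_bits=30))
```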

  2. Solving Packing Problems by a Distributed Global Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Nian-Ze Hu

    2012-01-01

    Full Text Available Packing optimization problems aim to seek the best way of placing a given set of rectangular boxes within a minimum-volume rectangular box. Current packing optimization methods either find it difficult to obtain an optimal solution or require too many extra 0-1 variables in the solution process. This study develops a novel method to convert the nonlinear objective function in a packing program into an increasing function with a single variable and two fixed parameters. The original packing program then becomes a linear program promising to obtain a global optimum. Such a linear program is decomposed into several subproblems by specifying various parameter values, which are solvable simultaneously by a distributed computation algorithm. A reference solution obtained by applying a genetic algorithm is used as an upper bound on the optimal solution, which reduces the entire search region.

  3. Otter Distribution, Status and Conservation Problems in Hungary

    Directory of Open Access Journals (Sweden)

    Kemenes I.

    1991-02-01

    Full Text Available The river otter Lutra lutra has been protected in Hungary since 1974 and became strictly protected in 1978. However, the first and so far only survey of its distribution was carried out by me in 1987-88. I now report the results of this survey and discuss the present status and conservation problems of the otter in Hungary. Otters are most plentiful in the south west. Because of the contamination of many water courses, otters are dependent on fish farms of various sizes. Until now, these were subsidised, but they are now in private hands; since no compensation for otter-related losses is available, owners on low incomes cannot afford expensive mitigation measures. They would welcome the live-trapping and removal of problem otters, but the government would need to sponsor suitable areas for release of these animals. Conservationists in Hungary are seeking support for such a scheme. We think that it would be in the interest of the conservationists of Europe to help to maintain Hungary as one of the strongholds of the otter, and we are inviting suggestions and ideas on how to achieve this.

  4. Configuration of Distributed Message Converter Systems using Performance Modeling

    NARCIS (Netherlands)

    Aberer, Karl; Risse, Thomas; Wombacher, Andreas

    2001-01-01

    Finding a configuration of a distributed system that satisfies performance goals is a complex search problem involving many design parameters, such as hardware selection, job distribution and process configuration. Performance models are powerful tools for analysing potential system configurations, howe

  5. Analysis and Comparison of Typical Models within Distribution Network Design

    DEFF Research Database (Denmark)

    Jørgensen, Hans Jacob; Larsen, Allan; Madsen, Oli B.G.

    a number of important issues which have been identified when addressing the Distribution Network Design problem from a modelling angle. More specifically, we present an analysis of the research which has been performed in utilizing operational research in developing and optimising distribution systems....

  6. A Hybrid Autonomic Computing-Based Approach to Distributed Constraint Satisfaction Problems

    Directory of Open Access Journals (Sweden)

    Abhishek Bhatia

    2015-03-01

    Full Text Available Distributed constraint satisfaction problems (DisCSPs) are among the problems widely addressed using agent-based simulation. Fernandez et al. formulated the sensor and mobile tracking problem as a DisCSP, known as SensorDCSP. In this paper, we adopt a customized ERE (environment, reactive rules and entities) algorithm for SensorDCSP, which is otherwise proven to be a computationally intractable problem. An amalgamation of the autonomy-oriented computing (AOC)-based algorithm (ERE) and a genetic algorithm (GA) provides an early solution of the modeled DisCSP. Incorporation of the GA into ERE facilitates auto-tuning of the simulation parameters, thereby leading to an early solution of constraint satisfaction. This study further contributes a model, built in the NetLogo simulation environment, to infer the efficacy of the proposed approach.

  7. Bayesian Updating of Demand and Backorder Distributions in a Newsvendor Inventory Problem

    Science.gov (United States)

    Gürler, Ü.; Berk, E.; Akbay, U.

    2008-10-01

    We consider Bayesian updating of demand and backorder distributions in a partial backorder Newsvendor model. In inventory problems the demand distribution is usually assumed to be known, and when stock-outs occur it is commonly assumed that the excess demand is either lost or fully backordered. In this paper we consider a partial backorder setting, where the unsatisfied demand is backordered with a certain probability. Both the demand and the backorder probabilities are assumed to be random variables, and Bayesian estimation methods are used to update the distributions of these variables as data accumulate. We develop expressions for the exact posteriors where the prior distributions are chosen from natural conjugate families. In particular, we assume that the demand within a period is Poisson and the backorder probability has a Beta distribution.
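    Since the abstract specifies Poisson demand and a Beta-distributed backorder probability, the corresponding conjugate updates are easy to sketch: a Gamma prior on the Poisson rate and a Beta prior on the backorder probability. The prior hyperparameters and the observation stream below are illustrative assumptions, not values from the paper.

```python
# Conjugate Bayesian updates for a Poisson demand rate and a backorder probability.
def update_gamma_poisson(alpha, beta, demands):
    # Posterior for a Poisson rate with a Gamma(alpha, beta) prior (beta = rate parameter).
    return alpha + sum(demands), beta + len(demands)

def update_beta(a, b, backordered, lost):
    # Posterior for the backorder probability with a Beta(a, b) prior.
    return a + backordered, b + lost

alpha, beta = 2.0, 1.0          # assumed prior on the demand rate
a, b = 1.0, 1.0                 # assumed prior on the backorder probability

alpha, beta = update_gamma_poisson(alpha, beta, demands=[7, 5, 9, 6])
a, b = update_beta(a, b, backordered=3, lost=1)

print("posterior mean demand rate:", alpha / beta)
print("posterior mean backorder probability:", a / (a + b))
```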

  8. Assessment of distributed arterial network models.

    Science.gov (United States)

    Segers, P; Stergiopulos, N; Verdonck, P; Verhoeven, R

    1997-11-01

    The aim of this study is to evaluate the relative importance of elastic non-linearities, viscoelasticity and resistance vessel modelling on arterial pressure and flow wave contours computed with distributed arterial network models. The computational results of a non-linear (time-domain) and a linear (frequency-domain) model were compared using the same geometrical configuration and identical upstream and downstream boundary conditions and mechanical properties. Pressures were computed at the ascending aorta, brachial artery and femoral artery. In spite of the identical problem definition, computational differences were found in input impedance modulus (max. 15-20%), systolic pressure (max. 5%) and pulse pressure (max. 10%). For the brachial artery, the ratio of pulse pressure to aortic pulse pressure was practically identical for both models (3%), whereas for the femoral artery higher values were found for the linear model (+10%). The aortic/brachial pressure transfer function indicates that pressure harmonic amplification is somewhat higher in the linear model for frequencies lower than 6 Hz, while the opposite is true for higher frequencies. These computational disparities were attributed to conceptual model differences, such as the treatment of geometric tapering, rather than to elastic or convective non-linearities. Compared to the effect of viscoelasticity, the discrepancy between the linear and non-linear models is of the same importance. At peripheral locations, the correct representation of the terminal impedance outweighs the computational differences between the linear and non-linear models.

  9. Problems In Indoor Mapping and Modelling

    Science.gov (United States)

    Zlatanova, S.; Sithole, G.; Nakagawa, M.; Zhu, Q.

    2013-11-01

    Research in support of indoor mapping and modelling (IMM) has been active for over thirty years. This research has come in the form of As-Built surveys, Data structuring, Visualisation techniques, Navigation models and so forth. Much of this research is founded on advancements in photogrammetry, computer vision and image analysis, computer graphics, robotics, laser scanning and many others. While IMM used to be the preserve of engineers, planners, consultants, contractors, and designers, this is no longer the case, as commercial enterprises and individuals are also beginning to apply indoor models in their business processes and applications. There are three main reasons for this. Firstly, the last two decades have seen greater use of spatial information by enterprises and the public. Secondly, IMM has been complemented by advancements in mobile computing and internet communications, making it easier than ever to access and interact with spatial information. Thirdly, indoor modelling has been advanced geometrically and semantically, opening doors for developing user-oriented, context-aware applications. This reshaping of the public's attitude and expectations with regard to spatial information has realised new applications and spurred demand for indoor models and the tools to use them. This paper examines the present state of IMM and considers the research areas that deserve attention in the future. In particular the paper considers problems in IMM that are relevant to commercial enterprises and the general public, groups this paper expects will emerge as the greatest users of IMM. The subject of indoor modelling and mapping is discussed here in terms of Acquisitions and Sensors, Data Structures and Modelling, Visualisation, Applications, Legal Issues and Standards. Problems are discussed in terms of those that exist and those that are emerging. Existing problems are those that are currently being researched. Emerging problems are those problems or demands that are

  10. Finding Multiple Optimal Solutions to Optimal Load Distribution Problem in Hydropower Plant

    Directory of Open Access Journals (Sweden)

    Xinhao Jiang

    2012-05-01

    Full Text Available Optimal load distribution (OLD) among the generator units of a hydropower plant is a vital task for hydropower generation scheduling and management. Traditional optimization methods for solving this problem focus on finding a single optimal solution. However, many practical constraints on hydropower plant operation are very difficult, if not impossible, to model, and the optimal solution found by those models might be of limited practical use. This motivates us to find multiple optimal solutions to the OLD problem, which can provide more flexible choices for decision-making. Based on a special dynamic programming model, we use a modified shortest path algorithm to produce multiple solutions to the problem. It is shown that multiple optimal solutions exist for the case study of China's Geheyan hydropower plant, and that they are valuable for assessing the stability of generator units, showing the potential to reduce the number of times units cross vibration areas.
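    The key algorithmic idea, a shortest-path/dynamic-programming recursion modified to keep every optimal predecessor so that all optima can be enumerated, can be sketched on a toy layered graph as follows. The graph is illustrative only, not the Geheyan load-distribution model.

```python
# Tie-tracking shortest path on a small DAG: every optimal predecessor is kept,
# so all optimal routes (not just one) can be enumerated afterwards.
from collections import defaultdict

edges = {  # node -> list of (successor, cost); two equally cheap routes exist on purpose
    "s": [("a", 2), ("b", 2)],
    "a": [("t", 3)],
    "b": [("t", 3)],
    "t": [],
}

best = {"s": 0}
preds = defaultdict(list)
for u in ["s", "a", "b"]:            # topological order of the toy DAG
    for v, w in edges[u]:
        cand = best[u] + w
        if v not in best or cand < best[v]:
            best[v], preds[v] = cand, [u]
        elif cand == best[v]:
            preds[v].append(u)       # keep every optimal predecessor, not only the first

def enumerate_paths(node):
    if node == "s":
        return [["s"]]
    return [p + [node] for u in preds[node] for p in enumerate_paths(u)]

print("optimal cost:", best["t"])
print("all optimal routes:", enumerate_paths("t"))
```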

  11. Fuzzy Goal Programming Approach for Integrating Production and Distribution Problem in Milk Supply Chain

    Directory of Open Access Journals (Sweden)

    Touil Achraf

    2016-01-01

    Full Text Available In this paper, a bi-objective mixed integer programming model is proposed to deal with the production-distribution problem found in a dairy company in Morocco. The supply chain contains three echelons: multiple production sites, multiple distribution centers and multiple customers. The model seeks to integrate two conflicting simultaneous objectives: maximizing benefit by considering the shelf life of products and the total cost (a quantitative objective, including production, storage, and distribution), and maximizing the service level (a qualitative objective, which relates to providing satisfactory service to customers). This is subject to several technological constraints that typically arise in the dairy industry, such as sequence-dependent changeover time, machine speed and storage capacity. Due to the imprecise aspiration levels of the goals, an interactive approach based on an additive fuzzy goal variant is proposed to find an efficient compromise solution. Numerical results are reported to demonstrate the efficiency and applicability of the proposed model.
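    As a rough illustration of the fuzzy-goal idea used here, the sketch below builds linear membership functions for a profit-type goal and a service-level goal and combines them additively. All numeric aspiration levels and weights are invented for illustration and do not come from the paper.

```python
# Linear membership functions for fuzzy goals and a weighted additive aggregation.
def linear_membership(value, worst, best):
    # 0 at the worst acceptable level, 1 at (or beyond) the aspiration level.
    if best > worst:                 # goal to maximise (e.g. profit, service level)
        return max(0.0, min(1.0, (value - worst) / (best - worst)))
    return max(0.0, min(1.0, (worst - value) / (worst - best)))   # goal to minimise

profit_mu = linear_membership(value=820.0, worst=700.0, best=900.0)
service_mu = linear_membership(value=0.93, worst=0.85, best=0.99)

weights = (0.6, 0.4)                 # assumed relative importance of the two goals
satisfaction = weights[0] * profit_mu + weights[1] * service_mu
print("memberships:", round(profit_mu, 2), round(service_mu, 2),
      "additive score:", round(satisfaction, 2))
```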

  12. A Dynamic Distribution Model for Combat Logistics

    Science.gov (United States)

    1999-11-23

    develop a heuristic algorithm for a similar problem, only capacity expansion can occur in any amount (modeled with continuous variables) while in...and Rutenberg (1977) solve it with a heuristic algorithm. Our problem is also related to the dynamic facility location problem. This problem seeks to

  13. A case study of heterogeneous fleet vehicle routing problem: Touristic distribution application in Alanya

    Directory of Open Access Journals (Sweden)

    Kenan Karagül

    2014-07-01

    Full Text Available In this study, the Fleet Size and Mix Vehicle Routing Problem is considered in order to optimize the distribution of tourists traveling between the airport and the hotels over the shortest distance and at minimum cost. The initial solution space for the related methods is formed as a combination of the Savings algorithm, the Sweep algorithm and random permutations. Two well-known solution methods, a standard genetic algorithm and a random search algorithm, are then used to improve the initial solutions. The computational power of the machine and heuristic algorithms are used instead of human experience and intuition to solve the problem of distributing tourists arriving at Antalya airport to hotels in the Alanya region. For this case study, daily data on tourist distributions performed by an agency operating in the Alanya region are considered. These distributions are then modeled as a Vehicle Routing Problem to calculate the solutions for various applications. Comparisons with the decisions of a human expert show that the proposed methods produce better solutions than human experience and insight, and the random search method is more favorable in terms of computation time. In conclusion, owing to the distribution plans offered by the obtained solutions, agencies may reduce their costs by achieving savings of up to 35%.
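    For readers unfamiliar with the Savings algorithm mentioned above, the following is a compact sketch of the Clarke-Wright savings computation often used to seed VRP solutions. The depot and customer coordinates are made up, and the capacity-constrained route-merging step is omitted for brevity.

```python
# Clarke-Wright savings values for a tiny, invented depot/customer instance.
import math

depot = (0.0, 0.0)
customers = {1: (2.0, 3.0), 2: (5.0, 1.0), 3: (4.0, 4.0)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

savings = []
ids = sorted(customers)
for idx, i in enumerate(ids):
    for j in ids[idx + 1:]:
        # Saving from serving i and j on one route instead of two separate depot round trips.
        s = dist(depot, customers[i]) + dist(depot, customers[j]) - dist(customers[i], customers[j])
        savings.append((s, i, j))

savings.sort(reverse=True)  # pairs with the largest savings are merged first in the full algorithm
for s, i, j in savings:
    print(f"saving({i},{j}) = {s:.2f}")
```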

  14. A Heuristic Approach to the Theater Distribution Problem

    Science.gov (United States)

    2014-03-27

    Problem with Time Windows A generalization of the vehicle routing problem with time windows, the PDPTW is concerned with constructing optimal routes... vehicle routing problem. [28] and [31] are very early greedy algorithms for the popular multidimensional knapsack problem that use an effective gradient...search strategies applied to the vehicle routing problem. Their results give comparative statistics on 5 different Tabu search based metaheuristics

  15. Comparative Distributions of Hazard Modeling Analysis

    Directory of Open Access Journals (Sweden)

    Rana Abdul Wajid

    2006-07-01

    Full Text Available In this paper we present a comparison of the distributions used in hazard analysis. A simulation technique is used to study the behavior of the hazard distribution models. The fundamentals of hazard analysis are discussed in terms of failure criteria. We show the flexibility of a hazard modeling distribution that can approximate different distributions.
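    As a simple illustration of comparing hazard behaviors across candidate lifetime distributions, the sketch below evaluates the hazard functions of an exponential, a Weibull and a lognormal distribution at a few time points. The parameter values are arbitrary assumptions, not those used in the paper's simulations.

```python
# Hazard functions h(t) = f(t) / S(t) for three common lifetime distributions.
import math

def hazard_exponential(t, lam=0.5):
    return lam                                   # constant hazard

def hazard_weibull(t, k=1.8, lam=1.0):
    return (k / lam) * (t / lam) ** (k - 1)      # increasing hazard for k > 1

def hazard_lognormal(t, mu=0.0, sigma=0.8):
    z = (math.log(t) - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / (t * sigma * math.sqrt(2 * math.pi))
    surv = 0.5 * math.erfc(z / math.sqrt(2))     # survival function 1 - Phi(z)
    return pdf / surv

for t in (0.5, 1.0, 2.0, 4.0):
    print(t, hazard_exponential(t), hazard_weibull(t), hazard_lognormal(t))
```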

  16. EXPLICIT EXPRESSIONS FOR SOME DISTRIBUTIONS RELATED TO RUIN PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    党兰芬; 杨丽明

    2003-01-01

    The classical risk process that is perturbed by diffusion is studied. Explicit expressions for the ruin probability and the surplus distribution of the risk process at the time of ruin are obtained when the claim amount distribution is a finite mixture of exponential distributions or a Gamma(2, α) distribution.

  17. Implementing Problem Resolution Models in Remedy

    CERN Document Server

    Marquina, M A; Ramos, R

    2000-01-01

    This paper defines the concept of Problem Resolution Model (PRM) and describes the current implementation made by the User Support unit at CERN. One of the main challenges of User Support services in any High Energy Physics institute/organization is to address the solving of the computing-related problems faced by their researchers. The User Support group at CERN is the IT unit in charge of modeling the operations of the Help Desk and acts as a second-level support to some of the support lines whose problems are received at the Help Desk. The motivation behind the use of a PRM is to provide well defined procedures and methods to react in an efficient way to a request for solving a problem, providing advice, information etc. A PRM is materialized in a workflow which has a set of defined states in which a problem can be. Problems move from one state to another according to actions decided by the person who is handling them. A PRM can be implemented by a computer application, generally referred to as a Problem Report...

  18. Mathematical Models for Room Air Distribution - Addendum

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    1982-01-01

    A number of different models on the air distribution in rooms are introduced. This includes the throw model, a model on penetration length of a cold wall jet and a model for maximum velocity in the dimensioning of an air distribution system in highly loaded rooms and shows that the amount of heat...

  19. Mathematical Models for Room Air Distribution

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    1982-01-01

    A number of different models on the air distribution in rooms are introduced. This includes the throw model, a model on penetration length of a cold wall jet and a model for maximum velocity in the dimensioning of an air distribution system in highly loaded rooms and shows that the amount of heat...

  20. Solving Vertex Cover Problem Using DNA Tile Assembly Model

    Directory of Open Access Journals (Sweden)

    Zhihua Chen

    2013-01-01

    Full Text Available DNA tile assembly models are a class of mathematically distributed and parallel biocomputing models based on DNA tiles. In previous works, tile assembly models have been proved to be Turing-universal; that is, such systems can do what a Turing machine can do. In this paper, we use tile systems to solve a computationally hard problem. Mathematically, we construct three tile subsystems, which can be combined to solve the vertex cover problem. Each of the proposed tile subsystems consists of Θ(1) types of tiles, and the assembly process is executed in a parallel way (like DNA's biological function in cells); thus the systems can generate the solution of the problem in linear time with respect to the size of the graph.

  1. Problem signatures from enhanced vector autoregressive modeling

    Science.gov (United States)

    Andriamanalimanana, Bruno R.; Sengupta, Saumen S.

    2001-09-01

    The work reported in this paper concerns the enhancement of multivariate autoregressive (AR) models with geometric shape analysis data and stochastic causal relations. The study aims at producing numerical signatures characterizing operating problems from multivariate time series of data collected in an application and operating environment domain. Since the information content of an AR model does not appear sufficient to characterize observed vector values fully, both geometric and stochastic modeling techniques are applied to refine causal inferences further. The specific application domain used for this study is real-time network traffic monitoring. However, other domains utilizing vector models might benefit as well. A partial Java implementation is being used for experimentation.

  2. A Fuzzy Goal Programming for a Multi-Depot Distribution Problem

    Science.gov (United States)

    Nunkaew, Wuttinan; Phruksaphanrat, Busaba

    2010-10-01

    A fuzzy goal programming model for solving a Multi-Depot Distribution Problem (MDDP) is proposed in this research. The proposed model is applied in the first step of the Assignment First-Routing Second (AFRS) approach. In practice, a basic transportation model is first used to solve this kind of problem in the assignment step; after that, a Vehicle Routing Problem (VRP) model is used to compute the delivery cost in the routing step. However, the basic transportation model considers only the depot-to-customer relationship. The customer-to-customer relationship should also be considered, since it exists in the routing step. Both relationships are handled using Preemptive Fuzzy Goal Programming (P-FGP). The first fuzzy goal is set on the total transportation cost and the second fuzzy goal on a satisfactory level of the overall independence value. A case study is used to describe the effectiveness of the proposed model. Results from the proposed model are compared with the basic transportation model that had previously been used in the company. The proposed model can reduce the actual delivery cost in the routing step owing to the better result in the assignment step. Defining fuzzy goals by membership functions is more realistic than using crisp values. Furthermore, the flexibility to adjust goals and an acceptable satisfaction level for the decision maker can be increased, and the optimal solution can still be obtained.

  3. Integer Programming Models for Computational Biology Problems

    Institute of Scientific and Technical Information of China (English)

    Giuseppe Lancia

    2004-01-01

    Recent years have seen an impressive increase in the use of Integer Programming models for the solution of optimization problems originating in Molecular Biology. In this survey, some of the most successful Integer Programming approaches are described, and a broad overview of application areas in modern Computational Molecular Biology is given.

  4. Modelling Robust Design Problems via Conic Optimization

    NARCIS (Netherlands)

    Chaerani, D.

    2006-01-01

    This thesis deals with optimization problems with uncertain data. Uncertainty here means that the data is not known exactly at the time when the solution has to be determined. In many models the uncertainty is ignored and a representative nominal value of the data is used. The uncertainty may be due

  5. Simulation of product distribution at PT Anugrah Citra Boga by using capacitated vehicle routing problem method

    Science.gov (United States)

    Lamdjaya, T.; Jobiliong, E.

    2017-01-01

    PT Anugrah Citra Boga is a food processing company that produces meatballs as its main product. The distribution system for the products must be considered because it needs to be more efficient in order to reduce shipment costs. The purpose of this research is to optimize the distribution time by simulating the distribution channels with the capacitated vehicle routing problem method. Firstly, the distribution route is observed in order to calculate the average speed, time capacity and shipping costs. Then the model is built using the AIMMS software. The inputs required to simulate the model are customer locations, distances, and process times. Finally, the total distribution cost obtained by the simulation is compared with the historical data. It is concluded that the company can reduce the shipping cost by around 4.1%, or Rp 529,800 per month. By using this model, the utilization rate can be made more optimal: the current value for the first vehicle is 104.6% and after the simulation it becomes 88.6%, while the utilization rate of the second vehicle increases from 59.8% to 74.1%. The simulation model is able to produce the optimal shipping route subject to time restrictions, vehicle capacity, and the number of vehicles.

  6. Distributed Maximality based CTL Model Checking

    Directory of Open Access Journals (Sweden)

    Djamel Eddine Saidouni

    2010-05-01

    Full Text Available In this paper we investigate an approach to performing a distributed CTL model checking algorithm on a network of workstations using Kleene three-valued logic. The state space is partitioned among the network nodes, and the incomplete state spaces are represented as Maximality Labeled Transition Systems (MLTS), which are able to express true concurrency. The same algorithm is executed in parallel in each node for a given property on an incomplete MLTS; each node computes the set of states that satisfy the property or that fail it, while the remaining states are assigned the third value. The third value means that it is unknown whether the property is true or false, because the partial state space lacks the information needed for a precise answer concerning the complete state space. To solve this problem, the nodes exchange the information needed to conclude the result for the complete state space. An experimental version of the algorithm is currently being implemented in the functional programming language Erlang.

  7. Mathematical model in economic environmental problems

    Energy Technology Data Exchange (ETDEWEB)

    Nahorski, Z. [Polish Academy of Sciences, Systems Research Inst. (Poland); Ravn, H.F. [Risoe National Lab. (Denmark)

    1996-12-31

    The report contains a review of basic models and mathematical tools used in economic regulation problems. It starts with a presentation of basic models of capital accumulation, resource depletion, pollution accumulation, and population growth, as well as the construction of utility functions. Then the one-state-variable model is discussed in detail. The basic mathematical methods used consist of the application of the maximum principle and phase plane analysis of the differential equations obtained as the necessary conditions of optimality. A summary of basic results connected with these methods is given in the appendices. (au) 13 ills.; 17 refs.

  8. Problem-Solving Methods for the Prospective Development of Urban Power Distribution Network

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2014-01-01

    Full Text Available This article continues the earlier publication by A. P. Karpenko and A. I. Kuzmina titled "A mathematical model of urban distribution electro-network considering its future development" (electronic scientific and technical magazine "Science and Education", No. 5, 2014). The article offers a model of the urban power distribution network as a set of transformer and distribution substations and cable lines. All elements of the network and new consumers are described by vectors of parameters associated with them. The problem of urban power distribution network design, taking into account the prospective development of the city, is presented as a discrete programming problem. It consists in deciding how best to connect new consumers to the power supply network, how many new substations to build and at which sites, and how to include them in the power supply network. Two methods are offered to solve the problem: a reduction to a set of nested global minimization subtasks and a decomposition method. In the reduction method, the problem of the prospective development of the power supply network is broken into three subtasks of smaller dimension: a subtask to define the number and sites of new transformer and distribution substations, a subtask to define how to connect new consumers to the power supply network, and a subtask to include new substations in the power supply network. The vector of varied parameters is broken into three subvectors consistent with the subtasks. Each subtask is solved over the admissible region of its varied parameters, with the components of the subvectors obtained from the higher-level subtasks held fixed. In the decomposition method, the task is presented as a set of three subtasks, similar to those of the reduction method, plus a coordination problem. The coordination problem specifies the sequence in which the subtasks are solved and defines when the calculation terminates. Coordination is realized by

  9. A Distributed Problem Solving Environment (PSE) for Partial Differential Equation Based Problems

    National Research Council Canada - National Science Library

    TERAMOTO, Takayuki; NAKAMURA, Takashi; KAWATA, Shigeo; MATIDE, Syunsuke; HAYASAKA, Koji; NONAKA, Hidetaka; SASAKI, Eiji; SANADA, Yasuhiro

    2001-01-01

    ...) for partial differential equation (PDE) based problems. The system takes as input problem information, including a discretization and computation scheme, and outputs a program flow and also C-language source code for the problem...

  10. Wealth distribution models: analysis and applications

    Directory of Open Access Journals (Sweden)

    Camilo Dagum

    2008-03-01

    Full Text Available After Pareto developed his Type I model in 1895, a large number of income distribution models were specified. However, the important issue of wealth distribution attracted the attention of researchers only more than sixty years later. It started with the contributions by Wold and Whittle, and by Sargan, both published in 1957. The former authors proposed the Pareto Type I model and the latter the lognormal distribution, but they did not empirically validate them. Afterward, other models were proposed: in 1969 the Pareto Types I and II by Stiglitz; in 1975, the loglogistic by Atkinson and the Pearson Type V by Vaughan. In 1990 and 1994, Dagum developed a general model and his Type II as models of wealth distribution. They were validated with real-life data from the U.S.A., Canada, Italy and the U.K. In 1999, Dagum further developed his general model of net wealth distribution with support (−∞, +∞), which contains, as particular cases, his Type I and Type II models of income and wealth distributions. This study presents and analyzes the proposed models of wealth distribution and their properties. The only model with the flexibility, power, and economic and stochastic foundations to accurately fit net and total wealth distributions is the Dagum general model and its particular cases, as validated with the case studies of Ireland, the U.K., Italy and the U.S.A.

  11. Research Progress on the Problem of Fluid, Heat and Energy Distribution near the Earthquake Source Area

    Institute of Scientific and Technical Information of China (English)

    Yan Rui; Jiang Changsheng; Shao Zhigang; Zhou Longquan; Li Yingchun

    2011-01-01

    As basic problems in seismology, fluid, heat and energy distribution near earthquake sources during earthquake generation have long been leading subjects of concern to seismologists. Currently, more and more research shows that fluid is present around earthquake source areas and plays an important role in the process of earthquake preparation and generation. However, there is considerable controversy over the source of fluid in the deep crust. As for the problem of heat around earthquake source areas, different models have been proposed to explain the stress/heat-flow paradox. Among them, the dynamic weakening model has been thought to be the key to solving the heat flow paradox. After large earthquakes, energy distribution is directly related to frictional heat. It is of timely and important practical significance to immediately implement deep drilling and in-situ surveying to gain understanding of fluid, frictional heat and energy distribution during earthquake generation. The latest international progress in research on fluid, heat and energy distribution is reviewed in this paper, which brings important insights for the understanding of earthquake preparation and occurrence.

  12. A TRUST REGION METHOD FOR SOLVING DISTRIBUTED PARAMETER IDENTIFICATION PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Yan-fei Wang; Ya-xiang Yuan

    2003-01-01

    This paper is concerned with the ill-posed problem of identifying a parameter in an elliptic equation, which appears in many applications in science and industry. Its solution is obtained by applying a trust region method to a nonlinear least squares error problem. The trust region method has long been a popular method for well-posed problems. This paper indicates that it is also suitable for ill-posed problems. A numerical experiment is given to compare the trust region method with the Tikhonov regularization method. It seems that the trust region method is more promising.

  13. Distributed models coupling soakaways, urban drainage and groundwater

    DEFF Research Database (Denmark)

    Roldin, Maria Kerstin

    Alternative methods for stormwater management in urban areas, also called Water Sensitive Urban Design (WSUD) methods, have become increasingly important for the mitigation of urban stormwater management problems such as high runoff volumes, combined sewage overflows and poor water quality. This thesis examines how these methods can be modeled in an integrated environment with distributed urban drainage and groundwater flow models. The thesis: 1. Identifies appropriate models of soakaways for use in an integrated and distributed urban water and groundwater modeling system; 2. Develops a modeling concept that is able ... On the basis of the literature and of modeling studies, a new modeling concept is proposed which fulfills the need for integrated models coupling distributed urban drainage with groundwater. The suggested solution consists of a base equation for soakaway infiltration and additional components for clogging, upscaling ...

  14. Sensor Location Problem Optimization for Traffic Network with Different Spatial Distributions of Traffic Information.

    Science.gov (United States)

    Bao, Xu; Li, Haijian; Qin, Lingqiao; Xu, Dongwei; Ran, Bin; Rong, Jian

    2016-10-27

    To obtain adequate traffic information, the density of traffic sensors should be sufficiently high to cover the entire transportation network. However, deploying sensors densely over the entire network may not be realistic for practical applications due to the budgetary constraints of traffic management agencies. This paper describes several possible spatial distributions of traffic information credibility and proposes corresponding sensor information credibility functions to describe these spatial distribution properties. A maximum benefit model and its simplified version are proposed to solve the traffic sensor location problem. The relationships between the benefit and the number of sensors are formulated for the different sensor information credibility functions. Extended models and algorithms with analytic results are then developed. For each case, the maximum benefit and the optimal number and spacing of sensors are obtained, and analytic formulations of the optimal sensor locations are derived as well. Finally, a numerical example is presented to verify the validity and applicability of the proposed models for solving a network sensor location problem. The results show that the optimal number of sensors for segments with different model parameters in an entire freeway network can be calculated. It can also be concluded that the optimal sensor spacing is independent of end restrictions but depends on the values of the model parameters that represent the physical conditions of sensors and roads.
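    The benefit-versus-number-of-sensors trade-off described above can be illustrated numerically: assume a credibility function that decays with distance to the nearest sensor, subtract a per-sensor cost, and scan the number of evenly spaced sensors. The credibility function, segment length and cost below are assumptions for illustration only.

```python
# Scan the number of evenly spaced sensors on a road segment to maximise
# (integrated credibility) - (installation cost). All numbers are invented.
import numpy as np

LENGTH_KM = 20.0
COST_PER_SENSOR = 1.5

def credibility(distance_km):
    # Assumed exponentially decaying information credibility with distance to the nearest sensor.
    return np.exp(-distance_km / 2.0)

xs = np.linspace(0.0, LENGTH_KM, 2001)
best = None
for n in range(1, 21):
    positions = (np.arange(n) + 0.5) * LENGTH_KM / n          # even spacing
    nearest = np.min(np.abs(xs[:, None] - positions[None, :]), axis=1)
    benefit = credibility(nearest).mean() * LENGTH_KM - COST_PER_SENSOR * n
    if best is None or benefit > best[1]:
        best = (n, benefit)

print("optimal sensor count:", best[0], "benefit:", round(best[1], 2))
```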

  15. An Efficient Estimation of Distribution Algorithm for Job Shop Scheduling Problem

    Science.gov (United States)

    He, Xiao-Juan; Zeng, Jian-Chao; Xue, Song-Dong; Wang, Li-Fang

    An estimation of distribution algorithm with a probability model based on permutation information of neighboring operations is proposed for the job shop scheduling problem. The probability model is built from frequency information on pairs of neighboring operations. The structure of the best individual is then marked and its operations are partitioned into independent sub-blocks. To avoid repeated search in the same area and to improve search speed, each sub-block is adjusted as a whole. In addition, stochastic adjustment of the operations within each sub-block is introduced to enhance the local search ability. The experimental results show that the proposed algorithm is more robust and efficient.
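    A minimal sketch of the probability model described above follows: frequencies of neighboring operation pairs in selected individuals are tabulated and then used to sample new operation sequences. The tiny permutation instance and the fixed first operation are illustrative simplifications, not the paper's full job-shop encoding.

```python
# Pairwise-neighbor frequency model and sequence sampling for a toy permutation problem.
import random
from collections import defaultdict

OPS = ["o1", "o2", "o3", "o4"]
selected = [["o1", "o2", "o3", "o4"], ["o1", "o3", "o2", "o4"], ["o1", "o2", "o4", "o3"]]

# Count how often operation b directly follows operation a in the selected individuals.
freq = defaultdict(lambda: defaultdict(int))
for seq in selected:
    for a, b in zip(seq, seq[1:]):
        freq[a][b] += 1

def sample_sequence():
    remaining = set(OPS)
    current = "o1"                      # assumed fixed first operation, for simplicity
    seq = [current]
    remaining.remove(current)
    while remaining:
        options = sorted(remaining)
        weights = [freq[current][op] + 1 for op in options]   # +1 smoothing for unseen pairs
        current = random.choices(options, weights=weights)[0]
        seq.append(current)
        remaining.remove(current)
    return seq

print(sample_sequence())
```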

  16. Distributions with given marginals and statistical modelling

    CERN Document Server

    Fortiana, Josep; Rodriguez-Lallena, José

    2002-01-01

    This book contains a selection of the papers presented at the meeting `Distributions with given marginals and statistical modelling', held in Barcelona (Spain), July 17-20, 2000. In 24 chapters, this book covers topics such as the theory of copulas and quasi-copulas, the theory and compatibility of distributions, models for survival distributions and other well-known distributions, time series, categorical models, definition and estimation of measures of dependence, monotonicity and stochastic ordering, shape and separability of distributions, hidden truncation models, diagonal families, orthogonal expansions, tests of independence, and goodness of fit assessment. These topics share the use and properties of distributions with given marginals, this being the fourth specialised text on this theme. The innovative aspect of the book is the inclusion of statistical aspects such as modelling, Bayesian statistics, estimation, and tests.

  17. Statistical Mechanical Models of Integer Factorization Problem

    Science.gov (United States)

    Nakajima, Chihiro H.; Ohzeki, Masayuki

    2017-01-01

    We formulate the integer factorization problem as a search for the ground state of a statistical mechanical Hamiltonian. The first passage time required to find a correct divisor of a composite number signifies the exponential computational hardness. The analysis of the density of states of two macroscopic quantities, i.e., the energy and the Hamming distance from the correct solutions, leads to the conclusion that the ground state (correct solution) is completely isolated from the other low-energy states, with the distance being proportional to the system size. In addition, the profile of the microcanonical entropy of the model has two peculiar features, each related to a marked change in the energy region sampled via Monte Carlo simulation or simulated annealing. Hence, we find a peculiar first-order phase transition in our model.
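    To give a feel for the formulation, the toy sketch below casts factorization as minimisation of an energy that vanishes only at a correct divisor and searches it with simulated annealing. The energy definition, the target number and the annealing schedule are illustrative assumptions, much simpler than the Hamiltonian analyzed in the paper.

```python
# Toy "factorization as energy minimisation" with simulated annealing.
import math
import random

N = 35  # assumed composite target

def energy(p):
    # Zero exactly when p times the nearest integer quotient reproduces N.
    q = round(N / p)
    return (N - p * q) ** 2

p = random.randint(2, N - 1)
temperature = 50.0
while temperature > 0.01:
    candidate = max(2, min(N - 1, p + random.choice([-1, 1])))  # local move among candidate divisors
    delta = energy(candidate) - energy(p)
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        p = candidate
    temperature *= 0.99                                         # geometric cooling schedule

print("candidate divisor:", p, "energy:", energy(p))
```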

  18. H-infinity Tracking Problems for a Distributed Parameter System

    DEFF Research Database (Denmark)

    Larsen, Mikael

    1997-01-01

    The thesis considers the problem of finding a finite dimensional controller for an infinite dimensional system (a tunnel pasteurizer), combined with a robustness analysis.

  19. Distributing Flexibility to Enhance Robustness in Task Scheduling Problems

    NARCIS (Netherlands)

    Wilmer, D.; Klos, T.B.; Wilson, M.

    2013-01-01

    Temporal scheduling problems occur naturally in many diverse application domains such as manufacturing, transportation, health and education. A scheduling problem arises if we have a set of temporal events (or variables) and some constraints on those events, and we have to find a schedule, which is

  20. Spreadsheet modelling for solving combinatorial problems: The vendor selection problem

    CERN Document Server

    Ipsilandis, Pandelis G

    2008-01-01

    Spreadsheets have grown up and become very powerful and easy-to-use tools for applying analytical techniques to business problems. Operations managers, production managers, planners and schedulers can work with them to develop solid and practical do-it-yourself Decision Support Systems. Small and medium-sized organizations can thus apply OR methodologies without specialized software and trained personnel, which in many cases they cannot afford anyway. This paper examines an efficient approach to solving combinatorial programming problems with the use of spreadsheets. A practical application, which demonstrates the approach, concerns the development of a spreadsheet-based DSS for the Multi-Item Procurement Problem with Fixed Vendor Cost. The DSS has been built using exclusively standard spreadsheet features and can solve real problems of substantial size. The benefits and limitations of the approach are also discussed.

  1. Voltammetry: mathematical modelling and Inverse Problem

    CERN Document Server

    Koshev, N A; Kuzina, V V

    2016-01-01

    We propose a fast semi-analytical method for modelling the polarization curves in a voltammetric experiment. The method is based on the use of special functions and offers high calculation speed, accuracy and stability. The low computational cost of the proposed algorithm allows us to state a set of inverse problems of voltammetry for the reconstruction of metal ion concentrations or other parameters of the electrolyte under investigation.

  2. Partitioning problems in parallel, pipelined, and distributed computing

    Science.gov (United States)

    Bokhari, Shahid H.

    1988-01-01

    The problem of optimally assigning the modules of a parallel program over the processors of a multiple-computer system is addressed. A sum-bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple-satellite system: partitioning multiple chain-structured parallel programs, multiple arbitrarily structured serial programs, and single-tree structured parallel programs. In addition, the problem of partitioning chain-structured parallel programs across chain-connected systems is solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account and permit the efficient utilization of multiple-computer architectures for a wide range of problems of practical interest.

  3. Distance distribution in configuration-model networks

    Science.gov (United States)

    Nitzan, Mor; Katzav, Eytan; Kühn, Reimer; Biham, Ofer

    2016-06-01

    We present analytical results for the distribution of shortest path lengths between random pairs of nodes in configuration model networks. The results, which are based on recursion equations, are shown to be in good agreement with numerical simulations for networks with degenerate, binomial, and power-law degree distributions. The mean, mode, and variance of the distribution of shortest path lengths are also evaluated. These results provide expressions for central measures and dispersion measures of the distribution of shortest path lengths in terms of moments of the degree distribution, illuminating the connection between the two distributions.
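    A quick numerical companion to these analytical results is to build a configuration-model network and tabulate shortest path lengths between random node pairs, as sketched below. The degree sequence and sample sizes are arbitrary choices, and the snippet assumes the networkx package is available.

```python
# Empirical shortest-path-length distribution in a configuration-model network.
import random
from collections import Counter

import networkx as nx

random.seed(1)
degrees = [random.choice([2, 3, 3, 4]) for _ in range(200)]
if sum(degrees) % 2:            # the configuration model needs an even degree sum
    degrees[0] += 1

G = nx.configuration_model(degrees, seed=1)
G = nx.Graph(G)                 # collapse multi-edges
G.remove_edges_from(nx.selfloop_edges(G))

nodes = list(G.nodes)
lengths = Counter()
for _ in range(2000):
    u, v = random.sample(nodes, 2)
    try:
        lengths[nx.shortest_path_length(G, u, v)] += 1
    except nx.NetworkXNoPath:
        lengths["disconnected"] += 1

print(sorted(lengths.items(), key=str))
```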

  4. Clustering, Randomness, and Regularity: Spatial Distributions and Human Performance on the Traveling Salesperson Problem and Minimum Spanning Tree Problem

    Science.gov (United States)

    Dry, Matthew J.; Preiss, Kym; Wagemans, Johan

    2012-01-01

    We investigated human performance on the Euclidean Traveling Salesperson Problem (TSP) and Euclidean Minimum Spanning Tree Problem (MST-P) in regards to a factor that has previously received little attention within the literature: the spatial distributions of TSP and MST-P stimuli. First, we describe a method for quantifying the relative degree of…

  5. Photovoltaic subsystem marketing and distribution model

    Energy Technology Data Exchange (ETDEWEB)

    None

    1982-04-01

    The purpose of the marketing and distribution model is to estimate the costs of selling and transporting photovoltaic solar energy products from the factory to the factory customer. The model adjusts for inflation and regional differences in marketing and distribution costs. The model consists of three major components: the marketing submodel, the distribution submodel, and the financial submodel. What the model can and cannot do, and what data are required is explained. An example for a power conditioning unit demonstrates the application of the model.

  6. Problems in Modelling Charge Output Accelerometers

    Directory of Open Access Journals (Sweden)

    Tomczyk Krzysztof

    2016-12-01

    Full Text Available The paper presents major issues associated with the problem of modelling charge output accelerometers. The presented solutions are based on the weighted least squares (WLS) method, using a transformation of the complex frequency response of the sensors. The main assumptions of the WLS method and a mathematical model of charge output accelerometers are presented in the first two sections of this paper. In the following sections, the application of the WLS method to the estimation of the accelerometer model parameters is discussed and the associated uncertainties are determined. Finally, the results of modelling a PCB357B73 charge output accelerometer are analysed in the last section of this paper. All calculations were executed using the MathCad software program. The main stages of these calculations are presented in Appendices A−E.
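    For orientation, the snippet below shows a generic weighted least squares fit of the kind referred to above, applied to a simple linear-in-parameters model. The synthetic data, the weights and the model structure are placeholders rather than the accelerometer model or the MathCad workflow from the paper.

```python
# Generic weighted least squares: b = (X' W X)^{-1} X' W y on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
freq = np.linspace(1.0, 100.0, 50)
true_params = np.array([2.0, 0.05])
response = true_params[0] + true_params[1] * freq + rng.normal(0, 0.1, freq.size)

X = np.column_stack([np.ones_like(freq), freq])     # design matrix of the assumed model
w = 1.0 / (0.1 + 0.001 * freq)                      # assumed weights (e.g. inverse variances)
W = np.diag(w)

params = np.linalg.solve(X.T @ W @ X, X.T @ W @ response)
print("estimated parameters:", params)
```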

  7. RHOCUBE: 3D density distributions modeling code

    Science.gov (United States)

    Nikutta, Robert; Agliozzo, Claudia

    2016-11-01

    RHOCUBE models 3D density distributions on a discrete Cartesian grid and their integrated 2D maps. It can be used for a range of applications, including modeling the electron number density in LBV shells and computing the emission measure. The RHOCUBE Python package provides several 3D density distributions, including a powerlaw shell, truncated Gaussian shell, constant-density torus, dual cones, and spiralling helical tubes, and can accept additional distributions. RHOCUBE provides convenient methods for shifts and rotations in 3D, and if necessary, an arbitrary number of density distributions can be combined into the same model cube and the integration ∫ dz performed through the joint density field.
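    The core idea, evaluating a 3D density on a Cartesian grid and collapsing it to a 2D map by integrating along the line of sight, can be reproduced in a few lines, as sketched below with a Gaussian shell (one of the density types the package mentions). The grid size and shell parameters are illustrative assumptions, and the code does not use RHOCUBE's actual API.

```python
# Gaussian-shell density on a 3D grid, collapsed to a 2D map by integrating over z.
import numpy as np

n = 64
axis = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
r = np.sqrt(x**2 + y**2 + z**2)

r0, sigma = 0.6, 0.1                      # assumed shell radius and thickness
rho = np.exp(-0.5 * ((r - r0) / sigma) ** 2)

dz = axis[1] - axis[0]
column_map = rho.sum(axis=2) * dz         # discrete approximation of the integral over z
print(column_map.shape, column_map.max())
```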

  8. On the taxonomy of optimization problems under estimation of distribution algorithms.

    Science.gov (United States)

    Echegoyen, Carlos; Mendiburu, Alexander; Santana, Roberto; Lozano, Jose A

    2013-01-01

    Understanding the relationship between a search algorithm and the space of problems is a fundamental issue in the optimization field. In this paper, we lay the foundations for elaborating taxonomies of problems under estimation of distribution algorithms (EDAs). By using an infinite population model and assuming that the selection operator is based on the rank of the solutions, we group optimization problems according to the behavior of the EDA. Through the definition of an equivalence relation between functions it is possible to partition the space of problems into equivalence classes in which the algorithm has the same behavior. We show that only the probabilistic model is able to generate different partitions of the set of possible problems and hence it predetermines the number of different behaviors that the algorithm can exhibit. As a natural consequence of our definitions, all the objective functions are in the same equivalence class when the algorithm does not impose restrictions on the probabilistic model. The taxonomy of problems, which is also valid for finite populations, is studied in depth for a simple EDA that assumes independence among the variables of the problem. We provide the necessary and sufficient condition to decide the equivalence between functions and then we develop the operators to describe and count the members of a class. In addition, we show the intrinsic relation between univariate EDAs and the neighborhood system induced by the Hamming distance by proving that all the functions in the same class have the same number of local optima and that they are in the same ranking positions. Finally, we carry out numerical simulations in order to analyze the different behaviors that the algorithm can exhibit for the functions defined over the search space [Formula: see text].

  9. Distributed E-Service Platform Model

    Institute of Scientific and Technical Information of China (English)

    PING Hu; YANG Hua; CHEN Jia-xun

    2002-01-01

    E-communication is an internet-oriented application platform for distributed systems. The architecture of the platform is discussed in detail in this article. Through theory and a case study, the distributed model is shown to be flexible and practicable. Using the services and the core, developers can extend the e-community and leverage their applications. This model will be of great help in building a new kind of distributed, Java-based internet application.

  10. An Electromagnetic Interference Problem via the Mains Distribution Networks

    Directory of Open Access Journals (Sweden)

    BUZDUGAN, M. I.

    2007-11-01

    Full Text Available The paper presents an electromagnetic interference problem caused by the proximity of two radio broadcasting stations, which injected conducted emissions, mainly common-mode, exceeding the maximum limits specified by the national regulations into the public low-voltage mains network. These emissions caused the malfunction of the Themaclassic Saunier Duval gas heating units installed in the area. The problem was solved by retrofitting an extra EMI filter for the mains network, as presented in the paper.

  11. An Improved Distribution Policy with a Maintenance Aspect for an Urban Logistic Problem

    Directory of Open Access Journals (Sweden)

    Nadia Ndhaief

    2017-07-01

    Full Text Available In this paper, we present an improved distribution plan supporting an urban distribution center (UDC) to solve the last-mile problem of urban freight. This is motivated by the need of UDCs to satisfy daily demand on time at a high service level in their allocated urban areas. These demands cannot always be satisfied by a UDC acting alone, because the delivery rate can be less than the daily demand and/or can be affected by random failures or maintenance actions of the vehicles. The scope of our work is a UDC that needs to satisfy demands over a finite horizon. To that end, we consider a distribution policy based on two sequential plans: a distribution plan correlated with a maintenance plan, using a subcontracting strategy with several potential urban distribution centers (UDCs) and performing preventive maintenance to ensure deliveries for the allocated urban area. The choice of subcontractor depends on distance, environmental and availability criteria. Accordingly, we define a mathematical model for finding the best distribution and maintenance plans under the subcontracting strategy. Moreover, we allow delays to the next periods at an expensive penalty. Finally, we present a numerical example illustrating the benefits of our approach.

  12. Complexity Analysis of Pipeline Mapping Problems in Distributed Heterogeneous Networks

    OpenAIRE

    Gu, Yi; Wu, Qishi; Zhu, Mengxia; Nageswara S. V. Rao

    2009-01-01

    Large-scale scientific applications require using various system resources to execute complex computing pipelines in distributed networks to support collaborative research. System resources are typically shared in the Internet or over dedicated connections based on their location, availability, capability, and capacity. Optimizing the network performance of computing pipelines in such distributed environments is critical to the success of these applications. We consider two types of large-scale ...

  13. Optimization model for the design of distributed wastewater treatment networks

    Directory of Open Access Journals (Sweden)

    Ibrić Nidret

    2012-01-01

    Full Text Available In this paper we address the synthesis problem of distributed wastewater networks using a mathematical programming approach based on superstructure optimization. We present a generalized superstructure and optimization model for the design of distributed wastewater treatment networks. The superstructure includes splitters, treatment units and mixers, with all feasible interconnections including water recirculation. Based on the superstructure, the optimization model is presented. The optimization model is given as a nonlinear programming (NLP) problem where the objective function can be defined to minimize the total amount of wastewater treated in treatment operations or to minimize the total treatment costs. The NLP model is extended to a mixed integer nonlinear programming (MINLP) problem where binary variables are used for the selection of the wastewater treatment technologies. The bounds for all flowrates and concentrations in the wastewater network are specified as general equations. The proposed models are solved using the global optimization solvers (BARON and LINDOGlobal). The application of the proposed models is illustrated on two wastewater network problems of different complexity. The first is formulated as an NLP and the second as an MINLP. For the second, parametric and structural optimization are performed at the same time, where optimal flowrates and concentrations as well as optimal technologies for the wastewater treatment are selected. Using the proposed models, both problems are solved to global optimality.

  14. Modeling crowdsourcing as collective problem solving

    CERN Document Server

    Guazzini, Andrea; Donati, Camillo; Nardi, Annalisa; Levnajic, Zoran

    2015-01-01

    Crowdsourcing is a process of accumulating ideas, thoughts or information from many independent participants, with the aim of finding the best solution for a given challenge. Modern information technologies allow massive numbers of subjects to be involved in a more or less spontaneous way. Still, the full potential of crowdsourcing is yet to be reached. We introduce a modeling framework through which we study the effectiveness of crowdsourcing in relation to the level of collectivism in facing the problem. Our findings reveal an intricate relationship between the number of participants and the difficulty of the problem, indicating the optimal size of the crowdsourced group. We discuss our results in the context of the modern utilization of crowdsourcing.

  15. Supporting the Construction of Workflows for Biodiversity Problem-Solving Accessing Secure, Distributed Resources

    Directory of Open Access Journals (Sweden)

    J.S. Pahwa

    2006-01-01

    Full Text Available In the Biodiversity World (BDW) project we have created a flexible and extensible Web Services-based Grid environment for biodiversity researchers to solve problems in biodiversity and analyse biodiversity patterns. In this environment, heterogeneous and globally distributed biodiversity-related resources such as data sets and analytical tools are made available to be accessed and assembled by users into workflows to perform complex scientific experiments. One such experiment is bioclimatic modelling of the geographical distribution of individual species using climate variables in order to explain past and future climate-related changes in species distribution. Data sources and analytical tools required for such analysis of species distribution are widely dispersed, available on heterogeneous platforms, present data in different formats and lack inherent interoperability. The present BDW system brings all these disparate units together so that the user can combine tools with little thought as to their original availability, data formats and interoperability. The new prototype BDW system architecture not only brings together heterogeneous resources but also enables utilisation of computational resources and provides secure access to BDW resources via a federated security model. We describe features of the new BDW system and its security model which enable user authentication from a workflow application as part of workflow execution.

  16. Modeling of D-STATCOM in distribution systems load flow

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper presents the modeling of the Distribution STATCOM (D-STATCOM) in load flow calculations for steady-state voltage compensation. An accurate model of the D-STATCOM is derived for use in load flow calculations. The rating of this device, as well as the direction of the reactive power injection required to compensate the voltage to the desired value (1 p.u.), is derived and discussed analytically and mathematically using the phasor diagram method. Furthermore, an efficient method for node and line identification used in load flow calculations is presented. The validity of the proposed model is examined using two standard distribution systems consisting of 33 and 69 nodes, respectively. The best location of the D-STATCOM for mitigating under-voltage problems in distribution networks is determined. The results validate the proposed model for the D-STATCOM in large distribution systems.

  17. Singularity Problem in Teleparallel Dark Energy Models

    CERN Document Server

    Geng, Chao-Qiang; Lee, Chung-Chi

    2013-01-01

    We study the singularity problem in teleparallel dark energy models. A future singularity may occur due to the non-minimal coupling of the dark energy scalar field to teleparallel gravity, which effectively changes the gravitational coupling strength and can even make it diverge. This singularity may be avoided by a binding-type self-potential that keeps the scalar field away from the singularity point. For demonstration we analyze the model with a quadratic potential and show how the (non)occurrence of the singularity depends on the initial conditions and the steepness of the potential, both of which affect the competition between the self-interaction and the non-minimal coupling. To examine the capability of the binding-type potential to fit observational data and meanwhile avoid the singularity, we perform the data fitting for this model and show that the observationally viable region up to the $3\sigma$ confidence level is free of the future singularity.

  18. Some Standard model problems and possible solutions

    Science.gov (United States)

    Barranco, J.

    2016-10-01

    Three problems of the standard model of elementary particles are studied from a phenomenological approach. (i) It is shown that the Dirac or Majorana nature of the neutrino can be studied by looking for differences in neutrino-electron scattering if the polarization of the neutrino is considered. (ii) The absolute scale of the neutrino mass can be set if a four-zero mass matrix texture is considered for the leptons. It is found that m_ν3 ≈ 0.05 eV. (iii) It is shown that it is possible, within a certain class of two-Higgs extensions of the standard model, to have a cancellation of the quadratic divergences to the mass of the physical Higgs boson.

  19. SUSY CP problem in gauge mediation model

    Energy Technology Data Exchange (ETDEWEB)

    Moroi, Takeo [Department of Physics, University of Tokyo, Tokyo 113-0033 (Japan); Yokozaki, Norimi, E-mail: yokozaki@hep-th.phys.s.u-tokyo.ac.jp [Department of Physics, University of Tokyo, Tokyo 113-0033 (Japan)

    2011-07-27

    The SUSY CP problem in the gauge mediation supersymmetry breaking model is reconsidered. We pay particular attention to two sources of CP-violating phases whose effects were not seriously studied before: one is the effect of the breaking of the GUT relation among the gaugino masses due to the field responsible for the GUT symmetry breaking, and the other is the supergravity effect on the supersymmetry breaking parameters, in particular on the bi-linear supersymmetry breaking Higgs mass term. We show that both of them can induce electric dipole moments of the electron, the neutron, and so on that are too large to be consistent with the experimental bounds.

  20. Distributed modeling for road authorities

    NARCIS (Netherlands)

    Luiten, G.T.; Bõhms, H.M.; Nederveen, S. van; Bektas, E.

    2013-01-01

    A great challenge for road authorities is to improve the effectiveness and efficiency of their core processes by improving data exchange and sharing using new technologies such as building information modeling (BIM). BIM has already been successfully implemented in other sectors, such as architecture ...

  2. Wealth Distributions in Asset Exchange Models

    CERN Document Server

    Krapivsky, P L

    2010-01-01

    How do individuals accumulate wealth as they interact economically? We outline the consequences of a simple microscopic model in which repeated pairwise exchanges of assets between individuals build the wealth distribution of a population. This distribution is determined for generic exchange rules --- transactions that involve a fixed amount or a fixed fraction of individual wealth, as well as random or greedy exchanges. In greedy multiplicative exchange, a continuously evolving power law wealth distribution arises, a feature that qualitatively mimics empirical observations.
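
    A quick simulation sketch of a greedy exchange process of the kind described above; the exact exchange rule used here (a fixed fraction of the poorer agent's wealth, won by the richer agent) is an assumption, not necessarily the authors' specification.

        import numpy as np

        rng = np.random.default_rng(0)
        N, trades, alpha = 1000, 200_000, 0.1   # agents, pairwise transactions, exchanged fraction

        w = np.ones(N)                          # everyone starts with unit wealth
        for _ in range(trades):
            i, j = rng.choice(N, size=2, replace=False)
            dw = alpha * min(w[i], w[j])        # a fraction of the poorer agent's wealth
            if w[i] >= w[j]:                    # "greedy": the richer agent wins the exchange
                w[i] += dw; w[j] -= dw
            else:
                w[j] += dw; w[i] -= dw

        # Inspect the upper tail: slowly decaying wealth over rank hints at the
        # broad (power-law-like) distributions discussed in the abstract.
        w_sorted = np.sort(w)[::-1]
        for k in (1, 10, 100, 500):
            print(f"rank {k:4d}: wealth = {w_sorted[k - 1]:.4f}")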

  3. Distributionally Robust Joint Chance Constrained Problem under Moment Uncertainty

    Directory of Open Access Journals (Sweden)

    Ke-wei Ding

    2014-01-01

    Full Text Available We discuss and develop the convex approximation for robust joint chance constraints under uncertainty of the first- and second-order moments. The robust chance constraints are approximated by Worst-Case CVaR constraints, which can be reformulated as a semidefinite program, so the chance-constrained problem can be presented as a semidefinite program. We also find that the approximation for robust joint chance constraints has an equivalent individual quadratic approximation form.

  4. Modeling nuclear parton distribution functions

    CERN Document Server

    Honkanen, H; Guzey, V

    2013-01-01

    The presence of the nuclear medium and collective phenomena involving several nucleons modify the parton distribution functions of nuclei (nPDFs) compared to those of a free nucleon. These modifications have been investigated by different groups using global analyses of high-energy nuclear reaction world data, resulting in modern nPDF parametrizations with error estimates, such as EPS09(s), HKN07 and nDS. These phenomenological nPDF sets roughly agree within their uncertainty bands, but their antiquarks at large $x$ and gluons over the whole $x$ range are poorly constrained by the available data. In the kinematics accessible at the LHC this has a negative impact on the interpretation of the heavy-ion collision data, especially for the $p + A$ benchmarking runs. The EMC region is also sensitive to the proper definition of $x$, where the nuclear binding effects have to be taken into account, and for heavy nuclei one also needs to take into account that a fraction of the nucleus momentum is carried by the equivalent photons ...

  5. Probability Distribution Function of Passive Scalars in Shell Models

    Institute of Scientific and Technical Information of China (English)

    LIU Chun-Ping; ZHANG Xiao-Qiang; LIU Yu-Rong; WANG Guang-Rui; HE Da-Ren; CHEN Shi-Gang; ZHU Lu-Jin

    2008-01-01

    A shell-model version of the passive scalar problem is introduced, which is inspired by the model of K. Ohkitani and M. Yakhot [K. Ohkitani and M. Yakhot, Phys. Rev. Lett. 60 (1988) 983; K. Ohkitani and M. Yakhot, Prog. Theor. Phys. 81 (1988) 329]. As in the original problem, the prescribed random velocity field is Gaussian and δ-correlated in time. The deterministic differential equations are regarded as nonlinear Langevin equations. Then, the Fokker-Planck equations for the PDF of the passive scalars are obtained and solved numerically. In the energy input range (n < 5, where n is the shell number), the probability distribution function (PDF) of the passive scalars is near the Gaussian distribution. In the inertial range (5 < n < 16) and the dissipation range (n ≥ 17), the PDF of the passive scalars shows obvious intermittency, and the scaling exponents of the passive scalar are anomalous. The results of numerical simulations are compared with experimental measurements.

  6. Modelling distribution functions and fragmentation functions

    CERN Document Server

    Rodrigues, J; Mulders, P J

    1995-01-01

    We present examples for the calculation of the distribution and fragmentation functions using the representation in terms of non-local matrix elements of quark field operators. As specific examples, we use a simple spectator model to estimate the leading twist quark distribution functions and the fragmentation functions for a quark into a nucleon or a pion.

  7. Software Model Checking for Verifying Distributed Algorithms

    Science.gov (United States)

    2014-10-28

    Slide excerpt: the verification procedure is an intelligent exhaustive search of the state space of the design; the material concerns software model checking for verifying synchronous distributed applications (Sagar Chaki, Carnegie Mellon University, June 11, 2014). Tool usage and tutorial material are available on the project webpage (http://mcda.googlecode.com).

  8. Economic Models and Algorithms for Distributed Systems

    CERN Document Server

    Neumann, Dirk; Altmann, Jorn; Rana, Omer F

    2009-01-01

    Distributed computing models for sharing resources such as Grids, Peer-to-Peer systems, or voluntary computing are becoming increasingly popular. This book intends to discover fresh avenues of research and amendments to existing technologies, aiming at the successful deployment of commercial distributed systems

  9. Comparison of Two Spatial Optimization Techniques: A Framework to Solve Multiobjective Land Use Distribution Problems

    Science.gov (United States)

    Meyer, Burghard Christian; Lescot, Jean-Marie; Laplana, Ramon

    2009-02-01

    Two spatial optimization approaches, developed from the opposing perspectives of ecological economics and landscape planning and aimed at the definition of new distributions of farming systems and of land use elements, are compared and integrated into a general framework. The first approach, applied to a small river catchment in southwestern France, uses SWAT (Soil and Water Assessment Tool) and a weighted goal programming model in combination with a geographical information system (GIS) for the determination of optimal farming system patterns, based on selected objective functions to minimize deviations from the goals of reducing nitrogen and maintaining income. The second approach, demonstrated in a suburban landscape near Leipzig, Germany, defines a GIS-based predictive habitat model for the search of unfragmented regions suitable for hare populations ( Lepus europaeus), followed by compromise optimization with the aim of planning a new habitat structure distribution for the hare. The multifunctional problem is solved by the integration of the three landscape functions (“production of cereals,” “resistance to soil erosion by water,” and “landscape water retention”). Through the comparison, we propose a framework for the definition of optimal land use patterns based on optimization techniques. The framework includes the main aspects to solve land use distribution problems with the aim of finding the optimal or best land use decisions. It integrates indicators, goals of spatial developments and stakeholders, including weighting, and model tools for the prediction of objective functions and risk assessments. Methodological limits of the uncertainty of data and model outcomes are stressed. The framework clarifies the use of optimization techniques in spatial planning.
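
    The weighted goal programming ingredient of the first approach can be sketched as a toy linear program; the per-hectare coefficients, goals and weights below are invented stand-ins for the SWAT-derived quantities.

        import numpy as np
        from scipy.optimize import linprog

        # Three hypothetical farming systems; per-hectare nitrogen export (kg) and income (EUR).
        nitrogen = np.array([40.0, 25.0, 10.0])
        income = np.array([900.0, 650.0, 300.0])
        area_total = 1000.0                      # hectares to allocate
        N_goal, I_goal = 20_000.0, 600_000.0     # nitrogen and income goals for the catchment
        w_N, w_I = 1.0, 1.0                      # weights on the two goal deviations

        # Variables: x1, x2, x3 (ha), dN_plus (nitrogen excess), dI_minus (income shortfall).
        c = np.r_[np.zeros(3), w_N, w_I]         # minimise the weighted deviations only
        A_ub = np.array([
            np.r_[nitrogen, -1.0, 0.0],          # total nitrogen - dN_plus <= N_goal
            np.r_[-income, 0.0, -1.0],           # -(total income) - dI_minus <= -I_goal
        ])
        b_ub = np.array([N_goal, -I_goal])
        A_eq = np.array([[1.0, 1.0, 1.0, 0.0, 0.0]])
        b_eq = np.array([area_total])

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5)
        print("areas (ha):", np.round(res.x[:3], 1), "deviations:", np.round(res.x[3:], 1))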

  11. Modeling interregional freight flow by distribution systems

    NARCIS (Netherlands)

    Davydenko, I.; Tavasszy, L.A.; Blois, C.J. de

    2013-01-01

    Distribution Centers with a warehousing function have an important influence on the flow of goods from production to consumption, generating substantial goods flow and vehicle movements. This paper extends the classical 4-step freight modeling framework with a logistics chain model, explicitly model

  12. Developments of entropy-stable residual distribution methods for conservation laws I: Scalar problems

    Science.gov (United States)

    Ismail, Farzad; Chizari, Hossain

    2017-02-01

    This paper presents preliminary developments of entropy-stable residual distribution methods for scalar problems. Controlling entropy generation is achieved by formulating an entropy conserved signals distribution coupled with an entropy-stable signals distribution. Numerical results of the entropy-stable residual distribution methods are accurate and comparable with the classic residual distribution methods for steady-state problems. High order accurate extensions for the new method on steady-state problems are also demonstrated. Moreover, the new method preserves second order accuracy on unsteady problems using an explicit time integration scheme. The idea of the multi-dimensional entropy-stable residual distribution method is generic enough to be extended to the system of hyperbolic equations, which will be presented in the sequel of this paper.

  13. Heuristic for solving capacitor allocation problems in electric energy radial distribution networks

    Directory of Open Access Journals (Sweden)

    Maria A. Biagio

    2012-04-01

    Full Text Available The goal of the capacitor allocation problem in radial distribution networks is to minimize technical losses with consequential positive impacts on economic and environmental areas. The main objective is to define the size and location of the capacitors while considering load variations in a given horizon. The mathematical formulation for this planning problem is given by an integer nonlinear mathematical programming model that demands great computational effort to be solved. With the goal of solving this problem, this paper proposes a methodology that is composed of heuristics and Tabu Search procedures. The methodology presented explores the characteristics of the network's reactive loads to identify regions where local and intensive search procedures should be performed. A description of the proposed methodology and an analysis of the computational results obtained on several test systems, including actual systems, are presented. The solutions reached are as good as or better than those indicated by well-referenced methodologies. The proposed technique is simple to use and does not require calibrating an excessive number of parameters, making it an attractive alternative for companies involved in the planning of radial distribution networks.
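
    A bare-bones sketch of the tabu search ingredient; the loss evaluation is a made-up stand-in for the load-flow-based technical losses over the load horizon used in the paper.

        import itertools, random

        NODES = 6                      # illustrative feeder size
        SIZES = (0, 300, 600)          # candidate capacitor sizes per node (kvar)
        random.seed(1)

        def losses(plan):
            # Placeholder objective: in the paper this would be technical losses from a
            # radial load flow over the load horizon.  Here: an invented convex proxy
            # with a preferred reactive support level per node plus an installation cost.
            target = (600, 300, 300, 0, 600, 0)
            return (sum((q - t) ** 2 * 1e-4 for q, t in zip(plan, target))
                    + sum(0.05 for q in plan if q > 0))

        def neighbours(plan):
            for i, s in itertools.product(range(NODES), SIZES):
                if plan[i] != s:
                    yield (i, s), plan[:i] + (s,) + plan[i + 1:]

        current = best = tuple(random.choice(SIZES) for _ in range(NODES))
        tabu, tenure = {}, 5
        for it in range(100):
            (i, s), cand = min(
                (mp for mp in neighbours(current)
                 if tabu.get(mp[0], -1) < it or losses(mp[1]) < losses(best)),  # aspiration
                key=lambda mp: losses(mp[1]))
            tabu[(i, current[i])] = it + tenure   # forbid moving node i back to its old size
            current = cand
            if losses(current) < losses(best):
                best = current

        print("best plan (kvar per node):", best, " proxy losses:", round(losses(best), 3))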

  15. Ebola Virus Infection Modelling and Identifiability Problems

    Directory of Open Access Journals (Sweden)

    Van-Kinh Nguyen

    2015-04-01

    Full Text Available The recent outbreaks of Ebola virus (EBOV) infections have underlined the impact of the virus as a major threat for human health. Due to the high biosafety classification of EBOV (level 4), basic research is very limited. Therefore, the development of new avenues of thinking to advance quantitative comprehension of the virus and its interaction with the host cells is urgently needed to tackle this lethal disease. Mathematical modelling of the EBOV dynamics can be instrumental to interpret Ebola infection kinetics on quantitative grounds. To the best of our knowledge, a mathematical modelling approach to unravel the interaction between EBOV and the host cells is still missing. In this paper, a mathematical model based on differential equations is used to represent the basic interactions between EBOV and wild-type Vero cells in vitro. Parameter sets that represent infectivity of pathogens are estimated for EBOV infection and compared with influenza virus infection kinetics. The average time to infect wild-type Vero cells is longer for EBOV than for influenza infection. Simulation results suggest that the slow infecting time of EBOV could be compensated for by its efficient replication. This study reveals several identifiability problems and what kind of experiments are necessary to advance the quantification of EBOV infection. A first mathematical approach of EBOV dynamics and the estimation of standard parameters in viral infection kinetics is the key contribution of this work, paving the way for future modelling work on EBOV infection.
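
    A generic target-cell-limited infection model of the kind typically used for such in vitro kinetics; the equations and parameter values below are standard placeholders, not the ones estimated in the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Target-cell-limited model: dT/dt = -b*T*V, dI/dt = b*T*V - d*I, dV/dt = p*I - c*V
        b, d, p, c = 1e-7, 0.5, 10.0, 2.0      # illustrative rates, not fitted values

        def rhs(t, y):
            T, I, V = y
            return [-b * T * V, b * T * V - d * I, p * I - c * V]

        y0 = [1e6, 0.0, 10.0]                  # target cells, infected cells, free virus
        sol = solve_ivp(rhs, (0, 14), y0, t_eval=np.linspace(0, 14, 15))

        for t, V in zip(sol.t, sol.y[2]):
            print(f"day {t:4.1f}: viral load ~ {V:12.1f}")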

  16. On the Inverse EEG Problem for a 1D Current Distribution

    Directory of Open Access Journals (Sweden)

    George Dassios

    2014-01-01

    Full Text Available Albanese and Monk (2006) have shown that it is impossible to recover the support of a three-dimensional current distribution within a conducting medium from the knowledge of the electric potential outside the conductor. On the other hand, it is possible to obtain the support of a current which lives in a subspace of dimension lower than three. In the present work, we actually demonstrate this possibility by assuming a one-dimensional current distribution supported on a small line segment having arbitrary location and orientation within a uniform spherical conductor. The immediate representation of this problem refers to the inverse problem of electroencephalography (EEG) with a linear current distribution and the spherical model of the brain-head system. It is shown that the support is identified through the solution of a nonlinear algebraic system which is investigated thoroughly. Numerical tests show that this system has exactly one real solution. Exact solutions are analytically obtained for a couple of special cases.

  17. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  18. A stylized model for wealth distribution

    CERN Document Server

    Düring, Bertram; Scalas, Enrico

    2016-01-01

    The recent book by T. Piketty (Capital in the Twenty-First Century) promoted the important issue of wealth inequality. In the last twenty years, physicists and mathematicians developed models to derive the wealth distribution using discrete and continuous stochastic processes (random exchange models) as well as related Boltzmann-type kinetic equations. In this literature, the usual concept of equilibrium in Economics is either replaced or completed by statistical equilibrium. In order to illustrate this activity with a concrete example, we present a stylised random exchange model for the distribution of wealth. We first discuss a fully discrete version (a Markov chain with finite state space). We then study its discrete-time continuous-state-space version and we prove the existence of the equilibrium distribution. Finally, we discuss the connection of these models with Boltzmann-like kinetic equations for the marginal distribution of wealth. This paper shows in practice how it is possible to start from a fini...

  19. Mathematical problems in modeling artificial heart

    Directory of Open Access Journals (Sweden)

    Ahmed N. U.

    1995-01-01

    Full Text Available In this paper we discuss some problems arising in mathematical modeling of artificial hearts. The hydrodynamics of blood flow in an artificial heart chamber is governed by the Navier-Stokes equation, coupled with an equation of hyperbolic type subject to moving boundary conditions. The flow is induced by the motion of a diaphragm (membrane) inside the heart chamber attached to a part of the boundary and driven by a compressor (pusher plate). On one side of the diaphragm is the blood and on the other side is the compressor fluid. For a complete mathematical model it is necessary to write the equation of motion of the diaphragm and all the dynamic couplings that exist between its position, velocity and the blood flow in the heart chamber. This gives rise to a system of coupled nonlinear partial differential equations; the Navier-Stokes equation being of parabolic type and the equation for the membrane being of hyperbolic type. The system is completed by introducing all the necessary static and dynamic boundary conditions. The ultimate objective is to control the flow pattern so as to minimize hemolysis (damage to red blood cells) by optimal choice of geometry, and by optimal control of the membrane for a given geometry. The other clinical problems, such as compatibility of the material used in the construction of the heart chamber and the membrane, are not considered in this paper. Also the dynamics of the valve is not considered here, though it is also an important element in the overall design of an artificial heart. We hope to model the valve dynamics in a later paper.

  20. A practical eco-environmental distribution network planning model including fuel cells and non-renewable distributed energy resources

    Energy Technology Data Exchange (ETDEWEB)

    Soroudi, Alireza [Department of Electrical Engineering, Sharif University of Technology, Tehran (Iran); Ehsan, Mehdi [Department of Electrical Engineering, Sharif University of Technology, Tehran (Iran); Center of Excellence in Power System Management and Control, Sharif University of Technology, Tehran (Iran); Zareipour, Hamidreza [Department of Electrical and Computer Engineering, University of Calgary, Alberta (Canada)

    2011-01-15

    This paper presents a long-term dynamic multi-objective planning model for distribution network expansion along with distributed energy options. The proposed model optimizes two objectives, namely costs and emissions, and determines the optimal schemes of sizing, placement and especially the dynamics (i.e., timing) of investments in distributed generation units and network reinforcements over the planning period. An efficient two-stage heuristic method is proposed to solve the formulated planning problem. The effectiveness of the proposed model is demonstrated by applying it to a distribution network and comparing the simulation results with other methods and models. (author)

  1. The inverse gravimetric problem in gravity modelling

    Science.gov (United States)

    Sanso, F.; Tscherning, C. C.

    1989-01-01

    One of the main purposes of geodesy is to determine the gravity field of the Earth in the space outside its physical surface. This purpose can be pursued without any particular knowledge of the internal density even if the exact shape of the physical surface of the Earth is not known, though this seems to entangle the two domains, as it was in the old Stokes theory before the appearance of Molodensky's approach. Nevertheless, even when large, dense and homogeneous data sets are available, it was always recognized that subtracting from the gravity field the effect of the outer layer of the masses (topographic effect) yields a much smoother field. This is obviously more important when only a sparse, poor-quality data set is available, so that any smoothing of the gravity field helps in interpolating between the data without raising the modeling error; this approach is generally followed because it has become very cheap in terms of computing time since the appearance of spectral techniques. The mathematical description of the Inverse Gravimetric Problem (IGP) is dominated mainly by two principles, which in loose terms can be formulated as follows: the knowledge of the external gravity field determines mainly the lateral variations of the density; and the deeper the density anomaly giving rise to a gravity anomaly, the more improperly posed is the problem of recovering the former from the latter. The statistical relation between rho and n (and its inverse) is also investigated in its general form, proving that degree cross-covariances have to be introduced to describe the behavior of rho. The problem of the simultaneous estimate of a spherical anomalous potential and of the external, topographic masses is addressed, criticizing the choice of the mixed collocation approach.

  2. Limit order placement as an utility maximization problem and the origin of power law distribution of limit order prices

    CERN Document Server

    Lillo, F

    2006-01-01

    I consider the problem of the optimal limit order price of a financial asset in the framework of the maximization of the utility function of the investor. The analytical solution of the problem gives insight into the origin of the recently empirically observed power-law distribution of limit order prices. In the framework of the model, the most likely proximate cause of this power law is a power-law heterogeneity of traders' investment time horizons.

  3. Modelling lifetime data with multivariate Tweedie distribution

    Science.gov (United States)

    Nor, Siti Rohani Mohd; Yusof, Fadhilah; Bahar, Arifah

    2017-05-01

    This study aims to measure the dependence between individual lifetimes by applying the multivariate Tweedie distribution to lifetime data. Incorporating dependence between lifetimes in the mortality model is a relatively new idea that has a significant impact on the risk of an annuity portfolio, and it runs against standard actuarial methods, which assume independence between lifetimes. Hence, this paper applies the Tweedie family of distributions to the portfolio of lifetimes to induce dependence between lives. The Tweedie distribution is chosen since it contains symmetric and non-symmetric, as well as light-tailed and heavy-tailed, distributions. Parameter estimation is modified in order to fit the Tweedie distribution to the data; this procedure is developed using the method of moments. In addition, a comparison stage is used to check the adequacy of the fit between the observed and expected mortality. Finally, the importance of including systematic mortality risk in the model is justified by Pearson's chi-squared test.

  4. New discrete element models for elastoplastic problems

    Institute of Scientific and Technical Information of China (English)

    Ming Cheng; Weifu Liu; Kaixin Liu

    2009-01-01

    The discrete element method (DEM) has attractive features for problems with severe damage, but the lack of a theoretical basis for continuum behavior, especially nonlinear behavior, has seriously restricted its application. The present study proposes a new approach to developing the DEM as a general and robust technique for modeling the elastoplastic behavior of solid materials. New types of connective links between elements are proposed, the inter-element parameters are theoretically determined based on the principle of energy equivalence, and a yield criterion and a flow rule for the DEM are given for describing the nonlinear behavior of materials. Moreover, a numerical scheme, which can be applied to modeling the behavior of a continuum as well as the transformation from a continuum to a discontinuum, is obtained by introducing a fracture criterion and a contact model into the DEM. The elastoplastic stress wave propagation and the tensile failure process of a steel plate are simulated, and the numerical results agree well with those obtained from the finite element method (FEM) and the corresponding experiment, thus demonstrating the accuracy and efficiency of the DEM scheme.

  6. Population distribution models: species distributions are better modeled using biologically relevant data partitions.

    Science.gov (United States)

    Gonzalez, Sergio C; Soto-Centeno, J Angel; Reed, David L

    2011-09-19

    Predicting the geographic distribution of widespread species through modeling is problematic for several reasons including high rates of omission errors. One potential source of error for modeling widespread species is that subspecies and/or races of species are frequently pooled for analyses, which may mask biologically relevant spatial variation within the distribution of a single widespread species. We contrast a presence-only maximum entropy model for the widely distributed oldfield mouse (Peromyscus polionotus) that includes all available presence locations for this species, with two composite maximum entropy models. The composite models either subdivided the total species distribution into four geographic quadrants or by fifteen subspecies to capture spatially relevant variation in P. polionotus distributions. Despite high Area Under the ROC Curve (AUC) values for all models, the composite species distribution model of P. polionotus generated from individual subspecies models represented the known distribution of the species much better than did the models produced by partitioning data into geographic quadrants or modeling the whole species as a single unit. Because the AUC values failed to describe the differences in the predictability of the three modeling strategies, we suggest using omission curves in addition to AUC values to assess model performance. Dividing the data of a widespread species into biologically relevant partitions greatly increased the performance of our distribution model; therefore, this approach may prove to be quite practical and informative for a wide range of modeling applications.
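
    The suggested omission-curve check is simply the fraction of withheld presence records whose predicted suitability falls below each threshold; a sketch with invented suitability scores follows.

        import numpy as np

        # Predicted habitat suitability (0-1) at withheld presence localities for two models.
        suitability = {
            "composite (subspecies) model": np.array([0.81, 0.64, 0.72, 0.55, 0.90, 0.47, 0.69, 0.77]),
            "pooled single-species model ": np.array([0.62, 0.31, 0.58, 0.22, 0.74, 0.15, 0.49, 0.66]),
        }
        thresholds = np.linspace(0.0, 1.0, 11)

        for name, scores in suitability.items():
            # omission rate = share of test presences predicted unsuitable at each threshold
            rates = [(scores < t).mean() for t in thresholds]
            print(name, " ".join(f"{r:.2f}" for r in rates))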

  7. Statistical model with a standard Γ distribution

    Science.gov (United States)

    Patriarca, Marco; Chakraborti, Anirban; Kaski, Kimmo

    2004-07-01

    We study a statistical model consisting of N basic units which interact with each other by exchanging a physical entity, according to a given microscopic random law, depending on a parameter λ. We focus on the equilibrium or stationary distribution of the entity exchanged and verify through numerical fitting of the simulation data that the final form of the equilibrium distribution is that of a standard Gamma distribution. The model can be interpreted as a simple closed economy in which economic agents trade money and a saving criterion is fixed by the saving propensity λ. Alternatively, from the nature of the equilibrium distribution, we show that the model can also be interpreted as a perfect gas at an effective temperature T(λ), where particles exchange energy in a space with an effective dimension D(λ).
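
    The microscopic exchange rule summarised above can be written in a few lines; this sketch follows the common saving-propensity formulation and its conventions may differ from the authors'.

        import numpy as np

        rng = np.random.default_rng(42)
        N, lam, trades = 500, 0.5, 200_000   # agents, saving propensity, pairwise trades

        m = np.ones(N)                       # every agent starts with one unit of money
        for _ in range(trades):
            i, j = rng.choice(N, size=2, replace=False)
            eps = rng.random()
            pool = (1.0 - lam) * (m[i] + m[j])   # the part of the joint money put up for trade
            m[i], m[j] = lam * m[i] + eps * pool, lam * m[j] + (1.0 - eps) * pool

        # A Gamma distribution with unit mean has variance 1/shape, so the sample
        # variance gives a quick read on the effective shape parameter of the fit.
        print(f"mean = {m.mean():.3f}  variance = {m.var():.3f}  implied shape ~ {1.0 / m.var():.2f}")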

  8. Statistical model with a standard Gamma distribution

    Science.gov (United States)

    Chakraborti, Anirban; Patriarca, Marco

    2005-03-01

    We study a statistical model consisting of N basic units which interact with each other by exchanging a physical entity, according to a given microscopic random law, depending on a parameter λ. We focus on the equilibrium or stationary distribution of the entity exchanged and verify through numerical fitting of the simulation data that the final form of the equilibrium distribution is that of a standard Gamma distribution. The model can be interpreted as a simple closed economy in which economic agents trade money and a saving criterion is fixed by the saving propensity λ. Alternatively, from the nature of the equilibrium distribution, we show that the model can also be interpreted as a perfect gas at an effective temperature T (λ), where particles exchange energy in a space with an effective dimension D (λ).

  9. An estimation of distribution algorithm (EDA) variant with QGA for Flowshop scheduling problem

    Science.gov (United States)

    Latif, Muhammad Shahid; Hong, Zhou; Ali, Amir

    2014-04-01

    In this research article, a hybrid approach is presented which is based on well-known meta-heuristic algorithms. This study is based on the integration of a Quantum Genetic Algorithm (QGA) and an Estimation of Distribution Algorithm (EDA), for simplicity called Q-EDA, for flowshop scheduling, a well-known NP-hard problem, while focusing on the total flow time minimization criterion. A relatively new method, known as angle rotations, has been adopted for encoding the job sequences in the flowshop instead of random keys, so the QGA becomes more efficient. Further, the EDA has been integrated to update the population of the QGA by building a probability model. This probabilistic model is built from the best individuals obtained after several repetitions of the proposed Q-EDA approach and is used to generate new candidate solutions. As both heuristics are based on probabilistic characteristics, they exhibit excellent learning capability and have a minimal chance of being trapped in local optima. The results obtained during this study are presented and compared with contemporary approaches in the literature. The current hybrid Q-EDA has been implemented on different benchmark problems, and the experiments showed better convergence and results. It is concluded that the hybrid Q-EDA algorithm can generally produce better results when implemented for the Flowshop Scheduling Problem (FSSP).
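
    The fitness evaluation underlying such a scheme, the total flow time of a job permutation in a permutation flowshop, is a short recursion. This is a sketch only; the QGA/EDA machinery is not shown and the processing times are random.

        import random

        def total_flow_time(perm, proc):
            """proc[j][m] = processing time of job j on machine m; jobs run in order perm."""
            n_machines = len(proc[0])
            completion = [0.0] * n_machines      # completion times of the previous job
            total = 0.0
            for job in perm:
                for m in range(n_machines):
                    ready = completion[m - 1] if m > 0 else 0.0   # this job on the previous machine
                    completion[m] = max(completion[m], ready) + proc[job][m]
                total += completion[-1]          # flow time = completion on the last machine
            return total

        random.seed(0)
        proc = [[random.randint(1, 9) for _ in range(3)] for _ in range(5)]   # 5 jobs, 3 machines
        perm = list(range(5))
        print("identity permutation :", total_flow_time(perm, proc))
        print("reversed permutation :", total_flow_time(perm[::-1], proc))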

  11. Modeling Distributed Multimedia Synchronization with DSPN

    Institute of Scientific and Technical Information of China (English)

    宋军; 顾冠群

    1998-01-01

    Multimedia synchronization is the essential technology for the integration of multimedia in distributed multimedia systems. The multimedia synchronization model has been recognized by many researchers as a prerequisite for the implementation of multimedia synchronization. In distributed multimedia systems, the characteristics of multimedia synchronization are dynamic, and the key medium has priority in multimedia synchronization. The previously proposed multimedia synchronization models cannot meet these requirements, so a new multimedia dynamic synchronization model, DSPN, based on timed Petri nets, has been designed in this paper. This model not only lets the distributed multimedia system keep multimedia synchronization in a more precise and effective manner according to the runtime situation of the system, but also allows the user to interact with the presentation of the multimedia.

  12. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    Science.gov (United States)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-05-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
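
    The likelihood-based evaluation idea can be made concrete with a Gaussian predictive density over held-out readings; all numbers below are invented.

        import numpy as np
        from scipy.stats import norm

        # Held-out gas concentration readings and two competing models' predictions
        # (mean and predictive variance) at the same locations.  Values are invented.
        y = np.array([0.12, 0.30, 0.05, 0.44, 0.21])
        models = {
            "mean only, fixed variance ": (np.full(5, 0.20), np.full(5, 0.02)),
            "mean + predictive variance": (np.array([0.15, 0.28, 0.07, 0.40, 0.23]),
                                           np.array([0.010, 0.030, 0.005, 0.050, 0.015])),
        }

        for name, (mu, var) in models.items():
            ll = norm.logpdf(y, loc=mu, scale=np.sqrt(var)).sum()
            print(f"{name}: held-out log-likelihood = {ll:7.2f}")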

  13. Hydrodynamic multibead modeling: problems, pitfalls, and solutions. 2. Proteins.

    Science.gov (United States)

    Zipper, Peter; Durchschlag, Helmut

    2010-02-01

    Hydrodynamic models of proteins have been generated by recourse to crystallographic data and applying a filling model strategy in order to predict both hydrodynamic and scattering parameters. The design of accurate protein models retaining the majority of the molecule peculiarities requires usage of many beads and consideration of many serious problems. Applying the expertise obtained with ellipsoid models and pilot tests on proteins, we succeeded in constructing precise models for several anhydrous and hydrated proteins of different shape, size, and complexity. The models constructed consist of many beads (up to about 11,000) for the protein constituents (atoms, amino acid residues, groups) and preferentially bound water molecules. While in the case of small proteins, parameter predictions are straightforward, computations for giant proteins necessitate drastic reductions of the number of initially available beads. Among several auxiliary programs, our advanced hydration programs, HYDCRYST and HYDMODEL, and modified versions of García de la Torre's program HYDRO were successfully employed. This allowed the generation of realistic protein models by imaging details of their fine structure and enabled the prediction of reliable molecular parameters including intrinsic viscosities. The appearance of the models and the agreement of molecular properties and distance distribution functions p(r) of unreduced and reduced models can be used for a meticulous inspection of the data obtained.

  14. Modelling refrigerant distribution in minichannel evaporators

    DEFF Research Database (Denmark)

    Brix, Wiebke

    This thesis is concerned with numerical modelling of flow distribution in a minichannel evaporator for air-conditioning. The study investigates the impact of non-uniform airflow and non-uniform distribution of the liquid and vapour phases in the inlet manifold on the refrigerant mass flow distribution and on the cooling capacity of the evaporator. A one dimensional, steady state model of a minichannel evaporator is used for the study. An evaporator consisting of two multiport minichannels in parallel is used as a test case and two different refrigerants, R134a and R744 (CO2), are applied... ... to be equal, results in a cooling capacity very close to the optimum. A sensitivity study considering parameter changes shows that the course of the pressure gradient in the channel is significant, considering the magnitude of the capacity reductions due to non-uniform liquid and vapour distribution and non...

  15. Harmony Theory: Problem Solving, Parallel Cognitive Models, and Thermal Physics.

    Science.gov (United States)

    Smolensky, Paul; Riley, Mary S.

    This document consists of three papers. The first, "A Parallel Model of (Sequential) Problem Solving," describes a parallel model designed to solve a class of relatively simple problems from elementary physics and discusses implications for models of problem-solving in general. It is shown that one of the most salient features of problem…

  16. Distributed lag models for hydrological data.

    Science.gov (United States)

    Rushworth, Alastair M; Bowman, Adrian W; Brewer, Mark J; Langan, Simon J

    2013-06-01

    The distributed lag model (DLM), used most prominently in air pollution studies, finds application wherever the effect of a covariate is delayed and distributed through time. We specify modified formulations of DLMs to provide computationally attractive, flexible varying-coefficient models that are applicable in any setting in which lagged covariates are regressed on a time-dependent response. We investigate the application of such models to rainfall and river flow and in particular their role in understanding the impact of hidden variables at work in river systems. We apply two models to data from a Scottish mountain river, and we fit them to simulated data to check the efficacy of our modelling approach. During heavy rainfall conditions, changes in the influence of rainfall on flow arise through a complex interaction between antecedent ground wetness and a time-delay in rainfall. The models identify subtle changes in responsiveness to rainfall, particularly in the location of peak influence in the lag structure.
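
    A minimal finite distributed lag regression of flow on lagged rainfall, fitted by ordinary least squares on synthetic data; the varying-coefficient DLMs of the paper are considerably richer than this sketch.

        import numpy as np

        rng = np.random.default_rng(3)
        T, L = 500, 7                                   # days of record, maximum lag (days)

        rain = rng.gamma(shape=0.4, scale=5.0, size=T)  # synthetic daily rainfall
        true_lag = np.array([0.05, 0.30, 0.45, 0.25, 0.10, 0.04, 0.01])   # delayed response
        flow = np.array([np.dot(true_lag, rain[t - L + 1:t + 1][::-1]) for t in range(L - 1, T)])
        flow = flow + rng.normal(scale=0.2, size=flow.size)

        # Design matrix: column k holds rainfall lagged by k days.
        X = np.column_stack([rain[L - 1 - k:T - k] for k in range(L)])
        beta, *_ = np.linalg.lstsq(X, flow, rcond=None)

        for k, (b, t) in enumerate(zip(beta, true_lag)):
            print(f"lag {k} d: estimated {b:5.2f}   (true {t:4.2f})")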

  17. Modeling the pion Generalized Parton Distribution

    CERN Document Server

    Mezrag, C

    2015-01-01

    We compute the pion Generalized Parton Distribution (GPD) in a valence dressed quarks approach. We model the Mellin moments of the GPD using Ansätze for Green functions inspired by the numerical solutions of the Dyson-Schwinger Equations (DSE) and the Bethe-Salpeter Equation (BSE). Then, the GPD is reconstructed from its Mellin moments using the Double Distribution (DD) formalism. The agreement with available experimental data is very good.

  18. A Study On Distributed Model Predictive Consensus

    CERN Document Server

    Keviczky, Tamas

    2008-01-01

    We investigate convergence properties of a proposed distributed model predictive control (DMPC) scheme, where agents negotiate to compute an optimal consensus point using an incremental subgradient method based on primal decomposition as described in Johansson et al. [2006, 2007]. The objective of the distributed control strategy is to agree upon and achieve an optimal common output value for a group of agents in the presence of constraints on the agent dynamics using local predictive controllers. Stability analysis using a receding horizon implementation of the distributed optimal consensus scheme is performed. Conditions are given under which convergence can be obtained even if the negotiations do not reach full consensus.
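
    The negotiation step can be illustrated with a scalar incremental gradient pass over the agents' local costs. This is a generic sketch of the idea, not the referenced primal-decomposition scheme, and the local cost parameters are invented.

        # Agents negotiate a common set-point theta by cyclic incremental gradient steps
        # on local costs f_i(theta) = 0.5 * (theta - a_i)**2, projected onto a shared box.
        a = [1.0, 4.0, 2.5, 6.0]          # each agent's locally preferred output value
        lo, hi = 0.0, 5.0                 # shared constraint on the consensus value
        theta = 0.0

        for k in range(2000):
            alpha = 1.0 / (k + 1)         # diminishing step size
            for ai in a:                  # one pass over the agents = one negotiation round
                theta -= alpha * (theta - ai)          # gradient step on the local cost
                theta = min(max(theta, lo), hi)        # project onto the constraint set

        optimal = min(max(sum(a) / len(a), lo), hi)    # minimiser of the summed costs
        print(f"negotiated set-point: {theta:.2f}  (centralised optimum: {optimal:.2f})")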

  19. Applications of species distribution modeling to paleobiology

    DEFF Research Database (Denmark)

    Svenning, J.-C.; Fløjgaard, Camilla; A. Marske, Katharine

    2011-01-01

    Species distribution modeling (SDM: statistical and/or mechanistic approaches to the assessment of range determinants and prediction of species occurrence) offers new possibilities for estimating and studying past organism distributions. SDM complements fossil and genetic evidence by providing (i...... the role of Pleistocene glacial refugia in biogeography and evolution, especially in Europe, but also in many other regions. SDM-based approaches are also beginning to contribute to a suite of other research questions, such as historical constraints on current distributions and diversity patterns, the end...

  20. Advanced Distribution Network Modelling with Distributed Energy Resources

    Science.gov (United States)

    O'Connell, Alison

    The addition of new distributed energy resources, such as electric vehicles, photovoltaics, and storage, to low voltage distribution networks means that these networks will undergo major changes in the future. Traditionally, distribution systems would have been a passive part of the wider power system, delivering electricity to the customer and not needing much control or management. However, the introduction of these new technologies may cause unforeseen issues for distribution networks, due to the fact that they were not considered when the networks were originally designed. This thesis examines different types of technologies that may begin to emerge on distribution systems, as well as the resulting challenges that they may impose. Three-phase models of distribution networks are developed and subsequently utilised as test cases. Various management strategies are devised for the purposes of controlling distributed resources from a distribution network perspective. The aim of the management strategies is to mitigate those issues that distributed resources may cause, while also keeping customers' preferences in mind. A rolling optimisation formulation is proposed as an operational tool which can manage distributed resources, while also accounting for the uncertainties that these resources may present. Network sensitivities for a particular feeder are extracted from a three-phase load flow methodology and incorporated into an optimisation. Electric vehicles are the focus of the work, although the method could be applied to other types of resources. The aim is to minimise the cost of electric vehicle charging over a 24-hour time horizon by controlling the charge rates and timings of the vehicles. The results demonstrate the advantage that controlled EV charging can have over an uncontrolled case, as well as the benefits provided by the rolling formulation and updated inputs in terms of cost and energy delivered to customers. Building upon the rolling optimisation, a
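
    The core of one scheduling step, choosing hourly charge rates that minimise cost subject to an energy requirement and a charger limit, reduces to a small linear program. This single-vehicle sketch omits the network sensitivities and uncertainty handling described above, and the prices and limits are invented.

        import numpy as np
        from scipy.optimize import linprog

        hours = 24
        price = 0.10 + 0.08 * np.sin(np.linspace(0.0, 2.0 * np.pi, hours))  # EUR/kWh, synthetic
        p_max = 3.7            # charger limit (kW)
        energy_needed = 20.0   # kWh to deliver over the horizon

        # minimise sum_t price[t]*p[t]*1h  s.t.  sum_t p[t]*1h = energy_needed, 0 <= p[t] <= p_max
        res = linprog(c=price, A_eq=np.ones((1, hours)), b_eq=[energy_needed],
                      bounds=[(0.0, p_max)] * hours)

        schedule = res.x
        print("total cost: %.2f EUR" % float(price @ schedule))
        print("charging hours:", np.nonzero(schedule > 1e-6)[0].tolist())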

  1. Allocation of Capacitors and Voltage Regulators in Unbalanced Distribution Systems: A Multi-objective Problem in Probabilistic Frameworks

    Science.gov (United States)

    Carpinelli, Guido; Noce, Christian; Russo, Angela; Varilone, Pietro

    2014-12-01

    Capacitors and series voltage regulators are used extensively in distribution systems to reduce power losses and improve the voltage profile along the feeders. This paper deals with the problem of contemporaneously choosing optimal locations and sizes for both capacitors and series voltage regulators in three-phase, unbalanced distribution systems. This is a mixed, non-linear, constrained, multi-objective optimization problem that usually is solved in deterministic scenarios. However, distribution systems are stochastic in nature, which can lead to inaccurate deterministic solutions. To take into account the unavoidable uncertainties that affect the input data related to the problem, in this paper, we have formulated and solved the multi-objective optimization problem in probabilistic scenarios. To address the multi-objective optimization problem, algorithms were used in which all the objective functions were combined to form a single function. These algorithms allow us to transform the original multi-objective optimization problem into an equivalent, single-objective, optimization problem, an approach that appeared to be particularly suitable since computational time was an important issue. To further reduce the computational efforts, a linearized form of the equality constraints of the optimization model was used, and a micro-genetic algorithm-based procedure was applied in the solution method.

  2. Smart Card Identification Management Over A Distributed Database Model

    Directory of Open Access Journals (Sweden)

    Olatubosun Olabode

    2011-01-01

    Full Text Available Problem statement: An effective national identification system is a necessity for any national government for the proper implementation and execution of its policies and duties. Approach: Such data can be held in a database relation in a distributed database environment. To date, the Nigerian government has yet to establish an effective and efficient National Identification Management System despite the huge amount of money expended on the project. Results: This article presents a Smart Card Identification Management System over a Distributed Database Model. The model was implemented using a client/server architecture between a server and multiple clients. A programmable smart card to store identification details, including biometric features, was proposed. Among the many variables stored in the smart card are a personal identification number, gender, date of birth, place of birth, place of residence, citizenship, continuously updated information on vital status, and the identity of parents and spouses. Conclusion/Recommendations: A conceptualization of the database structures and architecture of the distributed database model is presented. The designed distributed database model is intended to solve the lingering problems associated with multiple identification in a society.

  3. Distributed Solutions for Loosely Coupled Feasibility Problems Using Proximal Splitting Methods

    DEFF Research Database (Denmark)

    Pakazad, Sina Khoshfetrat; Andersen, Martin Skovgaard; Hansson, Anders

    2014-01-01

    In this paper, we consider convex feasibility problems (CFPs) where the underlying sets are loosely coupled, and we propose several algorithms to solve such problems in a distributed manner. These algorithms are obtained by applying proximal splitting methods to convex minimization reformulations of CFPs. We also put forth distributed convergence tests which enable us to establish feasibility or infeasibility of the problem distributedly, and we provide convergence rate results. Under the assumption that the problem is feasible and boundedly linearly regular, these convergence results are given in terms of the distance of the iterates to the feasible set, which are similar to those of classical projection methods. In case the feasibility problem is infeasible, we provide convergence rate results that concern the convergence of certain error bounds.
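
    A much simpler relative of these algorithms, the method of alternating projections on two coupled sets, illustrates the distance-to-the-feasible-set quantity in which the convergence results are stated (a sketch only, not the paper's proximal splitting methods).

        import numpy as np

        # CFP in R^2: find a point in the box [0,1]^2 that also satisfies x1 + x2 >= 1.8.
        def proj_box(x):
            return np.clip(x, 0.0, 1.0)

        def proj_halfspace(x, a=np.array([1.0, 1.0]), b=1.8):   # {x : a.x >= b}
            viol = b - a @ x
            return x if viol <= 0 else x + viol * a / (a @ a)

        x = np.array([3.0, -2.0])
        for k in range(200):
            x = proj_halfspace(proj_box(x))
            dist = np.linalg.norm(x - proj_box(x))   # x already lies in the halfspace here
            if dist < 1e-9:                          # distance-based convergence test
                break

        print(f"after {k + 1} sweeps: x = {np.round(x, 6)}, distance to feasibility = {dist:.1e}")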

  4. Distributed Bayesian Networks for User Modeling

    DEFF Research Database (Denmark)

    Tedesco, Roberto; Dolog, Peter; Nejdl, Wolfgang

    2006-01-01

    by such adaptive applications are often partial fragments of an overall user model. The fragments have then to be collected and merged into a global user profile. In this paper we investigate and present algorithms able to cope with distributed, fragmented user models – based on Bayesian Networks – in the context of Web-based eLearning platforms. The scenario we are tackling assumes learners who use several systems over time, which are able to create partial Bayesian Networks for user models based on the local system context. In particular, we focus on how to merge these partial user models. Our merge mechanism efficiently combines distributed learner models without the need to exchange internal structure of local Bayesian networks, nor local evidence between the involved platforms.

  5. A First Attempt on the Distributed Prize Collecting Steiner Tree Problem

    OpenAIRE

    Rossetti, Niccolò G., 1985-

    2015-01-01

    The goal of this work is to design and simulate a distributed algorithm to solve a novel variant of the Prize Collecting Steiner Tree (PCST) problem, where nodes are computing entities with only local information and communication abilities. Our approach is to study existing techniques for the centralized PCST, such as the Primal-Dual Integer Program given by Goemans and Williamson (GW-algorithm), and distributed algorithms for the Minimum Spanning Tree (MST) and the minimum Steiner Tree Problem.

  6. Inverse modeling for heat conduction problem in human abdominal phantom.

    Science.gov (United States)

    Huang, Ming; Chen, Wenxi

    2011-01-01

    Noninvasive methods for deep body temperature measurement are based, in principle, on heat equilibrium between the thermal sensor and the target location. However, the measurement position cannot be definitely determined. In this study, a 2-dimensional mathematical model was built based upon some assumptions about the physiological condition of a human abdominal phantom. We evaluated the feasibility of estimating the internal organ temperature distribution from the readings of temperature sensors arranged on the skin surface. This is a typical inverse heat conduction problem (IHCP) and is usually mathematically ill-posed. By integrating some physical and physiological a priori information, we invoked the quasi-linear (QL) method to reconstruct the internal temperature distribution. The solutions of this method were improved by increasing the accuracy of the sensors and adjusting their arrangement on the outer surface, eventually converging accurately to the best state. This study suggests that the QL method is able to reconstruct the internal temperature distribution in this phantom and might be worthy of further study in an anatomically based model.

  7. Models for the Discrete Berth Allocation Problem: A Computational Comparison

    DEFF Research Database (Denmark)

    Buhrkal, Katja; Zuglian, Sara; Røpke, Stefan

    In this paper we consider the problem of allocating arriving ships to discrete berth locations at container terminals. This problem is recognized as one of the most important processes for any container terminal. We review and describe the three main models of the discrete dynamic berth allocation problem, improve the performance of one model, and, through extensive numerical tests, compare all models from a computational perspective. The results indicate that a generalized set-partitioning model outperforms all other existing models.

  8. Models for the discrete berth allocation problem: A computational comparison

    DEFF Research Database (Denmark)

    Buhrkal, Katja Frederik; Zuglian, Sara; Røpke, Stefan;

    2011-01-01

    In this paper we consider the problem of allocating arriving ships to discrete berth locations at container terminals. This problem is recognized as one of the most important processes for any container terminal. We review and describe three main models of the discrete dynamic berth allocation problem, improve the performance of one model, and, through extensive numerical tests, compare all models from a computational perspective. The results indicate that a generalized set-partitioning model outperforms all other existing models.

  9. Some problems of pulsar physics. [magnetospheric plasma model

    Science.gov (United States)

    Arons, J.

    1979-01-01

    The theories of particle acceleration along polar field lines are reviewed, and the total energization of the charge-separated plasma is summarized, when pair creation is absent. The application of these theories and plasma supply to pulsars is discussed, with attention given to the total amount of electron-positron plasma created and its momentum distribution. Various aspects of radiation emission and transport are analyzed, based on a polar current flow model with pair creation, and the phenomenon of marching subpulses is considered. The corotation beaming and the relativistically expanding current sheet models for pulsar emission are also outlined, and the paper concludes with a brief discussion of the relation between the theories of polar flow with pair plasma and the problem of the energization of the Crab Nebula.

  10. The Family Problem: Hints from Heterotic Line Bundle Models

    CERN Document Server

    Constantin, Andrei; Mishra, Challenger

    2015-01-01

    Within the class of heterotic line bundle models, we argue that N=1 vacua which lead to a small number of low-energy chiral families are preferred. By imposing an upper limit on the volume of the internal manifold, as required in order to obtain finite values of the four-dimensional gauge couplings, and validity of the supergravity approximation we show that, for a given manifold, only a finite number of line bundle sums are consistent with supersymmetry. By explicitly scanning over this finite set of line bundle models on certain manifolds we show that, for a sufficiently small volume of the internal manifold, the family number distribution peaks at small values, consistent with three chiral families. The relation between the maximal number of families and the gauge coupling is discussed, which hints towards a possible explanation of the family problem.

  11. EFFECT OF PROBLEM BASED LEARNING MODEL AND CRITICAL THINKING ABILITY ON PROBLEM SOLVING SKILLS

    Directory of Open Access Journals (Sweden)

    Unita S. Zuliani Nasution

    2016-12-01

    Full Text Available The purposes of this research were to analyze the difference in physics problem-solving ability between students taught with the problem-based learning model and students taught with the direct instruction model, the difference in physics problem-solving ability between students with above-average critical thinking ability and students with below-average critical thinking ability, and the interaction between the problem-based learning model and critical thinking ability with respect to students' physics problem-solving ability. This was quasi-experimental research that used critical thinking ability tests and physics problem-solving ability tests as the instruments. The results showed that students' physics problem-solving ability with the problem-based learning model was better than with the direct instruction model, that students with above-average critical thinking ability showed better results than students with below-average critical thinking ability, and that there was an interaction between the problem-based learning model and critical thinking ability in improving students' physics problem-solving ability.

  12. Community Problem-Solving Framed as a Distributed Information Use Environment: Bridging Research and Practice

    Science.gov (United States)

    Durrance, Joan C.; Souden, Maria; Walker, Dana; Fisher, Karen E.

    2006-01-01

    Introduction: This article results from a qualitative study of 1) information behavior in community problem-solving framed as a distributed information use environment and 2) approaches used by a best-practice library to anticipate information needs associated with community problem solving. Method: Several approaches to data collection were…

  13. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  14. Income distribution: An adaptive heterogeneous model

    Science.gov (United States)

    da Silva, L. C.; de Figueirêdo, P. H.

    2014-02-01

    In this communication an adaptive process is introduced into a many-agent model of a closed economic system in order to establish general features of the income distribution. In this new version agents are able to modify their exchange parameter ωi of resources through an adaptive process. The conclusions indicate that assuming an instantaneous learning behavior of all agents, a Γ-distribution for income is reproduced, while a frozen behavior establishes a Pareto distribution for income with an exponent ν=0.94±0.02. A third case occurs when a heterogeneous “inertia” behavior is introduced, leading to a Γ-distribution in the low-income regime and a power-law decay for large income values with an exponent ν=2.05±0.05. This method enables investigation of the flux of resources in the economic environment and also produces bounding values for the Gini index that are comparable with empirical data.
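
    As a rough illustration of the kind of conservative pairwise-exchange dynamics summarized above, the sketch below simulates a closed economy in which each agent commits a fixed fraction ωi of its wealth to every trade. The paper's adaptive learning rule is not reproduced; the agent count, step count, random pot-splitting rule and fixed ωi values are illustrative assumptions only.

```python
import random

def exchange_economy(n_agents=1000, n_steps=200000, seed=0):
    """Toy conservative exchange model: in each pairwise trade, agent i puts
    a fraction omega_i of its wealth on the table and the pooled amount is
    split at random. The omega_i here are fixed (a 'frozen' behavior); the
    paper's adaptive rule for omega_i is not reproduced."""
    rng = random.Random(seed)
    wealth = [1.0] * n_agents
    omega = [rng.random() for _ in range(n_agents)]  # heterogeneous exchange parameters
    for _ in range(n_steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        pot = omega[i] * wealth[i] + omega[j] * wealth[j]
        share = rng.random()
        wealth[i] += share * pot - omega[i] * wealth[i]
        wealth[j] += (1.0 - share) * pot - omega[j] * wealth[j]
    return wealth

if __name__ == "__main__":
    w = exchange_economy()
    print(sum(w))  # total wealth is conserved (closed economy)
```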

  15. Dynamic Distribution Model with Prime Granularity for Parallel Computing

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The dynamic distribution model is one of the best schemes for parallel volume rendering. However, in a homogeneous cluster system, since the granularity is traditionally identical, all processors communicate almost simultaneously and the computation load may lose balance. Due to the problems above, a dynamic distribution model with prime granularity for parallel computing is presented. The granularities of the processors are relatively prime, and related theories are introduced. A high parallel performance can be achieved by minimizing network competition and using a load balancing strategy that ensures all processors finish almost simultaneously. Based on the Master-Slave-Gleaner (MSG) scheme, the parallel Splatting Algorithm for volume rendering is used to test the model on an IBM Cluster 1350 system. The experimental results show that the model can bring a considerable improvement in performance, including computation efficiency, total execution time, speed, and load balancing.

  16. Distributionally Robust Return-Risk Optimization Models and Their Applications

    Directory of Open Access Journals (Sweden)

    Li Yang

    2014-01-01

    Full Text Available Based on the risk control of conditional value-at-risk, distributionally robust return-risk optimization models with box constraints on the random vector are proposed. They describe uncertainty in both the distribution form and the moments (mean and covariance matrix) of the random vector. It is difficult to solve them directly. Using conic duality theory and the minimax theorem, the models are reformulated as semidefinite programming problems, which can be solved by interior point algorithms in polynomial time. An important theoretical basis is therefore provided for applications of the models. Moreover, an application of the models to a practical example of portfolio selection is considered, and the example is evaluated using a historical data set of four stocks. Numerical results show that the proposed methods are robust and the investment strategy is safe.
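
    The abstract above describes reformulating distributionally robust return-risk models as semidefinite programs. The snippet below is only a much simplified stand-in: a worst-case mean-variance portfolio with box uncertainty on the mean, modelled with the cvxpy package. The data (mu_hat, delta, Sigma) and the risk-aversion weight are invented for illustration, and the paper's CVaR-based semidefinite reformulation is not implemented.

```python
import cvxpy as cp
import numpy as np

# Worst-case (box-uncertain mean) mean-variance portfolio: a simplified
# stand-in for the article's distributionally robust models. All numbers
# below are made-up illustrative data, not taken from the article.
mu_hat = np.array([0.08, 0.06, 0.05, 0.07])   # nominal mean returns
delta  = np.array([0.02, 0.01, 0.01, 0.03])   # box half-widths on the mean
Sigma  = np.diag([0.04, 0.02, 0.01, 0.05])    # covariance matrix

w = cp.Variable(4)
worst_case_return = (mu_hat - delta) @ w      # worst case over the box when w >= 0
risk = cp.quad_form(w, Sigma)
prob = cp.Problem(cp.Maximize(worst_case_return - 2.0 * risk),
                  [cp.sum(w) == 1, w >= 0])
prob.solve()
print(w.value)
```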

  17. Comparison of sparse point distribution models

    DEFF Research Database (Denmark)

    Erbou, Søren Gylling Hemmingsen; Vester-Christensen, Martin; Larsen, Rasmus;

    2010-01-01

    This paper compares several methods for obtaining sparse and compact point distribution models suited for data sets containing many variables. These are evaluated on a database consisting of 3D surfaces of a section of the pelvic bone obtained from CT scans of 33 porcine carcasses. The superior...... model w.r.t. sparsity, reconstruction error and interpretability is found to be a varimax rotated model with a threshold applied to small loadings. The models describe the biological variation in the database and are used for developing robotic tools when automating labor intensive procedures...

  18. Hot Water Distribution System Model Enhancements

    Energy Technology Data Exchange (ETDEWEB)

    Hoeschele, M. [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States); Weitzel, E. [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States)

    2012-11-01

    This project involves enhancement of the HWSIM distribution system model to more accurately model pipe heat transfer. Recent laboratory testing efforts have indicated that the modeling of radiant heat transfer effects is needed to accurately characterize piping heat loss. An analytical methodology for integrating radiant heat transfer was implemented with HWSIM. Laboratory test data collected in another project was then used to validate the model for a variety of uninsulated and insulated pipe cases (copper, PEX, and CPVC). Results appear favorable, with typical deviations from lab results less than 8%.

  19. Geometric Algebra Model of Distributed Representations

    CERN Document Server

    Patyk, Agnieszka

    2010-01-01

    Formalism based on GA is an alternative to distributed representation models developed so far --- Smolensky's tensor product, Holographic Reduced Representations (HRR) and Binary Spatter Code (BSC). Convolutions are replaced by geometric products, interpretable in terms of geometry which seems to be the most natural language for visualization of higher concepts. This paper recalls the main ideas behind the GA model and investigates recognition test results using both inner product and a clipped version of matrix representation. The influence of accidental blade equality on recognition is also studied. Finally, the efficiency of the GA model is compared to that of previously developed models.

  20. Modeling Word Burstiness Using the Dirichlet Distribution

    DEFF Research Database (Denmark)

    Madsen, Rasmus Elsborg; Kauchak, David; Elkan, Charles

    2005-01-01

    Multinomial distributions are often used to model text documents. However, they do not capture well the phenomenon that words in a document tend to appear in bursts: if a word appears once, it is more likely to appear again. In this paper, we propose the Dirichlet compound multinomial model (DCM......) as an alternative to the multinomial. The DCM model has one additional degree of freedom, which allows it to capture burstiness. We show experimentally that the DCM is substantially better than the multinomial at modeling text data, measured by perplexity. We also show using three standard document collections...
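
    For readers wanting to see what the Dirichlet compound multinomial looks like in practice, the following sketch evaluates the DCM (Polya) log-likelihood of a single word-count vector; the count vector and parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.special import gammaln

def dcm_log_likelihood(counts, alpha):
    """Log-likelihood of one document (word-count vector) under the Dirichlet
    compound multinomial (Polya) distribution with parameter vector alpha.
    Unlike the multinomial, repeated occurrences of the same word are not
    penalized as heavily, which captures word burstiness."""
    counts = np.asarray(counts, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    n = counts.sum()
    return (gammaln(n + 1) - gammaln(counts + 1).sum()
            + gammaln(alpha.sum()) - gammaln(alpha.sum() + n)
            + (gammaln(counts + alpha) - gammaln(alpha)).sum())

# Example: the counts and alpha below are illustrative only.
print(dcm_log_likelihood([3, 0, 1], [0.2, 0.2, 0.2]))
```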

  1. Exploiting linkage information and problem-specific knowledge in evolutionary distribution network expansion planning

    NARCIS (Netherlands)

    N.H. Luong (Ngoc Hoang); J.A. La Poutré (Han); P.A.N. Bosman (Peter)

    2017-01-01

    textabstractThis article tackles the Distribution Network Expansion Planning (DNEP) problem that has to be solved by distribution network operators to decide which, where, and/or when enhancements to electricity networks should be introduced to satisfy the future power demands. Because of many

  2. Exterior 3D Lamb problem: Harmonic load distributed over a surface

    Science.gov (United States)

    Il'yasov, Kh. Kh.; Kravtsov, A. V.; Kuznetsov, S. V.; Sekerzh-Zen'kovich, S. Ya.

    2016-06-01

    The solutions of the exterior Lamb problem with a distributed harmonic surface load acting on the boundary of an elastic half-space are studied. A load normal to the surface and distributed over the surface as the Poisson kernel is considered. The solution is constructed with the use of integral transforms and the finite-element method.

  3. PROGRAMMING OF METHODS FOR THE NEEDS OF LOGISTICS DISTRIBUTION SOLVING PROBLEMS

    Directory of Open Access Journals (Sweden)

    Andrea Štangová

    2014-06-01

    Full Text Available Logistics has become one of the dominant factors affecting successful management, competitiveness and the mentality of the global economy. Distribution logistics materializes the connection between production and the consumer market. It uses different methodologies and methods of multicriterial evaluation and allocation. This thesis addresses the problem of the costs of securing the distribution of a product. It was therefore relevant to design a software product that would be helpful in solving the problems related to distribution logistics. Elodis, an electronic distribution logistics program, was designed on the basis of a theoretical analysis of the issue of distribution logistics and an analysis of the software products market. The program uses multicriterial evaluation methods to determine the appropriate type, and mathematical and geometrical methods to determine an appropriate allocation, of the distribution center, warehouse and company.

  4. Environmental problems indicator under environmental modeling toward sustainable development

    Directory of Open Access Journals (Sweden)

    P. Sutthichaimethee

    2015-09-01

    Full Text Available This research aims to apply a model to the study and analysis of environmental and natural resource costs created in supply chains of goods and services produced in Thailand, and to propose indicators for the management of environmental problems caused by goods and services production, based on concepts of sustainable production and consumer behavior. The research showed that the highest environmental cost in terms of Natural Resource Materials was from pipelines and gas distribution, while the lowest was for farming coconuts. The highest environmental cost in terms of Energy and Transportation was for iron and steel. The highest environmental cost in the category of Fertilizer and Pesticides was for oil palm. For Sanitation Services, the highest environmental cost was for movie theaters. Overall, the lowest environmental cost for all categories, except Natural Resource Materials, was for petroleum and refineries. Based on the cost index, coconut farming gained the highest Real Benefit for the farm owner, while pipelines and gas distribution had the lowest Real Benefit. If Thailand were to use a similar environmental problem indicator, it could be applied to formulate efficient policy and strategy for the country in three areas, namely social, economic, and environmental development.

  5. Finessing atlas data for species distribution models

    NARCIS (Netherlands)

    Niamir, A.; Skidmore, A.K.; Toxopeus, A.G.; Munoz, A.R.; Real, R.

    2011-01-01

    Aim The spatial resolution of species atlases and therefore resulting model predictions are often too coarse for local applications. Collecting distribution data at a finer resolution for large numbers of species requires a comprehensive sampling effort, making it impractical and expensive. This stu

  6. Finessing atlas data for species distribution models

    NARCIS (Netherlands)

    Niamir, A.; Skidmore, A.K.; Toxopeus, A.G.; Munoz, A.R.; Real, R.

    2011-01-01

    Aim The spatial resolution of species atlases and therefore resulting model predictions are often too coarse for local applications. Collecting distribution data at a finer resolution for large numbers of species requires a comprehensive sampling effort, making it impractical and expensive. This

  7. Modeling utilization distributions in space and time.

    Science.gov (United States)

    Keating, Kim A; Cherry, Steve

    2009-07-01

    W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed.
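
    A minimal sketch of the circular-kernel idea mentioned above is given below: a wrapped Cauchy kernel smooths a circular covariate such as day of year. The concentration parameter, the example dates and the one-dimensional setting are illustrative assumptions; the authors' full four-dimensional product kernel estimator is not reproduced.

```python
import numpy as np

def wrapped_cauchy_kernel(theta, mu, rho):
    """Wrapped Cauchy density on the circle, used here as a smoothing kernel
    for circular covariates such as day of year (theta, mu in radians,
    0 <= rho < 1 controls concentration)."""
    return (1.0 - rho**2) / (2.0 * np.pi * (1.0 + rho**2 - 2.0 * rho * np.cos(theta - mu)))

def circular_kde(theta_grid, observations, rho=0.9):
    """Kernel density estimate of a circular variable obtained by averaging
    wrapped Cauchy kernels centred on the observations (a one-dimensional
    sketch, not the authors' product kernel estimator)."""
    obs = np.asarray(observations)
    return np.mean([wrapped_cauchy_kernel(theta_grid, m, rho) for m in obs], axis=0)

# Illustrative data: days of year converted to angles.
days = np.array([30, 35, 40, 200, 210])
angles = 2.0 * np.pi * days / 365.0
grid = np.linspace(0, 2 * np.pi, 365)
density = circular_kde(grid, angles)
print(density.sum() * (grid[1] - grid[0]))  # close to 1, since the density integrates to one
```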

  8. A Review of Decision Support Models for Global Distribution Network Design and Future Model development

    DEFF Research Database (Denmark)

    Reich, Juri; Kinra, Aseem; Kotzab, Herbert

    We look at the global distribution network design problem and the requirements to solve it. This problem typically involves conflicting goals and a magnitude of interdependent input factors, described by qualitative and quantitative information. Our literature review shows that current models do not offer a comprehensive method that is able to solve the problem in one single decision-making process considering all relevant goals and factors. Thus, we attempt to create such a model using existing methods as building blocks, namely mixed-integer linear programming and the analytical hierarchy process.......

  9. A Reference Model for Distribution Grid Control in the 21st Century

    Energy Technology Data Exchange (ETDEWEB)

    Taft, Jeffrey D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); De Martini, Paul [California Inst. of Technology (CalTech), Pasadena, CA (United States); Kristov, Lorenzo [California Independent System Operator, Folsom, CA (United States)

    2015-07-01

    Intensive changes in the structure of the grid due to the penetration of new technologies, coupled with changing societal needs, are outpacing the capabilities of traditional grid control systems. The gap is widening at an accelerating rate, with the biggest impacts occurring at the distribution level due to the widespread adoption of diverse distribution-connected energy resources (DER). This paper outlines the emerging distribution grid control environment, defines the new distribution control problem, and provides a distribution control reference model. The reference model offers a schematic representation of the problem domain to inform development of system architecture and control solutions for the high-DER electric system.

  10. A Network Model for Parallel Line Balancing Problem

    OpenAIRE

    Recep Benzer; Hadi Gökçen; Tahsin Çetinyokus; Hakan Çerçioglu

    2007-01-01

    Gökçen et al. (2006) have proposed several procedures and a mathematical model for the single-model (product) assembly line balancing (ALB) problem with parallel lines. In the parallel ALB problem, the goal is to balance more than one assembly line together. In this paper, a network model for the parallel ALB problem has been proposed and illustrated on a numerical example. This model is a new approach for parallel ALB and it provides a different point of view for i...

  11. Analysis of Jingdong Mall Logistics Distribution Model

    Science.gov (United States)

    Shao, Kang; Cheng, Feng

    In recent years, the development of electronic commerce in our country has been accelerating. The role of logistics has been highlighted, and more and more electronic commerce enterprises are beginning to realize the importance of logistics to the success or failure of the enterprise. In this paper, the author takes Jingdong Mall as an example, performing a SWOT analysis of the current situation of its self-built logistics system, identifying the problems existing in the current Jingdong Mall logistics distribution and giving appropriate recommendations.

  12. Asynchronous Partial Overlay: A New Algorithm for Solving Distributed Constraint Satisfaction Problems

    CERN Document Server

    Lesser, V R; 10.1613/jair.1786

    2011-01-01

    Distributed Constraint Satisfaction (DCSP) has long been considered an important problem in multi-agent systems research. This is because many real-world problems can be represented as constraint satisfaction and these problems often present themselves in a distributed form. In this article, we present a new complete, distributed algorithm called Asynchronous Partial Overlay (APO) for solving DCSPs that is based on a cooperative mediation process. The primary ideas behind this algorithm are that agents, when acting as a mediator, centralize small, relevant portions of the DCSP, that these centralized subproblems overlap, and that agents increase the size of their subproblems along critical paths within the DCSP as the problem solving unfolds. We present empirical evidence that shows that APO outperforms other known, complete DCSP techniques.

  13. A Multiple Period Problem in Distributed Energy Management Systems Considering CO2 Emissions

    Science.gov (United States)

    Muroda, Yuki; Miyamoto, Toshiyuki; Mori, Kazuyuki; Kitamura, Shoichi; Yamamoto, Takaya

    Consider a special district (group) composed of multiple companies (agents), where each agent responds to an energy demand and has a CO2 emission allowance imposed. A distributed energy management system (DEMS) optimizes the energy consumption of the group through energy trading within the group. In this paper, we extended the energy distribution decision and optimal planning problem in DEMSs from a single-period problem to a multiple-period one. The extension enabled us to consider more realistic constraints such as demand patterns, start-up costs, and minimum running/outage times of equipment. First, we extended the market-oriented programming (MOP) method for deciding the energy distribution to the multiple-period problem. The bidding strategy of each agent is formulated as a 0-1 mixed non-linear programming problem. Secondly, we proposed decomposing the problem into a set of single-period problems in order to solve it faster. To decompose the problem, we proposed a CO2 emission allowance distribution method, called the EP method. Computational experiments confirmed that the proposed method was able to produce solutions whose group costs were close to the lower-bound group costs. In addition, we verified that a reduction in computational time was achieved without losing solution quality by using the EP method.

  14. Exact solutions to model surface and volume charge distributions

    Science.gov (United States)

    Mukhopadhyay, S.; Majumdar, N.; Bhattacharya, P.; Jash, A.; Bhattacharya, D. S.

    2016-10-01

    Many important problems in several branches of science and technology deal with charges distributed along a line, over a surface and within a volume. Recently, we have made use of new exact analytic solutions of surface charge distributions to develop the nearly exact Boundary Element Method (neBEM) toolkit. This 3D solver has been successful in removing some of the major drawbacks of the otherwise elegant Green's function approach and has been found to be very accurate throughout the computational domain, including near- and far-field regions. Use of truly distributed singularities (in contrast to nodally concentrated ones) on rectangular and right-triangular elements used for discretizing any three-dimensional geometry has essentially removed many of the numerical and physical singularities associated with the conventional BEM. In this work, we will present this toolkit and the development of several numerical models of space charge based on exact closed-form expressions. In one of the models, Particles on Surface (ParSur), the space charge inside a small elemental volume of any arbitrary shape is represented as being smeared on several surfaces representing the volume. From the studies, it can be concluded that the ParSur model is successful in getting the estimates close to those obtained using the first-principles, especially close to and within the cell. In the paper, we will show initial applications of ParSur and other models in problems related to high energy physics.

  15. Distributed parallel computing in stochastic modeling of groundwater systems.

    Science.gov (United States)

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
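
    The batch-processing idea described above can be sketched with Python's standard multiprocessing module, used here purely as a stand-in for the Java Parallel Processing Framework coupled to MODFLOW; the dummy realization function and the numbers of realizations and workers are illustrative assumptions.

```python
from multiprocessing import Pool
import random

def run_realization(seed):
    """Stand-in for one stochastic model run (e.g. generating a random
    conductivity field and calling a groundwater flow simulator). Here it
    just returns a pseudo-random summary value."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(10_000))

if __name__ == "__main__":
    seeds = range(500)                      # 500 Monte Carlo realizations
    with Pool(processes=8) as pool:         # distribute over 8 worker processes
        results = pool.map(run_realization, seeds)
    print(len(results), sum(results) / len(results))
```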

  16. Modeling Mosquito Distribution. Impact of the Landscape

    Science.gov (United States)

    Dumont, Y.

    2011-09-01

    In order to use vector control tools, like insecticides, and mechanical control efficiently, it is necessary to provide mosquito density estimates and the mosquito distribution, taking into account the environment and entomological knowledge. Mosquito dispersal modeling, together with a compartmental approach, leads to a quasilinear parabolic system. Using the time splitting approach and appropriate numerical methods for each operator, we construct a reliable numerical scheme. Considering various landscapes, we show that the environment can have a strong influence on mosquito distribution and, thus, on whether vector control is efficient or not.

  17. Distributed Global Function Model Finding for Wireless Sensor Network Data

    Directory of Open Access Journals (Sweden)

    Song Deng

    2016-01-01

    Full Text Available Function model finding has become an important tool for the analysis of data collected from wireless sensor networks (WSNs). With the development of WSNs, a large number of sensors have been widely deployed, so the collected data are both distributed and massive. For distributed and massive sensor data, traditional centralized function model finding algorithms lead to a significant decrease in performance. To solve this problem, this paper proposes a distributed global function model finding algorithm for wireless sensor network data (DGFMF-WSND). In DGFMF-WSND, on the basis of gene expression programming (GEP), an adaptive population generation strategy based on sub-population associated evolution is applied to improve the convergence speed of GEP. Secondly, to generate a global function model from distributed wireless sensor network data, the paper provides a global model generation algorithm based on unconstrained nonlinear least squares. Four representative datasets are used to evaluate the performance of the proposed algorithm. The comparative results show that the improved GEP with the adaptive population generation strategy outperforms all other algorithms in average convergence speed, time consumption, R-square value, and prediction accuracy. Meanwhile, the experimental results also show that DGFMF-WSND has a clear advantage in terms of time consumption and fitting error. Moreover, with increasing dataset size, DGFMF-WSND also demonstrates good speed-up and scale-up ratios.

  18. The problem of margin calculation and its reduction via the p-median problem model

    NARCIS (Netherlands)

    Goldengorin, B.; Krushynskyi, D.; Kuzmenko, V.; Mastorakis, NE; Demiralp, M; Mladenov,; Bojkovic, Z

    2009-01-01

    The paper deals with a model for calculation of the regulatory margin on brokerage accounts. The model is based on the p-Median problem (PMP) that is known to be NP-hard. We use a pseudo-Boolean representation of the PMP and propose several problem size reduction and preprocessing techniques. Our co
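
    For orientation, the p-median problem underlying the margin-calculation model can be stated in a few lines of brute-force code, usable only for tiny instances; the paper's pseudo-Boolean representation and size-reduction techniques are not shown. The cost matrix below is invented for illustration.

```python
from itertools import combinations

def p_median(cost, p):
    """Brute-force p-median: choose p facility locations (columns) minimizing
    the total cost of assigning every client (row) to its cheapest open
    facility. Exhaustive enumeration, so only suitable for toy instances."""
    n_clients, n_sites = len(cost), len(cost[0])
    best = (float("inf"), None)
    for sites in combinations(range(n_sites), p):
        total = sum(min(cost[i][j] for j in sites) for i in range(n_clients))
        best = min(best, (total, sites))
    return best

# Illustrative cost matrix (clients x candidate sites), not from the paper.
cost = [[2, 7, 9],
        [5, 3, 8],
        [6, 4, 1],
        [8, 2, 5]]
print(p_median(cost, p=2))
```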

  19. The Effect of Problem Solving and Problem Posing Models and Innate Ability to Students Achievement

    Directory of Open Access Journals (Sweden)

    Ratna Kartika Irawati

    2015-04-01

    Full Text Available The Effect of the Problem Solving and Problem Posing Models and Innate Ability on Student Learning Outcomes. Abstract: Understanding chemistry concepts, which are abstract in nature, requires higher order thinking skills, yet chemistry instruction has not encouraged students to use higher order thinking skills. The use of the Problem Solving and Problem Posing learning models, taking the innate ability of the students into account, is expected to resolve this issue. This study aims to determine which learning model is effective in improving the achievement of students with different levels of innate ability. The study used a quasi-experimental design. The research data consisted of a class quiz/test comprising 14 multiple-choice questions and 5 essay questions, and the data were analyzed with two-way ANOVA. The results showed that Problem Posing is more effective than Problem Solving in improving student achievement, that students with a high level of innate ability obtain better learning outcomes than students with a low level of innate ability after the Problem Solving and Problem Posing models are applied, and further that Problem Solving and Problem Posing are more suitable for students with a high level of innate ability. Key Words: problem solving, problem posing, higher order thinking skills, innate ability, learning outcomes

  20. A unified constructive network model for problem-solving.

    Science.gov (United States)

    Takahashi, Y

    1996-01-01

    We develop a neural network model that relieves time-consuming trial-and-error computer experiments usually performed in problem-solving with networks where problems, including the traveling salesman problem, pattern matching and pattern classification/learning, are formulated as optimization problems with constraint. First, we specify and uniquely distinguish the model as a set of constituent functions that should comply with restrictive conditions. Next, we demonstrate that it is unified, i.e., it yields most current networks. Finally, we verify that it is constructive, that is, we show a standard method that systematically constructs from a given optimization problem a particular network in that model to solve it.

  1. Distributed static linear Gaussian models using consensus.

    Science.gov (United States)

    Belanovic, Pavle; Valcarcel Macua, Sergio; Zazo, Santiago

    2012-10-01

    Algorithms for distributed agreement are a powerful means for formulating distributed versions of existing centralized algorithms. We present a toolkit for this task and show how it can be used systematically to design fully distributed algorithms for static linear Gaussian models, including principal component analysis, factor analysis, and probabilistic principal component analysis. These algorithms do not rely on a fusion center, require only low-volume local (1-hop neighborhood) communications, and are thus efficient, scalable, and robust. We show how they are also guaranteed to asymptotically converge to the same solution as the corresponding existing centralized algorithms. Finally, we illustrate the functioning of our algorithms on two examples, and examine the inherent cost-performance trade-off.
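
    A minimal sketch of the distributed-agreement building block referred to above: average consensus by repeated one-hop weighted averaging. The four-node path graph, Metropolis weights and initial values are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Average consensus: each node repeatedly replaces its value by a weighted
# average of its neighbours' values, converging to the network-wide mean.
edges = [(0, 1), (1, 2), (2, 3)]          # a 4-node path graph
deg = {0: 1, 1: 2, 2: 2, 3: 1}

W = np.eye(4)
for i, j in edges:
    w = 1.0 / (1 + max(deg[i], deg[j]))   # Metropolis-Hastings weights
    W[i, j] = W[j, i] = w
    W[i, i] -= w
    W[j, j] -= w

x = np.array([4.0, 0.0, 2.0, 6.0])        # initial local values (mean = 3)
for _ in range(100):
    x = W @ x                              # one round of 1-hop communication
print(x)                                   # all entries close to 3.0
```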

  2. Building a generalized distributed system model

    Science.gov (United States)

    Mukkamala, R.

    1992-01-01

    The key elements in the second year (1991-92) of our project are: (1) implementation of the distributed system prototype; (2) successful passing of the candidacy examination and a PhD proposal acceptance by the funded student; (3) design of storage efficient schemes for replicated distributed systems; and (4) modeling of gracefully degrading reliable computing systems. In the third year of the project (1992-93), we propose to: (1) complete the testing of the prototype; (2) enhance the functionality of the modules by enabling the experimentation with more complex protocols; (3) use the prototype to verify the theoretically predicted performance of locking protocols, etc.; and (4) work on issues related to real-time distributed systems. This should result in efficient protocols for these systems.

  3. On Problems of Multicomponent System Maintenance Modelling

    Institute of Scientific and Technical Information of China (English)

    Tomasz Nowakowski; Sylwia Werbinka

    2009-01-01

    We present an overview of some recent developments in the area of mathematical modeling of maintenance decisions for multi-unit systems. The emphasis is on three main groups of multicomponent maintenance optimization models: the block replacement models, group maintenance models, and opportunistic maintenance models. Moreover, an example of a two-unit system maintenance process is provided in order to compare various maintenance policies.

  4. Model-based segmentation of medical imagery by matching distributions.

    Science.gov (United States)

    Freedman, Daniel; Radke, Richard J; Zhang, Tao; Jeong, Yongwon; Lovelock, D Michael; Chen, George T Y

    2005-03-01

    The segmentation of deformable objects from three-dimensional (3-D) images is an important and challenging problem, especially in the context of medical imagery. We present a new segmentation algorithm based on matching probability distributions of photometric variables that incorporates learned shape and appearance models for the objects of interest. The main innovation over similar approaches is that there is no need to compute a pixelwise correspondence between the model and the image. This allows for a fast, principled algorithm. We present promising results on difficult imagery for 3-D computed tomography images of the male pelvis for the purpose of image-guided radiotherapy of the prostate.

  5. Fuzzy Approximate Model for Distributed Thermal Solar Collectors Control

    KAUST Repository

    Elmetennani, Shahrazed

    2014-07-01

    This paper deals with the problem of controlling concentrated solar collectors, where the objective consists of making the outlet temperature of the collector track a desired reference. The performance of the novel approximate model based on fuzzy theory, which has been introduced by the authors in [1], is evaluated in comparison to other methods in the literature. The proposed approximation is a low-order state representation derived from the physical distributed model. It reproduces the temperature transfer dynamics through the collectors accurately and allows the simplification of the control design. Simulation results show interesting performance of the proposed controller.

  6. Mechanical modeling of porous oxide fuel pellet A Test Problem

    Energy Technology Data Exchange (ETDEWEB)

    Nukala, Phani K [ORNL; Barai, Pallab [ORNL; Simunovic, Srdjan [ORNL; Ott, Larry J [ORNL

    2009-10-01

    A poro-elasto-plastic material model has been developed to capture the response of oxide fuels inside the nuclear reactors under operating conditions. Behavior of the oxide fuel and variation in void volume fraction under mechanical loading as predicted by the developed model has been reported in this article. The significant effect of void volume fraction on the overall stress distribution of the fuel pellet has also been described. An important oxide fuel issue that can have significant impact on the fuel performance is the mechanical response of oxide fuel pellet and clad system. Specifically, modeling the thermo-mechanical response of the fuel pellet in terms of its thermal expansion, mechanical deformation, swelling due to void formation and evolution, and the eventual contact of the fuel with the clad is of significant interest in understanding the fuel-clad mechanical interaction (FCMI). These phenomena are nonlinear and coupled since reduction in the fuel-clad gap affects thermal conductivity of the gap, which in turn affects temperature distribution within the fuel and the material properties of the fuel. Consequently, in order to accurately capture fuel-clad gap closure, we need to account for fuel swelling due to generation, retention, and evolution of fission gas in addition to the usual thermal expansion and mechanical deformation. Both fuel chemistry and microstructure also have a significant effect on the nucleation and growth of fission gas bubbles. Fuel-clad gap closure leading to eventual contact of the fuel with the clad introduces significant stresses in the clad, which makes thermo-mechanical response of the clad even more relevant. The overall aim of this test problem is to incorporate the above features in order to accurately capture fuel-clad mechanical interaction. Because of the complex nature of the problem, a series of test problems with increasing multi-physics coupling features, modeling accuracy, and complexity are defined with the

  7. Some Identification Problems in the Cointegrated Vector Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren

    2010-01-01

    The paper analyses some identification problems in the cointegrated vector autoregressive model. A criterion for identification by linear restrictions on individual relations is given. The asymptotic distribution of the estimators of α and β is derived when they are identified by linear restrictions...... on β, and when they are identified by linear restrictions on α. It is shown that, in the latter case, a component of is asymptotically Gaussian. Finally we discuss identification of shocks by introducing the contemporaneous and permanent effect of a shock and the distinction between permanent...... and transitory shocks, which allows one to identify permanent shocks from the long-run variance and transitory shocks from the short-run variance....

  8. The two-model problem in rational decision making

    NARCIS (Netherlands)

    Boumans, Marcel

    2011-01-01

    A model of a decision problem frames that problem in three dimensions: sample space, target probability and information structure. Each specific model imposes a specific rational decision. As a result, different models may impose different, even contradictory, rational decisions, creating choice ‘an

  9. Robustness of a Distributed Knowledge Management Model

    DEFF Research Database (Denmark)

    Pedersen, Mogens Kuhn; Holm Larsen, Michael

    2003-01-01

    ) and contract-based knowledge exchange do not obtain network effectiveness because of prohibitive transaction costs in reducing uncertainty, we suggest a robust model for peer-produced knowledge within a distributed setting. The peer-produced knowledge exchange model relies upon a double loop knowledge conversion...... with symmetric incentives in a network since the production of actor-specific knowledge makes any knowledge appropriation by use of property rights by the actors irrelevant. Without property rights in knowledge the actor network generates opportunity for incentive symmetry over a period of time. The model merges specific...... knowledge with knowledge from other actors into a decision support system specific for each actor in the network in recognition of actor role differences. The article suggests a set of 9 static and 5 dynamic propositions for the model to maintain symmetric incentives between different actor networks. The model...

  10. Task Scheduling problem in distributed systems considering communication cost and precedence by population-based ACO

    Directory of Open Access Journals (Sweden)

    Hossein Erfani

    2012-09-01

    Full Text Available Given the rapid growth of distributed systems and their broad spectrum of use, proposing solutions for controlling and optimizing task execution procedures is one of the most important issues. Task scheduling in distributed systems plays a determining role in improving efficiency in applications such as communication, routing, production planning and project management. The most important goals of a good schedule are minimizing the makespan and the average waiting time; however, previous efforts have usually focused on minimizing the makespan only. This article presents and analyzes a new method for the task scheduling problem based on the Ant Colony Optimization (ACO) algorithm that takes precedence constraints and communication costs into account. In the proposed method, in addition to the finish time, the average waiting time and the number of processors needed are also optimized. Using a new heuristic list, an ant-colony-based algorithm is proposed. The results, compared with the latest similar random-search-based models, prove the higher efficiency of the algorithm.

  11. A New Maximum Entropy Estimation of Distribution Algorithm to Solve Uncertain Information Job-shop Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Lu Lin

    2009-10-01

    Full Text Available The Estimation of Distribution Algorithm (EDA) is a kind of population-based evolutionary algorithm: by collecting statistics from the best individuals of the current population, an EDA constructs a probability distribution model and then samples that model to produce the next generation. To avoid the NP-hard problem of searching for an optimum network structure within the EDA, a new Maximum Entropy Estimation of Distribution Algorithm (MEEDA) is provided. The algorithm takes Jaynes' principle as its basis, uses the maximum entropy of the random variables to estimate their minimum-bias probability distribution, and then regards this distribution as the evolution model of the algorithm, which produces optimal or near-optimal solutions. The paper then presents a rough programming model for the job shop scheduling problem under uncertain information. The method overcomes the defect of traditional methods that require pre-set characteristics or precisely described attributes, designs a multi-objective optimization mechanism, and expands the application space of rough sets to job shop scheduling under an uncertain information environment. Due to the complexity of the proposed model, traditional algorithms have a low capability of producing a feasible solution, so MEEDA is used in order to obtain a solution within a reasonable amount of time. Machine flexibility in processing operations is assumed in order to decrease the complexity of the proposed model. Muth and Thompson's benchmark problems are used to verify and validate the proposed rough programming model and its algorithm. The computational results obtained by MEEDA are compared with a GA, and the comparison proves the effectiveness of MEEDA for the job shop scheduling problem under an uncertain information environment.

  12. Ranking multivariate GARCH models by problem dimension

    NARCIS (Netherlands)

    M. Caporin (Massimiliano); M.J. McAleer (Michael)

    2010-01-01

    textabstractIn the last 15 years, several Multivariate GARCH (MGARCH) models have appeared in the literature. The two most widely known and used are the Scalar BEKK model of Engle and Kroner (1995) and Ding and Engle (2001), and the DCC model of Engle (2002). Some recent research has begun to examin

  13. Nonparametric Estimation of Distributions in Random Effects Models

    KAUST Repository

    Hart, Jeffrey D.

    2011-01-01

    We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.

  14. A dynamic distribution model for combat logistics

    OpenAIRE

    Gue, Kevin R.

    1999-01-01

    New warfare doctrine for the U.S. Marine Corps emphasizes small, highly mobile forces supported from the sea, rather than from large, land based supply points. The goal of logistics planners is to support these forces with as little inventory on land as possible. We show how to configure the land based distribution system over time to support a given battle plan with minimum inventory. Logistics planners could use the model to support tactical or operational decision making.

  15. Logical Mapping: An Intermedia Synchronization Model for Multimedia Distributed Systems

    Directory of Open Access Journals (Sweden)

    Saul E. Pomares Hernandez

    2008-12-01

    Full Text Available The preservation of temporal dependencies among different media data, such as text, still images, video and audio, which have simultaneous distributed sources as their origin, is an open research area and an important issue for emerging distributed multimedia systems, such as Teleimmersion, Telemedicine, and IPTV. Although there are several works oriented to satisfying temporal dependencies in distributed multimedia systems, they are far from resolving the problem. In this paper we propose a logical synchronization model able to specify at runtime any kind of temporal relationship among the distributed multimedia data involved in a temporal scenario. The synchronization model is based on a new concept that we call the logical mapping. A logical mapping, in general terms, translates a temporal relation based on a timeline so that it is expressed according to its causal dependencies. The logical mappings allow us to avoid the use of global references, such as a wall clock and shared memory. We note that the proposed intermedia synchronization model does not require previous knowledge of when, nor of how long, the media involved in a temporal scenario are executed. Finally, in order to show the viability of the proposed model, a synchronization approach is presented.

  16. Optimal Portfolios in Wishart Models and Effects of Discrete Rebalancing on Portfolio Distribution and Strategy Selection

    OpenAIRE

    Li, Zejing

    2012-01-01

    This dissertation is mainly devoted to the research of two problems - the continuous-time portfolio optimization in different Wishart models and the effects of discrete rebalancing on portfolio wealth distribution and optimal portfolio strategy.

  17. A Multiobjective Stochastic Production-Distribution Planning Problem in an Uncertain Environment Considering Risk and Workers Productivity

    Directory of Open Access Journals (Sweden)

    S. M. J. Mirzapour Al-e-Hashem

    2011-01-01

    Full Text Available A multi-objective two-stage stochastic programming model is proposed to deal with a multi-period, multi-product, multi-site production-distribution planning problem over a midterm planning horizon. The presented model involves the majority of supply chain cost parameters, such as transportation cost, inventory holding cost, shortage cost and production cost. Moreover, aspects such as lead time, outsourcing, employment, dismissal, worker productivity and training are considered. Due to the uncertain nature of the supply chain, it is assumed that cost parameters and demand fluctuations are random variables that follow a pre-defined probability distribution. To develop a robust stochastic model, an additional objective function is added to the traditional production-distribution planning problem. Thus, our multi-objective model includes (i) the minimization of the expected total cost of the supply chain, (ii) the minimization of the variance of the total cost of the supply chain and (iii) the maximization of worker productivity through training courses that could be held during the planning horizon. The proposed model is then solved by applying a hybrid algorithm that combines a Monte Carlo sampling method, a modified ε-constraint method and the L-shaped method. Finally, a numerical example is solved to demonstrate the validity of the model as well as the efficiency of the hybrid algorithm.

  18. Smooth finite-dimensional approximations of distributed optimization problems via control discretization

    Science.gov (United States)

    Chernov, A. V.

    2013-12-01

    Approximating finite-dimensional mathematical programming problems are studied that arise from piecewise constant discretization of controls in the optimization of distributed systems of a fairly broad class. The smoothness of the approximating problems is established. Gradient formulas are derived that make use of the analytical solution of the original control system and its adjoint, thus providing an opportunity for algorithmic separation of numerical optimization and the task of solving a controlled initial-boundary value problem. The approximating problems are proved to converge to the original optimization problem with respect to the functional as the discretization is refined. The application of the approach to optimization problems is illustrated by solving the semilinear wave equation controlled by applying an integral criterion. The results of numerical experiments are analyzed.

  19. Spatio-temporal Modeling of Mosquito Distribution

    Science.gov (United States)

    Dumont, Y.; Dufourd, C.

    2011-11-01

    We consider a quasilinear parabolic system to model mosquito displacement. In order to use vector control tools, like insecticides, and mechanical control efficiently, it is necessary to provide density estimates of mosquito populations, taking into account the environment and entomological knowledge. After a brief introduction to mosquito dispersal modeling, we present some theoretical results. Then, considering a compartmental approach, we get a quasilinear system of PDEs. Using the time splitting approach and appropriate numerical methods for each operator, we construct a reliable numerical scheme. Considering vector control scenarios, we show that the environment can have a strong influence on mosquito distribution and on the efficiency of vector control tools.

  20. Modelling of skin exposure from distributed sources

    DEFF Research Database (Denmark)

    Fogh, C.L.; Andersson, Kasper Grann

    2000-01-01

    A simple model of indoor air pollution concentrations was used together with experimental results on deposition velocities to skin to calculate the skin dose from an outdoor plume of contaminants. The primary pathway was considered to be direct deposition to the skin from a homogeneously distribu...... distributed air source. The model has been used to show that skin deposition was a significant dose contributor, for example when compared to the inhalation dose. (C) 2000 British Occupational Hygiene Society, Published by Elsevier Science Ltd. All rights reserved....

  1. Modelling refrigerant distribution in microchannel evaporators

    DEFF Research Database (Denmark)

    Brix, Wiebke; Kærn, Martin Ryhl; Elmegaard, Brian

    2009-01-01

    The effects of refrigerant maldistribution in parallel evaporator channels on the heat exchanger performance are investigated numerically. For this purpose a 1D steady state model of refrigerant R134a evaporating in a microchannel tube is built and validated against other evaporator models. A study...... of the refrigerant distribution is carried out for two channels in parallel and for two different cases. In the first case maldistribution of the inlet quality into the channels is considered, and in the second case a non-uniform airflow on the secondary side is considered. In both cases the total mixed superheat...

  2. Fastest Distributed Consensus Averaging Problem on Perfect and Complete n-ary Tree networks

    CERN Document Server

    Jafarizadeh, Saber

    2010-01-01

    Solving the fastest distributed consensus averaging problem (i.e., finding weights on the edges to minimize the second-largest eigenvalue modulus of the weight matrix) over networks with different topologies is one of the primary areas of research in the field of sensor networks, and one of the well-known networks in this context is the tree network. Here in this work we present an analytical solution for the problem of the fastest distributed consensus averaging algorithm by means of stratification and semidefinite programming, for two particular types of tree networks, namely perfect and complete n-ary tree networks. Our method in this paper is based on the convexity of the fastest distributed consensus averaging problem, and on inductive comparison of the characteristic polynomials initiated by slackness conditions in order to find the optimal weights. Also the optimal weights for the edges of certain types of branches, such as perfect and complete n-ary tree branches, are determined independently of the rest of the network.
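
    To make the objective concrete, the toy sketch below computes the second-largest eigenvalue modulus (SLEM) of the weight matrix of a small star network, a depth-one n-ary tree, as a function of a single edge weight, and locates the minimizing weight by grid search. The star topology, the single shared weight and the grid search are illustrative simplifications of the analytical treatment in the paper.

```python
import numpy as np

def slem(w, n=3):
    """Second-largest eigenvalue modulus of the weight matrix of a star with
    n leaves when every edge carries the same weight w (a toy stand-in for
    the perfect/complete n-ary trees treated analytically in the paper)."""
    N = n + 1
    W = np.eye(N)
    for leaf in range(1, N):
        W[0, leaf] = W[leaf, 0] = w
        W[0, 0] -= w
        W[leaf, leaf] -= w
    J = np.ones((N, N)) / N          # projecting out the consensus direction
    return np.max(np.abs(np.linalg.eigvalsh(W - J)))

# Crude grid search for the edge weight minimizing the SLEM.
ws = np.linspace(0.01, 0.99, 99)
best = min(ws, key=slem)
print(best, slem(best))              # for 3 leaves: w = 0.4, SLEM = 0.6
```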

  3. Toward Modeling the Intrinsic Complexity of Test Problems

    Science.gov (United States)

    Shoufan, Abdulhadi

    2017-01-01

    The concept of intrinsic complexity explains why different problems of the same type, tackled by the same problem solver, can require different times to solve and yield solutions of different quality. This paper proposes a general four-step approach that can be used to establish a model for the intrinsic complexity of a problem class in terms of…

  4. A first passage problem and its applications to the analysis of a class of stochastic models

    Directory of Open Access Journals (Sweden)

    Lev Abolnikov

    1992-01-01

    Full Text Available A problem of the first passage of a cumulative random process with generally distributed discrete or continuous increments over a fixed level is considered in the article as an essential part of the analysis of a class of stochastic models (bulk queueing systems, inventory control and dam models.

  5. Exponential distribution-based genetic algorithm for solving mixed-integer bilevel programming problems

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Two classes of mixed-integer nonlinear bilevel programming problems are discussed. One is that the follower's functions are separable with respect to the follower's variables, and the other is that the follower's functions are convex if the follower's variables are not restricted to integers. A genetic algorithm based on an exponential distribution is proposed for the aforementioned problems. First, for each fixed leader's variable x, it is proved that the optimal solution y of the follower's mixed-integer programming can be obtained by solving associated relaxed problems, and, according to the convexity of the functions involved, a simplified branch and bound approach is given to solve the follower's programming for the second class of problems. Furthermore, based on an exponential distribution with parameter λ, a new crossover operator is designed in which the best individuals are used to generate better offspring of crossover. The simulation results illustrate that the proposed algorithm is efficient and robust.

  6. Mitigation of Power Quality Problems in Grid-Interactive Distributed Generation System

    Science.gov (United States)

    Bhende, C. N.; Kalam, A.; Malla, S. G.

    2016-04-01

    Having an inter-tie between the low/medium voltage grid and distributed generation (DG) exposes both to power quality (PQ) problems created by each other. This paper addresses various PQ problems that arise due to the integration of DG with the grid. The major PQ problems are due to the unbalanced and non-linear load connected at the DG, unbalanced voltage variations on the transmission line and unbalanced grid voltages, which severely affect the performance of the system. To mitigate the above mentioned PQ problems, a novel integrated control of the distribution static shunt compensator (DSTATCOM) is presented in this paper. The DSTATCOM control helps in reducing the unbalance factor of the PCC voltage. It also eliminates harmonics from the line currents and makes them balanced. Moreover, the DSTATCOM supplies the reactive power required by the load locally and hence the grid need not supply the reactive power. To show the efficacy of the proposed controller, several operating conditions are considered and verified through simulation using MATLAB/SIMULINK.

  7. "Ranking Multivariate GARCH Models by Problem Dimension"

    OpenAIRE

    Caporin, Massimiliano; McAleer, Michael

    2010-01-01

    In the last 15 years, several Multivariate GARCH (MGARCH) models have appeared in the literature. The two most widely known and used are the Scalar BEKK model of Engle and Kroner (1995) and Ding and Engle (2001), and the DCC model of Engle (2002). Some recent research has begun to examine MGARCH specifications in terms of their out-of-sample forecasting performance. In this paper, we provide an empirical comparison of a set of MGARCH models, namely BEKK, DCC, Corrected DCC (cDCC) ...

  8. Solving linear integer programming problems by a novel neural model.

    Science.gov (United States)

    Cavalieri, S

    1999-02-01

    The paper deals with integer linear programming problems. As is well known, these are extremely complex problems, even when the number of integer variables is quite low. The literature provides examples of various methods to solve such problems, some of which are of a heuristic nature. This paper proposes an alternative strategy based on the Hopfield neural network. The advantage of the strategy essentially lies in the fact that hardware implementation of the neural model allows the time required to obtain a solution to be independent of the size of the problem to be solved. The paper presents a particular class of integer linear programming problems, including well-known problems such as the Travelling Salesman Problem and the Set Covering Problem. After a brief description of this class of problems, it is demonstrated that the original Hopfield model is incapable of supplying valid solutions. This is attributed to the presence of constant bias currents in the dynamics of the neural model. A demonstration of this is given and then a novel neural model is presented which continues to be based on the same architecture as the Hopfield model, but introduces modifications thanks to which the integer linear programming problems presented can be solved. Some numerical examples and concluding remarks highlight the solving capacity of the novel neural model.
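
    For orientation, the sketch below shows only the textbook discrete Hopfield dynamics descending a quadratic energy (the standard model that the paper argues is insufficient because of bias currents); the paper's modified neural model is not reproduced here.

        # Textbook discrete Hopfield dynamics for a quadratic energy
        # E(x) = -0.5 * x^T W x - b^T x with binary states x_i in {0, 1}.
        import numpy as np

        def hopfield_descent(W, b, x0, sweeps=50):
            x = x0.copy()
            n = len(x)
            for _ in range(sweeps):
                for i in np.random.permutation(n):    # asynchronous updates
                    field = W[i] @ x + b[i]
                    x[i] = 1 if field > 0 else 0
            return x

        # tiny example: two mutually inhibiting units with positive bias
        W = np.array([[0.0, -2.0], [-2.0, 0.0]])
        b = np.array([1.0, 1.0])
        print(hopfield_descent(W, b, np.array([1, 1])))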

  9. Vehicle Routing Problem with Soft Time Windows Based on Improved Genetic Algorithm for Fruits and Vegetables Distribution

    Directory of Open Access Journals (Sweden)

    Peiqing Li

    2015-01-01

    Full Text Available Fresh fruits and vegetables, perishable by nature, are subject to additional deterioration and bruising in the distribution process due to vibration and shock caused by road irregularities. A nonlinear mathematical model was developed that considered not only the vehicle routing problem with time windows but also the effect of road irregularities on the bruising of fresh fruits and vegetables. The main objective of this work was to obtain the optimal distribution routes for fresh fruits and vegetables considering different road classes with the least amount of logistics costs. An improved genetic algorithm was used to solve the problem. A fruit delivery route among the 13 cities in Jiangsu Province was used as a real analysis case. The simulation results showed that the vehicle routing problem with time windows, considering road irregularities and different classes of toll roads, can significantly influence total delivery costs compared with traditional VRP models. The comparison of the four models' predicted total costs with the actual total distribution cost showed that the improved genetic algorithm is superior to the Group-based pattern, CW pattern, and O-X type cross pattern.
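
    The sketch below illustrates only the soft-time-window idea (earliness and lateness are penalised rather than forbidden); the penalty rates and the road-irregularity/bruising term of the paper's model are assumptions and are not reproduced here.

        # Minimal route-cost evaluation with soft time windows.
        def route_cost(route, travel_time, windows, service=0.0,
                       early_penalty=0.5, late_penalty=2.0):
            """route: list of customer ids; travel_time[(i, j)]: minutes between stops;
            windows[i]: (earliest, latest) allowed arrival times at customer i."""
            t, cost, prev = 0.0, 0.0, 0            # 0 is the depot
            for c in route:
                t += travel_time[(prev, c)]
                cost += travel_time[(prev, c)]     # travel component of the cost
                earliest, latest = windows[c]
                if t < earliest:
                    cost += early_penalty * (earliest - t)
                elif t > latest:
                    cost += late_penalty * (t - latest)
                t += service
                prev = c
            return cost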

  10. Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.

    Science.gov (United States)

    Renner, Ian W; Warton, David I

    2013-03-01

    Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature. Copyright © 2013, The International Biometric Society.
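
    A minimal sketch of the Poisson-regression view on gridded presence data is given below, assuming per-cell counts and a log cell-area offset; the variable names and simulated data are placeholders, not material from the paper.

        # Hedged sketch: Poisson regression on per-cell counts with a log-area offset.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        covariates = rng.normal(size=(500, 3))            # e.g. climate layers per grid cell
        cell_area = np.full(500, 1.0)                     # equal-area cells in this toy example
        rate = np.exp(0.2 + covariates @ np.array([0.8, -0.5, 0.3]))
        counts = rng.poisson(rate * cell_area)            # simulated presence counts

        X = sm.add_constant(covariates)
        fit = sm.GLM(counts, X, family=sm.families.Poisson(),
                     offset=np.log(cell_area)).fit()
        print(fit.params)                                  # the intercept absorbs the area scale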

  11. ZERODUR strength modeling with Weibull statistical distributions

    Science.gov (United States)

    Hartmann, Peter

    2016-07-01

    The decisive influence on the breakage strength of brittle materials such as the low expansion glass ceramic ZERODUR is the surface condition. For polished or etched surfaces it is essential whether micro cracks are present and how deep they are. Ground surfaces have many micro cracks caused by the generation process. Here only the depths of the micro cracks are relevant. In any case, the presence and depths of micro cracks are statistical in nature. The Weibull distribution is the model used traditionally for the representation of such data sets. It is based on the weakest-link ansatz. The use of the two- or three-parameter Weibull distribution for data representation and reliability prediction depends on the underlying crack generation mechanisms. Before choosing the model for a specific evaluation, some checks should be done. Is there only one mechanism present or is it to be expected that an additional mechanism might contribute deviating results? For ground surfaces the main mechanism is the diamond grains' action on the surface. However, grains breaking from their bonding might be moved by the tool across the surface, introducing a slightly deeper crack. It is not to be expected that these scratches follow the same statistical distribution as the grinding process. Hence, their description with the same distribution parameters is not adequate. Before including them a dedicated discussion should be performed. If there is additional information available influencing the selection of the model, for example the existence of a maximum crack depth, this should be taken into account also. Micro cracks introduced by small diamond grains on tools working with limited forces cannot be arbitrarily deep. For data obtained with such surfaces the existence of a threshold breakage stress should be part of the hypothesis. This leads to the use of the three parameter Weibull distribution. A differentiation based on the data set alone without preexisting information is possible but requires a
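
    The choice between the two- and three-parameter Weibull models can be sketched with SciPy as below; the strength values are illustrative placeholders, not measured ZERODUR data.

        # Sketch: fitting two- and three-parameter Weibull models to breakage-stress data.
        import numpy as np
        from scipy import stats

        stresses = np.array([48.1, 52.3, 55.0, 57.2, 59.8,
                             61.4, 63.0, 66.5, 70.2, 74.9])   # MPa, illustrative only

        # two-parameter Weibull: threshold (location) fixed at zero
        shape2, loc2, scale2 = stats.weibull_min.fit(stresses, floc=0)

        # three-parameter Weibull: a free location parameter acts as a threshold stress
        shape3, loc3, scale3 = stats.weibull_min.fit(stresses)

        print(shape2, scale2)
        print(shape3, loc3, scale3)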

  12. Distributing Working Memory Resources and the Use of External Representations on Problem Solving

    OpenAIRE

    大塚, 一徳; 宮谷, 真人

    2007-01-01

    This study examines how problem solvers distribute working memory resources over internal and external representations. Participants played three-dimensional versions of number guessing games. Playing these number guessing games is directly related to the consumption of working memory resources. Participants could arbitrarily use the game record windows, which are the external representations of these games, and thus they could distribute working memory demands over internal working memory resource...

  13. Comparison between fully distributed model and semi-distributed model in urban hydrology modeling

    Science.gov (United States)

    Ichiba, Abdellah; Gires, Auguste; Giangola-Murzyn, Agathe; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe

    2013-04-01

    Water management in urban areas is becoming more and more complex, especially because of a rapid increase of impervious areas. There will also possibly be an increase of extreme precipitation due to climate change. The aims of the devices implemented to handle the large amount of water generated by urban areas, such as storm water retention basins, are usually twofold: to ensure pluvial flood protection and water depollution. These two aims imply opposite management strategies. To optimize the use of these devices there is a need to implement urban hydrological models and improve fine-scale rainfall estimation, which is the most significant input. In this paper we propose to compare two models and their sensitivity to small-scale rainfall variability on a 2.15 km2 urban area located in the County of Val-de-Marne (South-East of Paris, France). The average impervious coefficient is approximately 34%. In this work two types of models are used. The first one is CANOE which is semi-distributed. Such models are widely used by practitioners for urban hydrology modeling and urban water management. Indeed, they are easily configurable and the computation time is reduced, but these models do not take fully into account either the variability of the physical properties or the variability of the precipitation. An alternative is to use distributed models that are harder to configure and require a greater computation time, but they enable a deeper analysis (especially at small scales and upstream) of the processes at stake. We used the Multi-Hydro fully distributed model developed at the Ecole des Ponts ParisTech. It is an interacting core between open source software packages, each of them representing a portion of the water cycle in the urban environment. Four heavy rainfall events that occurred between 2009 and 2011 are analyzed. The data come from the Météo-France radar mosaic and the resolution is 1 km in space and 5 min in time. The closest radar of the Météo-France network is

  14. Problems in indoor mapping and modelling

    NARCIS (Netherlands)

    Zlatanova, S.; Sithole, G.; Nakagawa, M.; Zhu, Q.

    2013-01-01

    Research in support of indoor mapping and modelling (IMM) has been active for over thirty years. This research has come in the form of As-Built surveys, Data structuring, Visualisation techniques, Navigation models and so forth. Much of this research is founded on advancements in photogrammetry, com

  15. Communication Reducing Algorithms for Distributed Hierarchical N-Body Problems with Boundary Distributions

    KAUST Repository

    Abduljabbar, Mustafa

    2017-05-11

    Reduction of communication and efficient partitioning are key issues for achieving scalability in hierarchical N-Body algorithms like Fast Multipole Method (FMM). In the present work, we propose three independent strategies to improve partitioning and reduce communication. First, we show that the conventional wisdom of using space-filling curve partitioning may not work well for boundary integral problems, which constitute a significant portion of FMM’s application user base. We propose an alternative method that modifies orthogonal recursive bisection to relieve the cell-partition misalignment that has kept it from scaling previously. Secondly, we optimize the granularity of communication to find the optimal balance between a bulk-synchronous collective communication of the local essential tree and an RDMA per task per cell. Finally, we take the dynamic sparse data exchange proposed by Hoefler et al. [1] and extend it to a hierarchical sparse data exchange, which is demonstrated at scale to be faster than the MPI library’s MPI_Alltoallv that is commonly used.

  16. Pseudoabsence generation strategies for species distribution models.

    Directory of Open Access Journals (Sweden)

    Brice B Hanberry

    Full Text Available BACKGROUND: Species distribution models require selection of species, study extent and spatial unit, statistical methods, variables, and assessment metrics. If absence data are not available, another important consideration is pseudoabsence generation. Different strategies for pseudoabsence generation can produce varying spatial representation of species. METHODOLOGY: We considered model outcomes from four different strategies for generating pseudoabsences. We generated pseudoabsences randomly by (1) selection from the entire study extent; (2) a two-step process of selection first from the entire study extent, followed by selection of pseudoabsences from areas with predicted probability <25%; (3) selection from plots surveyed without detection of species presence; and (4) a two-step process of selection first for pseudoabsences from plots surveyed without detection of species presence, followed by selection of pseudoabsences from areas with predicted probability <25%. We used Random Forests as our statistical method and sixteen predictor variables to model tree species with at least 150 records from Forest Inventory and Analysis surveys in the Laurentian Mixed Forest province of Minnesota. CONCLUSIONS: Pseudoabsence generation strategy completely affected the area predicted as present for species distribution models and may be one of the most influential determinants of models. All the pseudoabsence strategies produced mean AUC values of at least 0.87. More importantly than accuracy metrics, the two-step strategies over-predicted species presence, due to too much environmental distance between the pseudoabsences and recorded presences, whereas models based on random pseudoabsences under-predicted species presence, due to too little environmental distance between the pseudoabsences and recorded presences. Models using pseudoabsences from surveyed plots produced a balance between areas with high and low predicted probabilities and the strongest
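
    A minimal sketch of the simplest strategy above (random pseudoabsences drawn from the entire study extent, followed by a Random Forests presence/pseudoabsence model) is given below; the environmental arrays are simulated placeholders, not Forest Inventory and Analysis data.

        # Hedged sketch of strategy (1): random pseudoabsences plus Random Forests.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(42)
        presence_env = rng.normal(loc=1.0, size=(150, 16))   # predictors at presence plots
        extent_env = rng.normal(size=(5000, 16))             # predictors over the study extent

        pseudo_idx = rng.choice(len(extent_env), size=len(presence_env), replace=False)
        pseudoabsence_env = extent_env[pseudo_idx]

        X = np.vstack([presence_env, pseudoabsence_env])
        y = np.concatenate([np.ones(len(presence_env)), np.zeros(len(pseudoabsence_env))])

        model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
        predicted_presence = model.predict_proba(extent_env)[:, 1]   # predicted probability map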

  17. Simulation model of load balancing in distributed computing systems

    Science.gov (United States)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high-speed data transfer over the network and the widespread use of software for design and pre-production in mechanical engineering have led to a situation in which large industrial enterprises and small engineering companies implement complex computer systems for the efficient solution of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key models of research, but the system-wide problems of efficient distribution (balancing) of the computational load and of accommodating the input, intermediate and output databases are no less important. The main tasks of this balancing system are load and condition monitoring of compute nodes, and the selection of a node to which to route the user's request in accordance with a predetermined algorithm. Load balancing is one of the most used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system that dynamically changes its infrastructure is an important task.
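
    A minimal sketch of the selection step described above, assuming a simple least-loaded policy and an additive load metric (both illustrative choices, not the algorithm of the paper):

        # Route each incoming request to the currently least-loaded node.
        import heapq

        class LeastLoadedBalancer:
            def __init__(self, nodes):
                self.heap = [(0.0, n) for n in nodes]   # (current_load, node_name)
                heapq.heapify(self.heap)

            def dispatch(self, request_cost):
                load, node = heapq.heappop(self.heap)    # least-loaded node
                heapq.heappush(self.heap, (load + request_cost, node))
                return node

        balancer = LeastLoadedBalancer(["node-1", "node-2", "node-3"])
        for cost in [1.0, 0.5, 2.0, 0.7]:
            print(balancer.dispatch(cost))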

  18. Reconsideration of mass-distribution models

    Directory of Open Access Journals (Sweden)

    Ninković S.

    2014-01-01

    Full Text Available The mass-distribution model proposed by Kuzmin and Veltmann (1973) is revisited. It is subdivided into two models which have a common case. Only one of them is the subject of the present study. The study is focused on the relation between the density ratio (the central one to that corresponding to the core radius) and the total-mass fraction within the core radius. The latter is an increasing function of the former, but it cannot exceed one quarter, which takes place when the density ratio tends to infinity. Therefore, the model is extended by representing the density as a sum of two components. The extension results in the possibility of having a correspondence between an infinite density ratio and a 100% total-mass fraction. The number of parameters in the extended model exceeds that of the original model. Due to this, in the extended model, the correspondence between the density ratio and the total-mass fraction is no longer one-to-one; several values of the total-mass fraction can correspond to the same value of the density ratio. In this way, the extended model could explain the contingency of having two, or more, groups of real stellar systems (subsystems) in the diagram of total-mass fraction versus density ratio. [Project of the Ministry of Science of the Republic of Serbia, No. 176011: Dynamics and Kinematics of Celestial Bodies and Systems]

  19. XRF map identification problems based on a PDE electrodeposition model

    Science.gov (United States)

    Sgura, Ivonne; Bozzini, Benedetto

    2017-04-01

    In this paper we focus on the following map identification problem (MIP): given a morphochemical reaction–diffusion (RD) PDE system modeling an electrodeposition process, we look for a time $t^{*}$, belonging to the transient dynamics, and a set of parameters $\mathbf{p}$, such that the PDE solution for the morphology $h\left(x,y,t^{*};\mathbf{p}\right)$ and for the chemistry $\theta\left(x,y,t^{*};\mathbf{p}\right)$ approximates a given experimental map $M^{*}$. Towards this aim, we introduce a numerical algorithm using singular value decomposition (SVD) and the Frobenius norm to give a measure of the error distance between experimental maps for $h$ and $\theta$ and simulated solutions of the RD-PDE system on a fixed time integration interval. The technique proposed allows quantitative use of microspectroscopy images, such as XRF maps. Specifically, in this work we have modelled the morphology and manganese distributions of nanostructured components of innovative batteries and we have followed their changes resulting from ageing under operating conditions. The availability of quantitative information on the space-time evolution of active materials in terms of model parameters will allow dramatic improvements in knowledge-based optimization of battery fabrication and operation.
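
    A hedged sketch of an SVD/Frobenius-norm error distance between an experimental map and a simulated map is shown below; the exact functional used in the paper is not reproduced, only the two ingredients it mentions.

        # Illustrative error distance combining a relative Frobenius-norm misfit
        # with a misfit of the leading singular values.
        import numpy as np

        def map_distance(M_exp, M_sim, k=5):
            frob = np.linalg.norm(M_exp - M_sim, ord="fro") / np.linalg.norm(M_exp, ord="fro")
            s_exp = np.linalg.svd(M_exp, compute_uv=False)[:k]
            s_sim = np.linalg.svd(M_sim, compute_uv=False)[:k]
            sv_term = np.linalg.norm(s_exp - s_sim) / np.linalg.norm(s_exp)
            return frob + sv_term      # illustrative combination of the two measures

        rng = np.random.default_rng(3)
        M_exp = rng.random((64, 64))
        print(map_distance(M_exp, M_exp + 0.01 * rng.random((64, 64))))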

  20. Some Problems in Using Diffusion Models for New Products.

    Science.gov (United States)

    Bernhardt, Irwin; Mackenzie, Kenneth D.

    This paper analyzes some of the problems of using diffusion models to formulate marketing strategies for new products. Though future work in this area appears justified, many unresolved problems limit its application. There is no theory for adoption and diffusion processes; such a theory is outlined in this paper. The present models are too…

  1. Inverse Modelling Problems in Linear Algebra Undergraduate Courses

    Science.gov (United States)

    Martinez-Luaces, Victor E.

    2013-01-01

    This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…

  2. Applying the General Linear Model to Repeated Measures Problems.

    Science.gov (United States)

    Pohlmann, John T.; McShane, Michael G.

    The purpose of this paper is to demonstrate the use of the general linear model (GLM) in problems with repeated measures on a dependent variable. Such problems include pretest-posttest designs, multitrial designs, and groups by trials designs. For each of these designs, a GLM analysis is demonstrated wherein full models are formed and restrictions…

  4. Solid mechanics theory, modeling, and problems

    CERN Document Server

    Bertram, Albrecht

    2015-01-01

    This textbook offers an introduction to modeling the mechanical behavior of solids within continuum mechanics and thermodynamics. To illustrate the fundamental principles, the book starts with an overview of the most important models in one dimension. Tensor calculus, which is called for in three-dimensional modeling, is concisely presented in the second part of the book. Once the reader is equipped with these essential mathematical tools, the third part of the book develops the foundations of continuum mechanics right from the beginning. Lastly, the book’s fourth part focuses on modeling the mechanics of materials and in particular elasticity, viscoelasticity and plasticity. Intended as an introductory textbook for students and for professionals interested in self-study, it also features numerous worked-out examples to aid in understanding.

  5. Solution of distributive problems with synthesis of radar information fields parameters

    Directory of Open Access Journals (Sweden)

    А. В. Нестеров

    1999-05-01

    Full Text Available The article considers an approach to the solution of the problem of synthesizing radar information field parameters. It is proposed that, as a result of the synthesis, the structure of the location of ground radar components should be determined. The optimal location of the radar information system is supposed to be determined from the results of solving the distribution problems. Three sets of methods are considered: linear programming, non-linear programming, and scanning theory. The distinctive features of each method, their advantages and disadvantages, and the groups of determining parameters are described; advice is also given on the use of each particular approach.

  6. Ballistic model to estimate microsprinkler droplet distribution

    Directory of Open Access Journals (Sweden)

    Conceição Marco Antônio Fonseca

    2003-01-01

    Full Text Available Experimental determination of microsprinkler droplet diameters is difficult and time-consuming. This determination, however, could be achieved using ballistic models. The present study aimed to compare simulated and measured values of microsprinkler droplet diameters. Experimental measurements were made using the flour method, and simulations using the ballistic model adopted by the SIRIAS computational software. Drop diameters quantified in the experiment varied between 0.30 mm and 1.30 mm, while the simulated ones varied between 0.28 mm and 1.06 mm. The greatest differences between simulated and measured values were registered at the highest radial distance from the emitter. The model presented a performance classified as excellent for simulating microsprinkler drop distribution.
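
    For orientation, a generic ballistic droplet trajectory with quadratic air drag can be integrated as below; the drag coefficient and launch conditions are illustrative assumptions, not the SIRIAS parameterisation.

        # Hedged sketch: droplet trajectory with quadratic drag, stopped at ground level.
        import numpy as np
        from scipy.integrate import solve_ivp

        g, rho_air, rho_w = 9.81, 1.2, 1000.0   # SI units

        def rhs(t, state, d, cd=0.45):
            x, z, vx, vz = state
            m = rho_w * np.pi * d**3 / 6.0
            area = np.pi * d**2 / 4.0
            v = np.hypot(vx, vz)
            drag = 0.5 * rho_air * cd * area * v
            return [vx, vz, -drag * vx / m, -g - drag * vz / m]

        def hit_ground(t, state, d, cd=0.45):
            return state[1]
        hit_ground.terminal = True
        hit_ground.direction = -1

        sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.3, 5.0, 2.0], args=(0.0008,),
                        events=hit_ground, max_step=1e-2)
        print(sol.y[0, -1])   # radial distance reached by a 0.8 mm droplet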

  7. A conceptual, distributed snow redistribution model

    Science.gov (United States)

    Frey, S.; Holzmann, H.

    2015-11-01

    When conceptual hydrological models using a temperature-index approach for snowmelt are applied to high alpine areas, accumulation of snow over several years can often be observed. Some of the reasons why these "snow towers" do not exist in nature are vertical and lateral transport processes. While snow transport models have been developed using grid cell sizes of tens to hundreds of square metres and have been applied in several catchments, no model exists using coarser cell sizes of 1 km2, which is a common resolution for meso- and large-scale hydrologic modelling (hundreds to thousands of square kilometres). In this paper we present an approach that uses only gravity, snow density as a proxy for the age of the snow cover, and land-use information to redistribute snow in alpine basins. The results are based on the hydrological modelling of the Austrian Inn Basin in Tyrol, Austria, more specifically the Ötztaler Ache catchment, but the findings hold for other tributaries of the river Inn. This transport model is implemented in the distributed rainfall-runoff model COSERO (Continuous Semi-distributed Runoff). The results of both model concepts, with and without consideration of lateral snow redistribution, are compared against observed discharge and snow-covered areas derived from MODIS satellite images. By means of the snow redistribution concept, snow accumulation over several years can be prevented and the snow depletion curve compared with MODIS (Moderate Resolution Imaging Spectroradiometer) data could be improved, too. In a 7-year period the standard model would lead to snow accumulation of approximately 2900 mm SWE (snow water equivalent) in highly elevated regions, whereas the updated version of the model does not show accumulation and also predicts discharge more accurately, leading to a Kling-Gupta efficiency of 0.93 instead of 0.9. A further improvement can be shown in the comparison of MODIS snow cover data and the calculated depletion curve, where

  8. Research on consumable distribution mode of shipbuilder’s shop based on vehicle routing problem

    Directory of Open Access Journals (Sweden)

    Xiang Su

    2017-02-01

    Full Text Available A distribution vehicle optimization model is established to address the long requisition periods and high shop costs caused by the existing consumable requisition mode in shipbuilders' shops, taking account of the shops' requirements for consumables. The shortest traveling distance of the distribution vehicles is calculated with a genetic algorithm (GA). A shop consumable distribution mode for shipbuilders is explored to help them effectively save production logistics costs, enhance their internal material management level and provide a reference for shipbuilders' transition from traditional practices towards just-in-time (JIT) production.

  9. A New Algorithm for Distributed Control Problem with Shortest-Distance Constraints

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2016-01-01

    Full Text Available This paper investigates the distributed shortest-distance problem of multiagent systems where agents satisfy the same continuous-time dynamics. The objective of the multiagent system is to find a common point for all agents that minimizes the sum of the distances from each agent to its corresponding convex region. A distributed consensus algorithm is proposed based on local information. A sufficient condition is also given to guarantee the consensus. A simulation example shows that the distributed shortest-distance consensus algorithm is effective for our theoretical results.
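
    A discretised sketch of a consensus-plus-projection update of the kind described above is given below; the step size, graph and ball-shaped convex regions are illustrative assumptions, not the algorithm or proof conditions of the paper.

        # Each agent averages with its neighbours and is pulled towards the projection
        # of its state onto its own convex region (here, a ball).
        import numpy as np

        def project_ball(x, centre, radius):
            d = x - centre
            n = np.linalg.norm(d)
            return x if n <= radius else centre + radius * d / n

        def step(states, adjacency, centres, radii, dt=0.05):
            new = states.copy()
            for i, x in enumerate(states):
                consensus = sum(adjacency[i, j] * (states[j] - x) for j in range(len(states)))
                pull = project_ball(x, centres[i], radii[i]) - x
                new[i] = x + dt * (consensus + pull)
            return new

        A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)   # path graph
        centres = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
        radii = np.array([1.0, 1.0, 1.0])
        x = centres.copy()
        for _ in range(2000):
            x = step(x, A, centres, radii)
        print(x)   # agents are pulled towards a common point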

  10. Discrete and Continuous Models for Partitioning Problems

    KAUST Repository

    Lellmann, Jan

    2013-04-11

    Recently, variational relaxation techniques for approximating solutions of partitioning problems on continuous image domains have received considerable attention, since they introduce significantly less artifacts than established graph cut-based techniques. This work is concerned with the sources of such artifacts. We discuss the importance of differentiating between artifacts caused by discretization and those caused by relaxation and provide supporting numerical examples. Moreover, we consider in depth the consequences of a recent theoretical result concerning the optimality of solutions obtained using a particular relaxation method. Since the employed regularizer is quite tight, the considered relaxation generally involves a large computational cost. We propose a method to significantly reduce these costs in a fully automatic way for a large class of metrics including tree metrics, thus generalizing a method recently proposed by Strekalovskiy and Cremers (IEEE conference on computer vision and pattern recognition, pp. 1905-1911, 2011). © 2013 Springer Science+Business Media New York.

  11. Water losses dynamic modelling in water distribution networks

    Science.gov (United States)

    Puleo, Valeria; Milici, Barbara

    2015-12-01

    In the last decades, one of the main concerns of water system managers has been the minimisation of water losses, which frequently reach values of 30% or even 70% of the volume supplied to the water distribution network. The economic and social costs associated with water losses in modern water supply systems are rapidly rising to unacceptably high levels. Furthermore, the problem of water losses becomes all the more important during periods of water scarcity or when the water supply is insufficient in areas with fast growth. In the present analysis, a dynamic model was used for estimating the real and apparent losses of a real case study. A specific nodal demand model reflecting the users' tank installations and a specific apparent losses module were implemented. The results from the dynamic model were compared with the modelling estimation based on a steady-state approach.

  12. Management Model of Resources Equilibrium Distribution among Overlapping-Generations

    Institute of Scientific and Technical Information of China (English)

    Jiang Xuemin; Li Ling

    2004-01-01

    The overlapping-generations models that Western scholars have designed from various perspectives to address different kinds of issues do not reflect China's emerging political and economic problems and cannot be applied wholesale to the Chinese situation. In this paper the authors endeavor to incorporate some Western scholars' research results into their own findings to present overlapping-generations model theory from a new perspective, by establishing an overlapping-generations theory of population that includes the concepts and theorems of the biological generation, the economic generation and the social generation, the overlapping periods in the biological generation, and the two overlapping periods in the economic generation among three generations. This management model for the equilibrium distribution of resource wealth includes an overlapping-generations length model (δ), an equilibrium transfer model (θ) and a complete model of equilibrium distribution among generations (δ-θ). The model provides a quantitative basis for the creation of a resource management system and fills a theoretical gap in this discipline in China. Besides, it furnishes a new methodology and a manipulable tool for the Chinese government to establish a comprehensive management information bank for many sectors such as economic trade, population, science and technology, education, human resources, natural resources and environment, agriculture, forestry, industry, mining and energy.

  13. Count data modeling and classification using finite mixtures of distributions.

    Science.gov (United States)

    Bouguila, Nizar

    2011-02-01

    In this paper, we consider the problem of constructing accurate and flexible statistical representations for count data, which we often confront in many areas such as data mining, computer vision, and information retrieval. In particular, we analyze and compare several generative approaches widely used for count data clustering, namely multinomial, multinomial Dirichlet, and multinomial generalized Dirichlet mixture models. Moreover, we propose a clustering approach via a mixture model based on a composition of the Liouville family of distributions, from which we select the Beta-Liouville distribution, and the multinomial. The novel proposed model, which we call multinomial Beta-Liouville mixture, is optimized by deterministic annealing expectation-maximization and minimum description length, and strives to achieve a high accuracy of count data clustering and model selection. An important feature of the multinomial Beta-Liouville mixture is that it has fewer parameters than the recently proposed multinomial generalized Dirichlet mixture. The performance evaluation is conducted through a set of extensive empirical experiments, which concern text and image texture modeling and classification and shape modeling, and highlights the merits of the proposed models and approaches.

  14. Wishart distributions for decomposable covariance graph models

    CERN Document Server

    Khare, Kshitij; 10.1214/10-AOS841

    2011-01-01

    Gaussian covariance graph models encode marginal independence among the components of a multivariate random vector by means of a graph $G$. These models are distinctly different from the traditional concentration graph models (often also referred to as Gaussian graphical models or covariance selection models) since the zeros in the parameter are now reflected in the covariance matrix $\Sigma$, as compared to the concentration matrix $\Omega = \Sigma^{-1}$. The parameter space of interest for covariance graph models is the cone $P_G$ of positive definite matrices with fixed zeros corresponding to the missing edges of $G$. As in Letac and Massam [Ann. Statist. 35 (2007) 1278--1323], we consider the case where $G$ is decomposable. In this paper, we construct on the cone $P_G$ a family of Wishart distributions which serve a similar purpose in the covariance graph setting as those constructed by Letac and Massam [Ann. Statist. 35 (2007) 1278--1323] and Dawid and Lauritzen [Ann. Statist. 21 (1993) 1272--1317] do in ...

  15. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous... facilitates plug-and-play addition of subsystems without redesign of any controllers. The method is supported by a number of simulations featuring a three-level smart-grid power control system for a small isolated power grid....

  16. A Predictive Distribution Model for Cooperative Braking System of an Electric Vehicle

    OpenAIRE

    Hongqiang Guo; Hongwen He; Xuelian Xiao

    2014-01-01

    A predictive distribution model for a series cooperative braking system of an electric vehicle is proposed, which can solve the real-time problem of the optimum braking force distribution. To get the predictive distribution model, firstly three disciplines of the maximum regenerative energy recovery capability, the maximum generating efficiency and the optimum braking stability are considered, then an off-line process optimization stream is designed, particularly the optimal Latin hypercube d...

  17. Fix-point Multiplier Distributions in Discrete Turbulent Cascade Models

    CERN Document Server

    Jouault, B; Lipa, P

    1998-01-01

    One-point time-series measurements limit the observation of three-dimensional fully developed turbulence to one dimension. For one-dimensional models, like multiplicative branching processes, this implies that the energy flux from large to small scales is not conserved locally. This then renders the random weights used in the cascade curdling to be different from the multipliers obtained from a backward averaging procedure. The resulting multiplier distributions become solutions of a fix-point problem. With a further restoration of homogeneity, all observed correlations between multipliers in the energy dissipation field can be understood in terms of simple scale-invariant multiplicative branching processes.

  18. New Discrete Element Models for Three-Dimensional Impact Problems

    Institute of Scientific and Technical Information of China (English)

    SHAN Li; CHENG Ming; LIU Kai-xin; LIU Wei-Fu; CHEN Shi-Yang

    2009-01-01

    Two 3-D numerical models of the discrete element method (DEM) for impact problems are proposed. The models can calculate not only the impact problems of continuum and non-continuum, but also the transient process from continuum to non-continuum. The stress wave propagation in a concrete block and a dynamic splitting process of a marble disc under impact loading are numerically simulated with the proposed models. By comparing the numerical results with the corresponding results obtained by the finite element method (FEM) and the experiments, it is proved that the models are reliable for three-dimensional impact problems.

  19. A Network Model for Parallel Line Balancing Problem

    Directory of Open Access Journals (Sweden)

    Recep Benzer

    2007-01-01

    Full Text Available Gökçen et al. (2006) have proposed several procedures and a mathematical model on the single-model (product) assembly line balancing (ALB) problem with parallel lines. In the parallel ALB problem, the goal is to balance more than one assembly line together. In this paper, a network model for the parallel ALB problem has been proposed and illustrated on a numerical example. This model is a new approach for parallel ALB and it provides a different point of view for interested researchers.

  20. Inverse distributed hydrological modelling of alpine catchments

    Directory of Open Access Journals (Sweden)

    H. Kunstmann

    2005-12-01

    Full Text Available Even in physically based distributed hydrological models, various remaining parameters must be estimated for each sub-catchment. This can involve tremendous effort, especially when the number of sub-catchments is large and the applied hydrological model is computationally expensive. Automatic parameter estimation tools can significantly facilitate the calibration process. Hence, we combined the nonlinear parameter estimation tool PEST with the distributed hydrological model WaSiM. PEST is based on the Gauss-Marquardt-Levenberg method, a gradient-based nonlinear parameter estimation algorithm. WaSiM is a fully distributed hydrological model using physically based algorithms for most of the process descriptions.

    WaSiM was applied to the alpine/prealpine Ammer River catchment (southern Germany, 710 km2) at a 100×100 m2 horizontal resolution. The catchment is heterogeneous in terms of geology, pedology and land use and shows a complex orography (the difference in elevation is around 1600 m). Using the developed PEST-WaSiM interface, the hydrological model was calibrated by comparing simulated and observed runoff at eight gauges for the hydrologic year 1997 and validated for the hydrologic year 1993. For each sub-catchment four parameters had to be calibrated: the recession constants of direct runoff and interflow, the drainage density, and the hydraulic conductivity of the uppermost aquifer. Additionally, five snowmelt-specific parameters were adjusted for the entire catchment. Altogether, 37 parameters had to be calibrated. Additional a priori information (e.g. from flood hydrograph analysis) narrowed the parameter space of the solutions and reduced the non-uniqueness of the fitted values. A reasonable quality of fit was achieved. Discrepancies between modelled and observed runoff were also due to the small number of meteorological stations and corresponding interpolation artefacts in the orographically complex
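
    As a hedged illustration of gradient-based calibration in the spirit of the Gauss-Marquardt-Levenberg scheme used by PEST, the sketch below fits a toy linear-reservoir model to synthetic discharge with SciPy; none of the WaSiM parameters or Ammer catchment data are used.

        # Least-squares calibration of a toy linear-reservoir model (Levenberg-Marquardt).
        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(7)
        rain = rng.gamma(2.0, 2.0, size=200)            # synthetic forcing

        def linear_reservoir(params, rain):
            k, s0 = params                              # recession constant, initial storage
            s, q = s0, []
            for p in rain:
                s += p
                out = s / k
                s -= out
                q.append(out)
            return np.asarray(q)

        q_obs = linear_reservoir([8.0, 10.0], rain) + rng.normal(0.0, 0.1, size=200)

        residuals = lambda params: linear_reservoir(params, rain) - q_obs
        fit = least_squares(residuals, x0=[3.0, 1.0], method="lm")
        print(fit.x)                                     # estimated k and s0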

  1. Modeling market equilibrium for transboundary environmental problem

    NARCIS (Netherlands)

    Kryazhimskii, A.; Nentjes, A.; Shybaiev, S; Tarasyev, A.

    2001-01-01

    We model the international negotiations on acid deposition reduction in Europe as a multiplayer non-cooperative normal form game. The equilibrium, which combines the properties of Nash equilibria and Pareto-optimal outcomes, is studied. We prove its existence and investigate a dynamic combined best reply-

  2. MODELING OF DISTRIBUTED MUTUAL EXCLUSION SYSTEM USING EVENT-B

    Directory of Open Access Journals (Sweden)

    Raghuraj Suryavanshi

    2013-02-01

    Full Text Available The problem of mutual exclusion arises in distributed systems whenever shared resources are concurrently accessed by several sites. For correctness, it is required that a shared resource be accessed by a single site at a time. To decide which site executes the critical section next, each site communicates with a set of other sites. A systematic approach is essential to formulate an accurate specification. Formal methods are mathematical techniques that provide a systematic approach for building and verifying a model. We have used Event-B as the formal technique for the construction of our model. Event-B is an event-driven approach which is used to develop formal models of distributed systems. It supports the generation and discharge of proof obligations arising from consistency checking. In this paper, we outline the formal construction of a model of Lamport's mutual exclusion algorithm for distributed systems using Event-B. We have used a vector clock instead of Lamport's scalar clock for the purpose of timestamping messages.
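
    Only the vector-clock timestamping rules mentioned above are sketched below, as plain Python rather than Event-B; the mutual-exclusion protocol itself and the proof obligations are not reproduced.

        # Vector-clock rules for timestamping messages between sites.
        class VectorClock:
            def __init__(self, site_id, n_sites):
                self.site_id = site_id
                self.clock = [0] * n_sites

            def local_event(self):
                self.clock[self.site_id] += 1

            def send(self):
                self.local_event()
                return list(self.clock)        # timestamp attached to the request message

            def receive(self, timestamp):
                self.clock = [max(a, b) for a, b in zip(self.clock, timestamp)]
                self.local_event()

        a, b = VectorClock(0, 2), VectorClock(1, 2)
        msg = a.send()       # site 0 requests the critical section
        b.receive(msg)       # site 1 merges the timestamp before replying
        print(a.clock, b.clock)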

  3. Dynamical insurance models with investment: Constrained singular problems for integrodifferential equations

    Science.gov (United States)

    Belkina, T. A.; Konyukhova, N. B.; Kurochkin, S. V.

    2016-01-01

    Previous and new results are used to compare two mathematical insurance models with identical insurance company strategies in a financial market, namely, when the entire current surplus or its constant fraction is invested in risky assets (stocks), while the rest of the surplus is invested in a risk-free asset (bank account). Model I is the classical Cramér-Lundberg risk model with an exponential claim size distribution. Model II is a modification of the classical risk model (risk process with stochastic premiums) with exponential distributions of claim and premium sizes. For the survival probability of an insurance company over infinite time (as a function of its initial surplus), there arise singular problems for second-order linear integrodifferential equations (IDEs) defined on a semi-infinite interval and having nonintegrable singularities at zero: model I leads to a singular constrained initial value problem for an IDE with a Volterra integral operator, while model II leads to a more complicated nonlocal constrained problem for an IDE with a non-Volterra integral operator. A brief overview of previous results for these two problems depending on several positive parameters is given, and new results are presented. Additional results are concerned with the formulation, analysis, and numerical study of "degenerate" problems for both models, i.e., problems in which some of the IDE parameters vanish; moreover, passages to the limit with respect to the parameters through which we proceed from the original problems to the degenerate ones are singular for small and/or large argument values. Such problems are of mathematical and practical interest in themselves. Along with insurance models without investment, they describe the case of surplus completely invested in risk-free assets, as well as some noninsurance models of surplus dynamics, for example, charity-type models.

  4. Cost Optimisation in Freight Distribution with Cross-Docking: N-Echelon Location Routing Problem

    Directory of Open Access Journals (Sweden)

    Jesus Gonzalez-Feliu

    2012-03-01

    Full Text Available Freight transportation constitutes one of the main activities that influence the economy and society, as it assures a vital link between suppliers and customers and represents a major source of employment. Multi-echelon distribution is one of the most common strategies adopted by transportation companies with the aim of cost reduction. Although vehicle routing problems are very common in operational research, they are essentially related to single-echelon cases. This paper presents the main concepts of multi-echelon distribution with cross-docks and a unified notation for the N-echelon location routing problem. A literature review is also presented, in order to list the main problems and methods that can be helpful for scientists and transportation practitioners.

  5. Projection-based model reduction for contact problems

    CERN Document Server

    Balajewicz, Maciej; Farhat, Charbel

    2015-01-01

    Large scale finite element analysis requires model order reduction for computationally expensive applications such as optimization, parametric studies and control design. Although model reduction for nonlinear problems is an active area of research, a major hurdle is modeling and approximating contact problems. This manuscript introduces a projection-based model reduction approach for static and dynamic contact problems. In this approach, non-negative matrix factorization is utilized to optimally compress and strongly enforce positivity of contact forces in training simulation snapshots. Moreover, a greedy algorithm coupled with an error indicator is developed to efficiently construct parametrically robust low-order models. The proposed approach is successfully demonstrated for the model reduction of several two-dimensional elliptic and hyperbolic obstacle and self contact problems.

  6. Decoding Problem Gamblers' Signals: A Decision Model for Casino Enterprises.

    Science.gov (United States)

    Ifrim, Sandra

    2015-12-01

    The aim of the present study is to offer a validated decision model for casino enterprises. The model enables its users to perform early detection of problem gamblers and fulfill their ethical duty of social cost minimization. To this end, the interpretation of casino customers' nonverbal communication is understood as a signal-processing problem. Indicators of problem gambling recommended by Delfabbro et al. (Identifying problem gamblers in gambling venues: final report, 2007) are combined with the Viterbi algorithm into an interdisciplinary model that helps decode the signals emitted by casino customers. Model output consists of a historical path of mental states and cumulated social costs associated with a particular client. Groups of problem and non-problem gamblers were simulated to investigate the model's diagnostic capability and its cost minimization ability. Each group consisted of 26 subjects and was subsequently enlarged to 100 subjects. In approximately 95% of the cases, mental states were correctly decoded for problem gamblers. Statistical analysis using planned contrasts revealed that the model is relatively robust to the suppression of signals performed by casino clientele facing gambling problems as well as to misjudgments made by staff regarding the clients' mental states. Only if the last-mentioned source of error occurs in a very pronounced manner, i.e. judgment is extremely faulty, might cumulated social costs be distorted.
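
    A minimal Viterbi decoder over a two-state hidden-state model is sketched below; the transition and emission probabilities and the indicator encoding are illustrative placeholders, not the validated model parameters.

        # Viterbi decoding of a two-state hidden sequence from binary indicator observations.
        import numpy as np

        def viterbi(obs, start_p, trans_p, emit_p):
            n_states, T = trans_p.shape[0], len(obs)
            logp = np.full((n_states, T), -np.inf)
            back = np.zeros((n_states, T), dtype=int)
            logp[:, 0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
            for t in range(1, T):
                for s in range(n_states):
                    cand = logp[:, t - 1] + np.log(trans_p[:, s])
                    back[s, t] = np.argmax(cand)
                    logp[s, t] = cand[back[s, t]] + np.log(emit_p[s, obs[t]])
            path = [int(np.argmax(logp[:, -1]))]
            for t in range(T - 1, 0, -1):
                path.append(back[path[-1], t])
            return path[::-1]

        # observations: 0 = no indicator shown, 1 = problem-gambling indicator observed
        obs = [0, 0, 1, 1, 0, 1, 1, 1]
        states = viterbi(obs, np.array([0.9, 0.1]),
                         np.array([[0.95, 0.05], [0.10, 0.90]]),
                         np.array([[0.8, 0.2], [0.3, 0.7]]))
        print(states)   # decoded sequence of hidden states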

  7. DNA computation model to solve 0-1 programming problem.

    Science.gov (United States)

    Zhang, Fengyue; Yin, Zhixiang; Liu, Bo; Xu, Jin

    2004-01-01

    The 0-1 programming problem is an important problem in operations research with very widespread applications. In this paper, a new DNA computation model utilizing solution-based and surface-based methods is presented to solve the 0-1 programming problem. This model combines the major benefits of both solution-based and surface-based methods, including vast parallelism, extraordinary information density and ease of operation. The result, verified by biological experimentation, revealed the potential of DNA computation in solving complex programming problems.

  8. Collaborative problem solving with a total quality model.

    Science.gov (United States)

    Volden, C M; Monnig, R

    1993-01-01

    A collaborative problem-solving system committed to the interests of those involved complies with the teachings of the total quality management movement in health care. Deming espoused that any quality system must become an integral part of routine activities. A process that is used consistently in dealing with problems, issues, or conflicts provides a mechanism for accomplishing total quality improvement. The collaborative problem-solving process described here results in quality decision-making. This model incorporates Ishikawa's cause-and-effect (fishbone) diagram, Moore's key causes of conflict, and the steps of the University of North Dakota Conflict Resolution Center's collaborative problem solving model.

  9. Medical problem and document model for natural language understanding.

    Science.gov (United States)

    Meystre, Stephanie; Haug, Peter J

    2003-01-01

    We are developing tools to help maintain a complete, accurate and timely problem list within a general purpose Electronic Medical Record system. As a part of this project, we have designed a system to automatically retrieve medical problems from free-text documents. Here we describe an information model based on XML (eXtensible Markup Language) and compliant with the CDA (Clinical Document Architecture). This model is used to ease the exchange of clinical data between the Natural Language Understanding application that retrieves potential problems from narrative document, and the problem list management application.

  10. How can model comparison help improving species distribution models?

    Science.gov (United States)

    Gritti, Emmanuel Stephan; Gaucherel, Cédric; Crespo-Perez, Maria-Veronica; Chuine, Isabelle

    2013-01-01

    Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be assessed if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based model could however be improved by integrating a more realistic representation of the species resistance to water stress for instance, advocating for pursuing efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes.

  11. Storage Solutions for Power Quality Problems in Cyprus Electricity Distribution Network

    Directory of Open Access Journals (Sweden)

    Andreas Poullikkas

    2014-01-01

    Full Text Available In this work, a prediction of the effects of introducing energy storage systems on the network stability of the distribution network of Cyprus and a comparison in terms of cost with a traditional solution is carried out. In particular, for solving possible overvoltage problems, several scenarios of storage units' installation are used and compared with the alternative solution of extra cable connection between the node with the lowest voltage and the node with the highest voltage of the distribution network. For the comparison, a case study of a typical LV distribution feeder in the power system of Cyprus is used. The results indicated that the performance indicator of each solution depends on the type, the size and the position of installation of the storage unit. Also, as more storage units are installed the better the performance indicator and the more attractive is the investment in storage units to solve power quality problems in the distribution network. In the case where the technical requirements in voltage limitations according to distribution regulations are satisfied with one storage unit, the installation of an additional storage unit will only increase the final cost. The best solution, however, still remains the alternative solution of extra cable connection between the node with the lowest voltage and the node with the highest voltage of the distribution network, due to the lower investment costs compared to that of the storage units.

  12. Distribution Based Change-Point Problem With Two Types of Imperfect Debugging in Software Reliability

    Directory of Open Access Journals (Sweden)

    P. K. Kapur

    2009-01-01

    Full Text Available Software testing is an important phase of the software development life cycle. It controls the quality of the software product. Due to the complexity of software systems and incomplete understanding of the software, the testing team may not be able to remove/correct a fault perfectly on observation/detection of a failure, and the original fault may remain, resulting in a phenomenon known as imperfect debugging, or it may get replaced by another fault, causing fault generation. In the case of imperfect debugging, the fault content of the software remains the same, while in the case of fault generation the fault content increases as the testing progresses, and removal/correction results in the introduction of new faults while removing/correcting old ones. During software testing the fault detection/correction rate may not be the same throughout the whole testing process, but may change at any time moment. In the literature various software reliability models have been proposed incorporating the change-point concept. In this paper we propose a distribution-based change-point problem with two types of imperfect debugging in software reliability. The models developed have been validated and verified using real data sets. Estimated parameters and comparison criteria results have also been presented.

  13. The Higgs transverse momentum distribution in gluon fusion as a multiscale problem

    CERN Document Server

    Bagnaschi, Emanuele

    2015-01-01

    We consider Higgs production in gluon fusion and in particular the prediction of the Higgs transverse momentum distribution. We discuss the ambiguities affecting the matching procedure between fixed order matrix elements and the resummation to all orders of the terms enhanced by $\log(p_T^H/m_H)$ factors. Following a recent proposal (Grazzini et al., hep-ph/1306.4581), we argue that the gluon fusion process, computed considering two active quark flavors, is a multiscale problem from the point of view of the resummation of the collinear singular terms. We perform an analysis at parton level of the collinear behavior of the real emission amplitudes and we derive an upper limit to the range of transverse momenta where the collinear approximation is valid. This scale is then used as the value of the resummation scale in the analytic resummation framework or as the value of the $h$ parameter in the POWHEG-BOX code. Finally, we provide a phenomenological analysis in the Standard Model, in the Two Higgs Doublet Mode...

  14. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu

    2014-06-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.

  15. Model of Vertical Product Differentiation Based on Triangular Distribution

    Institute of Scientific and Technical Information of China (English)

    HU Jian-bing; WANG He-ping; SHEN Yun-hong

    2007-01-01

    Supposing that consumer preference follows a triangular distribution instead of a uniform distribution, we establish a model of vertical product differentiation. The simulation shows that there exist a stable equilibrium and an unstable equilibrium. In the stable equilibrium, high quality products gain an advantage over low quality products. In the unstable equilibrium, the former do not possess an apparent advantage in competition and are likely to be at a disadvantage. In order to evolve from the unstable equilibrium to the stable equilibrium, it is necessary for firms to solve such problems as high prices and consumers' perception of scarcity of product qualities. In general, both product qualities and firm profits increase with consuming capacity and quality perception, the latter more rapidly.

  16. Modeling of Spacing Distribution of Queuing Vehicles at Signalized Junctions Using Random-Matrix Theory

    Institute of Scientific and Technical Information of China (English)

    JIN Xuexiang; SU Yuelong; ZHANG Yi; WEI Zheng; LI Li

    2009-01-01

    The modeling of headway/spacing between two consecutive vehicles in a queue has many applications in traffic flow theory and transport practice. Most known approaches have only studied vehicles on freeways. This paper presents a model for the spacing distribution of queuing vehicles at a signalized junction based on random-matrix theory. The spacing distribution of a Gaussian symplectic ensemble (GSE) fits well with recently measured spacing distribution data. These results are also compared with measured spacing distribution observed for the car parking problem. Vehicle stationary queuing and vehicle parking have different spacing distributions due to different driving patterns.

  17. Modeling of current distribution on smooth and columnar platinum structures.

    Science.gov (United States)

    Zinola, Carlos F

    2011-01-17

    Studying the growth and stability of anisotropic or isotropic disordered surfaces in electrodeposition is of importance in catalytic electrochemistry. In some cases, the metallic nature of the electrode defines the topography and roughness, which are also controlled by the experimental time and the applied external potential. Because of the experimental restrictions of conventional electrochemical techniques and ex situ electron microscopies, a theoretical model of the surface geometry could aid in understanding the electrodeposition process and current distributions. Even without applying a complex theory such as the dynamic scaling method or perturbation theories, the resolution of the mixed mass-/charge-transfer equations (tertiary distribution) for the electrodeposition process would give reliable information. One of the main problems with this type of distribution is the mathematics involved in solving the spatial n-dimensional differential equations. Use of a primary current distribution is proposed here to simplify the differential equations, although this limits the wide application of the first assumption. Distributions of the concentration profile, current density, and electrode potential are presented here as functions of the distance normal to the surface for the cases of smooth and rough platinum growth. In the particular case of columnar surfaces, cycloid curves are used to model the electrode, from which the concentration profile is presented in a parameterized form after solving a first-type curvilinear integral. The concentration contour results in a combination of a trigonometric inverse function and a linear distribution, leading to a negative concavity curve. The calculations of the current density and electrode potential contours also show trigonometric shapes, exhibiting forbidden imaginary values only at the minimal values of the trochoid curve.

  18. Effective Parameter Dimension via Bayesian Model Selection in the Inverse Acoustic Scattering Problem

    Directory of Open Access Journals (Sweden)

    Abel Palafox

    2014-01-01

    Full Text Available We address a prototype inverse scattering problem in the interface of applied mathematics, statistics, and scientific computing. We pose the acoustic inverse scattering problem in a Bayesian inference perspective and simulate from the posterior distribution using MCMC. The PDE forward map is implemented using high performance computing methods. We implement a standard Bayesian model selection method to estimate an effective number of Fourier coefficients that may be retrieved from noisy data within a standard formulation.
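
    As a minimal sketch of the posterior sampling step (the actual PDE forward map and the Bayesian model-selection machinery are not reproduced here; `log_posterior` below is a hypothetical stand-in), a random-walk Metropolis sampler over a vector of Fourier coefficients could look like:

```python
import numpy as np

def log_posterior(theta):
    # Hypothetical stand-in: in the real problem this would evaluate the misfit
    # between noisy scattering data and the PDE forward map, plus a prior term.
    return -0.5 * np.sum(theta**2)

def random_walk_metropolis(theta0, n_samples=5000, step=0.2, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    samples = []
    for _ in range(n_samples):
        proposal = theta + step * rng.normal(size=theta.shape)
        logp_prop = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_prop - logp:   # Metropolis accept/reject
            theta, logp = proposal, logp_prop
        samples.append(theta.copy())
    return np.array(samples)

# toy usage: sample 8 "Fourier coefficients" and report their posterior mean
chain = random_walk_metropolis(np.zeros(8))
print(chain.mean(axis=0))
```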

  19. State-space models' dirty little secrets: even simple linear Gaussian models can have estimation problems.

    Science.gov (United States)

    Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M; Derocher, Andrew E; Lewis, Mark A; Jonsen, Ian D; Mills Flemming, Joanna

    2016-05-25

    State-space models (SSMs) are increasingly used in ecology to model time-series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of a SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results.

  20. A Solution to Fastest Distributed Consensus Problem for Generic Star & K-cored Star Networks

    CERN Document Server

    Jafarizadeh, Saber

    2012-01-01

    Distributed average consensus is the main mechanism in algorithms for decentralized computation. In distributed average consensus algorithm each node has an initial state, and the goal is to compute the average of these initial states in every node. To accomplish this task, each node updates its state by a weighted average of its own and neighbors' states, by using local communication between neighboring nodes. In the networks with fixed topology, convergence rate of distributed average consensus algorithm depends on the choice of weights. This paper studies the weight optimization problem in distributed average consensus algorithm. The network topology considered here is a star network where the branches have different lengths. Closed-form formulas of optimal weights and convergence rate of algorithm are determined in terms of the network's topological parameters. Furthermore generic K-cored star topology has been introduced as an alternative to star topology. The introduced topology benefits from faster con...
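
    As a minimal sketch of the update rule described above (the weights and graph below are illustrative; the paper's closed-form optimal weights for star and k-cored star topologies are not reproduced), one synchronous iteration of distributed average consensus can be written as:

```python
import numpy as np

def consensus_step(x, W):
    """One synchronous consensus update: every node replaces its state with a
    weighted average of its own and its neighbors' states, i.e. x <- W x."""
    return W @ x

# Star network with one hub (node 0) and 4 leaves, simple symmetric weights.
n = 5
alpha = 0.2                       # illustrative edge weight, not the optimal one
W = np.eye(n)
for leaf in range(1, n):
    W[0, 0] -= alpha
    W[leaf, leaf] -= alpha
    W[0, leaf] = W[leaf, 0] = alpha

x = np.array([1.0, 3.0, 5.0, 7.0, 9.0])   # initial node states
for _ in range(100):
    x = consensus_step(x, W)
print(x, "-> all entries approach the average", np.mean([1, 3, 5, 7, 9]))
```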

  1. Some Identification Problems in the Cointegrated Vector Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren

    An analysis of some identification problems in the cointegrated VAR is given. We give a new criterion for identification by linear restrictions on individual relations which is equivalent to the rank condition. We compare the asymptotic distribution of the estimators of α and β when they are iden...

  2. Some identification problems in the cointegrated vector autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren

    An analysis of some identification problems in the cointegrated VAR is given. We give a new criterion for identification by linear restrictions on individual relations which is equivalent to the rank condition. We compare the asymptotic distribution of the estimators of α and β when they are id...

  3. The Model of Problem Based Learning in Practice: Evidence from Aalborg University

    DEFF Research Database (Denmark)

    Turcan, Romeo V.

    The aim of this paper is to share an experience from Aalborg University on the application of the Problem Based Learning (PBL) model, with a specific example from bachelor studies. The PBL model has now been acknowledged worldwide as a powerful tool that allows students, faculty members and industry...... practitioners engage in multi-disciplinary, collaborative and geographically distributed activities. The key word in the model is ‘problem’ – a problem that is correctly formulated eventually affects the process of learning. It is also linked to the intended outcome of the PBL based teaching, whereby students...... solve real life problems of companies and organizations. As companies and organizations have various types of problems which might be attempted from different perspectives, it is pivotal that students have the opportunity to get equipped with a wide range of theoretical models and tools they can put...

  4. THE BUBNOV–GALERKIN PROCEDURE IN PROBLEMS OF MOBILE (SCANNING CONTROL FOR SYSTEMS WITH DISTRIBUTED PARAMETERS

    Directory of Open Access Journals (Sweden)

    Arakelyan Sh. Kh.

    2015-09-01

    Full Text Available We suggest applying the Bubnov–Galerkin procedure to solve scanning control problems for systems with distributed parameters. The algorithm is described in detail for the three-dimensional linear heat equation. It allows the solution of the control problem to be reduced to a finite-dimensional nonlinear moments problem. The derivation of the moments problem is illustrated in detail on the example of the one-dimensional equation of thermal conductivity. The solution of the obtained moments problem is found in a particular case. Based on the obtained results, a computer simulation is carried out using the COMSOL Multiphysics platform in the one-dimensional case for a rod. The main dependences of the control function on the input data of the problem are revealed. The state of the rod for several (constant) values of the source intensity is presented in graphs and illustrations. Corresponding illustrations are given for the case of control absence (null-power source) for comparison. An effective numerical scheme for solving the obtained system of nonlinear constraints is suggested for the case of an extended class of admissible controls. The calculation of control parameters is reduced to the simplest problem of nonlinear programming.

  5. Modeling Complex Chemical Systems: Problems and Solutions

    Science.gov (United States)

    van Dijk, Jan

    2016-09-01

    Non-equilibrium plasmas in complex gas mixtures are at the heart of numerous contemporary technologies. They typically contain dozens to hundreds of species, involved in hundreds to thousands of reactions. Chemists and physicists have always been interested in what are now called chemical reduction techniques (CRT's). The idea of such CRT's is that they reduce the number of species that need to be considered explicitly without compromising the validity of the model. This is usually achieved on the basis of an analysis of the reaction time scales of the system under study, which identifies species that are in partial equilibrium after a given time span. The first such CRT that has been widely used in plasma physics was developed in the 1960s and resulted in the concept of effective ionization and recombination rates. It was later generalized to systems in which multiple levels are affected by transport. In recent years there has been a renewed interest in tools for chemical reduction and reaction pathway analysis. An example of the latter is the PumpKin tool. Another trend is that techniques that have previously been developed in other fields of science are adapted so as to be able to handle the plasma state of matter. Examples are the Intrinsic Low-Dimensional Manifold (ILDM) method and its derivatives, which originate from combustion engineering, and the general-purpose Principal Component Analysis (PCA) technique. In this contribution we will provide an overview of the most common reduction techniques, then critically assess the pros and cons of the methods that have gained most popularity in recent years. Examples will be provided for plasmas in argon and carbon dioxide.

  6. Problem Resolution through Electronic Mail: A Five-Step Model.

    Science.gov (United States)

    Grandgenett, Neal; Grandgenett, Don

    2001-01-01

    Discusses the use of electronic mail within the general resolution and management of administrative problems and emphasizes the need for careful attention to problem definition and clarity of language. Presents a research-based five-step model for the effective use of electronic mail based on experiences at the University of Nebraska at Omaha.…

  7. Methodology of problem space modeling in industrial enterprise management system

    Directory of Open Access Journals (Sweden)

    V.V. Glushchevsky

    2015-03-01

    Full Text Available The aim of the article. The aim of the article is to develop methodological principles for building a problem space model which can be integrated into an industrial enterprise management system. The results of the analysis. The author developed methodological principles for constructing the problem space of an industrial enterprise as a structural and functional model. These problems appear on the enterprise business process network topology and can be solved by its management system. The centerpiece of the article is a description of the main stages of implementing the modeling methodology for typical industrial enterprise management problems. These stages help several units of the organizational management structure of the enterprise solve problems within their functional competence. The author formulated a system of axioms for the structural and characteristic properties of the elements of the modeled problem space, and for the interconnections between them. This system of axioms is in fact a justification of the correctness and adequacy of the proposed modeling methodology and serves as the theoretical basis for the construction of the structural and functional model of the management problem space. With the help of the axiom system, this model generalizes three basic structural components of the enterprise management system: a three-dimensional model of the management problem space (the first dimension is the enterprise business process network, the second dimension is a set of management problems, the third dimension is four vectors of measurable and qualitative characteristics of management problems, which can be analyzed and managed during enterprise functioning; a two-dimensional model of the cybernetic space of analytical problems, which are a formalized form of management problems (multivariate model experiments can be implemented with the help of this model to solve a wide range of problem situations and determine the most effective or optimal management solutions; a two-dimensional model

  8. New Approaches to Studying Problem Behaviors: A Comparison of Methods for Modeling Longitudinal, Categorical Adolescent Drinking Data

    Science.gov (United States)

    Feldman, Betsy J.; Masyn, Katherine E.; Conger, Rand D.

    2009-01-01

    Analyzing problem-behavior trajectories can be difficult. The data are generally categorical and often quite skewed, violating distributional assumptions of standard normal-theory statistical models. In this article, the authors present several currently available modeling options, all of which make appropriate distributional assumptions for the…

  9. A descriptive model of information problem solving while using internet

    NARCIS (Netherlands)

    Brand-Gruwel, Saskia; Wopereis, Iwan; Walraven, Amber

    2009-01-01

    This paper presents the IPS-I-model: a model that describes the process of information problem solving (IPS) in which the Internet (I) is used to search for information. The IPS-I-model is based on three studies, in which students in secondary and (post) higher education were asked to solve information problems.

  10. Some Problems in Using Diffusion Models for New Products

    Science.gov (United States)

    Bernhardt, Irwin; Mackenzie, Kenneth D.

    1972-01-01

    Analyzes some of the problems involved in using diffusion models to formulate marketing strategies for introducing new products. Six models, which remove some of the theoretical and methodological restrictions inherent in current models of the adoption and diffusion process, are presented. (Author/JH)

  11. Distributed Approach for Solving Time-Dependent Problems in Multimodal Transport Networks

    Directory of Open Access Journals (Sweden)

    Carlos Galvez-Fernandez

    2009-01-01

    Full Text Available This paper presents an alternative approach for the time-dependent multimodal transport problem. We describe a new graph structure to abstract multimodal networks, called the transfer graph, which adapts to the distributed nature of real information sources of transportation networks. A decomposition of the Shortest Path Problem in the transfer graph is proposed to optimize the computation time. This approach was computationally tested on several experimental multimodal networks of different size and complexity. The approach was integrated into the multimodal transport service of the European Carlink platform, where it has been validated in real scenarios. Comparison with other related works is provided.

  12. NEW DOCTORAL DEGREE Parameter estimation problem in the Weibull model

    OpenAIRE

    Marković, Darija

    2009-01-01

    In this dissertation we consider the problem of the existence of best parameters in the Weibull model, one of the most widely used statistical models in reliability theory and life data theory. Particular attention is given to a 3-parameter Weibull model. We have listed some of the many applications of this model. We have described some of the classical methods for estimating parameters of the Weibull model, two graphical methods (Weibull probability plot and hazard plot), and two analyt...

  13. The Model of Problem Based Learning in Practice: Evidence from Aalborg University

    DEFF Research Database (Denmark)

    Turcan, Romeo V.

    practitioners engage in multi-disciplinary, collaborative and geographically distributed activities. The key word in the model is ‘problem’ – a problem that is correctly formulated eventually affects the process of learning. It is also linked to the intended outcome of the PBL based teaching, whereby students...... that are in need of development or require further interdisciplinary approaches....

  14. Spreadsheet-Enhanced Problem Solving in Context as Modeling

    Directory of Open Access Journals (Sweden)

    Sergei Abramovich

    2003-07-01

    development through situated mathematical problem solving. Modeling activities described in this paper support the epistemological position regarding the interplay that exists between the development of mathematical concepts and available methods of calculation. The spreadsheet used is Microsoft Excel 2001

  15. Study on model and algorithm of inventory routing problem

    Science.gov (United States)

    Wan, Fengjiao

    The vehicle routing problem (VRP) is one of the important research topics in logistics systems. Nowadays, there is much research on the VRP, but it does not consider the cost of inventory; thus, its conclusions do not fully reflect reality. This paper studies the inventory routing problem (IRP) and uses a single objective function to describe these two conflicting problems, which are very important in logistics optimization. The paper establishes models of the single-client and multi-client inventory routing problem, and an optimizing iterative algorithm is presented to solve them. According to the model we can determine the best quantity, efficiency and route of delivery. Finally, an example is given to illustrate the efficiency of the model and algorithm.
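
    As a minimal sketch of the kind of combined objective such a model trades off (purely illustrative; the paper's actual formulation, decision variables and iterative algorithm are not reproduced), a single-vehicle, single-client cost could be written as routing cost plus inventory holding cost:

```python
def irp_cost(route_length_km, deliveries, holding_cost_per_unit_day,
             cost_per_km, days_between_deliveries):
    """Illustrative combined IRP objective: transport cost for each delivery
    trip plus holding cost for the stock carried between deliveries."""
    transport = cost_per_km * route_length_km * len(deliveries)
    # average inventory between deliveries is roughly half the delivered amount
    holding = sum(0.5 * q * holding_cost_per_unit_day * days_between_deliveries
                  for q in deliveries)
    return transport + holding

# toy usage: 4 deliveries of 100 units, 7 days apart, 30 km round trip
print(irp_cost(30.0, [100, 100, 100, 100], 0.05, 1.2, 7))
```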

  16. The predictive performance and stability of six species distribution models.

    Science.gov (United States)

    Duan, Ren-Yan; Kong, Xiao-Quan; Huang, Min-Yi; Fan, Wei-Yi; Wang, Zhi-Gao

    2014-01-01

    Predicting species' potential geographical range by species distribution models (SDMs) is central to understand their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution area using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared the predictive performance by testing the consistency between observations and simulated distributions and assessed the stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p<0.05), while the associated standard deviations and coefficients of variation were larger for BIOCLIM and DOMAIN trials (p<0.05), and the 99% confidence intervals for AUC and Kappa values were narrower for MAHAL, RF, MAXENT, and SVM. Compared to BIOCLIM and DOMAIN, other SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM, and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.
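
    As a minimal sketch of the kind of performance and stability summary used here (scikit-learn and synthetic predictions stand in for actual SDM output; the species data and models themselves are not reproduced), one could compute AUC and Kappa over repeated trials and then summarize their spread:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

rng = np.random.default_rng(42)
auc_vals, kappa_vals = [], []

for trial in range(100):                      # 100 repeated trials, as in the study
    y_true = rng.integers(0, 2, size=500)     # synthetic presence/absence labels
    scores = 0.3 * y_true + 0.7 * rng.random(500)   # synthetic, imperfect model scores
    y_pred = (scores > 0.5).astype(int)       # thresholded presence/absence
    auc_vals.append(roc_auc_score(y_true, scores))
    kappa_vals.append(cohen_kappa_score(y_true, y_pred))

for name, vals in [("AUC", np.array(auc_vals)), ("Kappa", np.array(kappa_vals))]:
    mean, sd = vals.mean(), vals.std(ddof=1)
    cv = sd / mean
    ci = 2.576 * sd / np.sqrt(len(vals))      # approximate 99% CI half-width of the mean
    print(f"{name}: mean={mean:.3f} sd={sd:.3f} cv={cv:.3f} 99% CI +/-{ci:.3f}")
```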

  17. The predictive performance and stability of six species distribution models.

    Directory of Open Access Journals (Sweden)

    Ren-Yan Duan

    Full Text Available Predicting species' potential geographical range by species distribution models (SDMs) is central to understand their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution area using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared the predictive performance by testing the consistency between observations and simulated distributions and assessed the stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p<0.05), while the associated standard deviations and coefficients of variation were larger for BIOCLIM and DOMAIN trials (p<0.05), and the 99% confidence intervals for AUC and Kappa values were narrower for MAHAL, RF, MAXENT, and SVM. Compared to BIOCLIM and DOMAIN, other SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.

  18. Unleashing spatially distributed ecohydrology modeling using Big Data tools

    Science.gov (United States)

    Miles, B.; Idaszak, R.

    2015-12-01

    Physically based spatially distributed ecohydrology models are useful for answering science and management questions related to the hydrology and biogeochemistry of prairie, savanna, forested, as well as urbanized ecosystems. However, these models can produce hundreds of gigabytes of spatial output for a single model run over decadal time scales when run at regional spatial scales and moderate spatial resolutions (~100-km2+ at 30-m spatial resolution) or when run for small watersheds at high spatial resolutions (~1-km2 at 3-m spatial resolution). Numerical data formats such as HDF5 can store arbitrarily large datasets. However even in HPC environments, there are practical limits on the size of single files that can be stored and reliably backed up. Even when such large datasets can be stored, querying and analyzing these data can suffer from poor performance due to memory limitations and I/O bottlenecks, for example on single workstations where memory and bandwidth are limited, or in HPC environments where data are stored separately from computational nodes. The difficulty of storing and analyzing spatial data from ecohydrology models limits our ability to harness these powerful tools. Big Data tools such as distributed databases have the potential to surmount the data storage and analysis challenges inherent to large spatial datasets. Distributed databases solve these problems by storing data close to computational nodes while enabling horizontal scalability and fault tolerance. Here we present the architecture of and preliminary results from PatchDB, a distributed datastore for managing spatial output from the Regional Hydro-Ecological Simulation System (RHESSys). The initial version of PatchDB uses message queueing to asynchronously write RHESSys model output to an Apache Cassandra cluster. Once stored in the cluster, these data can be efficiently queried to quickly produce both spatial visualizations for a particular variable (e.g. maps and animations), as well

  19. Modeling and inverse problems in the presence of uncertainty

    CERN Document Server

    Banks, H T; Thompson, W Clayton

    2014-01-01

    Modeling and Inverse Problems in the Presence of Uncertainty collects recent research-including the authors' own substantial projects-on uncertainty propagation and quantification. It covers two sources of uncertainty: where uncertainty is present primarily due to measurement errors and where uncertainty is present due to the modeling formulation itself. After a useful review of relevant probability and statistical concepts, the book summarizes mathematical and statistical aspects of inverse problem methodology, including ordinary, weighted, and generalized least-squares formulations. It then

  20. Bifurcation analysis for a free boundary problem modeling tumor growth

    CERN Document Server

    Escher, Joachim

    2010-01-01

    In this paper we deal with a free boundary problem modeling the growth of nonnecrotic tumors. The tumor is treated as an incompressible fluid, the tissue elasticity is neglected and no chemical inhibitor species are present. We re-express the mathematical model as an operator equation and by using a bifurcation argument we prove that there exist stationary solutions of the problem which are not radially symmetric.

  1. Critical review of problem solving processes traditional theoretical models

    OpenAIRE

    Botía Sanabria, María Lucero; Universidad Antonio Nariño, Bogotá, Colombia; Orozco Pulido, Luis Humberto; Universidad Antonio Nariño, Bogotá, Colombia

    2015-01-01

    This paper presents a brief analysis of the best-known theoretical models of problem solving, carried out using epistemological categories such as observer position, object of study, methods and procedures, and descriptive or explicative scope. The review showed linear and cyclical models, the need to recognize the methods' limitations for generalization, the relevance of making the observer position explicit, and a diffuse delimitation of problem solving as an object of study and a cognitive process. An integrative and molar the...

  2. Improving species distribution models: the value of data on abundance

    National Research Council Canada - National Science Library

    Howard, Christine; Stephens, Philip A; Pearce‐Higgins, James W; Gregory, Richard D; Willis, Stephen G; McPherson, Jana

    2014-01-01

    Species distribution models (SDMs) are important tools for forecasting the potential impacts of future environmental changes but debate remains over the most robust modelling approaches for making projections...

  3. Modelling dynamic programming problems by generalized d-graphs

    CERN Document Server

    Kátai, Zoltán

    2010-01-01

    In this paper we introduce the concept of generalized d-graphs (admitting cycles) as special dependency graphs for modelling dynamic programming (DP) problems. We describe the d-graph versions of three famous single-source shortest path algorithms (the algorithm based on the topological order of the vertices, Dijkstra's algorithm and the Bellman-Ford algorithm), which can be viewed as general DP strategies in the case of three different classes of optimization problems. The new modelling method also makes it possible to classify DP problems and the corresponding DP strategies in terms of graph theory.
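
    As a minimal sketch of the first of these strategies (single-source shortest paths via the topological order of the vertices, shown here on an ordinary acyclic dependency graph rather than the paper's generalized d-graphs), a dynamic programming pass could look like:

```python
from math import inf

def shortest_paths_topological(n, edges, source):
    """Single-source shortest paths on a DAG by relaxing edges in topological
    order: dist[v] = min over incoming edges (u, v, w) of dist[u] + w."""
    # Kahn's algorithm for a topological order
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    order, queue = [], [u for u in range(n) if indeg[u] == 0]
    while queue:
        u = queue.pop()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)

    dist = [inf] * n
    dist[source] = 0
    for u in order:                       # DP step: process vertices in order
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# toy usage: 5 vertices, weighted acyclic edges
edges = [(0, 1, 2), (0, 2, 4), (1, 2, 1), (1, 3, 7), (2, 4, 3), (3, 4, 1)]
print(shortest_paths_topological(5, edges, source=0))
```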

  4. On the optimal control problem for two regions’ macroeconomic model

    Directory of Open Access Journals (Sweden)

    Surkov Platon G.

    2015-12-01

    Full Text Available In this paper we consider a model of joint economic growth of two regions. This model is based on the classical Cobb-Douglas function and is described by a nonlinear system of differential equations. The interaction between the regions is carried out by changing the balance of trade. The optimal control problem for this system is posed and the Pontryagin maximum principle is used to analyse the problem. The maximized functional represents the global welfare of the regions. The numerical solution of the optimal control problem for particular regions is found. The parameters used were obtained from the basic scenario of the MERGE

  5. A distributional solution to a hyperbolic problem arising in population dynamics

    Directory of Open Access Journals (Sweden)

    Irina Kmit

    2007-10-01

    Full Text Available We consider a generalization of the Lotka-McKendrick problem describing the dynamics of an age-structured population with time-dependent vital rates. The generalization consists in allowing the initial and the boundary conditions to be derivatives of the Dirac measure. We construct a unique D'-solution in the framework of intrinsic multiplication of distributions. We also investigate the regularity of this solution.

  6. A data model that captures clinical reasoning about patient problems.

    Science.gov (United States)

    Barrows, R. C.; Johnson, S. B.

    1995-01-01

    We describe a data model that has been implemented for the CPMC Ambulatory Care System, and exemplify its function for patient problems. The model captures some nuances of clinical thinking about patients that are not accommodated in most other models, such as an evolution of clinical understanding about patient problems. A record of this understanding has clinical utility, and serves research interests as well as medical audit concerns. The model is described with an example, and advantages and limitations in the current implementation are discussed. PMID:8563311

  7. Models and Methods for Urban Power Distribution Network Planning

    Institute of Scientific and Technical Information of China (English)

    余贻鑫; 王成山; 葛少云; 肖俊; 严雪飞; 黄纯华

    2004-01-01

    The models, methods and application experiences of a practical GIS (geographic information system)-based computer decision-making support system for urban power distribution network planning with seven subsystems, termed CNP, are described. In each subsystem there is at least one practical mathematical method, or one set of them. Some new models and mathematical methods have been introduced. In the development of CNP the idea of cognitive systems engineering has been insisted on, which claims that human and computer intelligence should be combined to solve complex engineering problems cooperatively. Practical applications have shown that not only can the optimal plan be reached automatically with many complicated factors considered, but also the computation, analysis and graphic drawing burden can be reduced considerably.

  8. Energy Loss, Velocity Distribution, and Temperature Distribution for a Baffled Cylinder Model, Special Report

    Science.gov (United States)

    Brevoort, Maurice J.

    1937-01-01

    In the design of a cowling, a certain pressure drop across the cylinders of a radial air-cooled engine is made available. Baffles are designed to make use of this available pressure drop for cooling. The problem of cooling an air-cooled engine cylinder has been treated, for the most part, from considerations of a large heat-transfer coefficient. Knowledge of the precise cylinder characteristics that give a maximum heat-transfer coefficient should be the first consideration. The next problem is to distribute this ability to cool so that the cylinder cools uniformly. This report takes up the problem of the design of a baffle for a model cylinder. A study has been made of the important principles involved in the operation of a baffle for an engine cylinder; it shows that the cooling can be improved by 20% by using a correctly designed baffle. Such a gain is as effective in cooling the cylinder with the improved baffle as a 65% increase in pressure drop across the standard baffle and fin tips.

  9. Multiphase Simulated Annealing Based on Boltzmann and Bose-Einstein Distribution Applied to Protein Folding Problem

    Directory of Open Access Journals (Sweden)

    Juan Frausto-Solis

    2016-01-01

    Full Text Available A new hybrid Multiphase Simulated Annealing Algorithm using Boltzmann and Bose-Einstein distributions (MPSABBE) is proposed. MPSABBE was designed for solving Protein Folding Problem (PFP) instances. This new approach has four phases: (i) Multiquenching Phase (MQP), (ii) Boltzmann Annealing Phase (BAP), (iii) Bose-Einstein Annealing Phase (BEAP), and (iv) Dynamical Equilibrium Phase (DEP). BAP and BEAP are simulated annealing searching procedures based on the Boltzmann and Bose-Einstein distributions, respectively. DEP is also a simulated annealing search procedure, which is applied at the final temperature of the fourth phase and can be seen as a second Bose-Einstein phase. MQP is a search process that ranges from extremely high to high temperatures, applying a very fast cooling process, and is not very restrictive in accepting new solutions. However, BAP and BEAP range from high to low and from low to very low temperatures, respectively, and are more restrictive in accepting new solutions. DEP uses a particular heuristic to detect stochastic equilibrium by applying a least squares method during its execution. MPSABBE parameters are tuned with an analytical method, which considers the maximal and minimal deterioration of problem instances. MPSABBE was tested with several instances of the PFP, showing that the use of both distributions is better than using only the Boltzmann distribution in the classical SA.
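
    As a minimal sketch of the Boltzmann-annealing ingredient (shown on a toy continuous objective rather than a protein-folding energy; the multiphase structure and the Bose-Einstein acceptance rule of MPSABBE are not reproduced here), a basic simulated annealing loop with the Boltzmann acceptance criterion could look like:

```python
import math
import random

def boltzmann_sa(energy, x0, t_high=10.0, t_low=1e-3, cooling=0.95, steps_per_t=50):
    """Plain simulated annealing: accept a worse neighbour with probability
    exp(-delta/T), i.e. the Boltzmann acceptance criterion."""
    random.seed(1)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t_high
    while t > t_low:
        for _ in range(steps_per_t):
            cand = x + random.gauss(0.0, 0.5)      # random neighbour
            e_cand = energy(cand)
            delta = e_cand - e
            if delta <= 0 or random.random() < math.exp(-delta / t):
                x, e = cand, e_cand
                if e < best_e:
                    best_x, best_e = x, e
        t *= cooling                               # geometric cooling schedule
    return best_x, best_e

# toy energy landscape with several local minima
energy = lambda x: x**2 + 3.0 * math.sin(5.0 * x)
print(boltzmann_sa(energy, x0=4.0))
```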

  10. A NEW STATIC MECHANICS MODEL TO SOLVE CONTACT PROBLEMS IN MECHANICAL SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    陆志华; 叶庆泰

    2004-01-01

    The coupling between the local contact problems among the components and the deformation of the components in a mechanical system was identified. A series of coordinate systems has been established to describe a mechanical system with contact problems. The method of isolating the boundary of a contact body from the others has been used to describe the constraint between the contacting points. A more generalized static mechanics model of a mechanical system with contact problems has been established through the principle of virtual work. As an application, the model was used to study the multi-tooth engagement problem in internally meshed planetary gear systems. The stress distribution of the contacting gears was obtained. A test has verified that the static contact model and the computational method are correct.

  11. Distributed memory compiler methods for irregular problems: Data copy reuse and runtime partitioning

    Science.gov (United States)

    Das, Raja; Ponnusamy, Ravi; Saltz, Joel; Mavriplis, Dimitri

    1991-01-01

    Outlined here are two methods which we believe will play an important role in any distributed memory compiler able to handle sparse and unstructured problems. We describe how to link runtime partitioners to distributed memory compilers. In our scheme, programmers can implicitly specify how data and loop iterations are to be distributed between processors. This insulates users from having to deal explicitly with potentially complex algorithms that carry out work and data partitioning. We also describe a viable mechanism for tracking and reusing copies of off-processor data. In many programs, several loops access the same off-processor memory locations. As long as it can be verified that the values assigned to off-processor memory locations remain unmodified, we show that we can effectively reuse stored off-processor data. We present experimental data from a 3-D unstructured Euler solver run on iPSC/860 to demonstrate the usefulness of our methods.

  12. Optimal Weights of Certain Branches of an Arbitrary Connected Network for Fastest Distributed Consensus Averaging Problem

    CERN Document Server

    Jafarizadeh, Saber

    2010-01-01

    Solving fastest distributed consensus averaging problem over networks with different topologies has been an active area of research for a number of years. The main purpose of distributed consensus averaging is to compute the average of the initial values, via a distributed algorithm, in which the nodes only communicate with their neighbors. In the previous works full knowledge about the network's topology was required for finding optimal weights and convergence rate of network, but here in this work for the first time the optimal weights are determined analytically for the edges of certain types of branches, namely path branch, lollipop branch, semi-complete Branch and Ladder branch independent of the rest of network. The solution procedure consists of stratification of associated connectivity graph of branch and Semidefinite Programming (SDP), particularly solving the slackness conditions, where the optimal weights are obtained by inductive comparing of the characteristic polynomials initiated by slackness c...

  13. Distributed adaptive fuzzy iterative learning control of coordination problems for higher order multi-agent systems

    Science.gov (United States)

    Li, Jinsha; Li, Junmin

    2016-07-01

    In this paper, an adaptive fuzzy iterative learning control scheme is proposed for coordination problems of Mth order (M ≥ 2) distributed multi-agent systems. Every follower agent has a higher order integrator with unknown nonlinear dynamics and input disturbance. The dynamics of the leader are those of a higher order nonlinear system and are only available to a portion of the follower agents. With distributed initial state learning, the unified distributed protocols, combining time-domain and iteration-domain adaptive laws, guarantee that the follower agents track the leader uniformly on [0, T]. The proposed algorithm is then extended to achieve formation control. A numerical example and a multiple robotic system are provided to demonstrate the performance of the proposed approach.

  14. Definition of Model-based diagnosis problems with Altarica

    OpenAIRE

    Pencolé, Yannick; Chanthery, Elodie; Peynot, Thierry

    2016-01-01

    International audience; This paper presents a framework for modeling diagnosis problems based on a formal language called Altarica. The initial purpose of the language Altarica was to define a modeling language for safety analysis. This language has been developed as a collaboration between academics and industrial partners and is used in some industrial companies. The paper shows that the expressivity of this language, mixing event-based and state-based models, is sufficient to model classi...

  15. Towards an Information Model of Consistency Maintenance in Distributed Interactive Applications

    Directory of Open Access Journals (Sweden)

    Xin Zhang

    2008-01-01

    Full Text Available A novel framework to model and explore predictive contract mechanisms in distributed interactive applications (DIAs using information theory is proposed. In our model, the entity state update scheme is modelled as an information generation, encoding, and reconstruction process. Such a perspective facilitates a quantitative measurement of state fidelity loss as a result of the distribution protocol. Results from an experimental study on a first-person shooter game are used to illustrate the utility of this measurement process. We contend that our proposed model is a starting point to reframe and analyse consistency maintenance in DIAs as a problem in distributed interactive media compression.
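
    As a minimal sketch of the predictive contract idea analysed in the paper (in a dead-reckoning style: a hypothetical sender only transmits an entity-state update when the receiver's prediction would drift beyond an error threshold; the information-theoretic measurement framework itself is not reproduced), consider:

```python
def simulate_dead_reckoning(positions, dt=1.0, threshold=0.5):
    """Sender and receiver both extrapolate the last sent state with constant
    velocity; an update is sent only when the true position deviates from the
    extrapolation by more than `threshold`. Returns (sent_updates, total_ticks)."""
    sent = 1                                   # initial state is always sent
    last_pos, last_vel = positions[0], 0.0
    ticks_since_send = 0
    for prev, curr in zip(positions, positions[1:]):
        ticks_since_send += 1
        predicted = last_pos + last_vel * ticks_since_send * dt
        if abs(curr - predicted) > threshold:  # contract violated: send update
            last_vel = (curr - prev) / dt
            last_pos = curr
            ticks_since_send = 0
            sent += 1
    return sent, len(positions)

# toy trajectory: mostly smooth motion with an abrupt direction change
trajectory = [0.1 * t for t in range(30)] + [3.0 - 0.2 * t for t in range(30)]
print(simulate_dead_reckoning(trajectory))
```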

  16. Predicting the fate of biodiversity using species' distribution models: enhancing model comparability and repeatability.

    Directory of Open Access Journals (Sweden)

    Genoveva Rodríguez-Castañeda

    Full Text Available Species distribution modeling (SDM) is an increasingly important tool to predict the geographic distribution of species. Even though many problems associated with this method have been highlighted and solutions have been proposed, little has been done to increase comparability among studies. We reviewed recent publications applying SDMs and found that seventy nine percent failed to report methods that ensure comparability among studies, such as disclosing the maximum probability range produced by the models and reporting on the number of species occurrences used. We modeled six species of Falco from northern Europe and demonstrate that model results are altered by (1) spatial bias in species' occurrence data, (2) differences in the geographic extent of the environmental data, and (3) the effects of transformation of model output to presence/absence data when applying thresholds. Depending on the modeling decisions, forecasts of the future geographic distribution of Falco ranged from range contraction in 80% of the species to no net loss in any species, with the best model predicting no net loss of habitat in Northern Europe. The fact that predictions of range changes in response to climate change in published studies may be influenced by decisions in the modeling process seriously hampers the possibility of making sound management recommendations. Thus, each of the decisions made in generating SDMs should be reported and evaluated to ensure conclusions and policies are based on the biology and ecology of the species being modeled.

  17. Predicting the fate of biodiversity using species' distribution models: enhancing model comparability and repeatability.

    Science.gov (United States)

    Rodríguez-Castañeda, Genoveva; Hof, Anouschka R; Jansson, Roland; Harding, Larisa E

    2012-01-01

    Species distribution modeling (SDM) is an increasingly important tool to predict the geographic distribution of species. Even though many problems associated with this method have been highlighted and solutions have been proposed, little has been done to increase comparability among studies. We reviewed recent publications applying SDMs and found that seventy nine percent failed to report methods that ensure comparability among studies, such as disclosing the maximum probability range produced by the models and reporting on the number of species occurrences used. We modeled six species of Falco from northern Europe and demonstrate that model results are altered by (1) spatial bias in species' occurrence data, (2) differences in the geographic extent of the environmental data, and (3) the effects of transformation of model output to presence/absence data when applying thresholds. Depending on the modeling decisions, forecasts of the future geographic distribution of Falco ranged from range contraction in 80% of the species to no net loss in any species, with the best model predicting no net loss of habitat in Northern Europe. The fact that predictions of range changes in response to climate change in published studies may be influenced by decisions in the modeling process seriously hampers the possibility of making sound management recommendations. Thus, each of the decisions made in generating SDMs should be reported and evaluated to ensure conclusions and policies are based on the biology and ecology of the species being modeled.

  18. A Solution Approach from an Analytic Model to Heuristic Algorithm for Special Case of Vehicle Routing Problem with Stochastic Demands

    OpenAIRE

    Selçuk K. İşleyen; Ö. Faruk Baykoç

    2008-01-01

    We define a special case for the vehicle routing problem with stochastic demands (SC-VRPSD) where customer demands are normally distributed. We propose a new linear model for computing the expected length of a tour in SC-VRPSD. The proposed model is based on the integration of the “Traveling Salesman Problem” (TSP) and the Assignment Problem. For large-scale problems, we also use an Iterated Local Search (ILS) algorithm in order to reach an effective solution.

  19. Forward and inverse problems of electrocardiography: modeling and recovery of epicardial potentials in humans.

    Science.gov (United States)

    Shahidi, A V; Savard, P; Nadeau, R

    1994-03-01

    To assess the accuracy of solutions to the inverse problem of electrocardiography in man, epicardial potentials computed from thoracic potential distributions were compared to potentials measured directly over the surface of the heart during arrhythmia surgery. Three-dimensional finite element models of the thorax with different mesh resolutions and conductivity inhomogeneities were constructed from serial computerized tomography scans of a patient. These torso models were used to compute transfer matrices relating the epicardial potentials to the thoracic potentials. Potential distributions over the torso and the ventricles were measured with 63 leads in the same patient whose anatomical data was used to construct the torso models. To solve the inverse problem, different methods based on Tykhonov regularization or regularization-truncation were applied. The recovered epicardial potential distributions closely resembled the epicardial potential distributions measured early during ventricular preexcitation, but not the more complex distributions measured later during the QRS complex. Several problems encountered as the validation process is applied in man are also discussed.
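
    As a minimal sketch of the Tikhonov-regularized inversion step (a random matrix stands in for the torso transfer matrix; lead-field construction, the regularization-truncation variant and the choice of the regularization parameter are not covered here), the zero-order Tikhonov solution can be written as:

```python
import numpy as np

def tikhonov_inverse(A, b, lam):
    """Zero-order Tikhonov regularization: minimize ||A x - b||^2 + lam ||x||^2,
    whose closed-form solution is x = (A^T A + lam I)^(-1) A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# toy problem: 63 torso leads, 63 epicardial sources, noisy measurements
rng = np.random.default_rng(7)
A = rng.normal(size=(63, 63))              # stand-in for the transfer matrix
x_true = rng.normal(size=63)               # "true" epicardial potentials
b = A @ x_true + 0.05 * rng.normal(size=63)

for lam in (1e-4, 1e-2, 1.0):
    x_hat = tikhonov_inverse(A, b, lam)
    err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print(f"lambda={lam:g}  relative error={err:.3f}")
```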

  20. Developing a Model for Solving the Flight Perturbation Problem

    Directory of Open Access Journals (Sweden)

    Amirreza Nickkar

    2015-02-01

    Full Text Available Purpose: In the aviation and airline industry, crew costs are the second largest direct operating cost after fuel costs. But unlike fuel costs, a considerable portion of crew costs can be saved through optimized utilization of the internal resources of an airline company. Therefore, solving the flight perturbation scheduling problem, in order to provide an optimized schedule that covers all problem dimensions simultaneously, is very important. In this paper, we define an integrated recovery model that is able to recover the aircraft and crew dimensions simultaneously in order to produce more economical solutions and create fewer incompatibilities between the decisions. Design/methodology/approach: The current research is based on the development of one of the flight rescheduling models with a disruption management approach, wherein two solution strategies for the flight perturbation problem are presented: Dantzig-Wolfe decomposition and a Lagrangian heuristic. Findings: According to the results of this research, the Lagrangian heuristic approach for the DW-MP solved the problem optimally in all known cases. Also, the strategy based on the Dantzig-Wolfe decomposition managed to produce a solution within an acceptable time (under 1 sec). Originality/value: This model will support the decisions of the flight controllers in the operations centers of airlines. When the flight network faces a problem, the flight controllers obtain a set of ranked answers using this model; thus, applying the crew's conditions in the proposed model brings it closer to actual conditions.

  1. RELAXATION TIME LIMITS PROBLEM FOR HYDRODYNAMIC MODELS IN SEMICONDUCTOR SCIENCE

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this article, two relaxation time limits, namely the momentum relaxation time limit and the energy relaxation time limit, are considered. By a compactness argument, it is shown that the smooth solutions of the multidimensional nonisentropic Euler-Poisson problem converge to the solutions of an energy transport model or a drift diffusion model, respectively, with respect to the different time scales.

  2. Data-Driven Model Order Reduction for Bayesian Inverse Problems

    KAUST Repository

    Cui, Tiangang

    2014-01-06

    One of the major challenges in using MCMC for the solution of inverse problems is the repeated evaluation of computationally expensive numerical models. We develop a data-driven projection- based model order reduction technique to reduce the computational cost of numerical PDE evaluations in this context.

  3. Performance prediction model for distributed applications on multicore clusters

    CSIR Research Space (South Africa)

    Khanyile, NP

    2012-07-01

    Full Text Available Distributed processing offers a way of successfully dealing with computationally demanding applications such as scientific problems. Over the years, researchers have investigated ways to predict the performance of parallel algorithms. Amdahl’s law...

  4. A modular approach to addressing model design, scale, and parameter estimation issues in distributed hydrological modelling

    Science.gov (United States)

    Leavesley, G.H.; Markstrom, S.L.; Restrepo, Pedro J.; Viger, R.J.

    2002-01-01

    A modular approach to model design and construction provides a flexible framework in which to focus the multidisciplinary research and operational efforts needed to facilitate the development, selection, and application of the most robust distributed modelling methods. A variety of modular approaches have been developed, but with little consideration for compatibility among systems and concepts. Several systems are proprietary, limiting any user interaction. The US Geological Survey modular modelling system (MMS) is a modular modelling framework that uses an open source software approach to enable all members of the scientific community to address collaboratively the many complex issues associated with the design, development, and application of distributed hydrological and environmental models. Implementation of a common modular concept is not a trivial task. However, it brings the resources of a larger community to bear on the problems of distributed modelling, provides a framework in which to compare alternative modelling approaches objectively, and provides a means of sharing the latest modelling advances. The concepts and components of the MMS are described and an example application of the MMS, in a decision-support system context, is presented to demonstrate current system capabilities. Copyright ?? 2002 John Wiley and Sons, Ltd.

  5. On the EEG/MEG forward problem solution for distributed cortical sources.

    Science.gov (United States)

    von Ellenrieder, Nicolás; Valdés-Hernández, Pedro A; Muravchik, Carlos H

    2009-10-01

    In studies of EEG/MEG problems involving cortical sources, the cortex may be modeled by a 2-D manifold inside the brain. In such cases the primary or impressed current density over this manifold is usually approximated by a set of dipolar sources located at the vertices of the cortical surface tessellation. In this study, we analyze the different errors induced by this approximation on the EEG/MEG forward problem. Our results show that in order to obtain more accurate solutions of the forward problems with the multiple dipoles approximation, the moments of the dipoles should be weighted by the area of the surrounding triangles, or using an alternative approximation of the primary current as a constant or linearly varying current density over plane triangular elements of the cortical surface tessellation. This should be taken into account when computing the lead field matrix for solving the EEG/MEG inverse problem in brain imaging methods.

  6. Dynamical Models For Prices With Distributed Delays

    Directory of Open Access Journals (Sweden)

    Mircea Gabriela

    2015-06-01

    Full Text Available In the present paper we study some models for the price dynamics of a single commodity market. The quantities supplied and demanded are regarded as functions of time. Nonlinearities in both the supply and demand functions are considered. The inventory and the level of inventory are taken into consideration. Because consumer behavior affects commodity demand, and this behavior is influenced not only by the instantaneous price but also by weighted past prices, a distributed time delay is introduced. The following kernels are considered: a demand price weak kernel and a demand price Dirac kernel. Only one positive equilibrium point is found and its stability analysis is presented. When the demand price kernel is weak, under some conditions on the parameters, the equilibrium point is locally asymptotically stable. When the demand price kernel is Dirac, the existence of local oscillations is investigated. A change in the local stability of the equilibrium point, from stable to unstable, implies a Hopf bifurcation. A family of periodic orbits bifurcates from the positive equilibrium point when the time delay passes through a critical value. The last part contains some numerical simulations to illustrate the effectiveness of our results and conclusions.
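
    As a brief illustration of the two kernels mentioned (standard forms from the distributed-delay literature; the paper's exact demand function is not reproduced here), the demand may be taken to depend on the weighted past price

    $$\bar p(t) = \int_{-\infty}^{t} k(t-s)\, p(s)\, ds,$$

    where the weak kernel $k(s) = a\,e^{-a s}$ (with $a>0$) weights the entire price history with exponentially fading memory, while the Dirac kernel $k(s) = \delta(s-\tau)$ collapses the distributed delay to a discrete delay, $\bar p(t) = p(t-\tau)$.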

  7. Modelling human problem solving with data from an online game.

    Science.gov (United States)

    Rach, Tim; Kirsch, Alexandra

    2016-11-01

    Since the beginning of cognitive science, researchers have tried to understand human strategies in order to develop efficient and adequate computational methods. In the domain of problem solving, the travelling salesperson problem has been used for the investigation and modelling of human solutions. We propose to extend this effort with an online game, in which instances of the travelling salesperson problem have to be solved in the context of a game experience. We report on our effort to design and run such a game, present the data contained in the resulting openly available data set and provide an outlook on the use of games in general for cognitive science research. In addition, we present three geometrical models mapping the starting point preferences in the problems presented in the game as the result of an evaluation of the data set.

  8. Application of a Mathematical Model to an Advertisement Reservation Problem

    Directory of Open Access Journals (Sweden)

    Ozlem COSGUN

    2013-01-01

    Full Text Available Television networks provide TV programs free of charge to the public. However, they acquire their revenue by telecasting advertisements in the midst of continuing programs or shows. A key problem faced by TV networks in Turkey is how to accept and televise the advertisements reserved by a client in a specified advertisement break, which we call the "Advertisement Reservation Problem" (ARP). The problem is complicated by limited time inventory, different rating points for different target groups, competition avoidance, and the relationship between TV networks and clients. In this study we have developed a mathematical model for the advertisement reservation problem and extended this model to some cases encountered in real business life. We have also discussed how these cases affect the decisions of a TV network. A mixed integer linear programming approach is proposed to solve these problems. This approach has been applied to a case taken from one of the biggest TV networks of Turkey.
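
    As a minimal sketch of what such a mixed integer formulation can look like (hypothetical data and constraints; it assumes the PuLP package, and the paper's actual model, rating targets and competition-avoidance rules are not reproduced), one might select which reserved spots to accept per break subject to each break's time inventory:

```python
# Requires the PuLP package (an assumption; any MILP modeller would do).
import pulp

# hypothetical data: 4 reservation requests, 2 advertisement breaks
revenue  = {1: 80, 2: 120, 3: 60, 4: 150}     # revenue if the spot is accepted
duration = {1: 20, 2: 30, 3: 15, 4: 45}       # spot length in seconds
capacity = {"A": 60, "B": 60}                 # seconds available per break
spots, breaks = list(revenue), list(capacity)

prob = pulp.LpProblem("ad_reservation", pulp.LpMaximize)
x = pulp.LpVariable.dicts("assign", (spots, breaks), cat="Binary")

# objective: maximize revenue of accepted (and placed) spots
prob += pulp.lpSum(revenue[i] * x[i][b] for i in spots for b in breaks)

# each spot is placed in at most one break
for i in spots:
    prob += pulp.lpSum(x[i][b] for b in breaks) <= 1

# total duration placed in a break cannot exceed its time inventory
for b in breaks:
    prob += pulp.lpSum(duration[i] * x[i][b] for i in spots) <= capacity[b]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for i in spots:
    for b in breaks:
        if x[i][b].value() > 0.5:
            print(f"spot {i} -> break {b}")
print("total revenue:", pulp.value(prob.objective))
```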

  9. THE INVERSE PROBLEM OF A REPRODUCTION MODEL OF NATIONAL INCOME

    Directory of Open Access Journals (Sweden)

    Laipanova Z. M.

    2016-02-01

    Full Text Available In practice, a number of mathematical models of economics have been developed and tested: balance relationship models (balance models), models of economic growth, the expanding economy, the labour market, theories of consumption and production, competitive equilibrium models of the economy under imperfect competition, and others. These models were built on linear algebra, mathematical analysis, mathematical programming, differential equations, optimization methods, optimal control theory, probability theory, stochastic processes, operations research, game theory, and statistical analysis. Inverse problems have rarely been considered in the various models of mathematical economics, although such problems have been investigated thoroughly in the study of physical processes. As the analysis of theoretical and applied studies of economic processes shows, they are of considerable interest for practice. Therefore, the inverse problems of the mathematical model considered in this study are, as results already obtained for other mathematical models show, of considerable interest in applied and theoretical research. In this article, the authors have formulated and investigated an inverse problem for a model of economic growth. For its solution the authors propose to build a system of algebraic equations using a reproduction model of national income and then, using methods of quadratic programming, to find the best mean-square estimates of the model parameter

  10. Security Issues in Distributed Database System Model

    OpenAIRE

    MD.TABREZ QUASIM

    2013-01-01

    This paper reviews the most common as well as emerging security mechanisms used in distributed database systems. As distributed databases have become more popular, the need for improvement in distributed database management systems has become even more important. The most important issue is security, which may arise and possibly compromise the access control and the integrity of the system. In this paper, we propose solutions for several security aspects such as multi-level access control, ...

  11. Atomic hydrogen distribution. [in Titan atmospheric model

    Science.gov (United States)

    Tabarie, N.

    1974-01-01

    Several possible H2 vertical distributions in Titan's atmosphere are considered under the constraint of a total quantity of 5 km-A. Approximate calculations show that the hydrogen distribution is quite sensitive to two other parameters of Titan's atmosphere: the temperature and the presence of other constituents. The escape fluxes of H and H2 are also estimated, as well as the consequent distributions trapped in the Saturnian system.

  12. Analysis of the loop length distribution for the negative-weight percolation problem in dimensions d=2 through d=6.

    Science.gov (United States)

    Claussen, G; Apolo, L; Melchert, O; Hartmann, A K

    2012-11-01

    We consider the negative weight percolation (NWP) problem on hypercubic lattice graphs with fully periodic boundary conditions in all relevant dimensions from d=2 to the upper critical dimension d=6. The problem exhibits edge weights drawn from disorder distributions that allow for weights of either sign. We are interested in the full ensemble of loops with negative weight, i.e., nontrivial (system spanning) loops as well as topologically trivial ("small") loops. The NWP phenomenon refers to the disorder driven proliferation of system spanning loops of total negative weight. While previous studies were focused on the latter loops, here we put the ensemble of small loops under scrutiny. Our aim in this extensive and exhaustive numerical study is to characterize the loop length distribution of the small loops right at and below the critical point of the hypercubic setups by means of two independent critical exponents. These can further be related to the results of previous finite-size scaling analyses carried out for the system spanning loops. For the numerical simulations, we employed a mapping of the NWP model to a combinatorial optimization problem that can be solved exactly by using sophisticated matching algorithms. This allowed us to study numerically exact very large systems with high statistics.

  13. A particle swarm approach to solve vehicle routing problem with uncertain demand: A drug distribution case study

    Directory of Open Access Journals (Sweden)

    Babak Farhang Moghadam

    2010-07-01

    Full Text Available During the past few years, there have been tremendous efforts to improve the cost of logistics using a variety of Vehicle Routing Problem (VRP) models. In fact, the recent rise in fuel prices has motivated many to reduce the cost of transportation associated with their business through an improved implementation of VRP systems. We study a specific form of VRP where demand is supposed to be uncertain with an unknown distribution. A Particle Swarm Optimization (PSO) is proposed to solve the VRP and the results are compared with other existing methods. The proposed approach is also used for a real-world case study of drug distribution, and the preliminary results indicate that the method can reduce the unmet demand significantly.
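    The abstract does not give the authors' VRP encoding, so the following is only a generic particle swarm optimization skeleton applied to a continuous test function (the sphere function); the inertia and acceleration coefficients are common textbook defaults, not the values used in the study.

```python
import numpy as np

def pso(objective, dim=10, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Generic particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(objective, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()
    gbest_val = pbest_val.min()

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia + pull towards personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.apply_along_axis(objective, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if pbest_val.min() < gbest_val:
            gbest_val = pbest_val.min()
            gbest = pbest[pbest_val.argmin()].copy()
    return gbest, gbest_val

best, val = pso(lambda x: float(np.sum(x**2)))
print("best objective value:", val)
```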

  14. Solve the partitioning problem by sticker model in DNA computing

    Institute of Scientific and Technical Information of China (English)

    QU Huiqin; LU Mingming; ZHU Hong

    2004-01-01

    The aim of this work is to solve the partitioning problem, the most canonical NP-complete problem containing numerical parameters, within the sticker model of DNA computing. We first design a parallel program for addition, then give a program to calculate the subset sums of a set. Finally, a program for partitioning is given, which contains the former programs. Furthermore, the correctness of each program is proved in this paper.
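    The sticker-model programs themselves are molecular, but the underlying computation, enumerating subset sums and checking for an exact partition, has a conventional dynamic-programming analogue; a brief Python sketch with an invented example set:

```python
def can_partition(values):
    """Return True if `values` can be split into two subsets of equal sum."""
    total = sum(values)
    if total % 2:
        return False
    target = total // 2
    reachable = {0}                      # subset sums reachable so far
    for v in values:
        reachable |= {s + v for s in reachable if s + v <= target}
    return target in reachable

print(can_partition([3, 1, 4, 2, 2]))    # True: {3, 2, 1} and {4, 2}
```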

  15. Security Issues in Distributed Database System Model

    Directory of Open Access Journals (Sweden)

    MD.TABREZ QUASIM

    2013-12-01

    Full Text Available This paper reviews the most common as well as emerging security mechanisms used in distributed database systems. As distributed databases have become more popular, the need for improvement in distributed database management systems has become even more important. The most important issue is security, which may arise and possibly compromise the access control and the integrity of the system. In this paper, we propose solutions for several security aspects such as multi-level access control, confidentiality, reliability, integrity and recovery that pertain to a distributed database system.

  16. Problem-solving Model for Managing Stress and Anxiety

    OpenAIRE

    Taghi Abutalebi Ahmadi

    2013-01-01

    The purpose of this study is to take a look at problem-solving model for managing stress and anxiety. If each of us as a human being has an organized method for solving the different problems of our life, at that time we can get along with stress and anxiety easily. The capability of problem solving makes it possible for that person a) to distinguish emotions in himself and others b) to understand how excitement affects behavior c) to be able to show different reactions to different emotions....

  17. An Adaptive Neural Network Model for Nonlinear Programming Problems

    Institute of Scientific and Technical Information of China (English)

    Xiang-sun Zhang; Xin-jian Zhuo; Zhu-jun Jing

    2002-01-01

    In this paper a canonical neural network with adaptively changing synaptic weights and activation function parameters is presented to solve general nonlinear programming problems. The basic part of the model is a sub-network used to find a solution of quadratic programming problems with simple upper and lower bounds. By sequentially activating the sub-network under the control of an external computer or a special analog or digital processor that adjusts the weights and parameters, one then solves general nonlinear programming problems. Convergence proof and numerical results are given.

  18. Mathematical modeling/problem solving in global oxygen transport.

    Science.gov (United States)

    Farrell, Kevin; Hill, Andrew; Dent, Leon; Nguyen, Minh Ly

    2009-08-01

    A simplified approach to mathematical modeling/problem solving in global oxygen transport is presented. In addition to standard oxygen transport formulae, it uses the S-Factor and a mathematical relationship relating SvO2 to the ratio DO2/VO2. This method allows the determination or specification of SvO2, PvO2, P50, and systemic shunting in the context of this simplified approach. Heretofore this has not been possible. With this approach, essentially all clinical problems in global oxygen transport can be dealt with. This is illustrated by the broad scope of the five problems presented.

  19. Managing problem employees: a model program and practical guide.

    Science.gov (United States)

    Miller, Laurence

    2010-01-01

    This article presents a model program for managing problem employees that includes a description of the basic types of problem employees and employee problems, as well as practical recommendations for: (1) selection and screening, (2) education and training, (3) coaching and counseling, (4) discipline, (5) psychological fitness-for-duty evaluations, (6) mental health services, (7) termination, and (8) leadership and administrative strategies. Throughout, the emphasis is on balancing the need for order and productivity in the workplace with fairness and concern for employee health and well-being.

  20. Competing risk models in reliability systems, a weibull distribution model with bayesian analysis approach

    Science.gov (United States)

    Iskandar, Ismed; Satria Gondokaryono, Yudi

    2016-02-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes depending upon the age and the environment of the system and its components. Another problem in reliability theory is that of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation analyses allow us to combine past knowledge or experience, in the form of an apriori distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation analyses to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample size. The simulation data are analyzed using Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the standard deviation in the opposite direction. For perfect information on the prior distribution, the estimation methods of the Bayesian analyses are better than those of the maximum likelihood. The sensitivity analyses show some amount of sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range
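    A minimal sketch of the kind of Bayesian estimation the abstract describes, reduced to a single failure cause: Weibull lifetimes are simulated and the posterior over shape and scale is evaluated on a grid under a flat prior. The true parameter values, sample size and grid ranges are arbitrary choices for illustration, not those of the paper.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
true_shape, true_scale = 1.8, 100.0
data = weibull_min(true_shape, scale=true_scale).rvs(size=50, random_state=rng)

# Grid over the two Weibull parameters, flat prior.
shapes = np.linspace(0.5, 4.0, 150)
scales = np.linspace(50.0, 200.0, 150)
K, L = np.meshgrid(shapes, scales, indexing="ij")

loglik = np.zeros_like(K)
for i, k in enumerate(shapes):
    for j, lam in enumerate(scales):
        loglik[i, j] = weibull_min.logpdf(data, k, scale=lam).sum()

# Unnormalized posterior = likelihood x flat prior; normalize over the grid.
post = np.exp(loglik - loglik.max())
post /= post.sum()

print("posterior mean shape:", (K * post).sum())
print("posterior mean scale:", (L * post).sum())
```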

  1. Crisscross Optimization Algorithm and Monte Carlo Simulation for Solving Optimal Distributed Generation Allocation Problem

    Directory of Open Access Journals (Sweden)

    Xiangang Peng

    2015-12-01

    Full Text Available Distributed generation (DG) systems are integral parts of future distribution networks. In this paper, a novel approach integrating a crisscross optimization algorithm and Monte Carlo simulation (CSO-MCS) is implemented to solve the optimal DG allocation (ODGA) problem. The feature of applying CSO to the ODGA problem lies in three interacting operators, namely horizontal crossover, vertical crossover and a competitive operator. The horizontal crossover searches for new solutions within a hypercube space with a larger probability, and in the periphery of each hypercube with a decreasing probability. The vertical crossover can effectively help stagnant dimensions of a population escape from premature convergence. The competitive operator keeps the crisscross search at the historical best positions to quicken the convergence rate. It is the combination of the double search strategies and the competitive mechanism that gives CSO a significant advantage in convergence speed and accuracy. Moreover, to deal with system uncertainties such as the output power of wind turbine and photovoltaic generators, an MCS-based method is adopted to solve the probabilistic power flow. The effectiveness of the CSO-MCS method is validated on the typical 33-bus and 69-bus test systems, and the results substantiate the suitability of CSO-MCS for the multi-objective ODGA problem.
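    The abstract only describes the two crossover moves verbally; the sketch below is one plausible reading of them (horizontal crossover blending two parent solutions dimension-wise, vertical crossover blending two dimensions of one solution, and a competitive step keeping whichever of parent and offspring is better). The coefficient ranges and the toy objective are assumptions, not the paper's exact operator definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

def horizontal_crossover(x_i, x_j):
    """Blend two parent solutions dimension-wise (assumed operator form)."""
    r = rng.random(x_i.shape)
    c = rng.uniform(-1.0, 1.0, x_i.shape)
    return r * x_i + (1 - r) * x_j + c * (x_i - x_j)

def vertical_crossover(x, d1, d2):
    """Blend two dimensions of the same solution to unstick stagnant ones."""
    y = x.copy()
    r = rng.random()
    y[d1] = r * x[d1] + (1 - r) * x[d2]
    return y

def competitive(parent, child, f):
    """Keep the better of parent and offspring (minimization)."""
    return child if f(child) < f(parent) else parent

# Tiny demonstration on the sphere function.
f = lambda v: float(np.sum(v**2))
a, b = rng.uniform(-5, 5, 4), rng.uniform(-5, 5, 4)
a = competitive(a, horizontal_crossover(a, b), f)
a = competitive(a, vertical_crossover(a, 0, 2), f)
print("current best value:", f(a))
```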

  2. On the Spectral Problems for the Discrete Boltzmann Models

    Institute of Scientific and Technical Information of China (English)

    Aq Kwang-Hua Chu; J. FANG Jing

    2000-01-01

    The discrete Boltzmann models are used to study the spectral problems related to one-dimensional plane wave propagation in monatomic gases, which are fundamental in nonequilibrium statistical thermodynamics. The results show that the 8-velocity model can only describe the propagation of the diffusion mode (entropy wave) in the intermediate Knudsen number regime. The 4- and 6-velocity models, instead, can describe the propagation of sound modes quite well, after comparison with the continuum-mechanical results.

  3. An optimisation model for the warehouse design and planning problem

    OpenAIRE

    Geraldes, Carla A. S.; Carvalho, Sameiro; Pereira, Guilherme

    2011-01-01

    In spite of the importance of warehouses in the field of supply chain management, there is not a single decision model that integrates all the decisions that concern the warehouse design and planning problem. A number of warehouse decision support models have been proposed in the literature, but considerable difficulties in applying these models still remain, due to the large amount of information to be processed and the large number of possible alternatives. In this paper we discuss a...

  4. Modelling Ecuador's rainfall distribution according to geographical characteristics.

    Science.gov (United States)

    Tobar, Vladimiro; Wyseure, Guido

    2017-04-01

    It is known that rainfall is affected by terrain characteristics, and some studies have focussed on its distribution over complex terrain. Ecuador's temporal and spatial rainfall distribution is affected by its location on the ITCZ, the marine currents in the Pacific, the Amazon rainforest, and the Andes mountain range. Although all these factors are important, we think that the latter one may hold a key for modelling the spatial and temporal distribution of rainfall. The study considered 30 years of monthly data from 319 rainfall stations having at least 10 years of data available. The relatively low density of stations and their location in accessible sites near main roads or rivers leave large and important areas ungauged, making it inappropriate to rely on traditional interpolation techniques to estimate regional rainfall for water balance. The aim of this research was to come up with a useful model for seasonal rainfall distribution in Ecuador based on geographical characteristics to allow its spatial generalization. The target for modelling was the seasonal rainfall, characterized by nine percentiles for each of the 12 months of the year, resulting in 108 response variables, later reduced to four principal components comprising 94% of the total variability. Predictor variables for the model were: geographic coordinates, elevation, main wind effects from the Amazon and the Coast, valley and hill indexes, and average and maximum elevation above the selected rainfall station to the east and to the west, for each of 18 directions (50-135°, by 5°), adding up to 79 predictors. A multiple linear regression model fitted by the Elastic-net algorithm with cross-validation was applied to each of the principal components as a response, to select the most important of the 79 predictor variables. The Elastic-net algorithm deals well with collinearity problems, while allowing variable selection in a blended approach between Ridge and Lasso regression. The model fitting
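    A small sketch of the regression step described above, using scikit-learn's cross-validated Elastic-net; the predictor matrix and the single response are random stand-ins for the 79 geographic variables and one rainfall principal component of the study, with only the array sizes taken from the abstract.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(42)
n_stations, n_predictors = 319, 79          # sizes taken from the abstract
X = rng.normal(size=(n_stations, n_predictors))
true_coef = np.zeros(n_predictors)
true_coef[:8] = rng.normal(size=8)          # pretend only a few predictors matter
y = X @ true_coef + 0.5 * rng.normal(size=n_stations)   # stand-in for one PC

# Cross-validated Elastic-net: blends Ridge (l1_ratio -> 0) and Lasso (-> 1),
# which copes with collinearity while still selecting variables.
model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9, 1.0], cv=5, max_iter=10000)
model.fit(X, y)
print("chosen l1_ratio:", model.l1_ratio_, "alpha:", model.alpha_)
print("non-zero coefficients:", int(np.sum(model.coef_ != 0)))
```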

  5. General Ripple Mobility Model: A Novel Mobility Model of Uniform Spatial Distribution and Diverse Average Speed

    Science.gov (United States)

    Chen, Chun-Hung; Wu, Ho-Ting; Ke, Kai-Wei

    Simulations are often deployed to evaluate proposed mechanisms or algorithms in Mobile Ad Hoc Networks (MANET). In MANET, the impacts of some simulation parameters, such as transmission range and data rate, are noticeable. However, the effect of the mobility model has not been clear until recently. Random Waypoint (RWP) is one of the most widely applied nodal mobility models in simulations due to its clear procedure and easy employment. However, it exhibits two major problems: decaying average speed and the border effect. Both problems lead to overestimating the performance of the employed protocols and applications. Although many recently proposed mobility models are able to reduce or eliminate the above-mentioned problems, the concept of Diverse Average Speed (DAS) has not been introduced. DAS aims to provide different average speeds within the same speed range. In most mobility models, the average speed is decided once the minimum and maximum speeds are set. In this paper, we propose a novel mobility model, named General Ripple Mobility Model (GRMM). GRMM targets to provide a uniform nodal spatial distribution and DAS without decaying average speed. Simulations and analytic results demonstrate the merits of the GRMM model.
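    To make the average-speed problem of RWP concrete, the short simulation below compares the arithmetic mean of the speeds a node draws with its time-averaged speed: because slow legs last longer, the time average settles well below the drawn mean and drops further as the minimum speed approaches zero. This is a generic illustration of the RWP artefact, not GRMM itself, and the area size and speed ranges are arbitrary.

```python
import numpy as np

def rwp_time_average_speed(v_min, v_max, n_legs=100_000, area=1000.0, seed=0):
    """Time-averaged speed of a Random Waypoint node in a square area."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0.0, area, size=(n_legs + 1, 2))   # successive waypoints
    dist = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # leg lengths
    speed = rng.uniform(v_min, v_max, size=n_legs)       # speed drawn per leg
    travel_time = dist / speed
    return dist.sum() / travel_time.sum(), speed.mean()

for v_min in (1.0, 0.1, 0.01):
    t_avg, drawn_avg = rwp_time_average_speed(v_min, 20.0)
    print(f"v_min={v_min:>5}: drawn mean {drawn_avg:.2f} m/s, "
          f"time average {t_avg:.2f} m/s")
```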

  6. Exacerbating the Cosmological Constant Problem with Interacting Dark Energy Models

    Science.gov (United States)

    Marsh, M. C. David

    2017-01-01

    Future cosmological surveys will probe the expansion history of the Universe and constrain phenomenological models of dark energy. Such models do not address the fine-tuning problem of the vacuum energy, i.e., the cosmological constant problem (CCP), but can make it spectacularly worse. We show that this is the case for "interacting dark energy" models in which the masses of the dark matter states depend on the dark energy sector. If realized in nature, these models have far-reaching implications for proposed solutions to the CCP that require the number of vacua to exceed the fine-tuning of the vacuum energy density. We show that current estimates of the number of flux vacua in string theory, Nvac ~ O(10^272,000), are far too small to realize certain simple models of interacting dark energy and solve the cosmological constant problem anthropically. These models admit distinctive observational signatures that can be targeted by future gamma-ray observatories, hence making it possible to observationally rule out the anthropic solution to the cosmological constant problem in theories with a finite number of vacua.

  7. Application of firefly algorithm to the dynamic model updating problem

    Science.gov (United States)

    Shabbir, Faisal; Omenzetter, Piotr

    2015-04-01

    Model updating can be considered a branch of optimization problems in which calibration of the finite element (FE) model is undertaken by comparing the modal properties of the actual structure with those of the FE predictions. The attainment of a global solution in a multidimensional search space is a challenging problem. Nature-inspired algorithms have gained increasing attention in the previous decade for solving such complex optimization problems. This study applies the novel Firefly Algorithm (FA), a global optimization search technique, to a dynamic model updating problem. This is, to the authors' best knowledge, the first time FA is applied to model updating. The working of FA is inspired by the flashing characteristics of fireflies. Each firefly represents a randomly generated solution which is assigned brightness according to the value of the objective function. The physical structure under consideration is a full-scale cable-stayed pedestrian bridge with a composite bridge deck. Data from dynamic testing of the bridge was used to correlate and update the initial model by using FA. The algorithm aimed at minimizing the difference between the natural frequencies and mode shapes of the structure. The performance of the algorithm is analyzed in finding the optimal solution in a multidimensional search space. The paper concludes with an investigation of the efficacy of the algorithm in obtaining a reference finite element model which correctly represents the as-built original structure.
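    As with the PSO example earlier, the sketch below is only the textbook firefly update (attractiveness decaying with squared distance plus a small random walk) applied to a toy objective; it is not the authors' bridge-updating code, and the coefficients alpha, beta0 and gamma are common default choices rather than values from the paper.

```python
import numpy as np

def firefly(objective, dim=4, n=25, iters=300,
            alpha=0.2, beta0=1.0, gamma=1.0, bounds=(-5.0, 5.0), seed=0):
    """Minimal firefly algorithm (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n, dim))
    f = np.apply_along_axis(objective, 1, x)
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:                    # firefly j is brighter, i moves
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] = np.clip(
                        x[i] + beta * (x[j] - x[i])
                        + alpha * (rng.random(dim) - 0.5),
                        lo, hi)
                    f[i] = objective(x[i])
    best = f.argmin()
    return x[best], f[best]

sol, val = firefly(lambda v: float(np.sum(v**2)))
print("best objective value:", val)
```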

  8. Problem-solving Model for Managing Stress and Anxiety

    Directory of Open Access Journals (Sweden)

    Taghi Abutalebi Ahmadi

    2013-07-01

    Full Text Available The purpose of this study is to take a look at a problem-solving model for managing stress and anxiety. If each of us as a human being has an organized method for solving the different problems of our life, we can cope with stress and anxiety easily. The capability of problem solving makes it possible for a person (a) to distinguish emotions in himself and others, (b) to understand how emotions affect behavior, and (c) to be able to show different reactions to different emotions. If we do not deal with emotional states such as grief, anger or anxiety properly, these emotions will have negative effects on the physical and mental health of the person. Problem-solving teaching is a treatment method by which the person learns to utilize effective cognitive skills to cope with interpersonal and problematic situations. In this study, we would like to emphasize the importance of problem-solving teaching and learn about its varieties, principles and implementation techniques, so that we can use it to manage our internal and environmental stressors. Among the various models, we mention the easy and helpful five-step problem-solving approach of Dixon and Glover (1984, as cited in Yari, 2009) as an example, including describing the problem, stating the problem in precise and clear terms, selecting guidelines for solving the problem and prioritizing them, implementing the guidelines characterized at the previous stage, and finally evaluating. At this stage, we will consider what we have gained vis-a-vis what we had hoped to gain.

  9. Emotion: Appraisal-coping model for the "Cascades" problem

    CERN Document Server

    Mahboub, Karim; Bertelle, Cyrille; Jay, Véronique

    2009-01-01

    Modelling emotion has become a challenge nowadays. Therefore, several models have been produced in order to express human emotional activity. However, only a few of them are currently able to express the close relationship existing between emotion and cognition. An appraisal-coping model is presented here, with the aim of simulating the emotional impact caused by the evaluation of a particular situation (appraisal), along with the consequent cognitive reaction intended to face the situation (coping). This model is applied to the "Cascades" problem, a small arithmetical exercise designed for ten-year-old pupils. The goal is to create a model corresponding to a child's behaviour when solving the problem using his own strategies.

  10. A mathematical model of a computational problem solving system

    Science.gov (United States)

    Aris, Teh Noranis Mohd; Nazeer, Shahrin Azuan

    2015-05-01

    This paper presents a mathematical model based on fuzzy logic for a computational problem solving system. The fuzzy logic uses truth degrees as a mathematical model to represent vague algorithms. The fuzzy logic mathematical model consists of fuzzy solution and fuzzy optimization modules. The algorithm is evaluated based on a software metrics calculation that produces the fuzzy set membership. The fuzzy solution mathematical model is integrated in the fuzzy inference engine that predicts various solutions to computational problems. The solution is extracted from a fuzzy rule base. Then, the solutions are evaluated based on a software metrics calculation that produces the level of fuzzy set membership. The fuzzy optimization mathematical model is integrated in the recommendation generation engine that generates the optimized solution.

  11. Blackboard system generator (BSG) - An alternative distributed problem-solving paradigm

    Science.gov (United States)

    Silverman, Barry G.; Feggos, Kostas; Chang, Joseph Shih

    1989-01-01

    A status review is presented for a generic blackboard-based distributed problem-solving environment in which multiple-agent cooperation can be effected. This environment is organized into a shared information panel, a chairman control panel, and a metaplanning panel. Each panel contains a number of embedded AI techniques that facilitate its operation and that provide heuristics for solving the underlying team-agent decision problem. The status of these panels and heuristics is described along with a number of robustness considerations. The techniques for each of the three panels and for four sets of paradigm-related advances are described, along with selected results from classroom teaching experiments and from three applications.

  12. Blackboard system generator (BSG) - An alternative distributed problem-solving paradigm

    Science.gov (United States)

    Silverman, Barry G.; Feggos, Kostas; Chang, Joseph Shih

    1989-01-01

    A status review is presented for a generic blackboard-based distributed problem-solving environment in which multiple-agent cooperation can be effected. This environment is organized into a shared information panel, a chairman control panel, and a metaplanning panel. Each panel contains a number of embedded AI techniques that facilitate its operation and that provide heuristics for solving the underlying team-agent decision problem. The status of these panels and heuristics is described along with a number of robustness considerations. The techniques for each of the three panels and for four sets of paradigm-related advances are described, along with selected results from classroom teaching experiments and from three applications.

  13. A climate distribution model of malaria transmission in Sudan.

    Science.gov (United States)

    Musa, Mohammed I; Shohaimi, Shamarina; Hashim, Nor R; Krishnarajah, Isthrinayagy

    2012-11-01

    Malaria remains a major health problem in Sudan. With a population exceeding 39 million, there are around 7.5 million cases and 35,000 deaths every year. The predicted distribution of malaria derived from climate factors such as maximum and minimum temperatures, rainfall and relative humidity was compared with the actual number of malaria cases in Sudan for the period 2004 to 2010. The predictive calculations were done by fuzzy logic suitability (FLS) applied to the numerical distribution of malaria transmission based on the life cycle characteristics of the Anopheles mosquito accounting for the impact of climate factors on malaria transmission. This information is visualized as a series of maps (presented in video format) using a geographical information systems (GIS) approach. The climate factors were found to be suitable for malaria transmission in the period of May to October, whereas the actual case rates of malaria were high from June to November indicating a positive correlation. While comparisons between the prediction model for June and the case rate model for July did not show a high degree of association (18%), the results later in the year were better, reaching the highest level (55%) for October prediction and November case rate.

  14. Flash flood modeling with the MARINE hydrological distributed model

    Directory of Open Access Journals (Sweden)

    V. Estupina-Borrell

    2006-11-01

    Full Text Available Flash floods are characterized by their violence and the rapidity of their occurrence. Because these events are rare and unpredictable, but also fast and intense, their anticipation with sufficient lead time for warning and broadcasting is a primary subject of research. Because of the heterogeneities of the rain and of the behavior of the surface, spatially distributed hydrological models can lead to a better understanding of the processes, and so they can contribute to a better forecasting of flash floods. Our main goal here is to develop an operational and robust methodology for flash flood forecasting. This methodology should provide relevant data (information about flood evolution) on short time scales, and should be applicable even in locations where direct observations are sparse (e.g. absence of historical and modern rainfalls and streamflows in small mountainous watersheds). The flash flood forecast is obtained by the physically based, space-time distributed hydrological model "MARINE" (Model of Anticipation of Runoff and INondations for Extreme events). This model is presented and tested in this paper for a real flash flood event. The model consists of two steps, or two components: the first component is a "basin" flood module which generates flood runoff in the upstream part of the watershed, and the second component is the "stream network" module, which propagates the flood in the main river and its subsidiaries. The basin flash flood generation model is a rainfall-runoff model that can integrate remotely sensed data. Surface hydraulics equations are solved with enough simplifying hypotheses to allow real time exploitation. The minimum data required by the model are: (i) the Digital Elevation Model, used to calculate slopes that generate runoff; it can be issued from satellite imagery (SPOT) or from the French Geographical Institute (IGN); (ii) the rainfall data from meteorological radar, observed or

  15. Effect of PLISSIT Model on Solution of Sexual Problems

    Directory of Open Access Journals (Sweden)

    Esra Uslu

    2016-03-01

    Full Text Available This systematic review study aims to determine the effect of the PLISSIT model (permission, limited information, special suggestions, intensive therapy) in the care of individuals having sexual problems. Two of the studies included in the systematic review were carried out in Iran and one of them in Turkey. These studies were limited to patients with stoma and women having sexual problems. The results showed that care via the PLISSIT model improves sexual functions, reduces sexual stress, and increases sexual desire, sexual arousal, lubrication, orgasm, sexual satisfaction and frequency of sexual activity. [Psikiyatride Guncel Yaklasimlar - Current Approaches in Psychiatry 2016; 8(1): 52-63]

  16. A new solution for maximal clique problem based sticker model.

    Science.gov (United States)

    Darehmiraki, Majid

    2009-02-01

    In this paper, we use stickers to construct a solution space of DNA for the maximal clique problem (MCP). Simultaneously, we also apply the DNA operations in the sticker-based model to develop a DNA algorithm. The results of the proposed algorithm show that the MCP is resolved with biological operations in the sticker-based model for the solution space of the sticker. Moreover, this work presents clear evidence of the ability of DNA computing to solve NP-complete problems. The potential of DNA computing for the MCP is promising given the operational time complexity of O(n×k).

  17. Models and analysis for distributed systems

    CERN Document Server

    Haddad, Serge; Pautet, Laurent; Petrucci, Laure

    2013-01-01

    Nowadays, distributed systems are increasingly present, for public software applications as well as critical systems. This title and Distributed Systems: Design and Algorithms - from the same editors - introduce the underlying concepts, the associated design techniques and the related security issues. The objective of this book is to describe the state of the art of the formal methods for the analysis of distributed systems. Numerous issues remain open and are the topics of major research projects. One current research trend consists of pro

  18. Personality disorder models and their coverage of interpersonal problems.

    Science.gov (United States)

    Williams, Trevor F; Simms, Leonard J

    2016-01-01

    Interpersonal dysfunction is a defining feature of personality disorders (PDs) and can serve as a criterion for comparing PD models. In this study, the interpersonal coverage of 4 competing PD models was examined using a sample of 628 current or recent psychiatric patients who completed the NEO Personality Inventory-3 First Half (NEO-PI-3FH; McCrae & Costa, 2007), Personality Inventory for the DSM-5 (PID-5; Krueger et al., 2012), Computerized Adaptive Test of Personality Disorder-Static Form (CAT-PD-SF; Simms et al., 2011), and Structured Clinical Interview for DSM-IV Personality Questionnaire (SCID-II PQ; First, Spitzer, Gibbon, & Williams, 1995). Participants also completed the Inventory of Interpersonal Problems-Short Circumplex (IIP-SC; Soldz, Budman, Demby, & Merry, 1995) to assess interpersonal dysfunction. Analyses compared the severity and style of interpersonal problems that characterize PD models. Previous research with DSM-5 Section II and III models was generally replicated. Extraversion and Agreeableness facets related to the most well defined interpersonal problems across normal-range and pathological traits. Pathological trait models provided more coverage of dominance problems, whereas normal-range traits covered nonassertiveness better. These results suggest that more work may be needed to reconcile descriptions of personality pathology at the level of specific constructs. (c) 2016 APA, all rights reserved.

  19. Analysis Model for Domestic Hot Water Distribution Systems: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Maguire, J.; Krarti, M.; Fang, X.

    2011-11-01

    A thermal model was developed to estimate the energy losses from prototypical domestic hot water (DHW) distribution systems for homes. The developed model, using the TRNSYS simulation software, allows researchers and designers to better evaluate the performance of hot water distribution systems in homes. Modeling results were compared with past experimental study results and showed good agreement.

  20. A model for the distribution channels planning process

    NARCIS (Netherlands)

    Neves, M.F.; Zuurbier, P.; Campomar, M.C.

    2001-01-01

    Research of existing literature reveals some models (sequence of steps) for companies that want to plan distribution channels. None of these models uses strong contributions from transaction cost economics, bringing a possibility to elaborate on a "distribution channels planning model", with these c

  1. An Integrated Model for Production and Distribution Planning of Perishable Products with Inventory and Routing Considerations

    Directory of Open Access Journals (Sweden)

    S. M. Seyedhosseini

    2014-01-01

    Full Text Available In many conventional supply chains, production planning and distribution planning are treated separately. However, it is now demonstrated that they are mutually related problems that must be tackled in an integrated way. Hence, in this paper a new integrated production and distribution planning model for perishable products is formulated. The proposed model considers a supply chain network consisting of a production facility and multiple distribution centers. The facility produces a single perishable product that is storable only for predetermined periods. A homogeneous fleet of vehicles is responsible for delivering the product from the facility to the distribution centers. The decisions to be made are the production quantities, the distribution centers that must be visited, and the quantities to be delivered to them. The objective is to minimize the total cost, where trip minimization is considered simultaneously. As the proposed formulation is computationally complex, a heuristic method is developed to tackle the problem. In the developed method, the problem is divided into a production submodel and a distribution submodel. The production submodel is solved using LINGO, and a particle swarm heuristic is developed to tackle the distribution submodel. The efficiency of the algorithm is demonstrated through a number of randomly generated test problems.

  2. Guidance for modeling causes and effects in environmental problem solving

    Science.gov (United States)

    Armour, Carl L.; Williamson, Samuel C.

    1988-01-01

    Environmental problems are difficult to solve because their causes and effects are not easily understood. When attempts are made to analyze causes and effects, the principal challenge is organization of information into a framework that is logical, technically defensible, and easy to understand and communicate. When decisionmakers attempt to solve complex problems before an adequate cause and effect analysis is performed, there are serious risks. These risks include: greater reliance on subjective reasoning, lessened chance for scoping an effective problem solving approach, impaired recognition of the need for supplemental information to attain understanding, increased chance for making unsound decisions, and lessened chance for gaining approval and financial support for a program. Cause and effect relationships can be modeled. This type of modeling has been applied to various environmental problems, including cumulative impact assessment (Dames and Moore 1981; Meehan and Weber 1985; Williamson et al. 1987; Raley et al. 1988) and evaluation of effects of quarrying (Sheate 1986). This guidance for field users was written because of the current interest in documenting cause-effect logic as a part of ecological problem solving. Principal literature sources relating to the modeling approach are: Riggs and Inouye (1975a, b), Erickson (1981), and United States Office of Personnel Management (1986).

  3. Stochastic reduced order models for inverse problems under uncertainty.

    Science.gov (United States)

    Warner, James E; Aquino, Wilkins; Grigoriu, Mircea D

    2015-03-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well.

  4. The two capacitor problem revisited: simple harmonic oscillator model approach

    CERN Document Server

    Lee, Keeyung

    2012-01-01

    The well-known two-capacitor problem, in which exactly half the stored energy disappears when a charged capacitor is connected to an identical capacitor, is discussed based on the mechanical harmonic oscillator model approach. In the mechanical harmonic oscillator model, it is shown first that exactly half the work done by a constant applied force is dissipated, irrespective of the form of the dissipation mechanism, when the system comes to a new equilibrium after a constant force is abruptly applied. This model is then applied to the energy loss mechanism in the capacitor charging problem or the two-capacitor problem. This approach allows a simple explanation of the energy dissipation mechanism in these problems and shows that the dissipated energy should always be exactly half the supplied energy, whether that is caused by the Joule heat or by the radiation. This paper, which provides a simple treatment of the energy dissipation mechanism in the two-capacitor problem, is suitable for all undergraduate...
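    The oscillator argument sketched in the abstract can be written out in a few lines; for a mass on a spring of stiffness k that is suddenly loaded with a constant force F and eventually brought to rest at the new equilibrium by any damping mechanism:

```latex
% Energy balance for a damped oscillator abruptly loaded by a constant force F.
% New equilibrium position: x_eq = F/k.
\begin{align}
  W_{\text{supplied}}   &= F\,x_{\text{eq}} = \frac{F^{2}}{k}, \\
  U_{\text{stored}}     &= \tfrac{1}{2}\,k\,x_{\text{eq}}^{2} = \frac{F^{2}}{2k}, \\
  E_{\text{dissipated}} &= W_{\text{supplied}} - U_{\text{stored}}
                         = \frac{F^{2}}{2k} = \tfrac{1}{2}\,W_{\text{supplied}}.
\end{align}
```

    The dissipated share is exactly one half, independent of the damping mechanism, which mirrors the half of the energy lost when a charged capacitor is connected to an identical uncharged one.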

  5. LAGRANGE MULTIPLIERS IN THE PROBABILITY DISTRIBUTIONS ELICITATION PROBLEM: AN APPLICATION TO THE 2013 FIFA CONFEDERATIONS CUP

    Directory of Open Access Journals (Sweden)

    Diogo de Carvalho Bezerra

    2015-12-01

    Full Text Available ABSTRACT Contributions from the sensitivity analysis of the parameters of the linear programming model for the elicitation of experts' beliefs are presented. The process allows for the calibration of the family of probability distributions obtained in the elicitation process. An experiment to obtain the probability distribution of a future event (the Brazil vs. Spain soccer game in the 2013 FIFA Confederations Cup final) was conducted. The proposed sensitivity analysis step may help to reduce the vagueness of the information given by the expert.

  6. Electricity distribution management Smart Grid system model

    Directory of Open Access Journals (Sweden)

    Wiesław Nowak

    2012-06-01

    Full Text Available This paper presents issues concerning the implementation of Smart Grid solutions in a real distribution network. The main components suitable for quick implementation are presented. Realization of these ideas should bring tangible benefits to both customers and distribution system operators. Moreover, the paper shows selected research results which examine the proposed solutions in the area of improving supply reliability and reducing energy losses in the analysed network.

  7. Electricity distribution management Smart Grid system model

    OpenAIRE

    Wiesław Nowak; Wojciech Bąchorek; Szczepan Moskwa; Rafał Tarko; Waldemar Szpyra; Mariusz Benesz; Andrzej Makuch; Jarosław Łabno; Paweł Mazur

    2012-01-01

    This paper presents issues concerning the implementation of Smart Grid solutions in a real distribution network. The main components suitable for quick implementation are presented. Realization of these ideas should bring tangible benefits to both customers and distribution system operators. Moreover, the paper shows selected research results which examine the proposed solutions in the area of improving supply reliability and reducing energy losses in the analysed network.

  8. Generalized Valon Model for Double Parton Distributions

    Science.gov (United States)

    Broniowski, Wojciech; Ruiz Arriola, Enrique; Golec-Biernat, Krzysztof

    2016-06-01

    We show how the double parton distributions may be obtained consistently from the many-body light-cone wave functions. We illustrate the method on the example of the pion with two Fock components. The procedure, by construction, satisfies the Gaunt-Stirling sum rules. The resulting single parton distributions of valence quarks and gluons are consistent with a phenomenological parametrization at a low scale.

  9. Generalized Valon Model for Double Parton Distributions

    CERN Document Server

    Broniowski, Wojciech; Golec-Biernat, Krzysztof

    2016-01-01

    We show how the double parton distributions may be obtained consistently from the many-body light-cone wave functions. We illustrate the method on the example of the pion with two Fock components. The procedure, by construction, satisfies the Gaunt-Stirling sum rules. The resulting single parton distributions of valence quarks and gluons are consistent with a phenomenological parametrization at a low scale.

  10. Model reduction for optimization of structural-acoustic coupling problems

    DEFF Research Database (Denmark)

    Creixell Mediante, Ester; Jensen, Jakob Søndergaard; Brunskog, Jonas;

    2016-01-01

    Fully coupled structural-acoustic models of complex systems, such as those used in the hearing aid field, may have several hundreds of thousands of nodes. When there is a strong structure-acoustic interaction, performing optimization on one part requires the complete model to be taken into account, which becomes highly time consuming since many iterations may be required. The use of model reduction techniques to speed up the computations is studied in this work. The Component Mode Synthesis (CMS) method and the Multi-Model Reduction (MMR) method are adapted for problems with structure...

  11. A review of mathematical models in economic environmental problems

    DEFF Research Database (Denmark)

    Nahorski, Z.; Ravn, H.F.

    2000-01-01

    The paper presents a review of mathematical models used in economic analysis of environmental problems. This area of research combines macroeconomic models of growth, as dependent on capital, labour, resources, etc., with environmental models describing such phenomena as natural resource exhaustion or pollution accumulation and degradation. In simpler cases the models can be treated analytically and the utility function can be optimized using, e.g., such tools as the maximum principle. In more complicated cases calculation of the optimal environmental policies requires a computer solution.

  12. Optimal reinsurance/investment problems for general insurance models

    CERN Document Server

    Liu, Yuping; 10.1214/08-AAP582

    2009-01-01

    In this paper the utility optimization problem for a general insurance model is studied. The reserve process of the insurance company is described by a stochastic differential equation driven by a Brownian motion and a Poisson random measure, representing the randomness from the financial market and the insurance claims, respectively. The random safety loading and stochastic interest rates are allowed in the model so that the reserve process is non-Markovian in general. The insurance company can manage the reserves through both portfolios of the investment and a reinsurance policy to optimize a certain utility function, defined in a generic way. The main feature of the problem lies in the intrinsic constraint on the part of reinsurance policy, which is only proportional to the claim-size instead of the current level of reserve, and hence it is quite different from the optimal investment/consumption problem with constraints in finance. Necessary and sufficient conditions for both well posedness and solvability...

  13. Distributed Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus Fogtmann; Vandenberghe, Lieven; Poulsen, Niels Kjølstad

    2016-01-01

    Integration of a large number of flexible consumers in a smart grid requires a scalable power balancing strategy. We formulate the control problem as an optimization problem to be solved repeatedly by the aggregator in a model predictive control framework. To solve the large-scale control problem...

  14. Addressing the Challenges of Distributed Hydrologic Modeling for Operational Forecasting

    Science.gov (United States)

    Butts, M. B.; Yamagata, K.; Kobor, J.; Fontenot, E.

    2008-05-01

    Operational forecasting systems must provide reliable, accurate and timely flood forecasts for a range of catchments from small rapidly responding mountain catchments and urban areas to large, complex but more slowly responding fluvial systems. Flood forecasting systems have evolved from simple forecasting for flood mitigation to real-time decision support systems for reservoir operations for water supply, navigation and hydropower, for managing environmental flows and habitat protection, and for cooling water and water quality forecasting. These different requirements lead to a number of challenges in applying distributed modelling in an operational context. These challenges include the often short time available for forecasting, which requires a trade-off between model complexity and accuracy on the one hand and the need for efficient calculations to reduce the computation times on the other. Limitations in the data available in real-time require modelling tools that can not only operate on a minimum of data but also take advantage of new data sources such as weather radar, satellite remote sensing, wireless sensors etc. Finally, models must not only accurately predict flood peaks but also forecast low flows and surface water-groundwater interactions, water quality, water temperature, optimal reservoir levels, and inundated areas. This paper shows how these challenges are being addressed in a number of case studies. The central strategy has been to develop a flexible modelling framework that can be adapted to different data sources, different levels of complexity and spatial distribution and different modelling objectives. The resulting framework allows, amongst other things, optimal use of grid-based precipitation fields from weather radar and numerical weather models, direct integration of satellite remote sensing, and a unique capability to treat a range of new forecasting problems such as flooding conditioned by surface water-groundwater interactions. Results

  15. The flavour problem and family symmetry beyond the Standard Model

    CERN Document Server

    Dziewit, Bartosz; Richter, Monika; Zając, Sebastian; Zrałek, Marek

    2016-01-01

    In the framework of the two Higgs doublet model we try to explain the lepton masses and mixing matrix elements, assuming that neutrinos are Dirac particles. Discrete family symmetry groups which are subgroups of U(3), up to order 1025, are considered. As in the one-Higgs Standard Model, we find that discrete family symmetries do not give a satisfactory answer to these basic questions of the flavour problem.

  16. A reduced order model for nonlinear vibroacoustic problems

    Directory of Open Access Journals (Sweden)

    Ouisse Morvan

    2012-07-01

    Full Text Available This work is related to geometrical nonlinearities applied to thin plates coupled with a fluid-filled domain. Model reduction is performed to reduce the computation time. The reduced order model (ROM) is derived from the uncoupled linear problem and enriched with residues to describe the nonlinear behavior and coupling effects. To show the efficiency of the proposed method, numerical simulations in the case of an elastic plate closing an acoustic cavity are presented.

  17. Rural Single Wire Earth Return distribution networks - Associated problems and cost-effective solutions

    Energy Technology Data Exchange (ETDEWEB)

    Hosseinzadeh, N. [Swinburne University of Technology, PO Box 218, Hawthorn, Vic 3122 (Australia); Mayer, J.E. [Aurecon Australia Pty Ltd., Brisbane (Australia); Wolfs, P.J. [Curtin University of Technology, Perth (Australia)

    2011-02-15

    Single Wire Earth Return (SWER) systems are used for supplying electricity at low cost, where electricity supply is required for small populations of people dispersed across wide geographical areas. It is principally used for rural electrification, but is also used for other isolated loads and light rail. The existing SWER distribution systems have been stretched by the sharp growth of their loads following customers' changes of lifestyle, which has introduced the additional load of air conditioning equipment, motors driven by variable-speed drives, and inverters. This paper proposes cost-effective solutions to the problems of voltage regulation and of the unbalancing effect of SWER lines on the three-phase feeder supplying them, both of which have been exacerbated by this load growth. To address the voltage regulation problem, an LV switchable reactor has been designed, and a prototype has been made and tested in the field. Also, an unbalance compensator has been designed to reduce the unbalancing effect of SWER lines. Two case networks have been used to perform simulation studies on the effectiveness of both proposed solutions. First, a case study is used to demonstrate the impact of a switchable reactor on improving voltage regulation. Then, another case study shows that installing a switchable reactor and an unbalance compensator simultaneously on a SWER distribution system effectively improves voltage regulation and reduces unbalancing effects. (author)

  18. Interpretation Problems in Modelling Complex Artifacts for Diagnosis

    DEFF Research Database (Denmark)

    Lind, Morten

    1996-01-01

    The paper analyses the interpretation problems involved in building models for diagnosis of industrial systems. It is shown that the construction of a fault tree of a plant is based on general diagnostic knowledge and an extensive body of plant knowledge. It is also shown that the plant knowledge ...

  19. Modeling and identification of harmonic instability problems in wind farms

    DEFF Research Database (Denmark)

    Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei;

    2016-01-01

    to identify harmonic instability problems in wind farms, where many wind turbines, cables, transformers, capacitor banks, shunt reactors, etc., typically are located. This methodology introduces the wind farm as a Multi-Input Multi-Output (MIMO) control system, where the linearized models of fast inner control...

  20. A psychological cascade model for persisting voice problems in teachers

    NARCIS (Netherlands)

    de Jong, FICRS; Cornelis, BE; Wuyts, FL; Kooijman, PGC; Schutte, HK; Oudes, MJ; Graamans, K

    2003-01-01

    In 76 teachers with persisting voice problems, the maintaining factors and coping strategies were examined. Physical, functional, psychological and socioeconomic factors were assessed. A parallel was drawn to a psychological cascade model designed for patients with chronic back pain. The majority of

  1. Ontological Support in Modeling Learners' Problem Solving Process

    Science.gov (United States)

    Lu, Chun-Hung; Wu, Chia-Wei; Wu, Shih-Hung; Chiou, Guey-Fa; Hsu, Wen-Lian

    2005-01-01

    This paper presents a new model for simulating procedural knowledge in the problem solving process with our ontological system, InfoMap. The method divides procedural knowledge into two parts: process control and action performer. By adopting InfoMap, we hope to help teachers construct curricula (declarative knowledge) and teaching strategies by…

  2. A psychological cascade model for persisting voice problems in teachers

    NARCIS (Netherlands)

    de Jong, FICRS; Cornelis, BE; Wuyts, FL; Kooijman, PGC; Schutte, HK; Oudes, MJ; Graamans, K

    2003-01-01

    In 76 teachers with persisting voice problems, the maintaining factors and coping strategies were examined. Physical, functional, psychological and socioeconomic factors were assessed. A parallel was drawn to a psychological cascade model designed for patients with chronic back pain. The majority of

  3. Distributed primal–dual interior-point methods for solving tree-structured coupled convex problems using message-passing

    DEFF Research Database (Denmark)

    Khoshfetrat Pakazad, Sina; Hansson, Anders; Andersen, Martin S.;

    2016-01-01

    In this paper, we propose a distributed algorithm for solving coupled problems with chordal sparsity or an inherent tree structure which relies on primal–dual interior-point methods. We achieve this by distributing the computations at each iteration, using message-passing. In comparison to existing distributed algorithms for solving such problems, this algorithm requires far fewer iterations to converge to a solution with high accuracy. Furthermore, it is possible to compute an upper-bound for the number of required iterations which, unlike existing methods, only depends on the coupling structure in the problem. We illustrate the performance of our proposed method using a set of numerical examples.

  4. A Model of Problem Solving: Its Operation, Validity, and Usefulness in the Case of Organic-Synthesis Problems.

    Science.gov (United States)

    Tsaparlis, Georgios; Angelopoulous, Vasileios

    2000-01-01

    Presents a test of the limits of the Johnstone--El-Banna model of problem solving as related to students' responses to organic-synthesis problems. Finds that the predicted pattern was observed in both samples, although the model was more useful for students without previous problem-solving training and for field-independent and field-intermediate…

  5. Mathematical modeling of materially nonlinear problems in structural analyses, Part II: Application in contemporary software

    Directory of Open Access Journals (Sweden)

    Bonić Zoran

    2010-01-01

    Full Text Available The paper presents the application of nonlinear material models in the software package Ansys. The development of the model theory is presented in the companion paper on the mathematical modeling of materially nonlinear problems in structural analysis (part I - theoretical foundations); here, the incremental-iterative procedure used by this package for solving materially nonlinear problems is described, and an example of modeling a spread footing using the bilinear-kinematic and Drucker-Prager models is given. A comparative analysis of the results obtained by this modeling and the experimental research of the author was made. The occurrence of the load level that corresponds to plastic deformation was noted, the development of deformations with increasing load was followed, and the distribution of dilatation in the footing was observed. Comparison of calculated and measured values of reinforcement dilatation shows their very good agreement.

  6. Problems with using the normal distribution--and ways to improve quality and efficiency of data analysis.

    Directory of Open Access Journals (Sweden)

    Eckhard Limpert

    Full Text Available BACKGROUND: The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by mean ± SD, or with the standard error of the mean, mean ± SEM. This, together with corresponding bars in graphical displays, has become the standard way to characterize variation. METHODOLOGY/PRINCIPAL FINDINGS: Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to consider the causes of variation. Multiplicative causes are in general far more important than additive ones, and they call for a multiplicative (or log-normal) approach. Fortunately, quite similarly to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of a new sign, x/ ("times-divide"), and the corresponding notation. Analogous to mean ± SD, it connects the multiplicative (or geometric) mean mean* and the multiplicative standard deviation s* in the form mean* x/ s*, which is advantageous and recommended. CONCLUSIONS/SIGNIFICANCE: The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and quality of interpretation. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life.
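
    As a rough illustration of the multiplicative summary advocated above (not code from the paper), the sketch below computes the geometric mean mean* and the multiplicative standard deviation s* of a positive-valued sample, together with the approximate 68% range [mean*/s*, mean* x s*]; the data are simulated and all names are illustrative.

```python
import numpy as np

def multiplicative_summary(x):
    """Summarize positive-valued data as geometric mean and
    multiplicative standard deviation (the mean* x/ s* notation)."""
    logs = np.log(np.asarray(x, dtype=float))
    gmean = np.exp(logs.mean())          # geometric (multiplicative) mean
    s_star = np.exp(logs.std(ddof=1))    # multiplicative standard deviation
    # Roughly 68% of a log-normal sample falls in [gmean / s_star, gmean * s_star]
    return gmean, s_star, (gmean / s_star, gmean * s_star)

# Example with skewed, log-normally distributed data
rng = np.random.default_rng(1)
sample = rng.lognormal(mean=1.0, sigma=0.5, size=1000)
gm, s, interval = multiplicative_summary(sample)
print(f"mean* = {gm:.2f}, s* = {s:.2f}, ~68% range = {interval}")
```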

  7. Approximation modeling for the online performance management of distributed computing systems.

    Science.gov (United States)

    Kusic, Dara; Kandasamy, Nagarajan; Jiang, Guofei

    2008-10-01

    A promising method of automating management tasks in computing systems is to formulate them as control or optimization problems in terms of performance metrics. For an online optimization scheme to be of practical value in a distributed setting, however, it must successfully tackle the curses of dimensionality and modeling. This paper develops a hierarchical control framework to solve performance management problems in distributed computing systems operating in a data center. Concepts from approximation theory are used to reduce the computational burden of controlling such large-scale systems. The relevant approximations are made in the construction of the dynamical models to predict system behavior and in the solution of the associated control equations. Using a dynamic resource-provisioning problem as a case study, we show that a computing system managed by the proposed control framework with approximation models realizes profit gains that are, in the best case, within 1% of a controller using an explicit model of the system.

  8. Diagnostics of Robust Growth Curve Modeling Using Student's "t" Distribution

    Science.gov (United States)

    Tong, Xin; Zhang, Zhiyong

    2012-01-01

    Growth curve models with different types of distributions of random effects and of intraindividual measurement errors for robust analysis are compared. After demonstrating the influence of distribution specification on parameter estimation, 3 methods for diagnosing the distributions for both random effects and intraindividual measurement errors…

  9. DISOPE distributed model predictive control of cascade systems with network communication

    Institute of Scientific and Technical Information of China (English)

    Yan ZHANG; Shaoyuan LI

    2005-01-01

    A novel distributed model predictive control scheme based on dynamic integrated system optimization and parameter estimation (DISOPE) was proposed for nonlinear cascade systems under a network environment. Under the distributed control structure, online optimization of the cascade system was composed of several cascaded agents that can cooperate and exchange information via network communication. By iterating on modified distributed linear optimal control problems on the basis of estimating parameters at every iteration, the correct optimal control action of the nonlinear model predictive control problem of the cascade system could be obtained, assuming that the algorithm was convergent. This approach avoids solving the complex nonlinear optimization problem and significantly reduces the computational burden. Simulation results for a fossil fuel power unit are presented to verify the effectiveness and practicability of the proposed algorithm.

  10. On the multiplicity distribution in statistical model: (I) negative binomial distribution

    CERN Document Server

    Xu, Hao-jie

    2016-01-01

    With the distribution of principal thermodynamic variables (e.g., volume) and the probability condition from reference multiplicity, we develop an improved baseline measure for the multiplicity distribution in the statistical model to replace the traditional Poisson expectations. We demonstrate the mismatches between experimental measurements and previous theoretical calculations of multiplicity distributions. We derive a general expression for the multiplicity distribution, i.e. a conditional probability distribution, in the statistical model and calculate its cumulants under the Poisson approximation in connection with recent data for multiplicity fluctuations. We find that the probability condition from reference multiplicity is crucial to explain the centrality resolution effect in experiment. With the improved baseline measure for the multiplicity distribution, we can quantitatively reproduce the cumulants (and cumulant products) of the multiplicity distributions of total (net) charges measured in experiments.
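
    For orientation, the traditional negative binomial baseline that such statistical-model analyses refine can be written in its standard textbook form (this expression is general knowledge, not a formula quoted from the paper):

```latex
% Standard negative binomial multiplicity distribution, with mean \langle N\rangle
% and shape parameter k; the Poisson limit is recovered as k \to \infty.
P(N) = \frac{\Gamma(N+k)}{\Gamma(k)\, N!}
       \left(\frac{\langle N\rangle}{\langle N\rangle + k}\right)^{N}
       \left(\frac{k}{\langle N\rangle + k}\right)^{k}
```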

  11. Compositional modelling of distributed-parameter systems

    NARCIS (Netherlands)

    Maschke, Bernhard; Schaft, van der Arjan; Lamnabhi-Lagarrigue, F.; Loría, A.; Panteley, E.

    2005-01-01

    The Hamiltonian formulation of distributed-parameter systems has been a challenging research area for quite some time. (A nice introduction, especially with respect to systems stemming from fluid dynamics, can be found in [26], where a historical account is also provided.) The identification of the

  12. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with a high-dimensional dataset is the number of input features (variables) involved. Applying ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the set of factors required and improves the knowledge of the adopted features and their relation to the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as permafrost training data. The FS algorithms used indicate which variables appear less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
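
    A minimal sketch of the filter-style information-gain ranking described above, assuming a continuous predictor discretized into equal-width bins and binary presence/absence labels; the function names and the binning choice are illustrative, not the authors' implementation.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a 1-D array of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels, bins=10):
    """Information gain of a continuous feature with respect to binary
    presence/absence labels, after simple equal-width discretization."""
    labels = np.asarray(labels)
    binned = np.digitize(feature, np.histogram_bin_edges(feature, bins=bins))
    h_prior = entropy(labels)
    h_cond = 0.0
    for v in np.unique(binned):
        mask = binned == v
        h_cond += mask.mean() * entropy(labels[mask])
    return h_prior - h_cond

# Hypothetical use: rank DEM-derived predictors for permafrost presence/absence,
# where X is an (n_samples, n_features) array and y holds 0/1 evidence labels:
# scores = [information_gain(X[:, j], y) for j in range(X.shape[1])]
```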

  13. Analysis of a Free Boundary Problem Modeling Tumor Growth

    Institute of Scientific and Technical Information of China (English)

    Shang Bin CUI

    2005-01-01

    In this paper, we study a free boundary problem arising from the modeling of tumor growth. The problem comprises two unknown functions: R = R(t), the radius of the tumor, and u = u(r, t), the concentration of nutrient in the tumor. The function u satisfies a nonlinear reaction diffusion equation in the region 0 < r < R(t), t > 0, and the function R satisfies a nonlinear integrodifferential equation containing u. Under some general conditions, we establish global existence of transient solutions, unique existence of a stationary solution, and convergence of transient solutions toward the stationary solution as t →∞.

  14. Modelling the distribution of pig production and diseases in Thailand

    OpenAIRE

    Thanapongtharm, Weerapong

    2015-01-01

    This thesis, entitled “Modelling the distribution of pig production and diseases in Thailand”, presents many aspects of pig production in Thailand, including the characteristics of the pig farming system, the distribution of the pig population and pig farms, the spatio-temporal distribution and risk of the most important pig diseases at present, and the areas suitable for pig farming. The spatial distribution and characteristics of pig farming in Thailand were studied using time-series pig population data to des...

  15. Preliminary 2D numerical modeling of common granular problems

    Science.gov (United States)

    Wyser, Emmanuel; Jaboyedoff, Michel

    2017-04-01

    Granular studies have received increasing interest during the last decade. Many scientific investigations have successfully addressed the ubiquitous behavior of granular matter. We investigate liquid impacts onto granular beds, i.e. the influence of the packing and of the compaction-dilation transition. However, a physically-based model is still lacking to address complex microscopic features of the granular bed response during liquid impacts, such as the compaction-dilation transition or granular bed uplifts (Wyser et al. in review). We present our preliminary 2D numerical modeling based on the Discrete Element Method (DEM) using a nonlinear contact force law (the Hertz-Mindlin model) for disk-shaped particles. The algorithm is written in the C programming language. Our 2D model provides an analytical tool to address granular problems such as i) granular collapses and ii) static granular assembly problems. This provides a validation framework for our numerical approach by comparing our numerical results with previous laboratory experiments or numerical works. Inspired by the work of Warnett et al. (2014) and Staron & Hinch (2005), we studied i) the axisymmetric collapse of granular columns. We addressed the scaling between the initial aspect ratio and the final runout distance. Our numerical results are in good agreement with the previous studies of Warnett et al. (2014) and Staron & Hinch (2005). ii) Reproducing static problems for regularly and randomly stacked particles provides a valid comparison to the results of Egholm (2007). Vertical and horizontal stresses within the assembly are nearly identical to the stresses obtained by Egholm (2007), thus demonstrating the consistency of our 2D numerical model. Our 2D numerical model is able to reproduce common granular case studies such as granular collapses or static problems. However, a sufficiently small timestep should be used to ensure good numerical consistency, resulting in higher computational time. The latter becomes critical

  16. LIFE DISTRIBUTION OF SERIES UNDER THE SUCCESSIVE DAMAGE MODEL

    Institute of Scientific and Technical Information of China (English)

    WANG Dongqian; C. D. Lai; LI Guoying

    2003-01-01

    We analyse further the reliability behaviour of series and parallel systems in the successive damage model initiated by Downton. The results are compared with those obtained for other models with different bivariate distributions.

  17. Modeling Workflow Management in a Distributed Computing System ...

    African Journals Online (AJOL)

    Modeling Workflow Management in a Distributed Computing System Using Petri Nets. ... who use it to share information more rapidly and increase their productivity. ... Petri nets are an established tool for modelling and analyzing processes.

  18. Study on Fleet Assignment Problem Model and Algorithm

    Directory of Open Access Journals (Sweden)

    Yaohua Li

    2013-01-01

    Full Text Available The Fleet Assignment Problem (FAP) of aircraft scheduling in airlines is studied, and an optimization model of the FAP is proposed. The objective function of this model is revenue maximization, and it comprehensively considers the differences between scheduled flights and aircraft models in flight areas and mean passenger flows. To solve the model, a self-adapting genetic algorithm is proposed, which uses natural-number coding, dynamically adjusts the crossover and mutation operator probabilities, and adopts an intelligent heuristic adjustment to speed up optimization. A simulation with production data of an airline shows that the model and algorithms suggested in this paper are feasible and have good application value.
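
    A compact sketch of a self-adapting genetic algorithm in the spirit described above, with natural-number coding (one fleet index per flight) and crossover/mutation probabilities that adapt to population fitness; the revenue function, the specific adaptation rule and all parameter values are illustrative assumptions, not the paper's algorithm.

```python
import random

def adaptive_ga(num_flights, num_fleets, revenue, pop_size=50, generations=200):
    """Sketch of a self-adapting GA for fleet assignment.
    Chromosome: natural-number coding, gene i = fleet type assigned to flight i.
    `revenue(assignment)` is a user-supplied objective returning a positive fitness."""
    pop = [[random.randrange(num_fleets) for _ in range(num_flights)]
           for _ in range(pop_size)]
    for _ in range(generations):
        fits = [revenue(ind) for ind in pop]          # assumes strictly positive fitness
        f_max, f_avg = max(fits), sum(fits) / len(fits)
        new_pop = []
        while len(new_pop) < pop_size:
            a, b = random.choices(pop, weights=fits, k=2)   # fitness-proportional selection
            # Adaptive crossover rate: search harder around below-average parents
            f_pair = max(revenue(a), revenue(b))
            pc = 0.6 if f_pair <= f_avg else 0.6 * (f_max - f_pair) / (f_max - f_avg + 1e-9)
            child = a[:]
            if random.random() < pc:                        # one-point crossover
                cut = random.randrange(1, num_flights)
                child = a[:cut] + b[cut:]
            # Adaptive mutation rate: mutate weak offspring more aggressively
            pm = 0.1 if revenue(child) <= f_avg else 0.01
            for i in range(num_flights):
                if random.random() < pm:
                    child[i] = random.randrange(num_fleets)
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=revenue)
```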

  19. Testing and Modeling of Contact Problems in Resistance Welding

    DEFF Research Database (Denmark)

    Song, Quanfeng

    As a part of the efforts towards a professional and reliable numerical tool for resistance welding engineers, this Ph.D. project is dedicated to refining the numerical models related to the interface behavior. An FE algorithm for the contact problems in resistance welding has been developed...... together two or three cylindrical parts as well as disc-ring pairs of dissimilar metals. The tests have demonstrated the effectiveness of the model. A theoretical and experimental study is performed on the contact resistance aiming at a more reliable model for numerical simulation of resistance welding....... The model currently employed is evaluated. It is found that the model may underestimate the constriction resistance because it is based on the assumption of a continuous contact area. A new model for the constriction resistance in resistance welding is proposed. A parametric study is performed on the contact...

  20. A model for the inverse 1-median problem on trees under uncertain costs

    Directory of Open Access Journals (Sweden)

    Kien Trung Nguyen

    2016-01-01

    Full Text Available We consider the problem of adjusting the vertex weights of a tree under uncertain costs so that a prespecified vertex becomes optimal while the total modification cost is optimal in the uncertainty scenario. We propose a model which delivers information about the optimal cost with respect to each confidence level \(\alpha \in [0,1]\). To this end, we first define an uncertain variable with respect to the minimum cost at each confidence level. If all costs are independently linearly distributed, we present the inverse distribution function of this uncertain variable in \(O(n^{2}\log n)\) time, where \(n\) is the number of vertices in the tree.
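
    For context, the forward objective underlying this inverse problem is the classical 1-median: find the vertex minimizing the weighted sum of distances to all vertices. A simple quadratic-time sketch for fixed (certain) costs is given below; the data structures and the tiny example are illustrative only.

```python
from collections import deque

def one_median(adj, weights):
    """1-median of a vertex-weighted tree with vertices 0..n-1.
    adj[v] = list of (neighbor, edge_length); weights[v] = vertex weight.
    Returns the vertex minimizing sum_j weights[j] * dist(v, j).  O(n^2) sketch."""
    n = len(adj)

    def weighted_distance_sum(src):
        dist = [None] * n
        dist[src] = 0.0
        queue = deque([src])
        while queue:                       # BFS works because the graph is a tree
            u = queue.popleft()
            for v, length in adj[u]:
                if dist[v] is None:
                    dist[v] = dist[u] + length
                    queue.append(v)
        return sum(weights[j] * dist[j] for j in range(n))

    return min(range(n), key=weighted_distance_sum)

# Tiny example: path 0-1-2 with unit edges and a heavier weight at vertex 2
adj = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 1.0)], 2: [(1, 1.0)]}
print(one_median(adj, weights=[1.0, 1.0, 3.0]))   # -> 2 (the heavy vertex pulls the median)
```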

  1. Modeling distributed axonal delays in mean-field brain dynamics

    Science.gov (United States)

    Roberts, J. A.; Robinson, P. A.

    2008-11-01

    The range of conduction delays between connected neuronal populations is often modeled as a single discrete delay, assumed to be an effective value averaging over all fiber velocities. This paper shows the effects of distributed delays on signal propagation. A distribution acts as a linear filter, imposing an upper frequency cutoff that is inversely proportional to the delay width. Distributed thalamocortical and corticothalamic delays are incorporated into a physiologically based mean-field model of the cortex and thalamus to illustrate their effects on the electroencephalogram (EEG). The power spectrum is acutely sensitive to the width of the thalamocortical delay distribution, and more so than the corticothalamic distribution, because all input signals must travel along the thalamocortical pathway. This imposes a cutoff frequency above which the spectrum is overly damped. The positions of spectral peaks in the resting EEG depend primarily on the distribution mean, with only weak dependences on distribution width. Increasing distribution width increases the stability of fixed point solutions. A single discrete delay successfully approximates a distribution for frequencies below a cutoff that is inversely proportional to the delay width, provided that other model parameters are moderately adjusted. A pair of discrete delays together having the same mean, variance, and skewness as the distribution approximates the distribution over the same frequency range without needing parameter adjustment. Delay distributions with large fractional widths are well approximated by low-order differential equations.
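
    The filtering effect described above follows from the fact that a delay distribution multiplies a signal's spectrum by the Fourier transform of the delay density. The sketch below evaluates that gain numerically for an assumed gamma-shaped delay kernel; the parameter values are illustrative and are not taken from the model.

```python
import numpy as np

def delay_filter_gain(freqs_hz, delays_s, weights):
    """|H(f)| for a discretized delay distribution p(tau):
    H(f) = sum_k p_k * exp(-2j*pi*f*tau_k)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                      # normalize the delay density
    phase = np.exp(-2j * np.pi * np.outer(freqs_hz, delays_s))
    return np.abs(phase @ w)

# Illustrative gamma-shaped thalamocortical delay distribution (assumed numbers)
tau = np.linspace(0.0, 0.2, 2001)                        # delays up to 200 ms
k, theta = 4.0, 0.02 / 4.0                               # shape/scale giving a 20 ms mean
p = tau**(k - 1) * np.exp(-tau / theta)
f = np.linspace(0.1, 100.0, 500)                         # 0.1 to 100 Hz
gain = delay_filter_gain(f, tau, p)
# A single discrete delay at the mean has |H(f)| = 1 at all frequencies, whereas
# the distributed delay attenuates frequencies above roughly the inverse width.
```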

  2. Distributed Time Delay Goodwin's Models of the Business Cycle

    Science.gov (United States)

    Antonova, A. O.; Reznik, S. N.; Todorov, M. D.

    2011-11-01

    We consider a continuously distributed time delay Goodwin's model of the business cycle. We show that the delay-induced sawtooth oscillations, similar to those detected by R. H. Strotz, J. C. McAnulty, J. B. Naines, Econometrica, 21, 390-411 (1953) for Goodwin's model with a fixed investment time lag, exist only for very narrow delay distributions, when the variance of the delay distribution is much less than the average delay.

  3. A Novel Nonlinear Programming Model for Distribution Protection Optimization

    NARCIS (Netherlands)

    Zambon, Eduardo; Bossois, Débora Z.; Garcia, Berilhes B.; Azeredo, Elias F.

    2009-01-01

    This paper presents a novel nonlinear binary programming model designed to improve the reliability indices of a distribution network. This model identifies the type and location of protection devices that should be installed in a distribution feeder and is a generalization of the classical optimizat

  4. Bayesian Analysis for Binomial Models with Generalized Beta Prior Distributions.

    Science.gov (United States)

    Chen, James J.; Novick, Melvin R.

    1984-01-01

    The Libby-Novick class of three-parameter generalized beta distributions is shown to provide a rich class of prior distributions for the binomial model that removes some restrictions of the standard beta class. A numerical example indicates the desirability of using these wider classes of densities for binomial models. (Author/BW)

  5. Numerical simulation of EEG forward problem in centrosphere head model

    Directory of Open Access Journals (Sweden)

    HE Juan

    2013-02-01

    Full Text Available At present, EEG has become an important technical means in the investigation of brain function and in clinical diagnosis. For the inverse problem of EEG, a large amount of computation of the EEG forward problem is essential. In this paper, on the one hand, we develop a computing formula based on the weighted residuals BEM; for the centrosphere head model, we compute the scalp potentials for different dipole positions and orientations. On the other hand, we conduct simulations of the EEG forward problem and compare the numerical and analytical solutions of the scalp potential. The results show that the weighted residual method has the advantages of high computing efficiency and accuracy compared with FEM and DM, so it is widely used in computational mechanics.

  6. Solution of the two-dimensional MHD problem on the distribution of induced electromagnetic fields of an annular channel

    Energy Technology Data Exchange (ETDEWEB)

    Lavrent' ev, I.V.; Sidorenkov, S.I.

    1988-01-01

    To establish the limits of applicability of two-dimensional mathematical models describing induced electromagnetic field distribution in an annular MHD channel, it is necessary to solve a three-dimensional problem. By reducing the number of dimensions of the problem (using, for example, the axial symmetry of MHD flow), the solution can be derived in some approximation. This paper proposes and demonstrates this method by studying the motion of a conducting medium in an annular channel with a two-pole ferromagnetic system under various assumptions for the field, channel and liquid, among them the superconductivity of the working medium. The work performed by the Lorentz force in the channel, equal to the Joule losses in the current-carrying boundary layer, was determined. It was concluded that the current-carrying boundary layer begins to develop at the wall of the channel when the flow enters the magnetic field and that its thickness grows with the length of the region of MHD interaction. The problem was solved numerically and asymptotically.

  7. Location-Routing Problem with Simultaneous Home Delivery and Customer’s Pickup for City Distribution of Online Shopping Purchases

    Directory of Open Access Journals (Sweden)

    Lin Zhou

    2016-08-01

    Full Text Available With the increasing interest in online shopping, the Last Mile delivery is regarded as one of the most expensive and most polluting, and yet the least efficient, stages of the e-commerce supply chain. To address this challenge, a novel location-routing problem with simultaneous home delivery and customer pickup is proposed. This problem aims to build a more effective Last Mile distribution system by providing two kinds of service options when delivering packages to customers. To solve this specific problem, a hybrid evolutionary search algorithm combining a genetic algorithm (GA) and local search (LS) is presented. In this approach, a diverse population generation algorithm, along with a two-phase solution initialization heuristic, is first proposed to give a high-quality initial population. Then, tailored solution representation, individual evaluation, crossover and mutation operations are designed to enhance the evolution and search efficiency. Computational experiments based on a large family of instances are conducted, and the results obtained indicate the validity of the proposed model and method.

  8. Modeling a geographically distributed MEMS fabrication network

    Science.gov (United States)

    Benard, William L.; Huff, Michael A.

    2001-04-01

    Manufacturing is typically limited to fabrication of parts at a single location, with some sites assembling components from parts made elsewhere. The age of ubiquitous information transfer has made it conceivable to distribute manufacturing geographically, in order to provide access to unique manufacturing capabilities in a flexible manner. If the overhead of a distributed manufacturing network can be adequately reduced, it has the potential to make previously cost ineffective low volume and custom applications economically feasible. The MEMS-Exchange is an infrastructural service available to the domestic microelectromechanical systems community that provides an interface between MEMS designers and microfabrication facilities (academic, commercial, and government labs) which allows designers to develop and exercise custom process sequences in order to realize their devices.

  9. Species distribution modelling in fisheries science

    OpenAIRE

    Paradinas, Iosu

    2017-01-01

    Latest fisheries directives propose adopting an ecosystem approach to manage fisheries \citep{FAO-EAFM}. Such an approach aims to protect important ecosystems based on the principle that healthy ecosystems produce more and thus enhance sustainability. Unfortunately, quantifying the importance of an ecosystem is a difficult task due to the immense number of interactions involved in marine systems. This PhD dissertation relies on the fact that good fisheries distribution maps could play ...

  10. Prospects and problems for standardizing model validation in systems biology.

    Science.gov (United States)

    Gross, Fridolin; MacLeod, Miles

    2017-10-01

    There are currently no widely shared criteria by which to assess the validity of computational models in systems biology. Here we discuss the feasibility and desirability of implementing validation standards for modeling. Having such a standard would facilitate journal review, interdisciplinary collaboration, model exchange, and be especially relevant for applications close to medical practice. However, even though the production of predictively valid models is considered a central goal, in practice modeling in systems biology employs a variety of model structures and model-building practices. These serve a variety of purposes, many of which are heuristic and do not seem to require strict validation criteria and may even be restricted by them. Moreover, given the current situation in systems biology, implementing a validation standard would face serious technical obstacles mostly due to the quality of available empirical data. We advocate a cautious approach to standardization. However even though rigorous standardization seems premature at this point, raising the issue helps us develop better insights into the practices of systems biology and the technical problems modelers face validating models. Further it allows us to identify certain technical validation issues which hold regardless of modeling context and purpose. Informal guidelines could in fact play a role in the field by helping modelers handle these. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. A generalized statistical model for the size distribution of wealth

    Science.gov (United States)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2012-12-01

    In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and find an excellent agreement with the data that is superior to any other model already known in the literature.

  12. Applying the INN model to the MaxClique problem

    Energy Technology Data Exchange (ETDEWEB)

    Grossman, T.

    1993-09-01

    Max-Clique is the problem of finding the largest clique in a given graph. It is not only NP-hard but, as recent results suggest, even hard to approximate. Nevertheless it is still very important to develop and test practical algorithms that will find approximate solutions for the maximum clique problem on various graphs stemming from numerous applications. Indeed, many different types of algorithmic approaches are applied to that problem. Several neural networks and related algorithms were applied recently to combinatorial optimization problems in general and to the Max-Clique problem in particular. These neural nets are dynamical systems which minimize a cost (or computational "energy") function that represents the optimization problem, the Max-Clique in our case. Therefore they all belong to the class of integer programming algorithms surveyed in the Pardalos and Xue review. The work presented here is a development and improvement of a neural network algorithm that was introduced recently. In the previous work, we considered two Hopfield-type neural networks, the INN and the HcN, and their application to the max-clique problem. In this paper, I concentrate on the INN network and present an improved version of the t-A algorithm that was introduced there. The rest of this paper is organized as follows: in section 2, I describe the INN model and how it implements a given graph. In section 3, it is characterized in terms of graph theory. In particular, the stable states of the network are mapped to the maximal cliques of its underlying graph. In section 4, I present the t-Annealing algorithm and an improved version of it, the Adaptive t-Annealing. Several experiments done with these algorithms on benchmark graphs are reported in section 5, and the efficiency of the new algorithm is demonstrated. I conclude with a short discussion.

  13. Some identification problems in the cointegrated vector autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren

    An analysis of some identification problems in the cointegrated VAR is given. We give a new criterion for identification by linear restrictions on individual relations which is equivalent to the rank condition. We compare the asymptotic distribution of the estimators of α and β; when they are id......An analysis of some identification problems in the cointegrated VAR is given. We give a new criterion for identification by linear restrictions on individual relations which is equivalent to the rank condition. We compare the asymptotic distribution of the estimators of α and β; when...... they are identified by linear restrictions on β and when they are identified by linear restrictions on α; in which case a component of β is asymptotically Gaussian. Finally we discuss identification of shocks by introducing the contemporaneous and permanent effect of a shock and the distinction between permanent... and transitory shocks, which allows one to identify permanent shocks from the long-run variance and transitory shocks from the short-run variance....

  14. Some Identification Problems in the Cointegrated Vector Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren

    An analysis of some identification problems in the cointegrated VAR is given. We give a new criterion for identification by linear restrictions on individual relations which is equivalent to the rank condition. We compare the asymptotic distribution of the estimators of α and β; when they are iden......An analysis of some identification problems in the cointegrated VAR is given. We give a new criterion for identification by linear restrictions on individual relations which is equivalent to the rank condition. We compare the asymptotic distribution of the estimators of α and β; when...... they are identified by linear restrictions on β; and when they are identified by linear restrictions on α; in which case a component of β̂ is asymptotically Gaussian. Finally we discuss identification of shocks by introducing the contemporaneous and permanent effect of a shock and the distinction between permanent... and transitory shocks, which allows one to identify permanent shocks from the long-run variance and transitory shocks from the short-run variance...

  15. Robustness of a Distributed Knowledge Management Model

    DEFF Research Database (Denmark)

    Pedersen, Mogens Kühn; Larsen, Michael Holm

    1999-01-01

    specific for each actor in the network in recognition of actor role differences. The article analyses the conditions for the model to generate symmetric incentives between different actor roles. The model is proposed for business networks like supply chains and networks like replacement, maintenance......Knowledge management based on symmetric incentives is rarely found in literature. A knowledge exchange model relies upon a double loop knowledge conversion with symmetric incentives in a network. The model merges specific knowledge with knowledge from other actors into a decision support system...... and services industries....

  16. Robustness of a Distributed Knowledge Management Model

    DEFF Research Database (Denmark)

    Pedersen, Mogens Kühn; Larsen, Michael Holm

    1999-01-01

    Knowledge management based on symmetric incentives is rarely found in literature. A knowledge exchange model relies upon a double loop knowledge conversion with symmetric incentives in a network. The model merges specific knowledge with knowledge from other actors into a decision support system...... specific for each actor in the network in recognition of actor role differences. The article analyses the conditions for the model to generate symmetric incentives between different actor roles. The model is proposed for business networks like supply chains and networks like replacement, maintenance...

  17. Modeling Portfolio Optimization Problem by Probability-Credibility Equilibrium Risk Criterion

    Directory of Open Access Journals (Sweden)

    Ye Wang

    2016-01-01

    Full Text Available This paper studies the portfolio selection problem in hybrid uncertain decision systems. First, the return rates are characterized by random fuzzy variables. The objective is to maximize the total expected return rate. For a random fuzzy variable, this paper defines a new equilibrium risk value (ERV) with credibility level beta and probability level alpha. As a result, our portfolio problem is built as a new random fuzzy expected value (EV) model subject to an ERV constraint, which is referred to as the EV-ERV model. Under mild assumptions, the proposed EV-ERV model is a convex programming problem. Furthermore, when the possibility distributions are triangular, trapezoidal or normal, the EV-ERV model can be transformed into equivalent deterministic convex programming models, which can be solved by general-purpose optimization software. To demonstrate the effectiveness of the proposed equilibrium optimization method, some numerical experiments are conducted. The computational results and comparison study demonstrate that the developed equilibrium optimization method is effective for modeling the portfolio selection problem with twofold uncertain return rates.

  18. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  19. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  20. The Distributed Network Monitoring Model with Bounded Delay Constraints

    Institute of Scientific and Technical Information of China (English)

    LIU Xiang-hui; YIN Jian-ping; LU Xi-cheng; CAI Zhi-ping; ZHAO Jian-min

    2004-01-01

    We address the problem of optimizing a distributed monitoring system. The goal of the optimization is to reduce the cost of deploying the monitoring infrastructure by identifying a minimum aggregating set subject to a delay constraint on the aggregating path. We show that this problem is NP-hard and propose an approximation algorithm, proving an approximation ratio of ln m + 1, where m is the number of monitoring nodes. Finally, we extend our model with the additional constraint of bounded delay variation.

  1. Fractional Stefan problems exhibiting lumped and distributed latent-heat memory effects

    Science.gov (United States)

    Voller, Vaughan R.; Falcini, Federico; Garra, Roberto

    2013-04-01

    We consider fractional Stefan melting problems which involve a memory of the latent-heat accumulation. We show that the manner in which the memory of the latent-heat accumulation is recorded depends on the assumed nature of the transition between the liquid and the solid phases. When a sharp interface between the liquid and the solid phases is assumed, the memory of the accumulation of the latent heat is “lumped” in the history of the speed of the interface. In contrast, when a diffuse interface is assumed, the memory of the accumulation is “distributed” throughout the liquid phase. By use of an example problem, we demonstrate that the equivalence of the sharp- and diffuse-interface models can only occur when there is no memory in the system.

  2. Tempered stable distributions stochastic models for multiscale processes

    CERN Document Server

    Grabchak, Michael

    2015-01-01

    This brief is concerned with tempered stable distributions and their associated Lévy processes. It is a good text for researchers interested in learning about tempered stable distributions. A tempered stable distribution is one which takes a stable distribution and modifies its tails to make them lighter. The motivation for this class comes from the fact that infinite-variance stable distributions appear to provide a good fit to data in a variety of situations, but the extremely heavy tails of these models are not realistic for most real-world applications. The idea of using distributions that modify the tails of stable models to make them lighter seems to have originated in the influential paper of Mantegna and Stanley (1994). Since then, these distributions have been extended and generalized in a variety of ways. They have been applied to a wide variety of areas including mathematical finance, biostatistics, computer science, and physics.

  3. MODEL 9975 SHIPPING PACKAGE FABRICATION PROBLEMS AND SOLUTIONS

    Energy Technology Data Exchange (ETDEWEB)

    May, C; Allen Smith, A

    2008-05-07

    The Model 9975 Shipping Package is the latest in a series (9965, 9968, etc.) of radioactive material shipping packages that have been the mainstay for shipping radioactive materials for several years. The double containment vessels are relatively simple designs using pipe and pipe cap in conjunction with the Chalfont closure to provide a leak-tight vessel. The fabrication appears simple in nature, but the history of fabrication tells us there are pitfalls in the different fabrication methods and sequences. This paper will review the problems that have arisen during fabrication and precautions that should be taken to meet specifications and tolerances. The problems and precautions can also be applied to the Models 9977 and 9978 Shipping Packages.

  4. Modeling the Distribution of Rainfall Intensity using Hourly Data

    OpenAIRE

    Salisu Dan'azumi; Supiah Shamsudin; Azmi Aris

    2010-01-01

    Problem statement: Design of stormwater best management practices to control runoff and water pollution can be achieved if prior knowledge of the distribution of rainfall characteristics is available. Rainfall intensity, particularly in a tropical climate, plays a major role in the design of runoff conveyance and erosion control systems. This study aims to explore the statistical distribution of rainfall intensity for Peninsular Malaysia using hourly rainfall data. Approach: Hourly rainfall ...

  5. Analysis and Modelling of Extreme Wind Speed Distributions in Complex Mountainous Regions

    Science.gov (United States)

    Laib, Mohamed; Kanevski, Mikhail

    2016-04-01

    Modelling wind speed distributions in complex mountainous regions is an important and challenging problem which interests many scientists from several fields. In the present research, high-frequency (10 min) Swiss wind speed monitoring data (IDAWEB service, Meteosuisse) are analysed and modelled with different parametric distributions (Weibull, GEV, Gamma, etc.) using the maximum likelihood method. In total, 111 stations placed in different geomorphological units and at different altitudes (from 203 to 3580 meters) are studied. This information is then used to train machine learning algorithms (Extreme Learning Machines, Support Vector Machines) to predict the distribution at new places, which is potentially useful for aeolian energy generation. An important part of the research deals with the construction and application of a high-dimensional input feature space generated from a digital elevation model. A comprehensive study was carried out using a feature selection approach to obtain the best model for the prediction. The main results are presented as spatial patterns of the distributions' parameters.
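
    A minimal sketch of the maximum-likelihood fitting step for a single station, using SciPy's built-in Weibull, Gamma and GEV families and ranking them by log-likelihood; this is an assumed workflow for illustration, not the authors' processing chain.

```python
import numpy as np
from scipy import stats

def fit_wind_speed(speeds):
    """Fit candidate parametric models to 10-min wind speed data by MLE
    and rank them by log-likelihood (a sketch, not the study's pipeline)."""
    speeds = np.asarray(speeds, dtype=float)
    speeds = speeds[speeds > 0]              # Weibull/Gamma need positive support
    candidates = {
        "weibull": stats.weibull_min,
        "gamma": stats.gamma,
        "gev": stats.genextreme,
    }
    results = {}
    for name, dist in candidates.items():
        # Fix the location at zero for the positive-support families
        params = dist.fit(speeds, floc=0) if name != "gev" else dist.fit(speeds)
        loglik = np.sum(dist.logpdf(speeds, *params))
        results[name] = (params, loglik)
    return sorted(results.items(), key=lambda kv: kv[1][1], reverse=True)
```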

  6. Robust user equilibrium model based on cumulative prospect theory under distribution-free travel time

    Institute of Scientific and Technical Information of China (English)

    王伟; 孙会君; 吴建军

    2015-01-01

    The assumption widely used in the user equilibrium model for stochastic networks was that the probability distributions of the travel times were known explicitly by travelers. However, these distributions may be unavailable in reality. By relaxing this restrictive assumption, a robust user equilibrium model based on cumulative prospect theory under distribution-free travel time was presented. In the absence of the cumulative distribution function of the travel time, the exact cumulative prospect value (CPV) for each route cannot be obtained. However, upper and lower bounds on the CPV can be calculated by probability inequalities. Travelers were assumed to choose the routes with the best worst-case CPVs. The proposed model was formulated as a variational inequality problem and solved via a heuristic solution algorithm. A numerical example was also provided to illustrate the application of the proposed model and the efficiency of the solution algorithm.

  7. Photovoltaic subsystem marketing and distribution model: programming manual. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1982-07-01

    Complete documentation of the marketing and distribution (M and D) computer model is provided. The purpose is to estimate the costs of selling and transporting photovoltaic solar energy products from the manufacturer to the final customer. The model adjusts for the inflation and regional differences in marketing and distribution costs. The model consists of three major components: the marketing submodel, the distribution submodel, and the financial submodel. The computer program is explained including the input requirements, output reports, subprograms and operating environment. The program specifications discuss maintaining the validity of the data and potential improvements. An example for a photovoltaic concentrator collector demonstrates the application of the model.

  8. Photovoltaic subsystem marketing and distribution model: programming manual. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1982-07-01

    Complete documentation of the marketing and distribution (M and D) computer model is provided. The purpose is to estimate the costs of selling and transporting photovoltaic solar energy products from the manufacturer to the final customer. The model adjusts for the inflation and regional differences in marketing and distribution costs. The model consists of three major components: the marketing submodel, the distribution submodel, and the financial submodel. The computer program is explained including the input requirements, output reports, subprograms and operating environment. The program specifications discuss maintaining the validity of the data and potential improvements. An example for a photovoltaic concentrator collector demonstrates the application of the model.

  9. Ecology and equity in global fisheries: Modelling policy options using theoretical distributions

    NARCIS (Netherlands)

    Rammelt, C.F.; van Schie, Maarten

    2016-01-01

    Global fisheries present a typical case of political ecology or environmental injustice, i.e. a problem of distribution of resources within ecological limits. We built a stock-flow model to visualize this challenge and its dynamics, with both an ecological and a social dimension. We incorporated the

  10. Ecology and equity in global fisheries: Modelling policy options using theoretical distributions

    NARCIS (Netherlands)

    Rammelt, C.F.; van Schie, Maarten

    2016-01-01

    Global fisheries present a typical case of political ecology or environmental injustice, i.e. a problem of distribution of resources within ecological limits. We built a stock-flow model to visualize this challenge and its dynamics, with both an ecological and a social dimension. We incorporated the

  11. Equipartitions and a Distribution for Numbers: A Statistical Model for Benford's Law

    CERN Document Server

    Iafrate, Joseph R; Strauch, Frederick W

    2015-01-01

    A statistical model for the fragmentation of a conserved quantity is analyzed, using the principle of maximum entropy and the theory of partitions. Upper and lower bounds for the restricted partitioning problem are derived and applied to the distribution of fragments. The resulting power law directly leads to Benford's law for the first digits of the parts.
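
    As a quick numerical check of the first-digit claim, the sketch below compares empirical leading-digit frequencies of fragmented parts with Benford's law P(d) = log10(1 + 1/d); the fragmentation scheme and the sample sizes are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def first_digit(x):
    """Leading (most significant) decimal digit of a nonzero number."""
    return int(f"{abs(x):e}"[0])

def benford_comparison(values):
    """Empirical first-digit frequencies vs. Benford's law P(d) = log10(1 + 1/d)."""
    digits = np.array([first_digit(v) for v in values if v != 0])
    empirical = np.array([(digits == d).mean() for d in range(1, 10)])
    benford = np.log10(1 + 1 / np.arange(1, 10))
    return empirical, benford

# Example: random fragmentations of a conserved total give a scale-spanning
# sample whose first digits can be checked against Benford's prediction.
rng = np.random.default_rng(0)
parts = rng.dirichlet(np.ones(50), size=200).ravel() * 1000.0
emp, ben = benford_comparison(parts)
print(np.round(emp, 3), np.round(ben, 3))
```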

  12. A class of mechanically decidable problems beyond Tarski's model

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    By means of a dimension-decreasing method and cell decomposition, a practical algorithm is proposed to decide the positivity of a certain class of symmetric polynomials, the number of whose elements is variable. This is a class of mechanically decidable problems beyond Tarski's model. To implement the algorithm, a program nprove written in Maple is developed which can decide the positivity of these polynomials rapidly.

  13. Modeling emission from the first explosions: pitfalls and problems

    Energy Technology Data Exchange (ETDEWEB)

    Fryer, Christopher Lee [Los Alamos National Laboratory; Whalen, Daniel J [Los Alamos National Laboratory; Frey, Lucille H [Los Alamos National Laboratory

    2010-01-01

    Observations of the explosions of population III stars have the potential to teach us much about the formation and evolution of these zero-metallicity objects. But to reach this potential, we must tie the observed emission to an explosion model. This requires accurate light-curve/spectral calculations. Here we discuss many of the pitfalls and problems involved in such calculations, presenting some preliminary results from radiation-hydrodynamics calculations.

  14. Mathematical problem solving, modelling, applications, and links to other subjects

    OpenAIRE

    Blum, Werner; Niss, Mogens

    1989-01-01

    The paper will consist of three parts. In part I we shall present some background considerations which are necessary as a basis for what follows. We shall try to clarify some basic concepts and notions, and we shall collect the most important arguments (and related goals) in favour of problem solving, modelling and applications to other subjects in mathematics instruction. In the main part II we shall review the present state, recent trends, and prospective lines of developm...

  15. Model problem of MHD flow in a lithium blanket

    Energy Technology Data Exchange (ETDEWEB)

    Cherepanov, V.Y.

    1978-01-01

    A model problem is considered for a feasibility study concerning controlled MHD flow in the blanket of a Tokamak nuclear reactor. The fundamental equations for the steady flow of an incompressible viscous fluid in a uniform transverse magnetic field are solved in rectangular coordinates, in the zero-induction approximation and with negligible induced currents. A numerical solution obtained for a set of appropriate boundary constraints establishes the conditions under which no stagnation zones will be formed.

  16. Species Distribution modeling as a tool to unravel determinants of palm distribution in Thailand

    DEFF Research Database (Denmark)

    Tovaranonte, Jantrararuk; Barfod, Anders S.; Balslev, Henrik

    2011-01-01

    As a consequence of the decimation of the forest cover in Thailand from 50% to ca. 20% since the 1950s, it is difficult to gain insight into the drivers behind past, present and future distribution ranges of plant species. Species distribution modeling allows visualization of potential species...... distribution under specific sets of assumptions. In this study we used maximum entropy to map potential distributions of 103 species of palms for which more than 5 herbarium records exist. Palms constitute a keystone plant group from an ecological, economic and conservation perspective. The models were......) and the Area Under the Curve (AUC). All models performed well with AUC scores above 0.95. The predicted distribution ranges showed high suitability for palms in the southern region of Thailand. The results also show that spatial predictor variables are important in cases where historical processes may explain extant
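
    A small sketch of the AUC evaluation step reported above, computed from habitat-suitability scores at presence records versus background points; the scores are simulated and the helper name is hypothetical, so this only illustrates how such AUC values are obtained, not the authors' maximum entropy pipeline.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_sdm(presence_scores, background_scores):
    """AUC of a species distribution model from suitability scores predicted
    at known presence records vs. background (pseudo-absence) points."""
    y_true = np.concatenate([np.ones(len(presence_scores)),
                             np.zeros(len(background_scores))])
    y_score = np.concatenate([presence_scores, background_scores])
    return roc_auc_score(y_true, y_score)

# Hypothetical example: a well-separated model yields a high AUC
rng = np.random.default_rng(42)
auc = evaluate_sdm(rng.beta(8, 2, 300), rng.beta(2, 8, 3000))
print(f"AUC = {auc:.3f}")
```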

  17. An effective hybrid immune algorithm for solving the distributed permutation flow-shop scheduling problem

    Science.gov (United States)

    Xu, Ye; Wang, Ling; Wang, Shengyao; Liu, Min

    2014-09-01

    In this article, an effective hybrid immune algorithm (HIA) is presented to solve the distributed permutation flow-shop scheduling problem (DPFSP). First, a decoding method is proposed to transfer a job permutation sequence to a feasible schedule considering both factory dispatching and job sequencing. Secondly, a local search with four search operators is presented based on the characteristics of the problem. Thirdly, a special crossover operator is designed for the DPFSP, and mutation and vaccination operators are also applied within the framework of the HIA to perform an immune search. The influence of parameter setting on the HIA is investigated based on the Taguchi method of design of experiment. Extensive numerical testing results based on 420 small-sized instances and 720 large-sized instances are provided. The effectiveness of the HIA is demonstrated by comparison with some existing heuristic algorithms and the variable neighbourhood descent methods. New best known solutions are obtained by the HIA for 17 out of 420 small-sized instances and 585 out of 720 large-sized instances.
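
    A sketch of one common decoding rule for the DPFSP, in which each job of the permutation is dispatched to the factory where it would complete earliest under the permutation flow-shop recursion; this is a frequently used heuristic rule, not necessarily the paper's exact decoding method.

```python
def decode_permutation(perm, proc, n_factories):
    """Decode a job permutation into a distributed flow-shop schedule:
    each job (taken in permutation order) goes to the factory where it
    would finish earliest.  proc[j][m] = time of job j on machine m."""
    n_machines = len(proc[0])
    comp = [[0] * n_machines for _ in range(n_factories)]   # machine completion times
    assignment = [[] for _ in range(n_factories)]

    def schedule(c_prev, job):
        """Completion times on every machine if `job` is appended after c_prev."""
        c = []
        for m in range(n_machines):
            start = max(c_prev[m], c[m - 1]) if m else c_prev[0]
            c.append(start + proc[job][m])
        return c

    for job in perm:
        trial = [schedule(comp[f], job) for f in range(n_factories)]
        best = min(range(n_factories), key=lambda f: trial[f][-1])
        comp[best] = trial[best]
        assignment[best].append(job)

    makespan = max(c[-1] for c in comp)
    return assignment, makespan

# Example call (hypothetical data): 5 jobs, 2 machines, 2 factories
# assignment, cmax = decode_permutation([2, 0, 4, 1, 3], proc, n_factories=2)
```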

  18. Modelling Difficulties and Their Overcoming Strategies in the Solution of a Modelling Problem

    Science.gov (United States)

    Dede, Ayse Tekin

    2016-01-01

    The purpose of the study is to reveal the elementary mathematics student teachers' difficulties encountered in the solution of a modelling problem, the strategies to overcome those difficulties and whether the strategies worked or not. Nineteen student teachers solved the modelling problem in their four or five-person groups, and the video records…

  19. Evolutionary algorithm for the problem of oil products distributions on oil pipeline network; Algoritmo evolucionario para distribuicao de produtos de petroleo por redes de polidutos

    Energy Technology Data Exchange (ETDEWEB)

    Marcondes, Eduardo; Goldbarg, Elizabeth; Goldbarg, Marco; Cunha, Thatiana [Universidade Federal do Rio Grande do Norte (UFRN), Natal, RN (Brazil)

    2008-07-01

    A major problem in the planning of production in refineries is the determination of what should be done in each stage of production over a time horizon. Among such problems, the distribution of oil products through networks of pipelines is very significant because of its economic importance. In this work, a problem of distribution of oil products through a network of pipelines is modeled. The network studied is a simplification of a real network. There are several restrictions to be met, such as storage limits, limits on sending or receiving products, and transport limitations. A bi-objective model is adopted, in which one wants to minimize fragmentation and transmission time, given the restrictions of demand and storage capacity. Since the occupancy rate of such networks is increasingly high, it is of great importance to optimize their use. In this work, the particle swarm optimization technique is applied to the problem of distribution of oil products through networks of pipelines. (author)

  20. Distributed Prognostics Based on Structural Model Decomposition

    Data.gov (United States)

    National Aeronautics and Space Administration — Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based...

  1. THE ROC CURVE MODEL FROM GENERALIZED-EXPONENTIAL DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Ehtesham Hussain

    2011-04-01

    Full Text Available Biomedical studies often yield continuous, positively skewed (non-normally distributed) data. In this regard, the Generalized-Exponential distribution is suggested for analyzing such data. In this paper, the parametric equation of the Receiver Operating Characteristic (ROC) curve model is established under the assumption of a bi-distributional population based on a pair of Generalized-Exponential distributions. Also, its maximum likelihood estimator (MLE), sampling distribution, equivalence test statistic and exact confidence interval are derived.
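
    The general construction behind such a parametric ROC model is ROC(t) = 1 - F1(F0^{-1}(1 - t)), where F0 and F1 are the CDFs of the healthy and diseased populations. The sketch below instantiates it for a pair of Generalized-Exponential distributions, whose CDF and quantile function have closed forms; the parameter values are illustrative, not estimates from the paper.

```python
import numpy as np

def ge_cdf(x, alpha, lam):
    """Generalized-Exponential CDF: F(x) = (1 - exp(-lam*x))**alpha for x > 0."""
    return (1.0 - np.exp(-lam * x)) ** alpha

def ge_quantile(p, alpha, lam):
    """Inverse CDF of the Generalized-Exponential distribution."""
    return -np.log(1.0 - p ** (1.0 / alpha)) / lam

def roc_curve_ge(alpha0, lam0, alpha1, lam1, n=200):
    """ROC(t) = 1 - F1(F0^{-1}(1 - t)) for healthy ~ GE(alpha0, lam0) and
    diseased ~ GE(alpha1, lam1), assuming larger values indicate disease."""
    t = np.linspace(1e-6, 1 - 1e-6, n)               # false positive rate
    thresholds = ge_quantile(1.0 - t, alpha0, lam0)
    return t, 1.0 - ge_cdf(thresholds, alpha1, lam1)

# Illustrative parameters (not from the paper)
fpr, tpr = roc_curve_ge(alpha0=1.0, lam0=1.0, alpha1=2.0, lam1=0.5)
auc = np.sum(np.diff(fpr) * (tpr[:-1] + tpr[1:]) / 2)   # trapezoidal AUC
print(f"AUC = {auc:.3f}")
```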

  2. Model Checking Geographically Distributed Interlocking Systems Using UMC

    DEFF Research Database (Denmark)

    Fantechi, Alessandro; Haxthausen, Anne Elisabeth; Nielsen, Michel Bøje Randahl

    2017-01-01

    The current trend of distributing computations over a network is here, as a novelty, applied to a safety critical system, namely a railway interlocking system. We show how the challenge of guaranteeing safety of the distributed application has been attacked by formally specifying and model checking...... the relevant distributed protocols. By doing that we obey the safety guidelines of the railway signalling domain, that require formal methods to support the certification of such products. We also show how formal modelling can help designing alternative distributed solutions, while maintaining adherence...

  3. Substrate turnover at low carbon concentrations in a model drinking water distribution system

    DEFF Research Database (Denmark)

    Boe-Hansen, Rasmus; Albrechtsen, Hans-Jørgen; Arvin, Erik;

    2002-01-01

    Water quality changes caused by microbial activity in the distribution network can cause serious problems. Reducing the amount of microbial available substrate may be an effective way to control bacterial aftergrowth. The purpose of the present study was to study the kinetics of substrate utilisation and bacterial growth at low nutrient conditions in a model distribution system. The model system consisted of two loops in series, where flow rate and retention time were controlled independently. Spiking the drinking water of the model system with two different environmentally realistic...
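
    The record does not state which kinetic form was fitted; a common assumption for substrate-limited bacterial growth in such low-nutrient systems is Monod kinetics, sketched below purely for orientation and not taken from the study.

```latex
% Monod kinetics (assumed form, not from the record): specific growth rate
% as a function of substrate concentration S, with biomass X, half-saturation
% constant K_s and yield coefficient Y.
\[
\mu(S) = \mu_{\max}\,\frac{S}{K_s + S}, \qquad
\frac{dX}{dt} = \mu(S)\,X, \qquad
\frac{dS}{dt} = -\frac{1}{Y}\,\mu(S)\,X .
\]
```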

  4. Five (or so) challenges for species distribution modelling

    DEFF Research Database (Denmark)

    Bastos Araujo, Miguel; Guisan, Antoine

    2006-01-01

    Species distribution modelling is central to both fundamental and applied research in biogeography. Despite widespread use of models, there are still important conceptual ambiguities as well as biotic and algorithmic uncertainties that need to be investigated in order to increase confidence in model results. We identify and discuss five areas of enquiry that are of high importance for species distribution modelling: (1) clarification of the niche concept; (2) improved designs for sampling data for building models; (3) improved parameterization; (4) improved model selection and predictor...

  5. Raindrop size distribution: Fitting performance of common theoretical models

    Science.gov (United States)

    Adirosi, E.; Volpi, E.; Lombardo, F.; Baldini, L.

    2016-10-01

    Modelling the raindrop size distribution (DSD) is a fundamental issue in connecting remote sensing observations with reliable precipitation products for hydrological applications. To date, various standard probability distributions have been proposed to build DSD models. Relevant questions to ask are how often and how well such models fit empirical data, given that advances in both data availability and the technology used to estimate DSDs have allowed many of the deficiencies of early analyses to be mitigated. Therefore, we present a comprehensive follow-up of a previous study on the comparison of statistical fitting of three common DSD models against 2D-Video Distrometer (2DVD) data, which are unique in that the size of individual drops is determined accurately. Using the maximum likelihood method, we fit models based on the lognormal, gamma and Weibull distributions to more than 42,000 1-minute drop-by-drop records taken from the field campaigns of the NASA Ground Validation program of the Global Precipitation Measurement (GPM) mission. In order to check the adequacy between the models and the measured data, we investigate the goodness of fit of each distribution using the Kolmogorov-Smirnov (KS) test. Then, we apply a specific model selection technique to evaluate the relative quality of each model. Results show that the gamma distribution has the lowest KS rejection rate, while the Weibull distribution is the most frequently rejected. Ranking, for each minute, the statistical models that pass the KS test, it can be argued that probability distributions whose tails are exponentially bounded, i.e. light-tailed distributions, seem adequate to model the natural variability of DSDs. However, in line with our previous study, we also found that the frequency distributions of empirical DSDs could be heavy-tailed in a number of cases, which may result in severe uncertainty in estimating statistical moments and bulk variables.
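
    The fitting-and-screening workflow described above can be reproduced in outline with standard statistical tooling. The sketch below fits the three candidate distributions by maximum likelihood and applies a Kolmogorov-Smirnov test to each; the sample is synthetic stand-in data, not the 2DVD measurements, and the per-minute looping, model-selection step and GPM data handling of the study are omitted.

```python
# Fit candidate DSD models and screen them with a KS test (illustrative data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
diameters_mm = rng.gamma(shape=3.0, scale=0.5, size=500)  # stand-in sample

candidates = {
    "gamma": stats.gamma,
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
}

for name, dist in candidates.items():
    # Fix the location at zero: drop diameters are positive.
    params = dist.fit(diameters_mm, floc=0.0)
    # Caveat: testing against parameters estimated from the same sample
    # makes the nominal p-values optimistic.
    ks_stat, p_value = stats.kstest(diameters_mm, dist.cdf, args=params)
    rejected = p_value < 0.05
    print(f"{name:10s} KS={ks_stat:.3f} p={p_value:.3f} rejected={rejected}")
```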

  6. Influence of microscale in snow distributed modelling in semiarid regions

    OpenAIRE

    Pimentel Leiva, Rafael

    2015-01-01

    This work focuses on the importance of the microscale snow distribution in the modelling of snow dynamics in semiarid regions. Snow over these areas has particular features that further complicate its measurement, monitoring and modelling (e.g. several snowmelt cycles throughout the year and a very heterogeneous distribution). Most widely used GIS-based snowmelt/accumulation models must deal with non-negligible scale effects below the cell size, which may result ...

  7. The Nonlinear Sigma Model With Distributed Adaptive Mesh Refinement

    CERN Document Server

    Liebling, S L

    2004-01-01

    An adaptive mesh refinement (AMR) scheme is implemented in a distributed environment using Message Passing Interface (MPI) to find solutions to the nonlinear sigma model. Previous work studied behavior similar to black hole critical phenomena at the threshold for singularity formation in this flat space model. This work is a follow-up describing extensions to distribute the grid hierarchy and presenting tests showing the correctness of the model.

  8. Cognitive load and modelling of an algebra problem

    Science.gov (United States)

    Chinnappan, Mohan

    2010-09-01

    In the present study, I examine a modelling strategy as employed by a teacher in the context of an algebra lesson. The actions of this teacher suggest that a modelling approach will have a greater impact on enriching student learning if we do not lose sight of the need to manage associated cognitive loads that could either aid or hinder the integration of core concepts with processes that are at play. Results here also show that modelling a problem that is set within an authentic context helps learners develop a better appreciation of variables and relations that constitute the model. The teacher's scaffolding actions revealed the use of strategies that foster the development of connected, meaningful and more useable algebraic knowledge.

  9. Iterative methods for solving coefficient inverse problems of wave tomography in models with attenuation

    Science.gov (United States)

    Goncharsky, Alexander V.; Romanov, Sergey Y.

    2017-02-01

    We develop efficient iterative methods for solving inverse problems of wave tomography in models incorporating both diffraction effects and attenuation. In the inverse problem the aim is to reconstruct the velocity structure and the function that characterizes the distribution of attenuation properties in the object studied. We prove mathematically and rigorously the differentiability of the residual functional in normed spaces, and derive the corresponding formula for the Fréchet derivative. The computation of the Fréchet derivative includes solving both the direct problem with the Neumann boundary condition and the reversed-time conjugate problem. We develop efficient methods for numerical computations where the approximate solution is found using the detector measurements of the wave field and its normal derivative. The wave field derivative values at detector locations are found by solving the exterior boundary value problem with the Dirichlet boundary conditions. We illustrate the efficiency of this approach by applying it to model problems. The algorithms developed are highly parallelizable and designed to be run on supercomputers. Among the most promising medical applications of our results is the development of ultrasonic tomographs for differential diagnosis of breast cancer.
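
    The general structure of such gradient-type iterations can be summarized as follows; the notation is generic, and the paper's specific functional, boundary conditions and derivative formula are not reproduced here.

```latex
% Least-squares residual functional for a coefficient (e.g. velocity) field v,
% with u(s,t;v) the simulated wave field and U(s,t) the detector data on S:
\[
\Phi(v) \;=\; \tfrac{1}{2}\int_0^T\!\!\int_S \bigl|\,u(s,t;v) - U(s,t)\,\bigr|^2 \, ds\, dt .
\]
% Gradient-type iteration driven by the Frechet derivative \Phi'(v_k),
% which is assembled from the forward field and a reversed-time (adjoint) field:
\[
v_{k+1} \;=\; v_k \;-\; \gamma_k\, \Phi'(v_k), \qquad \gamma_k > 0 .
\]
```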

  10. A Model of the Fatigue Life Distribution of Composite Laminates Based on Their Static Strength Distribution

    Institute of Scientific and Technical Information of China (English)

    Wu Fuqiang; Yao Weixing

    2008-01-01

    The reasons for the static strength dispersion and the fatigue life dispersion of composite laminates are analyzed in this article. It is concluded that the inner original defects, which derive from the manufacturing process of composite laminates, are the common and major cause of the random distributions of static strength and fatigue life, and that there is a correlative relation between the two distributions. By studying the statistical relationship between fatigue loading and fatigue life at a uniform confidence level and using the same survival-rate S-N curves of the material, the relationship between the static strength distribution and the fatigue life distribution is obtained through a material S-N curve model. A model is then set up to describe the distribution of fatigue life of composites based on their distribution of static strength. This model reasonably reflects the effects of the inner original defects on the static strength dispersion and on the fatigue life dispersion of composite laminates. The experimental data of three kinds of composite laminates are employed to verify this model, and the results show that it can predict the random distributions of fatigue life for composites under any fatigue load fairly well.
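
    To make the link between the two distributions concrete, here is one standard construction under simplifying assumptions (a two-parameter Weibull static strength and a power-law S-N relation); it illustrates the kind of mapping the article builds, not its exact model.

```latex
% Assumed (illustrative) static strength distribution and S-N relation:
%   \sigma_b \sim \mathrm{Weibull}(\beta,\eta), \qquad N = (\sigma_b/\sigma)^m
% for a specimen cycled at stress level \sigma. The fatigue life N is then
% also Weibull distributed:
\[
P(N \le n) \;=\; P\!\left(\sigma_b \le \sigma\, n^{1/m}\right)
\;=\; 1 - \exp\!\left[-\left(\frac{n}{(\eta/\sigma)^m}\right)^{\beta/m}\right],
\]
% i.e. shape \beta/m and scale (\eta/\sigma)^m, so the scatter in static
% strength propagates directly into the fatigue life distribution.
```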

  11. On the Problem of Permissible Covariance and Variogram Models

    Science.gov (United States)

    Christakos, George

    1984-02-01

    The covariance and variogram models (ordinary or generalized) are important statistical tools used in various estimation and simulation techniques which have been recently applied to diverse hydrologic problems. For example, the efficacy of kriging, a method for interpolating, filtering, or averaging spatial phenomena, depends, to a large extent, on the covariance or variogram model chosen. The aim of this article is to provide the users of these techniques with convenient criteria that may help them to judge whether a function which arises in a particular problem, and is not included among the known covariance or variogram models, is permissible as such a model. This is done by investigating the properties of the candidate model in both the space and frequency domains. In the present article this investigation covers stationary random functions as well as intrinsic random functions (i.e., nonstationary functions for which increments of some order are stationary). Then, based on the theoretical results obtained, a procedure is outlined and successfully applied to a number of candidate models. In order to give to this procedure a more practical context, we employ "stereological" equations that essentially transfer the investigations to one-dimensional space, together with approximations in terms of polygonal functions and Fourier-Bessel series expansions. There are many benefits and applications of such a procedure. Polygonal models can be fit arbitrarily closely to the data. Also, the approximation of a particular model in the frequency domain by a Fourier-Bessel series expansion can be very effective. This is shown by theory and by example.
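
    The frequency-domain criterion referred to in the abstract is, in essence, Bochner's theorem. A compact statement in generic notation (not the article's, and up to the chosen Fourier-transform convention) is:

```latex
% A continuous function C(h) is permissible as a stationary covariance on
% R^n iff it is positive-definite; by Bochner's theorem this holds iff its
% Fourier transform is a non-negative measure. When a spectral density f
% exists, the condition reads
\[
f(\boldsymbol{\omega}) \;=\; (2\pi)^{-n}\!\int_{\mathbb{R}^n} C(\mathbf{h})\,
e^{-i\,\boldsymbol{\omega}\cdot\mathbf{h}}\, d\mathbf{h} \;\ge\; 0
\quad \text{for all } \boldsymbol{\omega}.
\]
% For an isotropic model C(r), r = |h|, this reduces to a one-dimensional
% Fourier--Bessel (Hankel) transform,
\[
f(\omega) \;=\; (2\pi)^{-n/2}\, \omega^{-(n-2)/2}
\int_0^\infty C(r)\, J_{(n-2)/2}(\omega r)\, r^{\,n/2}\, dr \;\ge\; 0 ,
\]
% which is why the check can be carried out with one-dimensional
% Fourier--Bessel expansions, as the article does.
```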

  12. [Problems in model-beds for tuberculosis patients].

    Science.gov (United States)

    Ito, Kunihiko; Yoshiyama, Takashi; Kato, Seiya; Ishikawa, Nobukatsu

    2009-01-01

    To investigate the possibilities of and obstacles to caring for tuberculosis patients in general hospitals, a questionnaire was sent to the general and psychiatric hospitals running model-beds for tuberculosis patient care, a project proposed by the Ministry of Health, Labour and Welfare, and the answers were analyzed. Answer sheets were recovered from 43 of the 75 (57%) hospitals surveyed. The situations in which the model-beds were run were highly diverse. 74% of the hospitals assumed that general hospitals (if certain conditions were satisfied) could care for most tuberculosis patients. Problems in running the model-beds reported by the hospitals were: hospital infection control (HIC) procedures imposing an extra workload (37%), low occupancy rate of the model-beds (30%), high cost of equipment for HIC (28%), high workload and high cost of tuberculosis patient care (21%), low fees for tuberculosis care (16%), difficulties in caring for psychologically and/or physically unstable tuberculosis patients in rooms separate from the nurse station (16%), difficulties in long-term in-hospital care due to lack of sufficient amenities (14%), difficulties in accepting tuberculosis patients at short notice (12%), a heavy burden on nurses who have to care for patients with associated conditions unfamiliar to them (12%), difficulties in maintaining the quality of tuberculosis care (7%), risk of infection to staff and other patients (5%), and other miscellaneous problems (16%). The need for tuberculosis patient care in general hospitals is expected to increase further in the near future, but many problems must still be solved to cope with this situation. Hereafter the project of model-beds for tuberculosis care must be expanded, and more experience in tuberculosis patient care in general hospitals accumulated.

  13. Mass distributions in a variational model

    CERN Document Server

    Stevenson, P D

    2009-01-01

    The time-dependent Hartree-Fock approach may be derived from a variational principle and a Slater Determinant wavefunction Ansatz. It gives a good description of nuclear processes in which one-body collisions dominate and has been applied with success to giant resonances and collisions around the barrier. It is inherently unable to give a good description of two-body observables. A variational principle, due to Balian and Veneroni has been proposed which can be geared to good reproduction of two-body observables. Keeping the Slater Determinant Ansatz, and restricting the two-body observables to be the squares of one-body observables, the procedure can be implemented as a modification of the time-dependent Hartree-Fock procedure. Applications, using the Skyrme effective interaction, are presented for the mass distributions of fragments following de-excitation of the giant dipole resonance in S-32. An illustration of the method's use in collisions is given.

  14. Distributed Software Development Modelling and Control Framework

    Directory of Open Access Journals (Sweden)

    Yi Feng

    2012-10-01

    Full Text Available With the rapid progress of internet technology, more and more software projects adopt e-development to facilitate the software development process in a world-wide context. However, distributed software development activity itself is a complex orchestration. It involves many people working together without the barrier of time and space difference. Therefore, how to efficiently monitor and control software e-development from a global perspective becomes an important issue for any internet-based software development project. In this paper, we present a novel approach to tackle this crucial issue by means of controlling the e-development process, collaborative task progress and communication quality. Meanwhile, we also present our e-development supporting environment prototype, Caribou, to demonstrate the viability of our approach.

  15. MODELING AND OPTIMIZATION OF MULTI-RESPONSE SURFACE PROBLEMS WITH FUZZY APPROACH

    Directory of Open Access Journals (Sweden)

    Özlem TÜRKŞEN

    2012-06-01

    Full Text Available The most widely used approach for solving multi-response surface problems is response surface methodology. However, response surface methodology is considered inadequate for evaluating the unexplained vagueness in real-world problems. Therefore, in this study, a fuzzy approach is proposed as an alternative for solving multi-response surface problems. The main aim of this study is to demonstrate the applicability of the fuzzy approach to multi-response problems in which the probability distributions of the response variables cannot be determined. At the modeling stage, fuzzy least squares regression analysis, based on Diamond's distance metric, is used. At the optimization stage, the problem is treated as a fuzzy multi-objective optimization problem. The Non-dominated Sorting Genetic Algorithm-II (NSGA-II), defined in the literature, is adapted by using a centroid-index fuzzy ranking approach and is then called Fuzzy NSGA-II (FNSGA-II). A fuzzy Pareto solution set is obtained by optimizing the problem, which is composed of fuzzy objective functions, with FNSGA-II. The proposed fuzzy solution approaches are applied to a data set defined in the literature. It is thus seen that the obtained fuzzy Pareto solution is a set of acceptable different response values for the performed multi-response experiments at the defined levels of the input variables.
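
    For reference, Diamond's distance between triangular fuzzy numbers, on which the regression stage is based, is usually stated as follows; the paper may use a slightly different variant.

```latex
% For triangular fuzzy numbers written by their left, center and right
% points, A = (a_L, a_C, a_R) and B = (b_L, b_C, b_R):
\[
d^2(\tilde A, \tilde B) \;=\; (a_L - b_L)^2 + (a_C - b_C)^2 + (a_R - b_R)^2 .
\]
% Fuzzy least squares then chooses regression coefficients minimizing
% \sum_i d^2(\tilde y_i, \hat{\tilde y}_i) over the observed fuzzy responses.
```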

  16. Scaling precipitation input to spatially distributed hydrological models by measured snow distribution

    OpenAIRE

    2016-01-01

    Accurate knowledge on snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or managing water supply and hydro-power. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is typically driven by spatial interpolations of observations from automatic weather stations (AWS), leading to errors in the spatial distribution of atmospheric forcing. ...

  17. Forward- vs. Inverse Problems in Modeling Seismic Attenuation

    Science.gov (United States)

    Morozov, I. B.

    2015-12-01

    Seismic attenuation is an important property of wave propagation used in numerous applications. However, the attenuation is also a complex phenomenon, and it is important to differentiate between its two typical uses: 1) in forward problems, to model the amplitudes and spectral contents of waves required for hazard assessment and geotechnical engineering, and 2) in inverse problems, to determine the physical properties of the subsurface. In the forward-problem sense, the attenuation is successfully characterized in terms of empirical parameters of geometric spreading, radiation patterns, scattering amplitudes, t-star, alpha, kappa, or Q. Arguably, the predicted energy losses can be correct even if the underlying attenuation model is phenomenological and not sufficiently based on physics. An example of such a phenomenological model is viscoelasticity based on the correspondence principle and the Q-factor assigned to the material. By contrast, when used to invert for in situ material properties, models addressing the specific physics are required. In many studies (including in this session), a Q-factor is interpreted as a property of a point within the subsurface; however, this property is only phenomenological and may be physically insufficient or inconsistent. For example, the bulk or shear Q at the same point can be different when evaluated from different wave modes. The cases of frequency-dependent Q are particularly prone to ambiguities such as trade-off with the assumed background geometric spreading. To rigorously characterize the in situ material properties responsible for seismic-wave attenuation, it is insufficient to only focus on the seismic energy loss. Mechanical models of the material need to be considered. Such models can be constructed by using Lagrangian mechanics. These models should likely contain no Q but will be based on parameters of microstructure such as heterogeneity, fractures, or fluids. I illustrate several such models based on viscosity
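
    The phenomenological forward-problem parameterization mentioned above (t-star and Q) is commonly written as follows; this is the standard textbook form, quoted here only to fix notation, not the author's own model.

```latex
% Spectral amplitude of a seismic arrival with source spectrum A_0(f),
% geometric spreading G(r), and path-integrated attenuation t^*:
\[
A(f) \;=\; A_0(f)\, G(r)\, e^{-\pi f t^{*}},
\qquad
t^{*} \;=\; \int_{\text{path}} \frac{dt}{Q(f)}
\;=\; \int_{\text{path}} \frac{ds}{Q(f)\, v} .
\]
```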

  18. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...
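
    The price-coordination idea summarized above can be shown in a few lines. The sketch below is a deliberately minimal stand-in: each "local MPC" collapses to a one-variable quadratic problem with a closed-form minimizer, and the coordinator runs projected subgradient ascent on the price of a single shared constraint. Costs, set-points and the capacity value are illustrative assumptions, not taken from the chapter.

```python
# Dual decomposition on a toy two-subsystem resource-sharing problem.
references = [4.0, 3.0]   # local set-points r_i (assumed)
capacity = 5.0            # shared resource limit: u1 + u2 <= capacity
price = 0.0               # dual variable (price) on the coupling constraint
step = 0.2                # subgradient step size

def local_solve(r, lam):
    # Local problem: minimize (u - r)^2 + lam * u over u >= 0,
    # which has the closed-form solution u = max(0, r - lam / 2).
    return max(0.0, r - lam / 2.0)

for _ in range(200):
    u = [local_solve(r, price) for r in references]
    violation = sum(u) - capacity
    # Projected subgradient ascent on the dual variable.
    price = max(0.0, price + step * violation)

print("allocations:", [round(x, 3) for x in u], "price:", round(price, 3))
```

    With these numbers the price settles near 2 and the allocations near (3, 2), which exactly saturate the shared capacity; the coordinator never needs to see the subsystems' internal models, only their resource usage.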

  19. Renewable deployment: Model for a fairer distribution

    Science.gov (United States)

    Grunewald, Philipp

    2017-09-01

    Typically, the allocation of renewable power sources is determined by a desire to maximize output and reduce generation costs in order to satisfy the preferences of a small number of stakeholders. A new model broadens this perspective by considering societal equity and acceptability, with the aim of improving the siting process.

  20. Testing the US Integrated Ocean Observing System Data Discovery and Distribution Infrastructure with Real-World Problems

    Science.gov (United States)

    Snowden, D. P.; Signell, R.; Knee, K.; Kupiec, J.; Bird, A.; Fratantonio, B.; Koeppen, W.; Wilcox, K.

    2014-12-01

    The distributed, service-oriented architecture of the US Integrated Ocean Observing System (US IOOS) has been implemented mostly independently by US IOOS partners, using different software approaches and different levels of compliance to standards. Some uniformity has been imparted by documenting the intended output data formats and content and service interface behavior. But to date, a rigorous testing of the distributed system of systems has not been done. To assess the functionality of this system, US IOOS is conducting a system integration test (http://github.com/ioos/system-test) that evaluates whether the services (i.e. SOS, OPeNDAP, WMS, CS/W) deployed to the 17 Federal partners and 11 Regional Associations can solve real-world problems. Scenarios were selected that both address IOOS societal goals and test different functionality of the data architecture. For example, one scenario performs an assessment of water level forecast skill by prompting the user for a bounding box and a temporal extent, searching metadata catalogs via a Catalog Services for the Web (CS/W) interface to discover available sea level observations and model results, extracting data from the identified service endpoints (either OPeNDAP or SOS), interpolating both modeled and observed data onto a common time base, and then comparing the skill of the various models. Other scenarios explore issues such as hypoxia and wading bird habitats. For each scenario, the entire workflow (user input, search, access, analysis and visualization) is captured in an IPython Notebook on GitHub. This allows the scenarios to be self-documenting as well as reproducible by anyone, using free software. The Python packages required to run the scenarios are all available on GitHub and Conda packages are available on binstar.org so that users can easily run the scenarios using the free Anaconda Python distribution. With the advent of hosted services such as Wakari, it is possible for anyone to reproduce these
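
    The discover-then-access pattern that the scenarios exercise can be sketched with the same open-source Python stack the notebooks rely on (OWSLib for CS/W, netCDF4 for OPeNDAP). The catalog URL, search terms and bounding box below are assumptions for illustration; the notebooks at github.com/ioos/system-test contain the actual queries.

```python
# Query a CS/W catalog and open a returned OPeNDAP endpoint (illustrative).
from owslib.csw import CatalogueServiceWeb
from owslib import fes
import netCDF4

CSW_URL = "https://data.ioos.us/csw"  # assumed catalog endpoint

csw = CatalogueServiceWeb(CSW_URL, timeout=60)
keyword = fes.PropertyIsLike(propertyname="csw:AnyText",
                             literal="%sea_surface_height%", wildCard="%")
bbox = fes.BBox([-77.0, 34.0, -70.0, 42.0])  # assumed lon/lat region

# Nested list means the two filters are AND-ed together.
csw.getrecords2(constraints=[[keyword, bbox]], maxrecords=10)

for rec_id, rec in csw.records.items():
    print(rec.title)
    for ref in rec.references:
        if "opendap" in (ref.get("scheme") or "").lower():
            ds = netCDF4.Dataset(ref["url"])   # remote access via OPeNDAP
            print("  variables:", list(ds.variables)[:5])
            ds.close()
```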