PROGRAMMING OF METHODS FOR THE NEEDS OF LOGISTICS DISTRIBUTION SOLVING PROBLEMS
Directory of Open Access Journals (Sweden)
Andrea Štangová
2014-06-01
Full Text Available Logistics has become one of the dominant factors affecting the successful management, competitiveness and mentality of the global economy. Distribution logistics materializes the connection between production and the consumer market. It uses different methodologies and methods of multicriterial evaluation and allocation. This thesis addresses the problem of the costs of securing the distribution of a product. It was therefore relevant to design a software product that would be helpful in solving the problems related to distribution logistics. Elodis, an electronic distribution logistics program, was designed on the basis of a theoretical analysis of the issue of distribution logistics and on an analysis of the software products market. The program uses multicriterial evaluation methods to determine the appropriate type, and mathematical and geometrical methods to determine an appropriate allocation of the distribution center, warehouse and company.
Optimization of hot water transport and distribution networks by analytical method: OPTAL program
International Nuclear Information System (INIS)
Barreau, Alain; Caizergues, Robert; Moret-Bailly, Jean
1977-06-01
This report presents optimization studies of hot water transport and distribution networks by minimizing operating cost. Analytical optimization is used: Lagrange's method of undetermined multipliers. The optimum diameter of each pipe is calculated for minimum network operating cost. The characteristics of the computer program used for the calculations, OPTAL, are given in this report. An example network is calculated and described: 52 branches and 27 customers. Results are discussed [fr
Climate Data Analytic Services Application Programming Interface Distribution Package
Schnase, John L. (Inventor); Duffy, Daniel Q. (Inventor); Tamkin, Glenn S. (Inventor)
2016-01-01
A system, method and computer-readable storage devices for providing a climate data analytic services application programming interface distribution package. The example system can provide various components. The system provides a climate data analytic services application programming interface library that enables software applications running on a client device to invoke the capabilities of a climate data analytic service. The system provides a command-line interface that provides a means of interacting with a climate data analytic service by issuing commands directly to the system's server interface. The system provides sample programs that call on the capabilities of the application programming interface library and can be used as templates for the construction of new client applications. The system can also provide test utilities, build utilities, service integration utilities, and documentation.
Cumulative Poisson Distribution Program
Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert
1990-01-01
Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for chi-square distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.
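The calculation CUMPOIS performs can be sketched in a few lines of Python (our own minimal illustration, not the original C source; the function names are ours, and unlike CUMPOIS this sketch does not guard against underflow of exp(-lam) for very large rates):

```python
import math

def poisson_cdf(n, lam):
    """Cumulative Poisson probability P(X <= n) for rate lam.
    Each pmf term is built from the previous one, avoiding factorials."""
    term = math.exp(-lam)   # P(X = 0)
    total = term
    for k in range(1, n + 1):
        term *= lam / k     # P(X = k) from P(X = k - 1)
        total += term
    return total

def gamma_cdf_int_shape(x, k):
    """CDF of a gamma distribution with integer shape k (unit scale),
    via the identity P(G_k <= x) = 1 - P(Poisson(x) <= k - 1)."""
    return 1.0 - poisson_cdf(k - 1, x)
```

The second function illustrates the relationship the abstract mentions: for integer shape parameters, the gamma cdf (and hence the chi-square cdf with even degrees of freedom) reduces to a cumulative Poisson sum.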
NEWTPOIS- NEWTON POISSON DISTRIBUTION PROGRAM
Bowerman, P. N.
1994-01-01
The cumulative Poisson distribution program, NEWTPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714), can be used independently of one another. NEWTPOIS determines percentiles for gamma distributions with integer shape parameters and calculates percentiles for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. NEWTPOIS determines the Poisson parameter (lambda), that is, the mean (or expected) number of events occurring in a given unit of time, area, or space. Given that the user already knows the cumulative probability for a specific number of occurrences (n), it is usually a simple matter of substitution into the Poisson distribution summation to arrive at lambda. However, direct calculation of the Poisson parameter becomes difficult for small positive values of n and unmanageable for large values. NEWTPOIS uses Newton's iteration method to extract lambda from the initial value condition of the Poisson distribution where n=0, taking successive estimations until some user-specified error term (epsilon) is reached. The NEWTPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting epsilon, n, and the cumulative probability of the occurrence of n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 30K. NEWTPOIS was developed in 1988.
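The central iteration admits a compact sketch (a hypothetical Python rendering, not NEWTPOIS itself; names are ours). The derivative of the cumulative Poisson distribution with respect to lambda is minus the pmf at n, so Newton's step is simple, and the n = 0 case, where the cdf is exp(-lambda), supplies the initial estimate:

```python
import math

def poisson_cdf(n, lam):
    """P(X <= n) for a Poisson variate with mean lam."""
    term = math.exp(-lam)
    total = term
    for k in range(1, n + 1):
        term *= lam / k
        total += term
    return total

def poisson_pmf(n, lam):
    """P(X = n), computed in log space to avoid big factorials."""
    return math.exp(-lam + n * math.log(lam) - math.lgamma(n + 1))

def newton_lambda(n, p, eps=1e-10):
    """Solve poisson_cdf(n, lam) = p for lam by Newton's method,
    seeded from the n = 0 case where exp(-lam) = p."""
    lam = -math.log(p)
    for _ in range(100):
        f = poisson_cdf(n, lam) - p
        if abs(f) < eps:
            break
        # d/d(lam) of the Poisson cdf is -poisson_pmf(n, lam)
        lam += f / poisson_pmf(n, lam)
    return lam
```

For n = 0 the seed is already exact and the loop exits immediately; for larger n a handful of iterations suffice.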
R Programs for Truncated Distributions
Directory of Open Access Journals (Sweden)
Saralees Nadarajah
2006-08-01
Full Text Available Truncated distributions arise naturally in many practical situations. In this note, we provide programs for computing six quantities of interest (probability density function, mean, variance, cumulative distribution function, quantile function and random numbers) for any truncated distribution, whether it is left truncated, right truncated or doubly truncated. The programs are written in R, a freely downloadable statistical software package.
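The note's programs are in R, but the construction is generic; here is a hypothetical Python analogue (our own naming) that derives four of the six quantities for an arbitrary base distribution from its pdf, cdf and quantile function (mean and variance would additionally need numerical integration):

```python
import math
import random

def truncate(pdf, cdf, quantile, a=-math.inf, b=math.inf):
    """Build pdf, cdf, quantile and sampler of the distribution
    truncated to [a, b], given the untruncated functions."""
    Fa = cdf(a) if a > -math.inf else 0.0
    Fb = cdf(b) if b < math.inf else 1.0
    Z = Fb - Fa  # probability mass kept by the truncation
    t_pdf = lambda x: pdf(x) / Z if a <= x <= b else 0.0
    t_cdf = lambda x: 0.0 if x < a else (1.0 if x > b else (cdf(x) - Fa) / Z)
    t_quantile = lambda p: quantile(Fa + p * Z)
    t_sample = lambda: t_quantile(random.random())  # inverse-cdf sampling
    return t_pdf, t_cdf, t_quantile, t_sample

# Example: standard exponential, doubly truncated to [0, 1].
exp_pdf = lambda x: math.exp(-x)
exp_cdf = lambda x: 1.0 - math.exp(-x)
exp_q   = lambda p: -math.log(1.0 - p)
t_pdf, t_cdf, t_q, t_rng = truncate(exp_pdf, exp_cdf, exp_q, 0.0, 1.0)
```

Left or right truncation falls out of the same code by leaving one bound infinite.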
Newton/Poisson-Distribution Program
Bowerman, Paul N.; Scheuer, Ernest M.
1990-01-01
NEWTPOIS, one of two computer programs making calculations involving cumulative Poisson distributions. NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714) used independently of one another. NEWTPOIS determines Poisson parameter for given cumulative probability, from which one obtains percentiles for gamma distributions with integer shape parameters and percentiles for chi-square distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Program written in C.
Method of forecasting power distribution
International Nuclear Information System (INIS)
Kaneto, Kunikazu.
1981-01-01
Purpose: To obtain forecasting results of high accuracy by reflecting the signals from neutron detectors disposed in the reactor core in the forecasting results. Method: An on-line computer transfers to a simulator process data, such as coolant temperature and flow rate in each section, and various measurement signals, such as control rod positions, from the nuclear reactor. The simulator calculates the present power distribution before the control operation. The signals from the neutron detectors at each position in the reactor core are estimated from the power distribution, and errors are determined from the estimated and measured values to obtain a smooth error distribution in the axial direction. Then, input conditions at the time to be forecast are set by a data setter. The simulator calculates the forecast power distribution after the control operation based on the set conditions. The forecast power distribution is corrected using the error distribution. (Yoshino, Y.)
A methodology for developing distributed programs
Ramesh, S.; Mehndiratta, S.L.
1987-01-01
A methodology, different from the existing ones, for constructing distributed programs is presented. It is based on the well-known idea of developing distributed programs via synchronous and centralized programs. The distinguishing features of the methodology are: 1) specification include process
Distributed optimization system and method
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2003-06-10
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be one or more physical agents, such as robots, or software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
Nonlinear programming analysis and methods
Avriel, Mordecai
2012-01-01
This text provides an excellent bridge between principal theories and concepts and their practical implementation. Topics include convex programming, duality, generalized convexity, analysis of selected nonlinear programs, techniques for numerical solutions, and unconstrained optimization methods.
Programming the finite element method
Smith, I M; Margetts, L
2013-01-01
Many students, engineers, scientists and researchers have benefited from the practical, programming-oriented style of the previous editions of Programming the Finite Element Method, learning how to develop computer programs to solve specific engineering problems using the finite element method. This new fifth edition offers timely revisions that include programs and subroutine libraries fully updated to Fortran 2003, which are freely available online, and provides updated material on advances in parallel computing, thermal stress analysis, plasticity return algorithms, convection boundary c
Programming Languages for Distributed Computing Systems
Bal, H.E.; Steiner, J.G.; Tanenbaum, A.S.
1989-01-01
When distributed systems first appeared, they were programmed in traditional sequential languages, usually with the addition of a few library procedures for sending and receiving messages. As distributed applications became more commonplace and more sophisticated, this ad hoc approach became less
Programming a Distributed System Using Shared Objects
Tanenbaum, A.S.; Bal, H.E.; Kaashoek, M.F.
1993-01-01
Building the hardware for a high-performance distributed computer system is a lot easier than building its software. The authors describe a model for programming distributed systems based on abstract data types that can be replicated on all machines that need them. Read operations are done locally,
Methods for robustness programming
Olieman, N.J.
2008-01-01
Robustness of an object is defined as the probability that an object will have properties as required. Robustness Programming (RP) is a mathematical approach for Robustness estimation and Robustness optimisation. An example in the context of designing a food product, is finding the best composition
Calculation Methods for Wallenius’ Noncentral Hypergeometric Distribution
DEFF Research Database (Denmark)
Fog, Agner
2008-01-01
Two different probability distributions are both known in the literature as "the" noncentral hypergeometric distribution. Wallenius' noncentral hypergeometric distribution can be described by an urn model without replacement with bias. Fisher's noncentral hypergeometric distribution is the conditional distribution of independent binomial variates given their sum. No reliable calculation method for Wallenius' noncentral hypergeometric distribution has hitherto been described in the literature. Several new methods for calculating probabilities from Wallenius' noncentral hypergeometric distribution are derived. Range of applicability, numerical problems, and efficiency are discussed for each method. Approximations to the mean and variance are also discussed. This distribution has important applications in models of biased sampling and in models of evolutionary systems.
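The biased urn model itself is straightforward to simulate, which gives a Monte Carlo check on any exact calculation method (a sketch of our own, with our own naming; the paper's exact algorithms are not reproduced here):

```python
import random

def wallenius_sample(m1, m2, n, omega):
    """One draw from Wallenius' noncentral hypergeometric distribution,
    by direct simulation of the biased urn: m1 red and m2 white balls,
    red balls weighted by omega, and n balls taken one at a time
    without replacement. Returns the number of red balls drawn."""
    x = 0
    for _ in range(n):
        # The next ball is red with probability proportional to weight.
        p_red = m1 * omega / (m1 * omega + m2)
        if random.random() < p_red:
            m1 -= 1
            x += 1
        else:
            m2 -= 1
    return x
```

With omega = 1 this reduces to the ordinary (central) hypergeometric distribution; omega > 1 biases the draws toward red, which distinguishes Wallenius' sequential model from Fisher's conditional one.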
Inspection Methods in Programming.
1981-06-01
Counting is a specialization of Iterative-generation in which the generating function is Oneplus. Waters' second category of plan building method... is Oneplus and the initial input is 1. [Figure: Iterative Generation Plans] TemporalPlan counting: specialization: iterative-generation; roles: .action (a function), .tail (counting); constraints: .action.op = oneplus ∧ .action.input = 1. The Iterative-application
Distribution Locational Marginal Pricing Through Quadratic Programming for Congestion Management in Distribution Networks
DEFF Research Database (Denmark)
Huang, Shaojun; Wu, Qiuwei; Oren, Shmuel S.
2015-01-01
This paper presents the distribution locational marginal pricing (DLMP) method through quadratic programming (QP), designed to alleviate the congestion that might occur in a distribution network with high penetration of flexible demands. In the DLMP method, the distribution system operator (DSO) calculates dynamic tariffs and publishes them to the aggregators, who make the optimal energy plans for the flexible demands. The DLMP through QP, instead of the linear programming studied in the previous literature, solves the multiple-solution issue of the aggregator optimization which may cause...
Distributed Programming via Safe Closure Passing
Directory of Open Access Journals (Sweden)
Philipp Haller
2016-02-01
Full Text Available Programming systems incorporating aspects of functional programming, e.g., higher-order functions, are becoming increasingly popular for large-scale distributed programming. New frameworks such as Apache Spark leverage functional techniques to provide high-level, declarative APIs for in-memory data analytics, often outperforming traditional "big data" frameworks like Hadoop MapReduce. However, widely-used programming models remain rather ad-hoc; aspects such as implementation trade-offs, static typing, and semantics are not yet well-understood. We present a new asynchronous programming model that has at its core several principles facilitating functional processing of distributed data. The emphasis of our model is on simplicity, performance, and expressiveness. The primary means of communication is by passing functions (closures to distributed, immutable data. To ensure safe and efficient distribution of closures, our model leverages both syntactic and type-based restrictions. We report on a prototype implementation in Scala. Finally, we present preliminary experimental results evaluating the performance impact of a static, type-based optimization of serialization.
Programming model for distributed intelligent systems
Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.
1988-01-01
A programming model and architecture which was developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared memory based parallel computational models and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.
Separable programming theory and methods
Stefanov, Stefan M
2001-01-01
In this book, the author considers separable programming and, in particular, one of its important cases: convex separable programming. Some general results are presented; techniques for approximating the separable problem by linear programming and dynamic programming are considered. Convex separable programs subject to inequality/equality constraint(s) and bounds on variables are also studied, and iterative algorithms of polynomial complexity are proposed. As an application, these algorithms are used in the implementation of stochastic quasigradient methods for some separable stochastic programs. Numerical approximation with respect to the l1 and l∞ norms, as a convex separable nonsmooth unconstrained minimization problem, is considered as well. Audience: advanced undergraduate and graduate students, mathematical programming and operations research specialists.
Distribution method optimization : inventory flexibility
Asipko, D.
2010-01-01
This report presents the outcome of the Logistics Design Project carried out for Nike Inc. The project has two goals: to create a model for measuring the flexibility aspect of inventory usage in different Nike distribution channels, and to analyze opportunities for changing the decision model of splitting
Distribution network planning method considering distributed generation for peak cutting
International Nuclear Information System (INIS)
Ouyang Wu; Cheng Haozhong; Zhang Xiubin; Yao Liangzhong
2010-01-01
Conventional distribution planning method based on peak load brings about large investment, high risk and low utilization efficiency. A distribution network planning method considering distributed generation (DG) for peak cutting is proposed in this paper. The new integrated distribution network planning method with DG implementation aims to minimize the sum of feeder investments, DG investments, energy loss cost and the additional cost of DG for peak cutting. Using the solution techniques combining genetic algorithm (GA) with the heuristic approach, the proposed model determines the optimal planning scheme including the feeder network and the siting and sizing of DG. The strategy for the site and size of DG, which is based on the radial structure characteristics of distribution network, reduces the complexity degree of solving the optimization model and eases the computational burden substantially. Furthermore, the operation schedule of DG at the different load level is also provided.
Non-linear programming method in optimization of fast reactors
International Nuclear Information System (INIS)
Pavelesku, M.; Dumitresku, Kh.; Adam, S.
1975-01-01
Application of non-linear programming methods to the optimization of the nuclear material distribution in a fast reactor is discussed. The programming task is composed on the basis of the reactor calculation, which depends on the fuel distribution strategy. As an illustration of the method's application, the solution of a simple example is given. The non-linear program is solved using the numerical method SUMT. (I.T.)
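SUMT (the Sequential Unconstrained Minimization Technique) replaces the constrained problem by a sequence of unconstrained penalized problems. As a flavor of the idea, here is a minimal sketch of our own (not the report's code) of the exterior-point quadratic-penalty variant, with each subproblem solved by crude gradient descent:

```python
def sumt_exterior(f, g, x0, r0=1.0, grow=10.0, rounds=4,
                  step=1e-4, inner=20000):
    """Minimize f(x) subject to g(x) <= 0 (scalar x) by sequential
    unconstrained minimization: each round minimizes
    f(x) + r * max(0, g(x))**2, then increases the penalty weight r."""
    def penalized(x, r):
        return f(x) + r * max(0.0, g(x)) ** 2
    x, r = x0, r0
    for _ in range(rounds):
        for _ in range(inner):
            h = 1e-6  # central-difference gradient of the penalized function
            grad = (penalized(x + h, r) - penalized(x - h, r)) / (2 * h)
            x -= step * grad
        r *= grow
    return x

# Example: minimize (x - 3)^2 subject to x <= 2; the optimum is x = 2.
x_opt = sumt_exterior(lambda x: (x - 3.0) ** 2, lambda x: x - 2.0, 0.0)
```

As r grows, the minimizer of the penalized problem approaches the constrained optimum from the infeasible side; interior (barrier) variants of SUMT approach it from the feasible side instead.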
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
Energy Technology Data Exchange (ETDEWEB)
Hamadameen, Abdulqader Othman [Optimization, Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia); Zainuddin, Zaitul Marlizawati [Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia)
2014-06-19
This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and the two solution strategies are the fuzzy transformation via a ranking function and the stochastic transformation, where the α-cut technique and linguistic hedges are used in the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
Development of sample size allocation program using hypergeometric distribution
International Nuclear Information System (INIS)
Kim, Hyun Tae; Kwack, Eun Ho; Park, Wan Soo; Min, Kyung Soo; Park, Chan Sik
1996-01-01
The objective of this research is the development of a sample allocation program using the hypergeometric distribution with an object-oriented method. When the IAEA (International Atomic Energy Agency) performs inspection, it simply applies a standard binomial distribution, which describes sampling with replacement, instead of a hypergeometric distribution, which describes sampling without replacement, in sample allocation to up to three verification methods. The objective of the IAEA inspection is the timely detection of diversion of significant quantities of nuclear material; therefore game theory is applied to its sampling plan. It is necessary to use the hypergeometric distribution directly or an approximate distribution to secure statistical accuracy. The improved binomial approximation developed by Mr. J. L. Jaech and the correctly applied binomial approximation are closer to the hypergeometric distribution in sample size calculation than the simply applied binomial approximation of the IAEA. Object-oriented programs for 1. sample approximate-allocation with the correctly applied standard binomial approximation, 2. sample approximate-allocation with the improved binomial approximation, and 3. sample approximate-allocation with the hypergeometric distribution were developed with Visual C++, and corresponding programs were developed with EXCEL (using Visual Basic for Applications). 8 tabs., 15 refs. (Author)
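The gap between the two sampling models is easy to see numerically. The sketch below uses made-up inspection numbers of our own (not IAEA figures): the probability that a sample of n items from a population of N containing K defects detects nothing, computed exactly (hypergeometric, without replacement) and with the simple binomial approximation (with replacement):

```python
import math

def hypergeom_pmf(k, N, K, n):
    """P(X = k): k defects in a sample of n drawn without replacement
    from a population of N items containing K defects."""
    return (math.comb(K, k) * math.comb(N - K, n - k)) / math.comb(N, n)

def binom_pmf(k, n, p):
    """Binomial approximation, i.e. sampling *with* replacement."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Non-detection probability (zero defects in the sample):
N, K, n = 100, 5, 20          # illustrative numbers, not IAEA values
exact  = hypergeom_pmf(0, N, K, n)   # ~0.319
approx = binom_pmf(0, n, K / N)      # ~0.358
```

The binomial approximation overstates the non-detection probability here, which is the kind of discrepancy that motivates using the hypergeometric distribution (or a better approximation) directly.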
Modular programming method at JAERI
International Nuclear Information System (INIS)
Asai, Kiyoshi; Katsuragi, Satoru
1982-02-01
In this report the history, concepts and a method for the construction and maintenance of the nuclear code systems of the Japan Atomic Energy Research Institute (JAERI) are presented. The method mainly consists of novel computer features. The development process of the features and experiences with them, which required many man-months of effort by scientists and engineers of JAERI and a computer manufacturer, are also described. One of the features is a file handling program named datapool. The program is being used in code systems which are under development at JAERI. The others are computer features such as dynamic linking, reentrant coding of Fortran programs, an interactive programming facility, a document editor, a quick system output viewer and editor, a flexible man-machine interactive Fortran executor, and selective use of time-sharing or batch oriented computers in an interactive programming environment. In 1980 JAERI replaced its two old computer systems with three FACOM M-200 computer systems that have the features mentioned above. Since 1981 most code systems, or even big single codes, can be changed to modular code systems even if the developers or users of the systems do not recognize the fact that they are using modular code systems. The purpose of this report is to describe our methodology of modular programming from the aspect of computer features and some of their applications to nuclear codes, to gain sympathetic understanding of it from persons of organizations who are concerned with the effective use of computers, especially in nuclear research fields. (author)
Object oriented distributed programming: studies and proposals
International Nuclear Information System (INIS)
Guerraoui, Rachid
1992-01-01
This thesis contributes to the investigation of the object concept in distributed programming. This programming style has become a reality in the computer science world, since it allows increasing the availability of applications and decreasing their execution time. Nevertheless, designing a distributed application is a hard task: the various abstraction levels that must be considered hinder software reusability and maintenance, while errors and concurrent accesses are often sources of execution incoherence. The object concept improves software modularity and raises the computing abstraction level. Integrating distribution-related aspects into the object model brings up the issues of expressing concurrency and maintaining coherency. The investigation of these problems in this thesis has been guided by a major concern for the preservation of the intrinsic properties of object-orientation, and the orthogonality of the solutions given. The main contributions of the thesis are: (i) the classification, regarding modularity, of the different design alternatives for object-oriented concurrent languages; (ii) the evaluation of various transactional mechanisms in object-based concurrent languages, and the design of an atomic asynchronous communication protocol named ACS; (iii) the definition of a transaction-based object-oriented concurrent language called KAROS; (iv) the implementation of a modular framework which allows combining, in the same application, various concurrency control and error recovery mechanisms; (v) the identification of a formal property, named general atomicity, which constitutes a correctness criterion for atomic object specifications. (author) [fr
Integer programming of cement distribution by train
Indarsih
2018-01-01
A cement industry in Central Java distributes cement by train to meet daily demand in the Yogyakarta and Central Java areas. There are five destination stations. At each destination station, there is a warehouse to load cement. The decision makers of the cement industry plan to redesign the infrastructure and transportation system. The aim is to determine how many locomotives, train wagons, and containers are needed, and how to arrange train schedules subject to the delivery time. For this purpose, we consider an integer program to minimize the total operational cost. Further, we discuss a case study; the solution of the problem can be calculated with the LINGO software.
Secure Execution of Distributed Session Programs
Directory of Open Access Journals (Sweden)
Nuno Alves
2011-10-01
Full Text Available The development of the SJ Framework for session-based distributed programming is part of recent and ongoing research into integrating session types and practical, real-world programming languages. SJ programs featuring session types (protocols are statically checked by the SJ compiler to verify the key property of communication safety, meaning that parties engaged in a session only communicate messages, including higher-order communications via session delegation, that are compatible with the message types expected by the recipient. This paper presents current work on security aspects of the SJ Framework. Firstly, we discuss our implementation experience from improving the SJ Runtime platform with security measures to protect and augment communication safety at runtime. We implement a transport component for secure session execution that uses a modified TLS connection with authentication based on the Secure Remote Password (SRP protocol. The key technical point is the delicate treatment of secure session delegation to counter a previous vulnerability. We find that the modular design of the SJ Runtime, based on the notion of an Abstract Transport for session communication, supports rapid extension to utilise additional transports whilst separating this concern from the application-level session programming task. In the second part of this abstract, we formally prove the target security properties by modelling the extended SJ delegation protocols in the pi-calculus.
Generalized Analysis of a Distribution Separation Method
Directory of Open Access Journals (Sweden)
Peng Zhang
2016-04-01
Full Text Available Separating two probability distributions from a mixture model that is made up of the combinations of the two is essential to a wide range of applications. For example, in information retrieval (IR, there often exists a mixture distribution consisting of a relevance distribution that we need to estimate and an irrelevance distribution that we hope to get rid of. Recently, a distribution separation method (DSM was proposed to approximate the relevance distribution, by separating a seed irrelevance distribution from the mixture distribution. It was successfully applied to an IR task, namely pseudo-relevance feedback (PRF, where the query expansion model is often a mixture term distribution. Although initially developed in the context of IR, DSM is indeed a general mathematical formulation for probability distribution separation. Thus, it is important to further generalize its basic analysis and to explore its connections to other related methods. In this article, we first extend DSM’s theoretical analysis, which was originally based on the Pearson correlation coefficient, to entropy-related measures, including the KL-divergence (Kullback–Leibler divergence, the symmetrized KL-divergence and the JS-divergence (Jensen–Shannon divergence. Second, we investigate the distribution separation idea in a well-known method, namely the mixture model feedback (MMF approach. We prove that MMF also complies with the linear combination assumption, and then, DSM’s linear separation algorithm can largely simplify the EM algorithm in MMF. These theoretical analyses, as well as further empirical evaluation results demonstrate the advantages of our DSM approach.
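The core linear-separation step under the combination assumption can be sketched in a few lines (our own toy illustration with an invented two-word vocabulary, not the paper's implementation):

```python
def separate(mixture, seed, lam):
    """Recover the target distribution from
        mixture = lam * seed + (1 - lam) * target,
    the linear combination assumption behind DSM.
    Distributions are dicts over a shared vocabulary."""
    raw = {w: (mixture[w] - lam * seed.get(w, 0.0)) / (1.0 - lam)
           for w in mixture}
    clipped = {w: max(v, 0.0) for w, v in raw.items()}  # guard against noise
    z = sum(clipped.values())
    return {w: v / z for w, v in clipped.items()}

# Toy check: build a mixture, then separate the seed back out.
target = {"relevant": 0.7, "other": 0.3}
seed = {"relevant": 0.2, "other": 0.8}
lam = 0.4
mixture = {w: lam * seed[w] + (1 - lam) * target[w] for w in target}
recovered = separate(mixture, seed, lam)
```

In practice the mixture and the seed irrelevance distribution are estimated from data, so the clipping and renormalization steps matter; choosing the mixing weight lam is where the correlation- and divergence-based analyses discussed in the article come in.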
Computer program for source distribution process in radiation facility
International Nuclear Information System (INIS)
Al-Kassiri, H.; Abdul Ghani, B.
2007-08-01
A computer simulation of dose distribution has been implemented in Visual Basic according to the arrangement and activities of the Co-60 sources. The program provides the dose distribution in treated products depending on the product density and desired dose. The program is useful for optimization of the source distribution during the loading process. There is good agreement between the program's calculated data and experimental data. (Author)
Methods for Distributed Optimal Energy Management
DEFF Research Database (Denmark)
Brehm, Robert
The presented research deals with the fundamental underlying methods and concepts of how the growing number of distributed generation units based on renewable energy resources and distributed storage devices can be most efficiently integrated into the existing utility grid. In contrast to conventional centralised optimal energy flow management systems, focus is set herein on how optimal energy management can be achieved in a decentralised distributed architecture such as a multi-agent system. Distributed optimisation methods are introduced, targeting optimisation of energy flow in virtual... consumption of renewable energy resources in low voltage grids. It can be shown that this method prevents mutual discharging of batteries and prevents peak loads; a supervisory control instance can dictate the level of autarchy from the utility grid. Further it is shown that the problem of optimal energy flow management...
Reduction Method for Active Distribution Networks
DEFF Research Database (Denmark)
Raboni, Pietro; Chen, Zhe
2013-01-01
On-line security assessment is traditionally performed by Transmission System Operators at the transmission level, ignoring the effective response of distributed generators and small loads. On the other hand, the required computation time and amount of real-time data for also including distribution networks would be too large. In this paper an adaptive aggregation method for subsystems with power-electronic-interfaced generators and voltage-dependent loads is proposed. With this tool it may be relatively easier to include distribution networks in security assessment. The method is validated by comparing the results obtained in PSCAD® with the detailed network model and with the reduced one. Moreover, the control schemes of a wind turbine and a photovoltaic plant included in the detailed network model are described.
Distributed Pair Programming Using Collaboration Scripts: An Educational System and Initial Results
Tsompanoudi, Despina; Satratzemi, Maya; Xinogalos, Stelios
2015-01-01
Since pair programming appeared in the literature as an effective method of teaching computer programming, many systems were developed to cover the application of pair programming over distance. Today's systems serve personal, professional and educational purposes allowing distributed teams to work together on the same programming project. The…
A Program to Generate a Particle Distribution from Emittance Measurements
Bouma, DS; Lallement, JB
2010-01-01
We have written a program to generate a particle distribution based on emittance measurements in x-x’ and y-y’. The accuracy of this program has been tested using real and constructed emittance measurements. Based on these tests, the distribution generated by the program can be used to accurately simulate the beam in multi-particle tracking codes, as an alternative to a Gaussian or uniform distribution.
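The program itself is not reproduced in the abstract. As an illustrative sketch only, a Gaussian x-x' distribution can be generated from an rms emittance and Twiss parameters; the function name, parameter values, and the Cholesky construction below are standard accelerator-physics conventions assumed here, not details of the cited program:

```python
import math
import random

def sample_phase_space(eps, alpha, beta, n, seed=0):
    """Sample n (x, x') pairs from a Gaussian beam with rms emittance eps
    and Twiss parameters alpha, beta (standard covariance parameterization)."""
    gamma = (1.0 + alpha * alpha) / beta
    # Covariance matrix: [[eps*beta, -eps*alpha], [-eps*alpha, eps*gamma]]
    # Cholesky factor L so that L @ L.T reproduces that covariance matrix.
    l11 = math.sqrt(eps * beta)
    l21 = -eps * alpha / l11
    l22 = math.sqrt(eps * gamma - l21 * l21)
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u, v = rng.gauss(0, 1), rng.gauss(0, 1)
        out.append((l11 * u, l21 * u + l22 * v))
    return out

# Illustrative numbers only; real inputs would come from emittance measurements.
particles = sample_phase_space(eps=1e-6, alpha=-0.5, beta=2.0, n=10000)
```

Such a list of (x, x') pairs could then be fed to a multi-particle tracking code as an alternative to a built-in Gaussian or uniform distribution.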
Multipath interference test method for distributed amplifiers
Okada, Takahiro; Aida, Kazuo
2005-12-01
A method for testing distributed amplifiers is presented; the multipath interference (MPI) is detected as a beat spectrum between the multipath signal and the direct signal using a binary frequency-shift-keyed (FSK) test signal. The lightwave source is composed of a DFB-LD that is directly modulated by a pulse stream passing through an equalizer, and emits an FSK signal with a frequency deviation of about 430 MHz at a repetition rate of 80-100 kHz. The receiver consists of a photodiode and an electrical spectrum analyzer (ESA). The base-band power spectrum peak appearing at the FSK frequency deviation can be converted to an amount of MPI using a calibration chart. The test method has improved the minimum detectable MPI to as low as -70 dB, compared with -50 dB for the conventional test method. The detailed design and performance of the proposed method are discussed, including the MPI simulator for the calibration procedure, computer simulations evaluating the error caused by the FSK repetition rate and the length of the fiber under test, and experiments on single-mode fibers and a distributed Raman amplifier.
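For illustration, the conversion from a measured beat-note level to an MPI value can be sketched under the common small-signal two-path assumption that the beat amplitude is 2·sqrt(MPI) times the direct-signal amplitude. This simple formula stands in for the calibration chart mentioned above and is an assumption for the example, not the authors' procedure:

```python
import math

def mpi_db_from_beat(beat_amp_ratio):
    """Estimate multipath interference (in dB) from the measured ratio of the
    beat-note amplitude to the direct-signal level.  Assumes the small-signal
    two-path model where the beat amplitude is 2*sqrt(MPI) times the carrier."""
    mpi_linear = (beat_amp_ratio / 2.0) ** 2
    return 10.0 * math.log10(mpi_linear)

# An amplitude ratio of 0.02 corresponds to an MPI of about -40 dB.
mpi = mpi_db_from_beat(0.02)
```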
Distributed Memory Programming on Many-Cores
DEFF Research Database (Denmark)
Berthold, Jost; Dieterle, Mischa; Lobachev, Oleg
2009-01-01
Eden is a parallel extension of the lazy functional language Haskell providing dynamic process creation and automatic data exchange. As a Haskell extension, Eden takes a high-level approach to parallel programming and thereby simplifies parallel program development. The current implementation is ...
Calculation methods in program CCRMN
Energy Technology Data Exchange (ETDEWEB)
Chonghai, Cai [Nankai Univ., Tianjin (China). Dept. of Physics; Qingbiao, Shen [Chinese Nuclear Data Center, Beijing, BJ (China)
1996-06-01
CCRMN is a program for calculating complex reactions of a medium-heavy nucleus with six light particles. In CCRMN, the incoming particles can be neutrons, protons, ⁴He, deuterons, tritons and ³He. The CCRMN code is constructed within the framework of the optical model, pre-equilibrium statistical theory based on the exciton model, and the evaporation model. CCRMN is valid in the 1~ MeV energy region; it can give correct results for optical-model quantities and all kinds of reaction cross sections. This program has been applied in practical calculations and has given reasonable results.
Probability evolution method for exit location distribution
Zhu, Jinjie; Chen, Zhen; Liu, Xianbin
2018-03-01
The exit problem in the framework of large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit is characterized by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, and noise of finite strength may induce nontrivial phenomena such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes an exponentially long time as the noise approaches zero, with the majority of the time wasted on uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. This method can be used to calculate the exit location distribution. It is verified on two classical examples and compared with theoretical predictions. The results show that the method performs well for weak noise but may show certain deviations for large noise. Finally, some possible ways to improve the method are discussed.
A Rational Method for Ranking Engineering Programs.
Glower, Donald D.
1980-01-01
Compares two methods for ranking academic programs, the opinion poll v examination of career successes of the program's alumni. For the latter, "Who's Who in Engineering" and levels of research funding provided data. Tables display resulting data and compare rankings by the two methods for chemical engineering and civil engineering. (CS)
Neurolinguistics Programming: Method or Myth?
Gumm, W. B.; And Others
1982-01-01
The preferred modality by which 50 right-handed female college students encoded experience was assessed by recordings of conjugate eye movements, content analysis of the subject's verbal report, and the subject's self-report. Kappa analyses failed to reveal any agreement of the three assessment methods. (Author)
DUMA - a program to display distributions in hexagonal geometry
International Nuclear Information System (INIS)
Tran Quoc Dung; Makai, M.
1987-09-01
The DUMA program displays hexagonal structures, such as those used in WWER-440 reactors, and one or two distributions over them. It helps users display integer, literal or real arrays in an arbitrary hexagonal structure. Possible applications: displaying the reactor core layout, power distributions or activity measurements. (author)
Simulation of depth distribution of geological strata. HQSW program
International Nuclear Information System (INIS)
Czubek, J.A.; Kolakowski, L.
1987-01-01
A method for simulating a layered geological formation for a given geological parameter is presented. The formation contains at least two types of layers and is given with depth resolution Δh, corresponding to the thickness of a hypothetical elementary layer. Two types of geostatistical distributions of the rock parameters are considered: modified normal and modified lognormal, for which the input data are the expected value and the variance. The HQSW simulation program given in the paper generates in a random way (but in a given, repeatable sequence) the thicknesses of each type of strata, their average specific radioactivity, and the variance of the specific radioactivity within a given layer. 8 refs., 14 figs., 1 tab. (author)
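A minimal sketch of this kind of layered-column simulation can use plain lognormal distributions parameterized by expected value and variance; the "modified" normal and lognormal forms used by HQSW are not specified in the abstract, so all names and parameter values here are illustrative assumptions:

```python
import math
import random

def simulate_strata(n_layers, mean_thick, var_thick, mean_act, var_act, seed=1):
    """Generate a layered column: each layer gets a lognormal thickness and a
    lognormal mean specific radioactivity, both parameterized by expected
    value and variance (plain lognormal here, not HQSW's modified variants)."""
    def lognorm_params(mean, var):
        # Convert (mean, variance) of the lognormal variable to the
        # (mu, sigma) of the underlying normal distribution.
        sigma2 = math.log(1.0 + var / mean**2)
        return math.log(mean) - 0.5 * sigma2, math.sqrt(sigma2)

    rng = random.Random(seed)
    mu_t, sd_t = lognorm_params(mean_thick, var_thick)
    mu_a, sd_a = lognorm_params(mean_act, var_act)
    layers = []
    for i in range(n_layers):
        layers.append({
            "type": i % 2,  # alternate between the two layer types
            "thickness": rng.lognormvariate(mu_t, sd_t),
            "activity": rng.lognormvariate(mu_a, sd_a),
        })
    return layers

column = simulate_strata(100, mean_thick=2.0, var_thick=0.5, mean_act=10.0, var_act=4.0)
```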
Risky Group Decision-Making Method for Distribution Grid Planning
Li, Cunbin; Yuan, Jiahang; Qi, Zhiqiang
2015-12-01
With the rapid growth of electricity consumption and of renewable energy, distribution grid planning has attracted increasing research attention. To address the drawbacks of existing research, this paper proposes a new risky group decision-making method for distribution grid planning. First, a mixed index system with qualitative and quantitative indices is built. Considering the fuzziness of linguistic evaluation, the cloud model is chosen to realize the "qualitative to quantitative" transformation, and interval-number decision matrices are constructed according to the "3En" principle. An m-dimensional interval-number decision vector is regarded as a super-cuboid in the m-dimensional attribute space, and a two-level orthogonal experiment is used to arrange points within it uniformly and dispersedly. The number of points is determined by the test number of the two-level orthogonal array, and these points form a distribution-point set representing the decision alternative. To eliminate the influence of correlation among indices, the Mahalanobis distance is used to calculate the distance from each solution to the others, so that the dynamic solutions serve as the reference. Second, because the decision-maker's attitude can affect the results, this paper defines a prospect value function based on the signal-to-noise ratio from the Mahalanobis-Taguchi system and obtains the comprehensive prospect value and ranking of each alternative. Finally, the validity and reliability of the method are illustrated by examples, which show that it is more valuable than and superior to the other methods.
Photovoltaic subsystem marketing and distribution model: programming manual. Final report
Energy Technology Data Exchange (ETDEWEB)
1982-07-01
Complete documentation of the marketing and distribution (M and D) computer model is provided. The purpose is to estimate the costs of selling and transporting photovoltaic solar energy products from the manufacturer to the final customer. The model adjusts for the inflation and regional differences in marketing and distribution costs. The model consists of three major components: the marketing submodel, the distribution submodel, and the financial submodel. The computer program is explained including the input requirements, output reports, subprograms and operating environment. The program specifications discuss maintaining the validity of the data and potential improvements. An example for a photovoltaic concentrator collector demonstrates the application of the model.
DEFF Research Database (Denmark)
Liu, Zhaoxi; Wu, Qiuwei; Oren, Shmuel S.
2017-01-01
This paper presents a distribution locational marginal pricing (DLMP) method through chance constrained mixed-integer programming designed to alleviate the possible congestion in the future distribution network with high penetration of electric vehicles (EVs). In order to represent the stochastic...
A new method for assessing judgmental distributions
Moors, J.J.A.; Schuld, M.H.; Mathijssen, A.C.A.
1995-01-01
For a number of statistical applications, subjective estimates of some distributional parameters - or even complete densities - are needed. The literature agrees that it is wise to ask only for some quantiles of the distribution; from these, the desired quantities are extracted. Quite a lot
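As a small worked example of extracting a density from elicited quantiles, two quantiles suffice to pin down a normal distribution. The function below illustrates the general idea only; it is not the specific extraction method of the paper:

```python
from statistics import NormalDist

def normal_from_quantiles(p1, q1, p2, q2):
    """Recover the mean and standard deviation of a normal distribution from
    two elicited quantiles (p1, q1) and (p2, q2): solve q = mu + sigma*z(p)
    for the two unknowns mu and sigma."""
    z1 = NormalDist().inv_cdf(p1)
    z2 = NormalDist().inv_cdf(p2)
    sigma = (q2 - q1) / (z2 - z1)
    mu = q1 - sigma * z1
    return mu, sigma

# An expert judges the lower and upper quartiles of a quantity to be 90 and 110.
mu, sigma = normal_from_quantiles(0.25, 90.0, 0.75, 110.0)
```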
Simple Calculation Programs for Biology Immunological Methods
Indian Academy of Sciences (India)
Simple Calculation Programs for Biology Immunological Methods. Computation of Ab/Ag concentration from ELISA data. Graphical method; Raghava et al., 1992, J. Immunol. Methods 153: 263. Determination of affinity of monoclonal antibody. Using non-competitive ...
DEFF Research Database (Denmark)
Huang, Shaojun; Wu, Qiuwei; Liu, Zhaoxi
2014-01-01
This paper reviews the existing congestion management methods for distribution networks with high penetration of DERs documented in the recent research literature. The congestion management methods reviewed can be grouped into two categories: market methods and direct control methods. The market methods consist of dynamic tariff, distribution capacity market, shadow price and flexible service market. The direct control methods comprise network reconfiguration, reactive power control and active power control. Based on the review of the existing methods...
International Nuclear Information System (INIS)
Wasastjerna, F.; Lux, I.
1980-03-01
A transmission probability method implemented in the program TPHEX is described. This program was developed for the calculation of neutron flux distributions in hexagonal light water reactor fuel assemblies. The accuracy appears to be superior to diffusion theory, and the computation time is shorter than that of the collision probability method. (author)
School Wellness Programs: Magnitude and Distribution in New York City Public Schools
Stiefel, Leanna; Elbel, Brian; Pflugh Prescott, Melissa; Aneja, Siddhartha; Schwartz, Amy E.
2017-01-01
Background: Public schools provide students with opportunities to participate in many discretionary, unmandated wellness programs. Little is known about the number of these programs, their distribution across schools, and the kinds of students served. We provide evidence on these questions for New York City (NYC) public schools. Methods: Data on…
Method for programming a flash memory
Energy Technology Data Exchange (ETDEWEB)
Brosky, Alexander R.; Locke, William N.; Maher, Conrado M.
2016-08-23
A method of programming a flash memory is described. The method includes partitioning a flash memory into a first group having a first level of write-protection, a second group having a second level of write-protection, and a third group having a third level of write-protection. The write-protection of the second and third groups is disabled using an installation adapter. The third group is programmed using a Software Installation Device.
GENGTC-JB: a computer program to calculate temperature distribution for cylindrical geometry capsule
International Nuclear Information System (INIS)
Someya, Hiroyuki; Kobayashi, Toshiki; Niimi, Motoji; Hoshiya, Taiji; Harayama, Yasuo
1987-09-01
In the design of JMTR irradiation capsules containing specimens, a program (named GENGTC) has generally been used to evaluate temperature distributions in the capsules. The program was originally written at ORNL (U.S.A.) and consists of very simple calculation methods, which make it easy to use and applicable to many capsule designs. However, when the program was reviewed against the abilities of recent computers, it was considered desirable to replace the original computing methods with advanced ones and to simplify the complicated data input. Therefore, the program was upgraded with the aim of improving both the calculations and the input method. The present report describes the revised calculation methods and the input/output guide of the upgraded program. (author)
Discount method for programming language evaluation
DEFF Research Database (Denmark)
Kurtev, Svetomir; Christensen, Tommy Aagaard; Thomsen, Bent
2016-01-01
This paper presents work in progress on developing a Discount Method for Programming Language Evaluation, inspired by the Discount Usability Evaluation method (Benyon 2010) and the Instant Data Analysis method (Kjeldskov et al. 2004). The method is intended to bridge the gap between small-scale internal language design evaluation methods and large-scale surveys and quantitative evaluation methods, and is designed to be applicable even before a compiler or IDE is developed for a new language. To test the method, a usability evaluation experiment was carried out on the Quorum programming language (Stefik et al. 2016) using programmers with experience in C and C#. When comparing our results with previous studies of Quorum, most of the data was comparable though not strictly in agreement. However, the discrepancies were mainly related to the programmers' pre-existing expectations...
Storage and distribution/Linear programming for storage operations
Energy Technology Data Exchange (ETDEWEB)
Coleman, D
1978-07-15
The techniques of linear programming for solving storage problems, as applied to a tank farm tied in with a refinery throughput operation, include: (1) the time-phased model, which works on storage and refinery operations input parameters (e.g., production, distribution, cracking) and can represent product stockpiling in slack periods to meet future peak demands, as well as investigate alternative strategies such as exchange deals and the purchase or leasing of additional storage; and (2) the Monte Carlo simulation method, which takes input parameters (e.g., arrival of crude products at the refinery, tankage size, likely demand for products) as probability distributions rather than single values, and can show the average utilization of facilities, potential bottlenecks, and the investment required to achieve an increase in utilization, enabling the user to predict the total investment, cash flow, and profit emanating from the original financing decision. The increasing use of computer techniques to solve refinery and storage problems is attributed to the potential savings from more effective planning, reduced computer costs, ease of access, and more usable software. Diagrams.
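The Monte Carlo approach described in point (2) can be sketched in a few lines. The daily time step, the normal input distributions truncated at zero, and all parameter names below are illustrative assumptions, not details from the article:

```python
import random

def simulate_tank(days, capacity, mean_in, sd_in, mean_out, sd_out, seed=7):
    """Monte Carlo sketch of a storage tank: daily receipts and offtakes are
    drawn from normal distributions (truncated at zero).  Returns the average
    utilization fraction and the number of days the tank overflowed or ran dry,
    which hints at bottlenecks and the value of extra tankage."""
    rng = random.Random(seed)
    level, overflow_days, dry_days, total = capacity / 2.0, 0, 0, 0.0
    for _ in range(days):
        receipts = max(0.0, rng.gauss(mean_in, sd_in))
        demand = max(0.0, rng.gauss(mean_out, sd_out))
        level += receipts - demand
        if level > capacity:          # tank full: excess is turned away
            level, overflow_days = capacity, overflow_days + 1
        elif level < 0.0:             # tank empty: demand goes unmet
            level, dry_days = 0.0, dry_days + 1
        total += level
    return total / (days * capacity), overflow_days, dry_days

util, over, dry = simulate_tank(365, capacity=1000.0,
                                mean_in=50.0, sd_in=15.0,
                                mean_out=50.0, sd_out=20.0)
```

Re-running with different tank capacities would show how utilization and shortage days trade off against storage investment.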
Methods for evaluation of industry training programs
International Nuclear Information System (INIS)
Morisseau, D.S.; Roe, M.L.; Persensky, J.J.
1987-01-01
The NRC Policy Statement on Training and Qualification endorses the INPO-managed Training Accreditation Program in that it encompasses the elements of effective performance-based training: analysis of the job, performance-based learning objectives, training design and implementation, trainee evaluation, and program evaluation. As part of the NRC's independent evaluation of utilities' implementation of training improvement programs, the staff developed training review criteria and procedures that address all five elements of effective performance-based training. The staff uses these criteria to review utility training programs that have already received accreditation. Although no performance-based training program can be said to be complete unless all five elements are in place, the last two, trainee and program evaluation, are perhaps the most important, because they determine how well the first three elements have been implemented and ensure the dynamic nature of training. This paper discusses the evaluation elements of the NRC training review criteria. The discussion details the evaluation methods and techniques that the staff expects to find as integral parts of performance-based training programs at accredited utilities. Further, the review of the effectiveness of implementation of the evaluation methods is discussed. The paper also addresses some of the qualitative differences between what is minimally acceptable and what is most desirable with respect to trainee and program evaluation mechanisms and their implementation.
Simple Calculation Programs for Biology Other Methods
Indian Academy of Sciences (India)
Simple Calculation Programs for Biology Other Methods. Hemolytic potency of drugs. Raghava et al., (1994) Biotechniques 17: 1148. FPMAP: methods for classification and identification of microorganisms by 16S rRNA. Graphical display of restriction and fragment map of ...
Sampling Methods for Wallenius' and Fisher's Noncentral Hypergeometric Distributions
DEFF Research Database (Denmark)
Fog, Agner
2008-01-01
Several methods for generating variates with univariate and multivariate Wallenius' and Fisher's noncentral hypergeometric distributions are developed. Methods for the univariate distributions include: simulation of urn experiments, inversion by binary search, inversion by chop-down search from the mode, ratio-of-uniforms rejection method, and rejection by sampling in the tau domain. Methods for the multivariate distributions include: simulation of urn experiments, conditional method, Gibbs sampling, and Metropolis-Hastings sampling. These methods are useful for Monte Carlo simulation of models of biased sampling and models of evolution, and for calculating moments and quantiles of the distributions.
International Review of Standards and Labeling Programs for Distribution Transformers
Energy Technology Data Exchange (ETDEWEB)
Letschert, Virginie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Scholand, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Carreño, Ana María [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hernandez, Carolina [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2017-06-20
Transmission and distribution (T&D) losses in electricity networks represent 8.5% of final energy consumption in the world. In Latin America, T&D losses range between 6% and 20% of final energy consumption, and represent 7% in Chile. Because approximately one-third of T&D losses take place in distribution transformers alone, there is significant potential to save energy and reduce costs and carbon emissions through policy intervention to increase distribution transformer efficiency. A large number of economies around the world have recognized the significant impact of addressing distribution losses and have implemented policies to support market transformation towards more efficient distribution transformers. As a result, there is considerable international experience to be shared and leveraged to inform countries interested in reducing distribution losses through policy intervention. The report builds upon past international studies of standards and labeling (S&L) programs for distribution transformers to present the current energy efficiency programs for distribution transformers around the world.
DOE-EPRI distributed wind Turbine Verification Program (TVP III)
Energy Technology Data Exchange (ETDEWEB)
McGowin, C.; DeMeo, E. [Electric Power Research Institute, Palo Alto, CA (United States); Calvert, S. [Dept. of Energy, Washington, DC (United States)] [and others
1997-12-31
In 1992, the Electric Power Research Institute (EPRI) and the U.S. Department of Energy (DOE) initiated the Utility Wind Turbine Verification Program (TVP). The goal of the program is to evaluate prototype advanced wind turbines at several sites developed by U.S. electric utility companies. Two six MW wind projects have been installed under the TVP program by Central and South West Services in Fort Davis, Texas and Green Mountain Power Corporation in Searsburg, Vermont. In early 1997, DOE and EPRI selected five more utility projects to evaluate distributed wind generation using smaller "clusters" of wind turbines connected directly to the electricity distribution system. This paper presents an overview of the objectives, scope, and status of the EPRI-DOE TVP program and the existing and planned TVP projects.
Dedicated Programming Language for Small Distributed Control Devices
DEFF Research Database (Denmark)
Madsen, Per Printz; Borch, Ole
2007-01-01
… become a reality if each of these controlling computers can be configured to perform a cooperative task. This again requires the necessary communication facilities; in other words, all these simple and distributed computers must be programmable in a simple and hardware-independent way. This paper describes a new, flexible and simple language for programming distributed control tasks. The compiler for this language generates target code that is very easy to interpret, and an interpreter that can easily be ported to different hardware is described. The new language is simple and easy to learn…
Electrical Distribution System Functional Inspection (EDSFI) data base program
International Nuclear Information System (INIS)
Gautam, A.
1993-01-01
This document describes the organization, installation procedures, and operating instructions for the database computer program containing inspection findings from the US Nuclear Regulatory Commission's (NRC's) Electrical Distribution System Functional Inspections (EDSFIs). The program enables the user to search and sort findings, ascertain trends, and obtain printed reports of the findings. The findings include observations, unresolved issues, or possible deficiencies in the design and implementation of electrical distribution systems in nuclear plants. This database will assist those preparing for electrical inspections, searching for deficiencies in a plant, and determining the corrective actions previously taken for similar deficiencies. It will be updated as new EDSFIs are completed.
Distribution of the product confidence limits for the indirect effect: Program PRODCLIN
MacKinnon, David P.; Fritz, Matthew S.; Williams, Jason; Lockwood, Chondra M.
2010-01-01
This article describes a program, PRODCLIN (distribution of the PRODuct Confidence Limits for INdirect effects), written for SAS, SPSS, and R, that computes confidence limits for the product of two normal random variables. The program is important because it can be used to obtain more accurate confidence limits for the indirect effect, as demonstrated in several recent articles (MacKinnon, Lockwood, & Williams, 2004; Pituch, Whittaker, & Stapleton, 2005). Tests of the significance of and confidence limits for indirect effects based on the distribution of the product method have more accurate Type I error rates and more power than other, more commonly used tests. Values for the two paths involved in the indirect effect and their standard errors are entered in the PRODCLIN program, and distribution of the product confidence limits are computed. Several examples are used to illustrate the PRODCLIN program. The PRODCLIN programs in rich text format may be downloaded from www.psychonomic.org/archive. PMID:17958149
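PRODCLIN itself computes the confidence limits from the analytic distribution of the product. For illustration, a Monte Carlo approximation of the same limits can be sketched in a few lines; the sample size, seed, and example coefficients below are arbitrary choices, not values from the article:

```python
import random

def product_ci(a, se_a, b, se_b, alpha=0.05, n=200000, seed=3):
    """Monte Carlo approximation of confidence limits for the product a*b of
    two normally distributed path coefficients (the indirect effect): sample
    both coefficients, multiply, and take empirical quantiles of the products."""
    rng = random.Random(seed)
    products = sorted(rng.gauss(a, se_a) * rng.gauss(b, se_b) for _ in range(n))
    lo = products[int(alpha / 2 * n)]
    hi = products[int((1 - alpha / 2) * n) - 1]
    return lo, hi

# Hypothetical path coefficients a = 0.5 (SE 0.1) and b = 0.4 (SE 0.1).
lo, hi = product_ci(0.5, 0.1, 0.4, 0.1)
```

Because the distribution of a product of normals is skewed, these limits are asymmetric around a·b, which is exactly why product-based limits outperform the symmetric normal-theory interval.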
Sparsity Prevention Pivoting Method for Linear Programming
DEFF Research Database (Denmark)
Li, Peiqiang; Li, Qiyuan; Li, Canbing
2018-01-01
When the simplex algorithm is used to solve a linear programming problem whose matrix is sparse, many zero-length calculation steps may occur, and an iterative cycle may even appear. To deal with this problem, a new pivoting method is proposed in this paper. The principle of this method is to avoid choosing a row whose element in the b vector is zero as the row of the pivot element, which keeps the matrix in the linear program dense and ensures that most subsequent steps improve the value of the objective function. A step following this principle is inserted into the existing linear programming algorithm to reselect the pivot element. Both the conditions for inserting this step and the maximum number of allowed insertion steps are determined. In the case study, taking several linear programming problems as examples, the results...
Distributed optimization for systems design : an augmented Lagrangian coordination method
Tosserams, S.
2008-01-01
This thesis presents a coordination method for the distributed design optimization of engineering systems. The design of advanced engineering systems such as aircrafts, automated distribution centers, and microelectromechanical systems (MEMS) involves multiple components that together realize the
A method for statistically comparing spatial distribution maps
Directory of Open Access Journals (Sweden)
Reynolds Mary G
2009-01-01
Full Text Available Abstract Background Ecological niche modeling is a method for estimating species distributions based on certain ecological parameters. Thus far, empirical determination of significant differences between independently generated distribution maps for a single species (maps created through equivalent processes but with different ecological input parameters) has been challenging. Results We describe a method for comparing model outcomes which allows a statistical evaluation of whether the strength of prediction and breadth of predicted areas is measurably different between projected distributions. To create ecological niche models for statistical comparison, we utilized GARP (Genetic Algorithm for Rule-Set Production) software to generate ecological niche models of human monkeypox in Africa. We created several models, keeping the case location input records constant but varying the ecological input data. To assess the relative importance of each ecological parameter included in the development of the individual predicted distributions, we performed pixel-to-pixel comparisons between model outcomes and calculated the mean difference in pixel scores. We used a two-sample Student's t-test (assuming as the null hypothesis that both maps were identical to each other regardless of which input parameters were used) to examine whether the mean difference in corresponding pixel scores from one map to another was greater than would be expected by chance alone. We also utilized weighted kappa statistics, frequency distributions, and percent difference to look at the disparities in pixel scores. Multiple independent statistical tests indicated precipitation as the single most important independent ecological parameter in the niche model for human monkeypox disease. Conclusion In addition to improving our understanding of the natural factors influencing the distribution of human monkeypox disease, such pixel-to-pixel comparison
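The pixel-to-pixel comparison can be sketched as a paired t-test on corresponding pixel scores. The function below is a generic illustration of that statistic (the four-pixel map values are invented), not the GARP-specific workflow:

```python
import math
import statistics

def paired_pixel_ttest(map_a, map_b):
    """Paired t-test on corresponding pixel scores of two model output maps
    (flattened to equal-length lists).  Returns the mean pixel-score
    difference and the t statistic; degrees of freedom are n - 1."""
    diffs = [a - b for a, b in zip(map_a, map_b)]
    n = len(diffs)
    mean_d = statistics.fmean(diffs)
    sd_d = statistics.stdev(diffs)          # sample standard deviation
    t = mean_d / (sd_d / math.sqrt(n))      # t = mean / standard error
    return mean_d, t

# Tiny invented example: two 4-pixel prediction maps.
mean_d, t = paired_pixel_ttest([0.9, 0.8, 0.7, 0.95], [0.6, 0.5, 0.4, 0.66])
```

In practice the maps would be flattened rasters with thousands of pixels, and the t statistic would be compared against the appropriate critical value for n - 1 degrees of freedom.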
Method of estimating the reactor power distribution
International Nuclear Information System (INIS)
Mitsuta, Toru; Fukuzaki, Takaharu; Doi, Kazuyori; Kiguchi, Takashi.
1984-01-01
Purpose: To improve the calculation accuracy of the power distribution and thereby improve the reliability of power distribution monitoring. Constitution: In detector strings disposed within the reactor core, movable neutron flux monitors are provided in addition to the conventionally installed fixed-position neutron monitors. Upon periodic monitoring, a power distribution X1 is calculated from a physical reactor core model. Then a higher-power position X2 is detected by the position detectors, and the value X2 is sent to a neutron flux monitor driving device, which moves the movable monitors to a higher-power position in each of the strings. After the movement, the value X1 is corrected by a correcting device using the measured values from the movable and fixed monitors, and the corrected value is sent to a reactor core monitoring device. Upon failure of the fixed monitors, their position is sent to the monitor driving device and the movable monitors are moved to that position for measurement. (Sekiya, K.)
DEFF Research Database (Denmark)
Chen, Shuheng; Hu, Weihao; Su, Chi
2015-01-01
A new and efficient methodology for optimal reactive power and voltage control of distribution networks with distributed generators, based on fuzzy adaptive hybrid PSO (FAHPSO), is proposed. The objective is to minimize the comprehensive cost, consisting of power loss and the operation cost of transformers… The algorithm is implemented in the VC++ 6.0 programming language, and the corresponding numerical experiments are carried out on the modified version of the IEEE 33-node distribution system with two newly installed distributed generators and eight newly installed capacitor banks. The numerical results prove that the proposed method can find a more promising control schedule of all transformers, all capacitors and all distributed generators with less time consumption, compared with the other listed artificial intelligence methods.
Epistemic logic and explicit knowledge in distributed programming
Witzel, A.; Zvesper, J.A.; Padgham, L.; Parkes, D.; Müller, J.; Parsons, S.
2008-01-01
In this paper we propose an explicit form of knowledge-based programming. Our initial motivation is the distributed implementation of game-theoretical algorithms, but we abstract away from the game-theoretical details and describe a general scenario, where a group of agents each have some initially
A Game-Theoretic Model for Distributed Programming by Contract
DEFF Research Database (Denmark)
Henriksen, Anders Starcke; Hvitved, Tom; Filinski, Andrzej
2009-01-01
We present an extension of the programming-by-contract (PBC) paradigm to a concurrent and distributed environment. Classical PBC is characterized by absolute conformance of code to its specification, assigning blame in case of failures, and a hierarchical, cooperative decomposition model – none...
Data synthesis and display programs for wave distribution function analysis
Storey, L. R. O.; Yeh, K. J.
1992-01-01
At the National Space Science Data Center (NSSDC) software was written to synthesize and display artificial data for use in developing the methodology of wave distribution analysis. The software comprises two separate interactive programs, one for data synthesis and the other for data display.
Yang, Shan; Tong, Xiangqian
2016-01-01
Power flow calculation and short circuit calculation are the basis of theoretical research for distribution network with inverter based distributed generation. The similarity of equivalent model for inverter based distributed generation during normal and fault conditions of distribution network and the differences between power flow and short circuit calculation are analyzed in this paper. Then an integrated power flow and short circuit calculation method for distribution network with inverte...
Distributed gas detection system and method
Challener, William Albert; Palit, Sabarni; Karp, Jason Harris; Kasten, Ansas Matthias; Choudhury, Niloy
2017-11-21
A distributed gas detection system includes one or more hollow core fibers disposed in different locations, one or more solid core fibers optically coupled with the one or more hollow core fibers and configured to receive light of one or more wavelengths from a light source, and an interrogator device configured to receive at least some of the light propagating through the one or more solid core fibers and the one or more hollow core fibers. The interrogator device is configured to identify a location of a presence of a gas-of-interest by examining absorption of at least one of the wavelengths of the light in at least one of the hollow core fibers.
Extending the alias Monte Carlo sampling method to general distributions
International Nuclear Information System (INIS)
Edwards, A.L.; Rathkopf, J.A.; Smidt, R.K.
1991-01-01
The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equally probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 12 figs., 2 tabs
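The discrete core of the alias method can be sketched compactly. The following is a minimal pure-Python version of the standard (Vose) alias-table construction with O(1) sampling; it covers only the discrete case, not the paper's extension to piecewise-linear continuous distributions:

```python
import random

def build_alias(probs):
    """Build alias tables for an arbitrary discrete distribution (Vose's method)."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]       # donate mass to fill bin s
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                # leftovers are numerically 1.0
        prob[i] = 1.0
    return prob, alias

def sample(prob, alias, rng):
    """One uniform bin choice plus one comparison per sample."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]

rng = random.Random(0)
probs = [0.1, 0.2, 0.3, 0.4]
prob, alias = build_alias(probs)
counts = [0] * len(probs)
for _ in range(100_000):
    counts[sample(prob, alias, rng)] += 1
# empirical frequencies approach [0.1, 0.2, 0.3, 0.4]
```

Construction is O(n) once, after which every sample costs a single table lookup and comparison, which is the speed advantage the abstract refers to.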
Programming in a proposed 9X distributed Ada
Waldrop, Raymond S.; Volz, Richard A.; Goldsack, Stephen J.; Holzbach-Valero, A. A.
1991-01-01
The studies of the proposed Ada 9X constructs for distribution, now referred to as AdaPT, are reported. The goals for this period were to revise the chosen example scenario and to begin studying how the proposed constructs might be implemented. The example scenario chosen is the Submarine Combat Information Center (CIC) developed by IBM for the Navy. The specification provided by IBM was preliminary and had several deficiencies. To address these problems, some changes to the scenario specification were made. The more important changes include: (1) addition of a system database management function; (2) addition of a fourth processing unit to the standard resources; (3) addition of an operator console interface function; and (4) removal of the time synchronization function. The strategy decided upon to implement the CIC scenario in AdaPT was based on publics, partitions, and nodes. The principal purpose of implementing the CIC scenario was to demonstrate how the AdaPT constructs interact with the program structure. While considering ways that the AdaPT constructs might be translated to Ada 83, it was observed that the partition construct could reasonably be modeled as an abstract data type. Although this gives a useful method of modeling partitions, it does not address the configuration aspects of the node construct.
Resident and program director gender distribution by specialty.
Long, Timothy R; Elliott, Beth A; Warner, Mary Ellen; Brown, Michael J; Rose, Steven H
2011-12-01
Although enrollment of women in U.S. medical schools has increased, women remain less likely to achieve senior academic rank, lead academic departments, or be appointed to national leadership positions. The purpose of this paper is to compare the gender distribution of residency program directors (PDs) with residents and faculty in the 10 largest specialties. The gender distribution of residents training in the 10 specialties with the largest enrollment was obtained from the annual education issue of the Journal of the American Medical Association. The gender distribution of the residents was compared with the gender distribution of PDs and medical school faculty. The number of programs and the names of the PDs were identified by accessing the Accreditation Council for Graduate Medical Education web site. Gender was confirmed through electronic search of state medical board data, program web sites, or by using internet search engines. The gender distribution of medical school faculty was determined using the Association of American Medical Colleges faculty roster database (accessed June 15, 2011). The correlation between female residents and PDs was assessed using Pearson's product-moment correlation. The gender distribution of female PDs appointed June 1, 2006, through June 1, 2010, was compared with the distribution appointed before June 1, 2006, using chi square analysis. Specialties with higher percentages of female PDs had a higher percentage of female residents enrolled (r=0.81, p=0.005). The number of female PDs appointed from July 1, 2006, through June 30, 2010, was greater than the number appointed before July 1, 2006, in emergency medicine (p... Women remain underrepresented in PD appointments relative to the proportion of female medical school faculty and female residents. Mechanisms to address gender-based barriers to advancement should be considered.
Theoretical method for determining particle distribution functions of classical systems
International Nuclear Information System (INIS)
Johnson, E.
1980-01-01
An equation which involves the triplet distribution function and the three-particle direct correlation function is obtained. This equation was derived using an analogue of the Ornstein--Zernike equation. The new equation is used to develop a variational method for obtaining the triplet distribution function of uniform one-component atomic fluids from the pair distribution function. The variational method may be used with the first and second equations in the YBG hierarchy to obtain pair and triplet distribution functions. It should be easy to generalize the results to the n-particle distribution function
Identification of reactor failure states using noise methods, and spatial power distribution
International Nuclear Information System (INIS)
Vavrin, J.; Blazek, J.
1981-01-01
A survey is given of the results achieved. Methodical means and programs were developed for the control computer which may be used in noise diagnostics and in the control of reactor power distribution. Statistical methods of processing the noise components of the signals of measured variables were used for identifying failures of reactors. The method of the synthesis of the neutron flux was used for modelling and evaluating the reactor power distribution. For monitoring and controlling the power distribution a mathematical model of the reactor was constructed suitable for control computers. The uses of noise analysis methods are recommended and directions of further development shown. (J.P.)
Directory of Open Access Journals (Sweden)
Shan Yang
2016-01-01
Full Text Available Power flow calculation and short circuit calculation are the basis of theoretical research for distribution networks with inverter-based distributed generation. The similarity of the equivalent models for inverter-based distributed generation during normal and fault conditions of the distribution network, and the differences between power flow and short circuit calculation, are analyzed in this paper. Then an integrated power flow and short circuit calculation method for distribution networks with inverter-based distributed generation is proposed. The proposed method models the inverter-based distributed generation as an Iθ bus, which makes it suitable for calculating the power flow of a distribution network with current-limited inverter-based distributed generation. The low voltage ride through capability of inverter-based distributed generation can be considered as well. Finally, tests of power flow and short circuit current calculation are performed on a 33-bus distribution network. The results calculated by the proposed method are contrasted with those from the traditional method and the simulation method, which verifies the effectiveness of the integrated method.
Analyzed method for calculating the distribution of electrostatic field
International Nuclear Information System (INIS)
Lai, W.
1981-01-01
An analyzed method for calculating the distribution of electrostatic field under any given axial gradient in tandem accelerators is described. This method possesses satisfactory accuracy compared with the results of numerical calculation
Methods and Tools for Profiling and Control of Distributed Systems
Directory of Open Access Journals (Sweden)
Sukharev Roman
2017-01-01
Full Text Available The article analyzes and standardizes methods for profiling distributed systems, focusing on simulation to conduct experiments and build a graph model of the system. The theory of queueing networks is used for simulation modeling of distributed systems receiving and processing user requests. To automate the above method of profiling distributed systems, a software application was developed with a modular structure similar to that of a SCADA system.
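As a minimal illustration of the queueing-network simulation idea, the sketch below simulates a single M/M/1 server, a stand-in for one node of such a model (the arrival and service rates are arbitrary assumptions), and checks the simulated mean response time against the textbook formula 1/(mu - lambda):

```python
import random

def mm1_mean_response(lam, mu, n_jobs=200_000, seed=42):
    """Event-driven M/M/1 queue: mean time a request spends in the system."""
    rng = random.Random(seed)
    t_arrival = 0.0        # arrival time of the current request
    server_free_at = 0.0   # when the server finishes its current work
    total = 0.0
    for _ in range(n_jobs):
        t_arrival += rng.expovariate(lam)          # Poisson arrivals
        start = max(t_arrival, server_free_at)     # wait if server is busy
        server_free_at = start + rng.expovariate(mu)  # exponential service
        total += server_free_at - t_arrival        # response = wait + service
    return total / n_jobs

sim = mm1_mean_response(lam=0.5, mu=1.0)
theory = 1.0 / (1.0 - 0.5)   # M/M/1 mean response time: 1/(mu - lambda)
```

A profiling tool along the article's lines would connect many such nodes into a network and route requests between them according to the measured call graph.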
A method to measure depth distributions of implanted ions
International Nuclear Information System (INIS)
Arnesen, A.; Noreland, T.
1977-04-01
A new variant of the radiotracer method for depth distribution determinations has been tested. Depth distributions of radioactive implanted ions are determined by dissolving thin, uniform layers of evaporated material from the surface of a backing and by measuring the activity before and after the layer removal. The method has been used to determine depth distributions for 25 keV and 50 keV 57 Co ions in aluminium and gold. (Auth.)
Comparison of estimation methods for fitting weibull distribution to ...
African Journals Online (AJOL)
Comparison of estimation methods for fitting weibull distribution to the natural stand of Oluwa Forest Reserve, Ondo State, Nigeria. ... Journal of Research in Forestry, Wildlife and Environment ... The result revealed that maximum likelihood method was more accurate in fitting the Weibull distribution to the natural stand.
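The abstract gives no formulas; as a sketch of what maximum-likelihood fitting of a Weibull distribution involves, here is a minimal pure-Python implementation using the standard (damped) fixed-point iteration for the shape parameter. The synthetic data are hypothetical stand-ins, not the Oluwa Forest Reserve measurements:

```python
import math
import random

def fit_weibull_mle(x, iters=100):
    """Maximum-likelihood Weibull fit: shape k by fixed-point iteration, then scale."""
    n = len(x)
    logs = [math.log(v) for v in x]
    mean_log = sum(logs) / n
    k = 1.0
    for _ in range(iters):
        sk = sum(v ** k for v in x)
        skl = sum((v ** k) * math.log(v) for v in x)
        # MLE condition: sum(x^k ln x)/sum(x^k) - mean(ln x) = 1/k  (damped update)
        k = 0.5 * (k + 1.0 / (skl / sk - mean_log))
    lam = (sum(v ** k for v in x) / n) ** (1.0 / k)
    return k, lam

rng = random.Random(0)
true_k, true_lam = 2.0, 10.0   # hypothetical diameter-like data, Weibull(k=2, lam=10)
data = [true_lam * rng.expovariate(1.0) ** (1.0 / true_k) for _ in range(5000)]
k_hat, lam_hat = fit_weibull_mle(data)
```

The sampling trick uses the fact that if E is standard exponential, then lam·E^(1/k) is Weibull(k, lam).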
Calculation of pressure distribution in vacuum systems using a commercial finite element program
International Nuclear Information System (INIS)
Howell, J.; Wehrle, B.; Jostlein, H.
1991-01-01
The finite element method has proven to be a very useful tool for calculating pressure distributions in complex vacuum systems. A number of finite element programs have been developed for this specific task. For those who do not have access to one of these specialized programs and do not wish to develop their own program, another option is available. Any commercial finite element program with heat transfer analysis capabilities can be used to calculate pressure distributions. The approach uses an analogy between thermal conduction and gas conduction with the quantity temperature substituted for pressure. The thermal analogies for pumps, gas loads and tube conductances are described in detail. The method is illustrated for an example vacuum system. A listing of the ANSYS data input file for this example is included. 2 refs., 4 figs., 1 tab
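The thermal analogy described above ultimately reduces to solving a linear conductance network: tube conductances act like thermal conductances, gas loads like heat sources, and pumps like sinks. The toy sketch below solves a hypothetical three-chamber system directly (invented gas load, conductances, and pump speed; not the paper's ANSYS model):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical 3-chamber system: gas load Q0 (torr*L/s) at node 0, tube
# conductances C12, C23 (L/s), pump speed S (L/s) at node 2.
# Flow balance at each node: sum of conductance*(pressure drop) = gas load.
C12, C23, S, Q0 = 50.0, 50.0, 100.0, 1e-4
A = [[ C12, -C12,        0.0],
     [-C12,  C12 + C23, -C23],
     [ 0.0, -C23,        C23 + S]]
b = [Q0, 0.0, 0.0]
p = solve(A, b)   # steady-state pressures (torr), highest farthest from the pump
```

This is exactly the system a heat-transfer finite element solver assembles when temperature stands in for pressure, which is why a commercial thermal code can be reused unchanged.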
Cathode power distribution system and method of using the same for power distribution
Williamson, Mark A; Wiedmeyer, Stanley G; Koehl, Eugene R; Bailey, James L; Willit, James L; Barnes, Laurel A; Blaskovitz, Robert J
2014-11-11
Embodiments include a cathode power distribution system and/or method of using the same for power distribution. The cathode power distribution system includes a plurality of cathode assemblies. Each cathode assembly of the plurality of cathode assemblies includes a plurality of cathode rods. The system also includes a plurality of bus bars configured to distribute current to each of the plurality of cathode assemblies. The plurality of bus bars include a first bus bar configured to distribute the current to first ends of the plurality of cathode assemblies and a second bus bar configured to distribute the current to second ends of the plurality of cathode assemblies.
Karian, Zaven A
2000-01-01
Throughout the physical and social sciences, researchers face the challenge of fitting statistical distributions to their data. Although the study of statistical modelling has made great strides in recent years, the number and variety of distributions to choose from-all with their own formulas, tables, diagrams, and general properties-continue to create problems. For a specific application, which of the dozens of distributions should one use? What if none of them fit well?Fitting Statistical Distributions helps answer those questions. Focusing on techniques used successfully across many fields, the authors present all of the relevant results related to the Generalized Lambda Distribution (GLD), the Generalized Bootstrap (GB), and Monte Carlo simulation (MC). They provide the tables, algorithms, and computer programs needed for fitting continuous probability distributions to data in a wide variety of circumstances-covering bivariate as well as univariate distributions, and including situations where moments do...
Medan, R. T.; Ray, K. S.
1974-01-01
A description of and users manual are presented for a U.S.A. FORTRAN 4 computer program which evaluates spanwise and chordwise loading distributions, lift coefficient, pitching moment coefficient, and other stability derivatives for thin wings in linearized, steady, subsonic flow. The program is based on a kernel function method lifting surface theory and is applicable to a large class of planforms including asymmetrical ones and ones with mixed straight and curved edges.
Sensitivity Analysis of Dynamic Tariff Method for Congestion Management in Distribution Networks
DEFF Research Database (Denmark)
Huang, Shaojun; Wu, Qiuwei; Liu, Zhaoxi
2015-01-01
The dynamic tariff (DT) method is designed for the distribution system operator (DSO) to alleviate the congestions that might occur in a distribution network with high penetration of distributed energy resources (DERs). Sensitivity analysis of the DT method is crucial because of its decentralized...... control manner. The sensitivity analysis can obtain the changes of the optimal energy planning and thereby the line loading profiles over the infinitely small changes of parameters by differentiating the KKT conditions of the convex quadratic programming, over which the DT method is formed. Three case...
A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods
Ritter, Nicola L.
2012-01-01
Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…
Measurement of subcritical multiplication by the interval distribution method
International Nuclear Information System (INIS)
Nelson, G.W.
1985-01-01
The prompt decay constant or the subcritical neutron multiplication may be determined by measuring the distribution of the time intervals between successive neutron counts. The distribution data is analyzed by least-squares fitting to a theoretical distribution function derived from a point reactor probability model. Published results of measurements with one- and two-detector systems are discussed. Data collection times are shorter, and statistical errors are smaller the nearer the system is to delayed critical. Several of the measurements indicate that a shorter data collection time and higher accuracy are possible with the interval distribution method than with the Feynman variance method
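As a simplified stand-in for the interval-distribution analysis (a full treatment would fit the point-reactor probability model mentioned above), the sketch below draws exponential inter-count intervals at an assumed count rate, histograms them, and recovers the decay constant from a least-squares fit to the logarithm of the bin counts:

```python
import math
import random

rng = random.Random(7)
rate = 40.0   # assumed stand-in count rate (1/s), purely illustrative
intervals = [rng.expovariate(rate) for _ in range(100_000)]

# Histogram the intervals between successive counts
nbins, tmax = 30, 0.1
width = tmax / nbins
counts = [0] * nbins
for dt in intervals:
    if dt < tmax:
        counts[int(dt / width)] += 1

# For an exponential interval distribution, ln(count) is linear in t with
# slope -rate; fit that line by ordinary least squares on bin centers.
pts = [((i + 0.5) * width, math.log(c)) for i, c in enumerate(counts) if c > 0]
n = len(pts)
sx = sum(t for t, _ in pts); sy = sum(y for _, y in pts)
sxx = sum(t * t for t, _ in pts); sxy = sum(t * y for t, y in pts)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
rate_hat = -slope   # estimated decay constant
```

In the actual method, the theoretical interval distribution is not a single exponential but the one derived from the reactor probability model, and the fitted constant relates to subcritical multiplication.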
Optimal Operation of Radial Distribution Systems Using Extended Dynamic Programming
DEFF Research Database (Denmark)
Lopez, Juan Camilo; Vergara, Pedro P.; Lyra, Christiano
2018-01-01
An extended dynamic programming (EDP) approach is developed to optimize the ac steady-state operation of radial electrical distribution systems (EDS). Based on the optimality principle of the recursive Hamilton-Jacobi-Bellman equations, the proposed EDP approach determines the optimal operation of the EDS by setting the values of the controllable variables at each time period. A suitable definition for the stages of the problem makes it possible to represent the optimal ac power flow of radial EDS as a dynamic programming problem, wherein the 'curse of dimensionality' is a minor concern, since... The approach is illustrated using real-scale systems and comparisons with commercial programming solvers. Finally, generalizations to consider other EDS operation problems are also discussed.
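The EDP formulation itself needs an ac power flow model; as a minimal sketch of the dynamic-programming skeleton it builds on (stages = time periods, states = discrete settings of one controllable variable, e.g. a capacitor bank step), consider this toy schedule optimization with invented per-period and switching costs:

```python
def dp_schedule(op_cost, switch_cost):
    """Bellman recursion over time periods; returns minimal cost and schedule."""
    T, S = len(op_cost), len(op_cost[0])
    best = [[float('inf')] * S for _ in range(T)]   # best[t][s]: cost ending in s
    prev = [[0] * S for _ in range(T)]              # backpointers for the path
    best[0] = op_cost[0][:]
    for t in range(1, T):
        for s in range(S):
            for s0 in range(S):
                c = best[t - 1][s0] + switch_cost * (s != s0) + op_cost[t][s]
                if c < best[t][s]:
                    best[t][s], prev[t][s] = c, s0
    s = min(range(S), key=lambda k: best[T - 1][k])
    path, total = [s], best[T - 1][s]
    for t in range(T - 1, 0, -1):
        s = prev[t][s]
        path.append(s)
    return total, path[::-1]

# Hypothetical costs for 3 settings over 4 periods; switching costs 3 per change
op_cost = [[5, 2, 9], [5, 2, 9], [9, 8, 1], [9, 8, 1]]
total, schedule = dp_schedule(op_cost, switch_cost=3)
# optimal: stay at setting 1, then switch once to setting 2
```

In the paper the per-stage cost would come from an ac power flow evaluation rather than a lookup table, but the recursion over stages is the same.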
Real-Time Reactive Power Distribution in Microgrids by Dynamic Programing
DEFF Research Database (Denmark)
Levron, Yoash; Beck, Yuval; Katzir, Liran
2017-01-01
In this paper a new real-time optimization method for reactive power distribution in microgrids is proposed. The method enables location of a globally optimal distribution of reactive power under normal operating conditions. The method exploits the typical compact structure of microgrids to obtain...... combination of reactive powers, by means of dynamic programming. Since every single step involves a one-dimensional problem, the complexity of the solution is only linear with the number of clusters, and as a result, a globally optimal solution may be obtained in real time. The paper includes the results...
Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming
2018-01-01
This paper proposes a fast reliability assessment method for distribution grids with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distribution characteristics of wind speed and solar irradiance respectively, and models of the wind farm, solar park, and local load are built for reliability assessment. Then, based on production cost simulation, probability discretization, and linearized power flow, an optimal power flow problem that minimizes the cost of conventional power generation is solved. Thus a reliability assessment for the distribution grid is implemented quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices; a simulation of the IEEE RBTS BUS6 system in MATLAB indicates that the method calculates the reliability indices much faster than the Monte Carlo method while maintaining accuracy.
Programming by Numbers -- A Programming Method for Complete Novices
Glaser, Hugh; Hartel, Pieter H.
2000-01-01
Students often have difficulty with the minutiae of program construction. We introduce the idea of `Programming by Numbers', which breaks some of the programming process down into smaller steps, giving such students a way into the process of Programming in the Small. Programming by Numbers does not
School Wellness Programs: Magnitude and Distribution in New York City Public Schools
Stiefel, Leanna; Elbel, Brian; Prescott, Melissa Pflugh; Aneja, Siddhartha; Schwartz, Amy Ellen
2016-01-01
BACKGROUND Public schools provide students with opportunities to participate in many discretionary, unmandated wellness programs. Little is known about the number of these programs, their distribution across schools, and the kinds of students served. We provide evidence on these questions for New York City (NYC) public schools. METHODS Data on wellness programs were collected from program websites, NYC’s Office of School Food and Wellness, and direct contact with program sponsors for 2013. Programs were grouped into categories, nutrition, fitness, and comprehensive, and were combined with data on school characteristics available from NYC’s Department of Education. Numbers of programs and provision of programs were analyzed for relationships with demographic and school structural characteristics, using descriptive statistics and multiple regression. RESULTS Discretionary wellness programs are numerous, at 18 programs. Little evidence supports inequity according to student race/ethnicity, income, or nativity, but high schools, new schools, co-located schools, small schools, and schools with larger proportions of inexperienced teachers are less likely to provide wellness programs. CONCLUSIONS Opportunities exist to further the reach of wellness programs in public schools by modifying them for high school adoption and building capacity in schools less likely to have the administrative support to house them. PMID:27917485
Comparing four methods to estimate usual intake distributions
Souverein, O.W.; Dekkers, A.L.; Geelen, A.; Haubrock, J.; Vries, de J.H.M.; Ocke, M.C.; Harttig, U.; Boeing, H.; Veer, van 't P.
2011-01-01
Background/Objectives: The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As ‘true’ usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data
Determining on-fault earthquake magnitude distributions from integer programming
Geist, Eric L.; Parsons, Thomas E.
2018-01-01
Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >10^6 variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.
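A toy version of the binary-assignment idea can be brute-forced for a tiny catalog; a realistic instance with millions of binary variables needs a mixed-integer solver, as the abstract notes. All numbers below are invented for illustration:

```python
from itertools import product

# Toy problem: assign each synthetic earthquake to one of two faults so that
# the slip accumulated on each fault best matches its target slip budget.
quake_slip = [1.0, 0.6, 0.4, 0.2, 0.2]   # slip per event (m), scaled from magnitude
target = {0: 1.6, 1: 0.8}                # target slip per fault over the catalog span (m)

best_misfit, best_assign = float('inf'), None
for assign in product((0, 1), repeat=len(quake_slip)):   # all binary decision vectors
    slip = {0: 0.0, 1: 0.0}
    for q, f in zip(quake_slip, assign):
        slip[f] += q
    misfit = sum(abs(slip[f] - target[f]) for f in target)
    if misfit < best_misfit:
        best_misfit, best_assign = misfit, assign
```

Here the targets were chosen so a perfect partition exists; in the real formulation slip-rate uncertainties become bound constraints and fault-length limits remove infeasible assignments before solving.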
Dynamic Subsidy Method for Congestion Management in Distribution Networks
DEFF Research Database (Denmark)
Huang, Shaojun; Wu, Qiuwei
2016-01-01
Dynamic subsidy (DS) is a locational price paid by the distribution system operator (DSO) to its customers in order to shift energy consumption to designated hours and nodes. It is promising for demand side management and congestion management. This paper proposes a new DS method for congestion...... management in distribution networks, including the market mechanism, the mathematical formulation through a two-level optimization, and the method solving the optimization by tightening the constraints and linearization. Case studies were conducted with a one node system and the Bus 4 distribution network...... of the Roy Billinton Test System (RBTS) with high penetration of electric vehicles (EVs) and heat pumps (HPs). The case studies demonstrate the efficacy of the DS method for congestion management in distribution networks. Studies in this paper show that the DS method offers the customers a fair opportunity...
The simplex method of linear programming
Ficken, Frederick A
1961-01-01
This concise but detailed and thorough treatment discusses the rudiments of the well-known simplex method for solving optimization problems in linear programming. Geared toward undergraduate students, the approach offers sufficient material for readers without a strong background in linear algebra. Many different kinds of problems further enrich the presentation. The text begins with examinations of the allocation problem, matrix notation for dual problems, feasibility, and theorems on duality and existence. Subsequent chapters address convex sets and boundedness, the prepared problem and boun
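A compact tableau implementation of the simplex method for problems in the form max c·x subject to Ax ≤ b, x ≥ 0 (with b ≥ 0, so no artificial variables are needed) might look as follows. The worked example is a classic two-variable allocation problem, not one taken from the book:

```python
def simplex(c, A, b):
    """Tableau simplex for  max c.x  s.t.  A x <= b, x >= 0  (all b >= 0)."""
    m, n = len(A), len(c)
    # Tableau rows: constraints with slack variables; last row is the objective.
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [float(b[i])]
         for i in range(m)]
    T.append([-float(ci) for ci in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))            # slacks start in the basis
    while True:
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= -1e-9:              # no negative reduced cost: optimal
            break
        ratios = [(T[i][-1] / T[i][col], i) for i in range(m) if T[i][col] > 1e-9]
        _, row = min(ratios)                 # ratio test (assumes a bounded problem)
        basis[row] = col
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for r in range(m + 1):
            if r != row and abs(T[r][col]) > 1e-12:
                f = T[r][col]
                T[r] = [v - f * p for v, p in zip(T[r], T[row])]
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return x, T[-1][-1]

# Classic allocation example:  max 3x + 5y  s.t.  x <= 4,  2y <= 12,  3x + 2y <= 18
x, value = simplex([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18])
# optimum: x = 2, y = 6, objective value = 36
```

Degeneracy handling and unbounded-problem detection are omitted for brevity; the book treats both.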
Method of controlling power distribution in FBR type reactors
International Nuclear Information System (INIS)
Sawada, Shusaku; Kaneto, Kunikazu.
1982-01-01
Purpose: To attain power distribution flattening with ease by obtaining a radial power distribution of substantially constant shape independent of the burn-up cycle. Method: As fuel burning proceeds, the radial power distribution is affected by the accumulation of fission products in the inner blanket fuel assemblies, which varies their effect as neutron-absorbing substances. Taking notice of this fact, the power distribution is controlled in a heterogeneous FBR type reactor by varying the core residence period of the inner blanket assemblies in accordance with the charging density of the inner blanket assemblies in the reactor core. (Kawakami, Y.)
Methods of assessing grain-size distribution during grain growth
DEFF Research Database (Denmark)
Tweed, Cherry J.; Hansen, Niels; Ralph, Brian
1985-01-01
This paper considers methods of obtaining grain-size distributions and ways of describing them. In order to collect statistically useful amounts of data, an automatic image analyzer is used, and the resulting data are subjected to a series of tests that evaluate the differences between two related...... distributions (before and after grain growth). The distributions are measured from two-dimensional sections, and both the data and the corresponding true three-dimensional grain-size distributions (obtained by stereological analysis) are collected. The techniques described here are illustrated by reference...
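One standard way to evaluate "the differences between two related distributions" as described above is the two-sample Kolmogorov-Smirnov statistic, the maximum gap between the two empirical CDFs. A minimal sketch with hypothetical grain-size data follows (the paper does not specify which tests it applies):

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    # Merge-walk both sorted samples, tracking the CDF gap at every step
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

# Hypothetical grain diameters (um) before and after grain growth
before = [3.1, 4.0, 4.2, 5.0, 5.5, 6.1, 6.4, 7.0, 7.7, 8.2]
after  = [4.5, 5.9, 6.8, 7.5, 8.3, 9.0, 9.8, 10.5, 11.2, 12.0]
d = ks_statistic(before, after)
```

In practice the two-dimensional section measurements would first be converted to three-dimensional size distributions by stereological analysis, as the abstract notes, before any such comparison.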
A Comparative Study of Distribution System Parameter Estimation Methods
Energy Technology Data Exchange (ETDEWEB)
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
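As a minimal scalar sketch of the state-vector augmentation idea (a stand-in, not the paper's full distribution-system estimator), the unknown line parameter is carried as an extra constant state and refined by a Kalman filter across repeated measurement snapshots:

```python
import random

# Treat an unknown line conductance g as an augmented (constant) state and let a
# scalar Kalman filter refine it from measurements  z_k = g * v_k + noise,
# taken at different points in time (combined snapshots).  All values invented.
rng = random.Random(3)
g_true = 0.8            # true parameter value
r = 0.05 ** 2           # measurement noise variance

g_hat, p = 0.0, 1.0     # initial estimate and estimate variance
for _ in range(500):
    v = rng.uniform(0.9, 1.1)             # known input for this snapshot
    z = g_true * v + rng.gauss(0.0, 0.05) # noisy measurement
    h = v                                 # measurement Jacobian dz/dg
    k = p * h / (h * p * h + r)           # Kalman gain
    g_hat += k * (z - h * g_hat)          # innovation update
    p *= (1.0 - k * h)                    # covariance update (no process noise)
```

Because the parameter is modeled as constant, the variance `p` shrinks as snapshots accumulate, which is how combining snapshots compensates for the low measurement redundancy of distribution systems.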
Governance and assessment in a widely distributed medical education program in Australia.
Solarsh, Geoff; Lindley, Jennifer; Whyte, Gordon; Fahey, Michael; Walker, Amanda
2012-06-01
The learning objectives, curriculum content, and assessment standards for distributed medical education programs must be aligned across the health care systems and community contexts in which their students train. In this article, the authors describe their experiences at Monash University implementing a distributed medical education program at metropolitan, regional, and rural Australian sites and an offshore Malaysian site, using four different implementation models. Standardizing learning objectives, curriculum content, and assessment standards across all sites while allowing for site-specific implementation models created challenges for educational alignment. At the same time, this diversity created opportunities to customize the curriculum to fit a variety of settings and for innovations that have enriched the educational system as a whole. Developing these distributed medical education programs required a detailed review of Monash's learning objectives and curriculum content and their relevance to the four different sites. It also required a review of assessment methods to ensure an identical and equitable system of assessment for students at all sites. It additionally demanded changes to the systems of governance and the management of the educational program away from a centrally constructed and mandated curriculum to more collaborative approaches to curriculum design and implementation involving discipline leaders at multiple sites. Distributed medical education programs, like that at Monash, in which cohorts of students undertake the same curriculum in different contexts, provide potentially powerful research platforms to compare different pedagogical approaches to medical education and the impact of context on learning outcomes.
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2005-01-01
The manual application of formal methods in system specification has produced successes, but in the end, despite any claims and assertions by practitioners, there is no provable relationship between a manually derived system specification or formal model and the customer's original requirements. Complex parallel and distributed systems present the worst-case implications of today's dearth of viable approaches for achieving system dependability. No avenue other than formal methods constitutes a serious contender for resolving the problem, and so recognition of requirements-based programming has come at a critical juncture. We describe a new, NASA-developed automated requirements-based programming method that can be applied to certain classes of systems, including complex parallel and distributed systems, to achieve a high degree of dependability.
A uniform approach for programming distributed heterogeneous computing systems.
Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas
2014-12-01
Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.
Score Function of Distribution and Revival of the Moment Method
Czech Academy of Sciences Publication Activity Database
Fabián, Zdeněk
2016-01-01
Roč. 45, č. 4 (2016), s. 1118-1136 ISSN 0361-0926 R&D Projects: GA MŠk(CZ) LG12020 Institutional support: RVO:67985807 Keywords: characteristics of distributions * data characteristics * general moment method * Huber moment estimator * parametric methods * score function Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.311, year: 2016
Distributed MIMO-ISAR Sub-image Fusion Method
Directory of Open Access Journals (Sweden)
Gu Wenkun
2017-02-01
Full Text Available The fast fluctuation associated with a maneuvering target's radar cross-section often affects the imaging performance stability of traditional monostatic Inverse Synthetic Aperture Radar (ISAR). To address this problem, in this study, we propose an imaging method based on the fusion of sub-images of frequency-diversity-distributed Multiple-Input Multiple-Output Inverse Synthetic Aperture Radar (MIMO-ISAR). First, we establish the analytic expression of a two-dimensional ISAR sub-image acquired by different channels of distributed MIMO-ISAR. Then, we derive the distance and azimuth distortion factors of the image acquired by the different channels. By compensating for the distortion of the ISAR image, we ultimately realize distributed MIMO-ISAR fusion imaging. Simulations verify the validity of this imaging method using distributed MIMO-ISAR.
Space program management methods and tools
Spagnulo, Marcello; Balduccini, Mauro; Nasini, Federico
2013-01-01
Beginning with the basic elements that differentiate space programs from other management challenges, Space Program Management explains, through theory and examples of real programs from around the world, the philosophical and technical tools needed to successfully manage large, technically complex space programs in both the government and commercial environments. Chapters address both systems and configuration management, the management of risk, estimation, measurement and control of both funding and the program schedule, and the structure of the aerospace industry worldwide.
Information-theoretic methods for estimating complicated probability distributions
Zong, Zhi
2006-01-01
Mixing up various disciplines frequently produces something profound and far-reaching. Cybernetics is such an often-quoted example. The mix of information theory, statistics and computing technology has proved to be very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is the fundamental task for quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur
Mathematical methods: linear algebra, normed spaces, distributions, integration
Korevaar, Jacob
1968-01-01
Mathematical Methods, Volume I: Linear Algebra, Normed Spaces, Distributions, Integration focuses on advanced mathematical tools used in applications and the basic concepts of algebra, normed spaces, integration, and distributions. The publication first offers information on algebraic theory of vector spaces and introduction to functional analysis. Discussions focus on linear transformations and functionals, rectangular matrices, systems of linear equations, eigenvalue problems, use of eigenvectors and generalized eigenvectors in the representation of linear operators, metric and normed vector
Three-Phase Harmonic Analysis Method for Unbalanced Distribution Systems
Directory of Open Access Journals (Sweden)
Jen-Hao Teng
2014-01-01
Full Text Available Due to the unbalanced features of distribution systems, a three-phase harmonic analysis method is essential to accurately analyze the harmonic impact on distribution systems. Moreover, harmonic analysis is the basic tool for harmonic filter design and harmonic resonance mitigation; therefore, the computational performance should also be efficient. An accurate and efficient three-phase harmonic analysis method for unbalanced distribution systems is proposed in this paper. The variations of bus voltages, bus current injections and branch currents affected by harmonic current injections can be analyzed by two relationship matrices developed from the topological characteristics of distribution systems. Some useful formulas are then derived to solve the three-phase harmonic propagation problem. After the harmonic propagation for each harmonic order is calculated, the total harmonic distortion (THD for bus voltages can be calculated accordingly. The proposed method has better computational performance, since the time-consuming full admittance matrix inverse employed by the commonly-used harmonic analysis methods is not necessary in the solution procedure. In addition, the proposed method can provide novel viewpoints in calculating the branch currents and bus voltages under harmonic pollution which are vital for harmonic filter design. Test results demonstrate the effectiveness and efficiency of the proposed method.
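The THD figure mentioned in the abstract has a standard definition: the RMS of the harmonic components relative to the fundamental. A minimal sketch of that final step (the per-unit bus voltages below are hypothetical, and the paper's relationship-matrix propagation is not reproduced):

```python
import math

def total_harmonic_distortion(v_harmonics):
    """THD of a bus voltage: RMS of the harmonic components (orders >= 2)
    divided by the fundamental. v_harmonics maps harmonic order -> RMS magnitude."""
    fundamental = v_harmonics[1]
    harmonics = math.sqrt(sum(v ** 2 for order, v in v_harmonics.items() if order > 1))
    return harmonics / fundamental

# Hypothetical per-unit bus voltage with 5th and 7th harmonic injections
v = {1: 1.0, 5: 0.05, 7: 0.03}
print(f"THD = {total_harmonic_distortion(v) * 100:.2f}%")   # THD = 5.83%
```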
Experiment in Application of Methods of Programmed Instruction.
Fradkin, S. L.
In a document translated from the Russian, an analysis is made of various forms and methods of programmed learning. The primary developments in the introduction of programmed learning methods are: creation of programmed teaching aids; use of existing textbooks for programmed lectures with feedback; and use of both teaching machines and machineless…
Directory of Open Access Journals (Sweden)
Azman Ismail
2009-03-01
Full Text Available This study was conducted to examine the moderating effect of distributive justice in the relationship between the forms of benefits program and job commitment. A survey research method was used to gather 150 usable questionnaires from employees who have worked in Malaysian federal government linked companies in Sarawak (MFGLS). The outcomes of testing the moderating model using a hierarchical regression analysis showed two major findings: (1) distributive justice had not increased the effect of physical and safety benefits (i.e., health care, insurance, loan and claim) on job commitment, and (2) distributive justice had increased the effect of self-satisfaction benefits (i.e., promotion opportunity and training) on job commitment. This result confirms that distributive justice does act as a partial moderating variable in the benefit program models of the organizational sector sample. In addition, the implications of this study for benefit system theory and practice, methodological and conceptual limitations, and directions for future research are also discussed. Keywords: Forms of Benefits Program, Distributive Justice and Job Commitment
Type systems for distributed programs components and sessions
Dardha, Ornela
2016-01-01
In this book we develop powerful techniques based on formal methods for the verification of correctness, consistency and safety properties related to dynamic reconfiguration and communication in complex distributed systems. In particular, static analysis techniques based on types and type systems are an adequate methodology considering their success in guaranteeing not only basic safety properties, but also more sophisticated ones like deadlock or lock freedom in concurrent settings. The main contributions of this book are twofold. i) We design a type system for a concurrent object-oriented calculus to statically ensure consistency of dynamic reconfigurations. ii) We define an encoding of the session pi-calculus, which models communication in distributed systems, into the standard typed pi-calculus. We use this encoding to derive properties like type safety and progress in the session pi-calculus by exploiting the corresponding properties in the standard typed pi-calculus.
Electric Utility Transmission and Distribution Line Engineering Program
Energy Technology Data Exchange (ETDEWEB)
Peter McKenny
2010-08-31
Economic development in the United States depends on a reliable and affordable power supply. The nation will need well educated engineers to design a modern, safe, secure, and reliable power grid for our future needs. An anticipated shortage of qualified engineers has caused considerable concern in many professional circles, and various steps are being taken nationwide to alleviate the potential shortage and ensure the North American power system's reliability, and our world-wide economic competitiveness. To help provide a well-educated and trained workforce which can sustain and modernize the nation's power grid, Gonzaga University's School of Engineering and Applied Science has established a five-course (15-credit hour) Certificate Program in Transmission and Distribution (T&D) Engineering. The program has been specifically designed to provide working utility engineering professionals with on-line access to advanced engineering courses which cover modern design practice with an industry-focused theoretical foundation. A total of twelve courses have been developed to-date and students may select any five in their area of interest for the T&D Certificate. As each course is developed and taught by a team of experienced engineers (from public and private utilities, consultants, and industry suppliers), students are provided a unique opportunity to interact directly with different industry experts over the eight weeks of each course. Course material incorporates advanced aspects of civil, electrical, and mechanical engineering disciplines that apply to power system design and are appropriate for graduate engineers. As such, target students for the certificate program include: (1) recent graduates with a Bachelor of Science Degree in an engineering field (civil, mechanical, electrical, etc.); (2) senior engineers moving from other fields to the utility industry (i.e. paper industry to utility engineering or project management positions); and (3) regular
The frequency-independent control method for distributed generation systems
DEFF Research Database (Denmark)
Naderi, Siamak; Pouresmaeil, Edris; Gao, Wenzhong David
2012-01-01
In this paper a novel frequency-independent control method suitable for distributed generation (DG) is presented. This strategy is derived based on the abc/αβ and abc/dq transformations of the ac system variables. The active and reactive currents injected by the DG are contr…
Computationally intensive econometrics using a distributed matrix-programming language.
Doornik, Jurgen A; Hendry, David F; Shephard, Neil
2002-06-15
This paper reviews the need for powerful computing facilities in econometrics, focusing on concrete problems which arise in financial economics and in macroeconomics. We argue that the profession is being held back by the lack of easy-to-use generic software which is able to exploit the availability of cheap clusters of distributed computers. Our response is to extend, in a number of directions, the well-known matrix-programming interpreted language Ox developed by the first author. We note three possible levels of extensions: (i) Ox with parallelization explicit in the Ox code; (ii) Ox with a parallelized run-time library; and (iii) Ox with a parallelized interpreter. This paper studies and implements the first case, emphasizing the need for deterministic computing in science. We give examples in the context of financial economics and time-series modelling.
International Nuclear Information System (INIS)
Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.
2014-01-01
A numerical inversion method known from the analysis of light scattering by colloidal dispersions is now applied to magnetization curves of ferrofluids. The distribution of magnetic particle sizes or dipole moments is determined without assuming that the distribution is unimodal or of a particular shape. The inversion method enforces positive number densities via a non-negative least squares procedure. It is tested successfully on experimental and simulated data for ferrofluid samples with known multimodal size distributions. The created computer program MINORIM is made available on the web. - Highlights: • A method from light scattering is applied to analyze ferrofluid magnetization curves. • A magnetic size distribution is obtained without prior assumption of its shape. • The method is tested successfully on ferrofluids with a known size distribution. • The practical limits of the method are explored with simulated data including noise. • This method is implemented in the program MINORIM, freely available online
Poppe, L.J.; Eliason, A.H.; Hastings, M.E.
2004-01-01
Measures that describe and summarize sediment grain-size distributions are important to geologists because of the large amount of information contained in textural data sets. Statistical methods are usually employed to simplify the necessary comparisons among samples and quantify the observed differences. The two statistical methods most commonly used by sedimentologists to describe particle distributions are mathematical moments (Krumbein and Pettijohn, 1938) and inclusive graphics (Folk, 1974). The choice of which of these statistical measures to use is typically governed by the amount of data available (Royse, 1970). If the entire distribution is known, the method of moments may be used; if the next to last accumulated percent is greater than 95, inclusive graphics statistics can be generated. Unfortunately, earlier programs designed to describe sediment grain-size distributions statistically do not run in a Windows environment, do not allow extrapolation of the distribution's tails, or do not generate both moment and graphic statistics (Kane and Hubert, 1963; Collias et al., 1963; Schlee and Webster, 1967; Poppe et al., 2000). Owing to analytical limitations, electro-resistance multichannel particle-size analyzers, such as Coulter Counters, commonly truncate the tails of the fine-fraction part of grain-size distributions. These devices do not detect fine clay in the 0.6–0.1 μm range (part of the 11-phi and all of the 12-phi and 13-phi fractions). Although size analyses performed down to 0.6 μm are adequate for most freshwater and near shore marine sediments, samples from many deeper water marine environments (e.g. rise and abyssal plain) may contain significant material in the fine clay fraction, and these analyses benefit from extrapolation. The program (GSSTAT) described herein generates statistics to characterize sediment grain-size distributions and can extrapolate the fine-grained end of the particle distribution. It is written in Microsoft
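For reference, the method-of-moments statistics such a program computes from binned data can be sketched as follows (the phi-class midpoints and weight percentages are made up for illustration):

```python
def moment_statistics(phi_midpoints, weight_percents):
    """Method-of-moments grain-size statistics (after Krumbein and Pettijohn)
    from class midpoints in phi units and weight percentages per class."""
    f = [w / 100.0 for w in weight_percents]           # weight fractions
    mean = sum(fi * m for fi, m in zip(f, phi_midpoints))
    var = sum(fi * (m - mean) ** 2 for fi, m in zip(f, phi_midpoints))
    std = var ** 0.5                                   # sorting
    skew = sum(fi * (m - mean) ** 3 for fi, m in zip(f, phi_midpoints)) / std ** 3
    kurt = sum(fi * (m - mean) ** 4 for fi, m in zip(f, phi_midpoints)) / std ** 4
    return mean, std, skew, kurt

# Example: a symmetric distribution centered at 2 phi (fine sand)
mids = [1.0, 2.0, 3.0]
pcts = [25.0, 50.0, 25.0]
mean, std, skew, kurt = moment_statistics(mids, pcts)
print(mean, std, skew, kurt)
```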
Method for distributed agent-based non-expert simulation of manufacturing process behavior
Ivezic, Nenad; Potok, Thomas E.
2004-11-30
A method for distributed agent-based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each said process; and, programming each said agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
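A minimal single-processor sketch of the claimed scheme, with hypothetical process names and a plain Python loop standing in for the patent's message loop:

```python
class ProcessAgent:
    """Toy agent for one manufacturing process; reacts to discrete events."""
    def __init__(self, name):
        self.name = name
        self.resources = 0
        self.output = 0

    def handle(self, event):
        # Each discrete event triggers a programmed response, as in the claim.
        if event == "clock_tick" and self.resources > 0:
            self.resources -= 1
            self.output += 1            # consume a resource, produce one unit
        elif event == "resources_received":
            self.resources += 1
        elif event == "request_output":
            produced, self.output = self.output, 0
            return produced
        return None

# Message loop: transmit each discrete event to every agent in turn
agents = [ProcessAgent("cutting"), ProcessAgent("welding")]
events = ["resources_received", "clock_tick", "request_output"]
for event in events:
    results = [agent.handle(event) for agent in agents]
print(results)   # units produced, collected on the final request_output event
```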
Reliability Assessment of Active Distribution System Using Monte Carlo Simulation Method
Directory of Open Access Journals (Sweden)
Shaoyun Ge
2014-01-01
Full Text Available In this paper we treat the reliability assessment problem of an active distribution system at low and high DG penetration levels using the Monte Carlo simulation method. The problem is formulated as a two-case program: a simulation program for low penetration and one for high penetration. The load shedding strategy and the simulation process are introduced in detail for each FMEA process. Results indicate that the integration of DG can improve the reliability of the system if the system is operated actively.
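The sequential Monte Carlo idea behind such reliability assessments can be sketched as follows; the component failure rates and repair times are invented, and the paper's load-shedding and DG-islanding logic is omitted:

```python
import random

def simulate_feeder(failure_rates, repair_hours, years=10000, seed=1):
    """Sequential Monte Carlo sketch: any component failure interrupts the
    feeder; estimate mean interruptions per year and unavailability.
    failure_rates are per year; repair_hours is the outage per failure."""
    rng = random.Random(seed)
    interruptions = 0
    downtime = 0.0
    for _ in range(years):
        for lam, r in zip(failure_rates, repair_hours):
            t = 0.0
            while True:
                t += rng.expovariate(lam)   # exponential time to next failure
                if t > 1.0:                 # past the end of this year
                    break
                interruptions += 1
                downtime += r
    freq = interruptions / years            # approx. the sum of the rates
    unavail = downtime / (years * 8760.0)   # fraction of time out of service
    return freq, unavail

freq, unavail = simulate_feeder([0.1, 0.05], [4.0, 8.0])
print(freq, unavail)
```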
Modeling error distributions of growth curve models through Bayesian methods.
Zhang, Zhiyong
2016-06-01
Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
Radioactivity standards distribution program, 1978--1979. Interim report, 1978--1979
International Nuclear Information System (INIS)
Ziegler, L.H.
1978-06-01
A program for the distribution of calibrated radioactive samples, as one function of EPA's quality assurance program for environmental radiation measurements, is described. Included is a discussion of the objectives of the distribution program and a description of the preparation, availability, and distribution of calibrated radioactive samples. Instructions and application forms are included for laboratories desiring to participate in the program. This document is not a research report. It is designed for use by personnel of laboratories participating or desiring to participate in the Radioactivity Standards Distribution Program, which is a part of the U.S. Environmental Protection Agency's quality assurance program
Dual reference point temperature interrogating method for distributed temperature sensor
International Nuclear Information System (INIS)
Ma, Xin; Ju, Fang; Chang, Jun; Wang, Weijie; Wang, Zongliang
2013-01-01
A novel method based on dual temperature reference points is presented to interrogate the temperature in a distributed temperature sensing (DTS) system. This new method is suitable to overcome deficiencies due to the impact of DC offsets and the gain difference in the two signal channels of the sensing system during temperature interrogation. Moreover, this method can in most cases avoid the need to calibrate the gain and DC offsets in the receiver, data acquisition and conversion. An improved temperature interrogation formula is presented and the experimental results show that this method can efficiently estimate the channel amplification and system DC offset, thus improving the system accuracy. (letter)
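The dual-reference-point idea, two fiber sections held at known temperatures fixing the unknown channel gain and DC offset, can be sketched with a locally linearized signal model. All numbers are hypothetical, and a real DTS relates anti-Stokes/Stokes ratios to temperature nonlinearly:

```python
def calibrate_two_point(s_ref1, t_ref1, s_ref2, t_ref2):
    """Two reference points of a (locally linearized) signal-temperature
    relation s = G*t + C determine the channel gain G and DC offset C."""
    G = (s_ref2 - s_ref1) / (t_ref2 - t_ref1)
    C = s_ref1 - G * t_ref1
    return G, C

def interrogate(s, G, C):
    """Invert the calibrated relation to recover temperature from a reading."""
    return (s - C) / G

# Hypothetical readings at two reference coils held at known temperatures
G, C = calibrate_two_point(1.30, 25.0, 1.90, 55.0)
print(interrogate(1.50, G, C))   # temperature of an unknown section
```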
A code for obtaining temperature distribution by finite element method
International Nuclear Information System (INIS)
Bloch, M.
1984-01-01
The ELEFIB computer code, written in Fortran, uses the finite element method to calculate temperature distributions for linear and two-dimensional problems, in the steady-state regime or in the transient phase of heat transfer. The formulation of the equations uses the Galerkin method. Some examples are shown and the results are compared with other papers. The comparative evaluation shows that the code gives good values. (M.C.K.) [pt
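A Galerkin finite element calculation of a steady temperature distribution, in the spirit of (but far simpler than) such a code, can be sketched in one dimension with linear elements:

```python
import numpy as np

def fem_temperature(n_elems, length, k, q):
    """Galerkin finite elements (linear hat functions) for steady 1-D
    conduction -k*T'' = q with T = 0 at both ends; returns nodal temperatures."""
    n = n_elems + 1
    h = length / n_elems
    K = np.zeros((n, n))
    F = np.zeros(n)
    for e in range(n_elems):                # assemble element stiffness and load
        i, j = e, e + 1
        ke = k / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        fe = q * h / 2.0 * np.ones(2)
        K[np.ix_([i, j], [i, j])] += ke
        F[[i, j]] += fe
    # Dirichlet conditions T(0) = T(L) = 0: solve on interior nodes only
    T = np.zeros(n)
    T[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])
    return T

T = fem_temperature(n_elems=8, length=1.0, k=1.0, q=1.0)
print(T)   # exact solution is T(x) = x(1-x)/2, maximum 0.125 at midspan
```

For this problem the linear-element Galerkin solution reproduces the exact temperature at the nodes, a well-known 1-D superconvergence result.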
Distributed Interior-point Method for Loosely Coupled Problems
DEFF Research Database (Denmark)
Pakazad, Sina Khoshfetrat; Hansson, Anders; Andersen, Martin Skovgaard
2014-01-01
In this paper, we put forth distributed algorithms for solving loosely coupled unconstrained and constrained optimization problems. Such problems are usually solved using algorithms that are based on a combination of decomposition and first order methods. These algorithms are commonly very slow a...
Community Based Distribution of Child Spacing Methods at ...
African Journals Online (AJOL)
uses volunteer CBD agents. Mrs. E.F. Pelekamoyo, Service Delivery Officer, National Family Welfare Council of Malawi, Private Bag 308, Lilongwe 3, Malawi. Community Based Distribution of Child Spacing Methods ... than us at the Hospital; male motivators, by talking to their male counterparts, help them to accept that their ...
Correction of measured multiplicity distributions by the simulated annealing method
International Nuclear Information System (INIS)
Hafidouni, M.
1993-01-01
Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs
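The annealing loop itself is generic; a sketch on a toy objective follows (the actual unfolding cost, which scores candidate true distributions against the observed multiplicity distribution, is not reproduced here):

```python
import math, random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.995, steps=5000, seed=7):
    """Generic simulated-annealing loop: worse moves are accepted with
    probability exp(-delta/T), letting the search escape local minima
    while the temperature T cools geometrically."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy < c or rng.random() < math.exp(-(cy - c) / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= cooling
    return best, best_c

# Toy objective with competing minima near x = -3 and x = +3
f = lambda x: (x * x - 9) ** 2 + 5 * math.sin(3 * x)
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x_best, c_best = simulated_annealing(f, step, x0=0.0)
print(x_best, c_best)
```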
International Nuclear Information System (INIS)
Hida, Kazuki; Yoshioka, Ritsuo
1992-01-01
A method has been developed for optimizing the axial enrichment and gadolinia distributions for the reload BWR fuel under control rod programming. The problem was to minimize the enrichment requirement subject to the criticality and axial power peaking constraints. The optimization technique was based on the successive linear programming method, each linear programming problem being solved by a goal programming algorithm. A rapid and practically accurate core neutronics model, named the modified one-dimensional core model, was developed to describe the batch-averaged burnup behavior of the reload fuel. A core burnup simulation algorithm, employing a burnup-power-void iteration, was also developed to calculate the rigorous equilibrium cycle performance. This method was applied to the optimization of axial two- and 24-region fuels for demonstrative purposes. The optimal solutions for both fuels have proved the optimality of what is called burnup shape optimization spectral shift. For the two-region fuel with a practical power peaking of 1.4, the enrichment distribution was nearly uniform, because a bottom-peaked burnup shape flattens the axial power shape. Optimization of the 24-region fuel has shown a potential improvement in BWR fuel cycle economics, which will guide future advancement in BWR fuel designs. (author)
Integrating packing and distribution problems and optimization through mathematical programming
Directory of Open Access Journals (Sweden)
Fabio Miguel
2016-06-01
Full Text Available This paper analyzes the integration of two combinatorial problems that frequently arise in production and distribution systems. One is the Bin Packing Problem (BPP), which involves finding an ordering of objects of different volumes to be packed into the minimal number of containers of the same or different sizes. An optimal solution to this NP-Hard problem can be approximated by means of meta-heuristic methods. On the other hand, we consider the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW), which is a variant of the Travelling Salesman Problem (again an NP-Hard problem) with extra constraints. Here we model these two problems in a single framework and use an evolutionary meta-heuristic to solve them jointly. Furthermore, we use data from a real-world company as a test-bed for the method introduced here.
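A classic constructive heuristic for the BPP half of such an integrated model is first-fit decreasing, sketched below; the volumes and capacity are made up, and the paper itself uses an evolutionary meta-heuristic rather than this rule:

```python
def first_fit_decreasing(volumes, capacity):
    """First-fit-decreasing heuristic for the Bin Packing Problem: sort items
    by volume, place each in the first bin with room, open a new bin if none."""
    bins = []
    for v in sorted(volumes, reverse=True):
        for b in bins:
            if sum(b) + v <= capacity:
                b.append(v)
                break
        else:
            bins.append([v])        # no existing bin fits: open a new one
    return bins

packing = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
print(len(packing), packing)   # 2 bins; total volume 20 fills them exactly
```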
Isotope Production and Distribution Program's Fiscal Year 1997 financial statement audit
Energy Technology Data Exchange (ETDEWEB)
NONE
1998-03-27
The Department of Energy Isotope Production and Distribution Program mission is to serve the national need for a reliable supply of isotope products and services for medicine, industry and research. The program produces and sells hundreds of stable and radioactive isotopes that are widely utilized by domestic and international customers. Isotopes are produced only where there is no U.S. private sector capability or other production capacity is insufficient to meet U.S. needs. The Department encourages private sector investment in new isotope production ventures and will sell or lease its existing facilities and inventories for commercial purposes. The Isotope Program reports to the Director of the Office of Nuclear Energy, Science and Technology. The Isotope Program operates under a revolving fund established by the Fiscal Year (FY) 1990 Energy and Water Appropriations Act and maintains financial viability by earning revenues from the sale of isotopes and services and through annual appropriations. The FY 1995 Energy and Water Appropriations Act modified predecessor acts to allow prices charged for Isotope Program products and services to be based on production costs, market value, the needs of the research community, and other factors. Although the Isotope Program functions as a business, prices set for small-volume, high-cost isotopes that are needed for research purposes may not achieve full-cost recovery. As a result, isotopes produced by the Isotope Program for research and development are priced to provide a reasonable return to the U.S. Government without discouraging their use. Commercial isotopes are sold on a cost-recovery basis. Because of its pricing structure, when selecting isotopes for production, the Isotope Program must constantly balance current isotope demand, market conditions, and societal benefits with its determination to operate at the lowest possible cost to U.S. taxpayers. Thus, this report provides a financial analysis of this situation.
Methods for reconstruction of the density distribution of nuclear power
International Nuclear Information System (INIS)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.
2015-01-01
Highlights: • Two methods for reconstruction of the pin power distribution are presented. • The ARM method uses an analytical solution of the 2D diffusion equation. • The PRM method uses a polynomial solution without boundary conditions. • The maximum errors in pin power reconstruction occur in the peripheral water region. • The errors are significantly less in the inner area of the core. - Abstract: In the analytical reconstruction method (ARM), the two-dimensional (2D) neutron diffusion equation is analytically solved for two energy groups (2G) and homogeneous nodes with dimensions of a fuel assembly (FA). The solution employs a 2D fourth-order expansion for the axial leakage term. The Nodal Expansion Method (NEM) provides the solution average values as the four average partial currents on the surfaces of the node, the average flux in the node and the multiplying factor of the problem. The expansion coefficients for the axial leakage are determined directly from the NEM method or can be determined in the reconstruction method. A new polynomial reconstruction method (PRM) is implemented based on the 2D expansion for the axial leakage term. The ARM method uses the four average currents on the surfaces of the node and the four average fluxes in the corners of the node as boundary conditions and the average flux in the node as a consistency condition. To determine the average fluxes in the corners of the node an analytical solution is employed. This analytical solution uses the average fluxes on the surfaces of the node as boundary conditions, and discontinuities in the corners are incorporated. The polynomial and analytical solutions to the PRM and ARM methods, respectively, represent the homogeneous flux distributions. The detailed distributions inside a FA are estimated by the product of the homogeneous distribution and a local heterogeneous form function. Moreover, the form functions of power are used. The results show that the methods have good accuracy when compared with reference values and
Multi-level methods and approximating distribution functions
International Nuclear Information System (INIS)
Wilson, D.; Baker, R. E.
2016-01-01
Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
Multi-level methods and approximating distribution functions
Energy Technology Data Exchange (ETDEWEB)
Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E. [Mathematical Institute, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom)
2016-07-15
Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
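The multi-level idea described above, many cheap low-accuracy sample paths plus coupled correction terms, can be sketched on a problem with a known answer: the mean of geometric Brownian motion under Euler discretization. This toy stands in for the tau-leap setting of the abstract, and all parameters are illustrative:

```python
import math, random

def mlmc_gbm(levels=4, samples=20000, mu=0.05, sigma=0.2, T=1.0, s0=1.0, seed=3):
    """Multi-level Monte Carlo sketch for E[S_T] of geometric Brownian motion.
    Level 0 is a crude one-step Euler estimate; each level l adds the mean
    correction E[fine_l - coarse_l] using coupled Brownian increments."""
    rng = random.Random(seed)

    def euler_pair(n_fine):
        # one coupled (fine, coarse) path pair; a coarse step spans two fine steps
        dt = T / n_fine
        s_f, s_c, dw_c = s0, s0, 0.0
        for i in range(n_fine):
            dw = rng.gauss(0.0, math.sqrt(dt))
            s_f += s_f * (mu * dt + sigma * dw)
            dw_c += dw
            if i % 2 == 1:
                s_c += s_c * (mu * 2 * dt + sigma * dw_c)
                dw_c = 0.0
        return s_f, s_c

    # level 0: many cheap, low-accuracy one-step paths
    est = sum(s0 * (1.0 + mu * T + sigma * rng.gauss(0.0, math.sqrt(T)))
              for _ in range(samples)) / samples
    # correction levels, telescoping toward the finest discretization
    for level in range(1, levels + 1):
        pairs = (euler_pair(2 ** level) for _ in range(samples))
        est += sum(f - c for f, c in pairs) / samples
    return est

print(mlmc_gbm())   # the exact value is s0*exp(mu*T), about 1.0513
```

The corrections are cheap because the coupled fine and coarse paths share Brownian increments, so their difference has small variance, which is the source of the method's cost savings.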
Advanced airflow distribution methods for reducing exposure of indoor pollution
DEFF Research Database (Denmark)
Cao, Guangyu; Nielsen, Peter Vilhelm; Melikov, Arsen Krikor
2017-01-01
The adverse effects of various indoor pollutants on occupants’ health have been recognized. In public spaces flu viruses may spread from person to person by airflow generated by traditional ventilation methods, such as natural ventilation and mixing ventilation (MV). Personalized ventilation (PV) supplies clean air close to the occupant and directly into the breathing zone. Studies show that it improves the inhaled air quality and reduces the risk of airborne cross-infection in comparison with total volume (TV) ventilation. However, it is still challenging for PV and other advanced air distribution methods to reduce the exposure to gaseous and particulate pollutants under disturbed conditions and to ensure thermal comfort at the same time. The objective of this study is to analyse the performance of different advanced airflow distribution methods for protecting occupants from exposure to indoor pollutants.
System and Method for Monitoring Distributed Asset Data
Gorinevsky, Dimitry (Inventor)
2015-01-01
A computer-based monitoring system and monitoring method implemented in computer software for detecting, estimating, and reporting the condition states, their changes, and anomalies for many assets. The assets are of the same type, are operated over a period of time, and are outfitted with data collection systems. The proposed monitoring method accounts for the variability of working conditions for each asset by using a regression model that characterizes asset performance. The assets are of the same type but not identical. The proposed monitoring method accounts for asset-to-asset variability; it also accounts for drifts and trends in the asset condition and data. The proposed monitoring system can perform distributed processing of massive amounts of historical data without discarding any useful information where moving all the asset data into one central computing system might be infeasible. The overall processing includes distributed preprocessing of data records from each asset to produce compressed data.
Synchronization Methods for Three Phase Distributed Power Generation Systems
DEFF Research Database (Denmark)
Timbus, Adrian Vasile; Teodorescu, Remus; Blaabjerg, Frede
2005-01-01
Nowadays, there is a general trend to increase electricity production using Distributed Power Generation Systems (DPGS) based on renewable energy resources such as wind, sun or hydrogen. If these systems are not properly controlled, their connection to the utility network can generate problems on the grid side. Therefore, considerations about power generation, safe running and grid synchronization must be made before connecting these systems to the utility network. This paper mainly deals with the grid synchronization issues of distributed systems. An overview of the synchronization methods is presented.
Method of imaging the electrical conductivity distribution of a subsurface
Johnson, Timothy C.
2017-09-26
A method of imaging electrical conductivity distribution of a subsurface containing metallic structures with known locations and dimensions is disclosed. Current is injected into the subsurface to measure electrical potentials using multiple sets of electrodes, thus generating electrical resistivity tomography measurements. A numeric code is applied to simulate the measured potentials in the presence of the metallic structures. An inversion code is applied that utilizes the electrical resistivity tomography measurements and the simulated measured potentials to image the subsurface electrical conductivity distribution and remove effects of the subsurface metallic structures with known locations and dimensions.
Translation techniques for distributed-shared memory programming models
Energy Technology Data Exchange (ETDEWEB)
Fuller, Douglas James [Iowa State Univ., Ames, IA (United States)
2005-01-01
The high performance computing community has experienced an explosive improvement in distributed-shared memory hardware. Driven by increasing real-world problem complexity, this explosion has ushered in vast numbers of new systems. Each new system presents new challenges to programmers and application developers. Part of the challenge is adapting to new architectures with new performance characteristics. Different vendors release systems with widely varying architectures that perform differently in different situations. Furthermore, since vendors need only provide a single performance number (total MFLOPS, typically for a single benchmark), they initially have a strong incentive to optimize only the API of their choice. Consequently, only a fraction of the available APIs are well optimized on most systems. This causes issues with porting and writing maintainable software, let alone issues for programmers burdened with mastering each new API as it is released. Also, programmers wishing to use a certain machine must choose their API based on the underlying hardware instead of the application. This thesis argues that a flexible, extensible translator for distributed-shared memory APIs can help address some of these issues. For example, a translator might take as input code in one API and output an equivalent program in another. Such a translator could provide instant porting for applications to new systems that do not support the application's library or language natively. While open-source APIs are abundant, they do not perform optimally everywhere. A translator would also allow performance testing using a single base code translated to a number of different APIs. Most significantly, this type of translator frees programmers to select the most appropriate API for a given application based on the application (and developer) itself instead of the underlying hardware.
Visualizing measurement for 3D smooth density distributions by means of linear programming
International Nuclear Information System (INIS)
Tayama, Norio; Yang, Xue-dong
1994-01-01
This paper is concerned with the theoretical possibility of a new visualizing measurement method based on an optimum 3D reconstruction from a few selected projections. A theory of optimum 3D reconstruction by linear programming is discussed, utilizing a few projections of a sampled 3D smooth-density-distribution model which satisfies the condition of the 3D sampling theorem. First, by use of the sampling theorem, it is shown that we can set up simultaneous simple equations which correspond to the case of parallel beams. Then we solve the simultaneous simple equations by means of a linear programming algorithm, and we obtain an optimum 3D density distribution image with minimum reconstruction error. The results of computer simulation with the algorithm are presented. (author)
Chassin, David P [Pasco, WA; Donnelly, Matthew K [Kennewick, WA; Dagle, Jeffery E [Richland, WA
2011-12-06
Electrical power distribution control methods, electrical energy demand monitoring methods, and power management devices are described. In one aspect, an electrical power distribution control method includes providing electrical energy from an electrical power distribution system, applying the electrical energy to a load, providing a plurality of different values for a threshold at a plurality of moments in time and corresponding to an electrical characteristic of the electrical energy, and adjusting an amount of the electrical energy applied to the load responsive to an electrical characteristic of the electrical energy triggering one of the values of the threshold at the respective moment in time.
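As a minimal sketch of the control method claimed above, the snippet below adjusts a load when an electrical characteristic (here grid frequency, an assumed choice) crosses a threshold whose value differs at each moment in time; all numbers are illustrative:

```python
def adjust_load(load_kw, frequency_hz, threshold_hz, shed_fraction=0.5):
    """Shed a fraction of the load when frequency drops below the current threshold.
    The threshold values and shed fraction are illustrative assumptions."""
    if frequency_hz < threshold_hz:
        return load_kw * (1.0 - shed_fraction)
    return load_kw

# a different threshold value at each moment in time, as the claim describes
thresholds = [59.95, 59.90, 59.98]
freqs      = [59.93, 59.93, 59.93]
loads = [adjust_load(10.0, f, th) for f, th in zip(freqs, thresholds)]
# the same measured frequency triggers shedding at some moments but not others
```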
Communication Systems and Study Method for Active Distribution Power systems
DEFF Research Database (Denmark)
Wei, Mu; Chen, Zhe
Due to the involvement and evolvement of communication technologies in contemporary power systems, the applications of modern communication technologies in distribution power systems are becoming increasingly important. In this paper, the International Organization for Standardization (ISO) reference seven-layer model of communication systems and the main communication technologies and protocols on each corresponding layer are introduced. Some newly developed communication techniques, like Ethernet, are discussed with reference to possible applications in distributed power systems. The suitability of the communication technology to the distribution power system with active renewable energy based generation units is discussed. Subsequently, typical possible communication systems are studied by simulation. In this paper, a novel method of integrating communication system impact into power system analysis is presented.
Distribution-independent hierarchical N-body methods
International Nuclear Information System (INIS)
Aluru, S.
1994-01-01
The N-body problem is to simulate the motion of N particles under the influence of mutual force fields based on an inverse square law. The problem has applications in several domains including astrophysics, molecular dynamics, fluid dynamics, radiosity methods in computer graphics and numerical complex analysis. Research efforts have focused on reducing the O(N²) time per iteration required by the naive algorithm of computing each pairwise interaction. Widely respected among these are the Barnes-Hut and Greengard methods. Greengard claims his algorithm reduces the complexity to O(N) time per iteration. Throughout this thesis, we concentrate on rigorous, distribution-independent, worst-case analysis of the N-body methods. We show that Greengard's algorithm is not O(N), as claimed. Both Barnes-Hut and Greengard's methods depend on the same data structure, which we show is distribution-dependent. For the distribution that results in the smallest running time, we show that Greengard's algorithm is Ω(N log² N) in two dimensions and Ω(N log⁴ N) in three dimensions. We have designed a hierarchical data structure whose size depends entirely upon the number of particles and is independent of the distribution of the particles. We show that both Greengard's and Barnes-Hut algorithms can be used in conjunction with this data structure to reduce their complexity. Apart from reducing the complexity of the Barnes-Hut algorithm, the data structure also permits more accurate error estimation. We present two- and three-dimensional algorithms for creating the data structure. The multipole method designed using this data structure has a complexity of O(N log N) in two dimensions and O(N log² N) in three dimensions.
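The O(N²) cost that the hierarchical methods above attack comes from the naive all-pairs computation, sketched here for a 2D inverse-square law with unit gravitational constant (an illustrative simplification):

```python
import math

def direct_forces(pos, mass):
    """Naive O(N^2) pairwise inverse-square forces: the per-iteration cost
    that Barnes-Hut and multipole methods are designed to reduce."""
    n = len(pos)
    forces = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r2 = dx * dx + dy * dy
            r = math.sqrt(r2)
            f = mass[i] * mass[j] / r2   # G = 1 for illustration
            forces[i][0] += f * dx / r   # resolve along the unit vector to j
            forces[i][1] += f * dy / r
    return forces

# two unit masses a distance 2 apart attract with magnitude 1/4
forces = direct_forces([[0.0, 0.0], [2.0, 0.0]], [1.0, 1.0])
```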
International Nuclear Information System (INIS)
Zakariazadeh, Alireza; Jadid, Shahram; Siano, Pierluigi
2014-01-01
Highlights: • Environmental/economical scheduling of energy and reserve. • Simultaneous participation of loads in both energy and reserve scheduling. • Aggregate wind generation and demand uncertainties in a stochastic model. • Stochastic scheduling of energy and reserve in a distribution system. • Demand response providers’ participation in energy and reserve scheduling. - Abstract: In this paper, a stochastic multi-objective economical/environmental operational scheduling method is proposed to schedule energy and reserve in a smart distribution system with high penetration of wind generation. The proposed multi-objective framework, based on the augmented ε-constraint method, is used to minimize the total operational costs and emissions and to generate Pareto-optimal solutions for the energy and reserve scheduling problem. Moreover, a fuzzy decision-making process is employed to extract one of the Pareto-optimal solutions as the best compromise non-dominated solution. The wind power and demand forecast errors are considered in this approach, and the reserve can be furnished by the main grid as well as distributed generators and responsive loads. The consumers participate in both energy and reserve markets using various demand response programs. In order to facilitate small and medium loads' participation in demand response programs, a Demand Response Provider (DRP) aggregates offers for load reduction. In order to solve the proposed optimization model, the Benders decomposition technique is used to convert the large-scale mixed-integer non-linear problem into mixed-integer linear programming and non-linear programming problems. The effectiveness of the proposed scheduling approach is verified on a 41-bus distribution test system over a 24-h period.
45 CFR 2519.600 - How are funds for Higher Education programs distributed?
2010-10-01
45 CFR § 2519.600 (Public Welfare; Corporation for National and Community Service, Higher Education Innovative Programs for Community Service, Distribution of Funds): How are funds for Higher Education programs distributed?
Energy Technology Data Exchange (ETDEWEB)
Moskovitz, D.; Harrington, C.; Shirley, W.; Cowart, R.; Sedano, R.; Weston, F.
2002-10-01
Designing and implementing credit-based pilot programs for distributed resources is a low-cost, low-risk opportunity to find out how these resources can help defer or avoid costly electric power system (utility grid) distribution upgrades. This report describes implementation options for deaveraged distribution credits and distributed resource development zones. Developing workable programs implementing these policies can dramatically increase the deployment of distributed resources in ways that benefit distributed resource vendors, users, and distribution utilities. This report is one in the State Electricity Regulatory Policy and Distributed Resources series developed under contract to NREL (see Annual Technical Status Report of the Regulatory Assistance Project: September 2000-September 2001, NREL/SR-560-32733). Other titles in this series are: (1) Accommodating Distributed Resources in Wholesale Markets, NREL/SR-560-32497; (2) Distributed Resources and Electric System Reliability, NREL/SR-560-32498; (3) Distribution System Cost Methodologies for Distributed Generation, NREL/SR-560-32500; (4) Distribution System Cost Methodologies for Distributed Generation Appendices, NREL/SR-560-32501.
Application of autoradiographic methods for contaminant distribution studies in soils
International Nuclear Information System (INIS)
Povetko, O.G.; Higley, K.A.
2000-01-01
In order to determine the physical location of contaminants in soil, solidified soil 'thin' sections, which preserve the undisturbed structural characteristics of the original soil, were prepared. This paper describes the application of different autoradiographic methods to identify the distribution of selected nuclides along key structural features of sample soils and the sizes of 'hot particles' of contaminant. These autoradiographic methods included contact autoradiography using CR-39 (Homalite Plastics) plastic alpha track detectors and neutron-induced autoradiography that produced fission fragment tracks in Lexan (Thrust Industries, Inc.) plastic detectors. Intact soil samples containing weapons-grade plutonium from the Rocky Flats Environmental Test Site and control samples from outside the site location were used in thin soil section preparation. The distribution of actinide particles was observed and analyzed through the soil section depth profile from the surface to the 15-cm depth. The combination of the two autoradiographic methods made it possible to distinguish alpha-emitting particles of natural U, 239+240Pu and non-fissile alpha-emitters. Locations of 990 alpha 'stars' caused by 239+240Pu and 241Am 'hot particles' were recorded; the particles were sized, and their size-frequency, depth and activity distributions were analyzed. Several large colloidal conglomerates of 239+240Pu and 241Am 'hot particles' were found in the soil profile. Their alpha and fission fragment 'star' images were micro-photographed. (author)
Rock sampling. [method for controlling particle size distribution
Blum, P. (Inventor)
1971-01-01
A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.
Method for adding nodes to a quantum key distribution system
Grice, Warren P
2015-02-24
An improved quantum key distribution (QKD) system and method are provided. The system and method introduce new clients at intermediate points along a quantum channel, where any two clients can establish a secret key without the need for a secret meeting between the clients. The new clients perform operations on photons as they pass through nodes in the quantum channel, and participate in a non-secret protocol that is amended to include the new clients. The system and method significantly increase the number of clients that can be supported by a conventional QKD system, with only a modest increase in cost. The system and method are compatible with a variety of QKD schemes, including polarization, time-bin, continuous variable and entanglement QKD.
A two-stage stochastic programming model for the optimal design of distributed energy systems
International Nuclear Information System (INIS)
Zhou, Zhe; Zhang, Jianyun; Liu, Pei; Li, Zheng; Georgiadis, Michael C.; Pistikopoulos, Efstratios N.
2013-01-01
Highlights: ► The optimal design of distributed energy systems under uncertainty is studied. ► A stochastic model is developed using a genetic algorithm and a Monte Carlo method. ► The proposed system possesses inherent robustness under uncertainty. ► The inherent robustness is due to energy storage facilities and grid connection. -- Abstract: A distributed energy system is a multi-input and multi-output energy system with substantial energy, economic and environmental benefits. The optimal design of such a complex system under energy demand and supply uncertainty poses significant challenges in terms of both modelling and corresponding solution strategies. This paper proposes a two-stage stochastic programming model for the optimal design of distributed energy systems. A two-stage decomposition based solution strategy is used to solve the optimization problem, with a genetic algorithm performing the search on the first-stage variables and a Monte Carlo method dealing with uncertainty in the second stage. The model is applied to the planning of a distributed energy system in a hotel. Detailed computational results are presented and compared with those generated by a deterministic model. The impacts of demand and supply uncertainty on the optimal design of distributed energy systems are systematically investigated using the proposed modelling framework and solution approach.
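A minimal sketch of the two-stage idea, with Monte Carlo sampling of the uncertain demand in the second stage and an exhaustive search standing in for the paper's genetic algorithm; the cost coefficients and demand distribution are invented for illustration:

```python
import random

def expected_cost(capacity, rng, n_samples=4000,
                  invest=3.0, run=1.0, buy=5.0):
    """Second stage evaluated by Monte Carlo: unmet demand is bought from the
    grid at a premium. All coefficients and the demand model are illustrative."""
    total = 0.0
    for _ in range(n_samples):
        demand = max(rng.gauss(10.0, 2.0), 0.0)   # uncertain energy demand
        served = min(capacity, demand)            # self-supplied energy
        shortfall = demand - served               # bought from the grid
        total += run * served + buy * shortfall
    return invest * capacity + total / n_samples

rng = random.Random(1)
# first stage: exhaustive search over candidate capacities (GA stand-in)
best = min(range(0, 21), key=lambda c: expected_cost(c, rng))
```

With these numbers the trade-off is newsvendor-like: extra capacity costs 3 per unit but saves 4 per unit of avoided grid purchase, so the optimum sits a little below the mean demand.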
Interior-Point Methods for Linear Programming: A Review
Singh, J. N.; Singh, D.
2002-01-01
The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most of the interior-point methods belong to any of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…
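Of the three categories, affine-scaling methods are the simplest to sketch. The routine below is a textbook primal affine-scaling iteration for a standard-form LP, not any specific method from the review; the test problem is illustrative:

```python
import numpy as np

def affine_scaling(A, b, c, x, iters=60, gamma=0.9):
    """Primal affine-scaling sketch for min c@x s.t. A@x = b, x > 0,
    starting from a strictly feasible interior point x."""
    for _ in range(iters):
        D2 = np.diag(x * x)                       # scaling by the current iterate
        y = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)
        z = c - A.T @ y                           # reduced costs
        dx = -D2 @ z                              # steepest descent in scaled space
        if np.all(dx >= -1e-12):
            break                                  # no descent direction left
        # step a fraction gamma of the way to the boundary x >= 0
        step = gamma * min(-xi / di for xi, di in zip(x, dx) if di < -1e-12)
        x = x + step * dx
    return x

# min -x1 - x2  s.t.  x1 + x2 + s = 1, all variables >= 0 (optimal value -1)
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([-1.0, -1.0, 0.0])
x = affine_scaling(A, b, c, np.array([0.2, 0.3, 0.5]))
```

Every iterate stays strictly inside the feasible region, which is the defining feature of interior-point methods as opposed to the simplex method's vertex-to-vertex walk.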
2010-02-02
DEPARTMENT OF TRANSPORTATION, Pipeline and Hazardous Materials Safety Administration, 49 CFR Part ...: Integrity Management Program for Gas Distribution Pipelines; Correction. AGENCY: Pipeline and Hazardous Materials Safety Administration. The rule amends the regulations to require operators of gas distribution pipelines to develop and implement integrity management programs.
Distributed Research Project Scheduling Based on Multi-Agent Methods
Directory of Open Access Journals (Sweden)
Constanta Nicoleta Bodea
2011-01-01
Full Text Available Different project planning and scheduling approaches have been developed. Operational Research (OR) provides two major planning techniques: CPM (Critical Path Method) and PERT (Program Evaluation and Review Technique). Due to project complexity and the difficulty of using classical methods, new approaches were developed. Artificial Intelligence (AI) initially promoted the automatic planner concept, but model-based planning and scheduling methods emerged later on. The paper addresses the project scheduling optimization problem, where projects are seen as Complex Adaptive Systems (CAS). Taking into consideration two different approaches for project scheduling optimization, TCPSP (Time-Constrained Project Scheduling) and RCPSP (Resource-Constrained Project Scheduling), the paper focuses on a multi-agent implementation in MATLAB for TCPSP. Using a research project as a case study, the paper includes a comparison between two multi-agent methods: Genetic Algorithm (GA) and Ant Colony Optimization (ACO).
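The CPM technique mentioned above reduces to a forward and a backward pass over the task graph. A minimal sketch, with an invented four-task project:

```python
def critical_path(tasks):
    """CPM via earliest/latest finish times.
    tasks = {name: (duration, [predecessors])}, listed in topological order."""
    # forward pass: earliest finish time of each task
    ef = {}
    for name in tasks:
        dur, preds = tasks[name]
        ef[name] = max((ef[p] for p in preds), default=0.0) + dur
    project = max(ef.values())
    # backward pass: latest finish time without delaying the project
    lf = {name: project for name in tasks}
    for name in reversed(list(tasks)):
        dur, preds = tasks[name]
        for p in preds:
            lf[p] = min(lf[p], lf[name] - dur)
    # critical tasks are those with zero slack
    critical = [n for n in tasks if lf[n] - ef[n] == 0.0]
    return project, critical

# illustrative project: A and B can start at once, C needs A, D needs B and C
tasks = {"A": (3, []), "B": (2, []), "C": (4, ["A"]), "D": (1, ["B", "C"])}
length, critical = critical_path(tasks)   # critical path A -> C -> D, length 8
```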
Discrete method for design of flow distribution in manifolds
International Nuclear Information System (INIS)
Wang, Junye; Wang, Hualin
2015-01-01
Flow in manifold systems is encountered in designs of various industrial processes, such as fuel cells, microreactors, microchannels, plate heat exchangers, and radial flow reactors. The uniformity of flow distribution in a manifold is a key indicator of the performance of the process equipment. In this paper, a discrete method for a U-type arrangement was developed to evaluate the uniformity of the flow distribution and the pressure drop, and was then used for direct comparisons between the U-type and the Z-type. The uniformity of the U-type is generally better than that of the Z-type in most cases for small ζ and large M. The U-type and the Z-type approach each other as ζ increases or M decreases. However, the Z-type is more sensitive to structures than the U-type and approaches uniform flow distribution faster than the U-type as M decreases or ζ increases. This provides a simple yet powerful tool for designers to evaluate and select a flow arrangement and offers practical measures for industrial applications. - Highlights: • Discrete methodology of flow field designs in manifolds with U-type arrangements. • Quantitative comparison between U-type and Z-type arrangements. • Discrete solution of flow distribution with varying flow coefficients. • Practical measures and guideline to design of manifold systems.
Corbin, B. A.; Seager, S.; Ross, A.; Hoffman, J.
2017-12-01
Distributed satellite systems (DSS) have emerged as an effective and cheap way to conduct space science, thanks to advances in the small satellite industry. However, relatively few space science missions have utilized multiple assets to achieve their primary scientific goals. Previous research on methods for evaluating mission concept designs has shown that distributed systems are rarely competitive with monolithic systems, partially because it is difficult to quantify the added value of DSSs over monolithic systems. Comparatively little research has focused on how DSSs can be used to achieve new, fundamental space science goals that cannot be achieved with monolithic systems, or on how to choose a design from a larger possible tradespace of options. There are seven emergent capabilities of distributed satellites: shared sampling, simultaneous sampling, self-sampling, census sampling, stacked sampling, staged sampling, and sacrifice sampling. These capabilities are either fundamentally, analytically, or operationally unique in their application to distributed science missions, and they can be leveraged to achieve science goals that are either impossible or difficult and costly to achieve with monolithic systems. The Responsive Systems Comparison (RSC) method combines Multi-Attribute Tradespace Exploration with Epoch-Era Analysis to examine benefits, costs, and flexible options in complex systems over the mission lifecycle. Modifications to the RSC method as it exists in previously published literature were made in order to more accurately characterize how value is derived from space science missions. New metrics help rank designs by the value derived over their entire mission lifecycle and show more accurate cumulative value distributions. The RSC method was applied to four case study science missions that leveraged the emergent capabilities of distributed satellites to achieve their primary science goals. In all four case studies, RSC showed how scientific value was derived over the mission lifecycle.
Probabilistic methods for maintenance program optimization
International Nuclear Information System (INIS)
Liming, J.K.; Smith, M.J.; Gekler, W.C.
1989-01-01
In today's regulatory and economic environments, it is more important than ever that managers, engineers, and plant staff join together in developing and implementing effective management plans for safety and economic risk. This need applies to both power generating stations and other process facilities. One of the most critical parts of these management plans is the development and continuous enhancement of a maintenance program that optimizes plant or facility safety and profitability. The ultimate objective is to maximize the potential for station or facility success, usually measured in terms of projected financial profitability, while meeting or exceeding meaningful and reasonable safety goals, usually measured in terms of projected damage or consequence frequencies. This paper describes the use of the latest concepts in developing and evaluating maintenance programs to achieve maintenance program optimization (MPO). These concepts are based on significant field experience gained through the integration and application of fundamentals developed for industry and Electric Power Research Institute (EPRI)-sponsored projects on preventive maintenance (PM) program development and reliability-centered maintenance (RCM).
Analytical method for determining the channel-temperature distribution
International Nuclear Information System (INIS)
Kurbatov, I.M.
1992-01-01
The distribution of the predicted temperature over the volume or cross section of the active zone is important for thermal calculations of reactors taking into account random deviations. This requires a laborious calculation which includes the following steps: separation of the nominal temperature field, within the temperature range, into intervals, in each of which the temperature is set equal to its average value in the interval; determination of the number of channels whose temperature falls within each interval; construction of the channel-temperature distribution in each interval in accordance with the weighted error function; and summation of the number of channels with the same temperature over all intervals. This procedure can be greatly simplified with the help of methods which eliminate numerous variant calculations when the nominal temperature field is 'refined' up to the optimal field according to different criteria. In the present paper a universal analytical method is proposed for determining, by changing the coefficients in the channel-temperature distribution function, the form of this function that reflects all conditions of operation of the elements in the active zone. The problem is solved for the temperature of the coolant at the outlet from the reactor channels.
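The interval-counting steps above can be sketched with the (Gaussian) error function: the expected number of channels below a given temperature is a sum of normal CDFs centred at the nominal channel temperatures. The nominal values and standard deviation below are illustrative assumptions, not reactor data:

```python
import math

def channels_below(temps, sigma, t):
    """Expected number of channels with temperature below t, when each channel's
    temperature deviates randomly (Gaussian with std sigma) about its nominal
    value; each term is a normal CDF evaluated via the error function."""
    return sum(0.5 * (1.0 + math.erf((t - ti) / (sigma * math.sqrt(2.0))))
               for ti in temps)

nominal = [300.0, 310.0, 320.0]   # illustrative nominal channel temperatures, K
total = channels_below(nominal, 5.0, 400.0)    # far above all nominals: ~3.0
half  = channels_below(nominal, 5.0, 310.0)    # by symmetry: exactly 1.5
```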
Nahar, J.; Rusyaman, E.; Putri, S. D. V. E.
2018-03-01
This research was conducted at Perum BULOG Sub-Divre Medan, the institution implementing the Raskin program for several regencies and cities in North Sumatera. Raskin is a program of distributing rice to the poor. In order to minimize rice distribution costs, rice should be allocated optimally. The method used in this study consists of the Improved Vogel Approximation Method (IVAM) to obtain an initial feasible solution, and Modified Distribution (MODI) to test the optimality of the solution. This study aims to determine whether the IVAM method can provide savings or cost efficiency in rice distribution. The calculation with IVAM yields an optimum cost of Rp945.241.715,5, lower than the company's calculation of Rp958.073.750,40. Thus, the use of IVAM can save rice distribution costs of Rp12.832.034,9.
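Rice allocation of this kind is a classic transportation problem. The sketch below solves an invented instance to optimality with an off-the-shelf LP solver rather than the paper's IVAM/MODI procedure, whose details are specific to that study; warehouse supplies, regional demands and unit costs are all illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# illustrative balanced transportation problem: 2 warehouses -> 3 regions
cost   = np.array([[4.0, 6.0, 9.0],
                   [5.0, 3.0, 7.0]])
supply = np.array([50.0, 60.0])
demand = np.array([30.0, 40.0, 40.0])   # totals match: 110 on both sides

m, n = cost.shape
# equality constraints: row sums equal supply, column sums equal demand
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0    # shipments out of warehouse i
for j in range(n):
    A_eq[m + j, j::n] = 1.0             # shipments into region j
b_eq = np.concatenate([supply, demand])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
plan = res.x.reshape(m, n)              # optimal shipment plan, total cost res.fun
```

IVAM and MODI hand-compute what the solver does here: a good initial basic feasible solution followed by optimality testing; for a balanced problem both routes reach the same minimum cost.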
International Nuclear Information System (INIS)
Fukushima, Edwardo F.; Hirose, Shigeo
2000-01-01
This paper introduces an attitude control scheme based on optimal force distribution using quadratic programming, which minimizes joint energy consumption. This method shares similarities with force distribution for multifingered hands, multiple coordinated manipulators and legged walking robots. In particular, an attitude control scheme was introduced inside the force distribution problem and successfully implemented for control of the articulated body mobile robot KR-II. This is an actual mobile robot composed of cylindrical segments linked in series by prismatic joints, giving it a long snake-like appearance. These prismatic joints are force controlled so that each segment's vertical motion can automatically follow terrain irregularities. An attitude control is necessary because this system acts like a set of wheeled inverted pendulum carts connected in series, and is therefore unstable by nature. The validity and effectiveness of the proposed method is verified by computer simulation and experiments with the robot KR-II. (author)
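When the only constraints are the force and moment balance equalities, the quadratic program for minimum joint energy has a closed-form minimum-norm solution. The three-joint example below is an illustrative simplification of the KR-II problem (a real controller would also handle force limits and the attitude terms):

```python
import numpy as np

def min_energy_forces(A, b):
    """Minimum-norm solution of the equality-constrained QP
    min 0.5 * f@f  s.t.  A @ f = b,  i.e.  f = A.T @ (A A.T)^-1 @ b."""
    return A.T @ np.linalg.solve(A @ A.T, b)

# illustrative: three vertical joint forces at positions -1, 0, +1 along the
# body must support a weight of 30 with zero net torque about the centre
A = np.array([[1.0, 1.0, 1.0],     # force balance
              [-1.0, 0.0, 1.0]])   # torque balance
b = np.array([30.0, 0.0])
f = min_energy_forces(A, b)        # the even split minimizes sum of squares
```

The even split [10, 10, 10] falls out because spreading the load equally minimizes the sum of squared joint forces, the simple stand-in here for joint energy.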
A Distributed System for Learning Programming On-Line
Verdu, Elena; Regueras, Luisa M.; Verdu, Maria J.; Leal, Jose P.; de Castro, Juan P.; Queiros, Ricardo
2012-01-01
Several Web-based on-line judges or on-line programming trainers have been developed in order to allow students to train their programming skills. However, their pedagogical functionalities in the learning of programming have not been clearly defined. EduJudge is a project which aims to integrate the "UVA On-line Judge", an existing…
Standard test method for distribution coefficients of inorganic species by the batch method
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This test method covers the determination of distribution coefficients of chemical species to quantify uptake onto solid materials by a batch sorption technique. It is a laboratory method primarily intended to assess sorption of dissolved ionic species subject to migration through pores and interstices of site-specific geomedia. It may also be applied to other materials such as manufactured adsorption media and construction materials. Application of the results to long-term field behavior is not addressed in this method. Distribution coefficients for radionuclides in selected geomedia are commonly determined for the purpose of assessing potential migratory behavior of contaminants in the subsurface of contaminated sites and waste disposal facilities. This test method is also applicable to parametric studies of the variables and mechanisms which contribute to the measured distribution coefficient. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard.
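The batch distribution coefficient follows from a simple mass balance: everything removed from solution is attributed to the solid. A sketch with invented numbers (the standard prescribes the experimental conditions, not these values):

```python
def distribution_coefficient(c0, ce, v_ml, m_g):
    """Batch-method Kd in mL/g: sorbed amount per gram of solid divided by the
    equilibrium solution concentration, Kd = ((c0 - ce) / ce) * (V / m).
    c0 and ce are initial and equilibrium concentrations in the same units."""
    return (c0 - ce) / ce * (v_ml / m_g)

# illustrative: initial 100, equilibrium 20 (same units), 50 mL contacting 2 g
kd = distribution_coefficient(100.0, 20.0, 50.0, 2.0)   # -> 100.0 mL/g
```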
Radon Measurement Proficiency (RMP) Program methods and devices
International Nuclear Information System (INIS)
Harrison, J.; Hoornbeek, J.; Jalbert, P.; Sensintaffar, E.; Hopper, R.
1991-01-01
The US EPA developed the voluntary Radon Measurement Proficiency Program in 1986 in response to a Federal and State need for measurement service firms to demonstrate their proficiency with radon measurement methods and devices. Since that time, the program has set basic standards for the radon measurement industry. The program has grown dramatically since its inception. In 1986, fewer than 50 companies participated in the program. By 1989, more than 5,000 companies were participating. Participants represent firms with an analytical capability as well as firms that rely upon another firm for analysis services. Since the beginning of the RMP Program, the Agency has learned a great deal about radon measurement methods and devices. This paper reviews the measurement devices used in the program and what the EPA has learned about them since the program's inception. Performance data from the RMP Program are used to highlight relevant findings.
The use of linear programming in optimization of HDR implant dose distributions
International Nuclear Information System (INIS)
Jozsef, Gabor; Streeter, Oscar E.; Astrahan, Melvin A.
2003-01-01
The introduction of high dose rate brachytherapy enabled optimization of dose distributions to be used on a routine basis. The objective of optimization is to homogenize the dose distribution within the implant while simultaneously satisfying dose constraints on certain points. This is accomplished by varying the time the source dwells at different locations. As the dose at any point is a linear function of the dwell times, a linear programming approach seems to be a natural choice. The dose constraints are inherently linear inequalities. Homogeneity requirements are linearized by minimizing the maximum deviation of the doses at points inside the implant from a prescribed dose. The revised simplex method was applied for the solution of this linear programming problem. In the homogenization process the possible source locations were chosen as optimization points. To avoid the problem of the singular value of the dose at a source location from the source itself, we define the 'self-contribution' as the dose at a small distance from the source. The effect of varying this distance is discussed. Test cases were optimized for planar, biplanar and cylindrical implants. A semi-irregular, fan-like implant with diverging needles was also investigated. Mean central dose calculation based on 3D Delaunay-triangulation of the source locations was used to evaluate the dose distributions. The optimization method resulted in homogeneous distributions (for brachytherapy). Additional dose constraints, when applied, were satisfied. The method is flexible enough to include other linear constraints, such as the inclusion of the centroids of the Delaunay-triangulation for homogenization, or limiting the maximum allowable dwell time.
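Because dose is linear in the dwell times, the min-max homogeneity objective becomes a standard linear program. The toy geometry and inverse-square-like dose kernel below are invented for illustration, and SciPy's HiGHS solver stands in for the revised simplex implementation described in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy version of the formulation (geometry and kernel are invented): the dose
# d_i = sum_j K[i, j] * t_j is linear in the dwell times t_j, so minimizing
# the maximum deviation |d_i - D| subject to t >= 0 is a linear program in
# the variables (t_1..t_n, u): minimize u s.t. -u <= K t - D <= u.
src = np.array([0.0, 1.0, 2.0, 3.0])    # dwell positions (cm)
pts = np.array([0.5, 1.5, 2.5])         # optimization points (cm)
K = 1.0 / (np.abs(pts[:, None] - src[None, :]) ** 2 + 0.25)  # dose kernel
D = 1.0                                 # prescribed dose (arbitrary units)

n = K.shape[1]
c = np.concatenate([np.zeros(n), [1.0]])          # minimize u only
A_ub = np.block([[K, -np.ones((len(pts), 1))],    # K t - u <= D
                 [-K, -np.ones((len(pts), 1))]])  # -K t - u <= -D
b_ub = np.concatenate([D * np.ones(len(pts)), -D * np.ones(len(pts))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
t, u = res.x[:n], res.x[n]
print(t, u)  # dwell times and the worst-case deviation from the prescription
```

At the optimum, u equals the largest deviation of any optimization point from the prescribed dose.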
Planning and Optimization Methods for Active Distribution Systems
DEFF Research Database (Denmark)
Abbey, Chad; Baitch, Alex; Bak-Jensen, Birgitte
distribution planning. Active distribution networks (ADNs) have systems in place to control a combination of distributed energy resources (DERs), defined as generators, loads and storage. With these systems in place, the ADN becomes an Active Distribution System (ADS). Distribution system operators (DSOs) have...
Data distribution method of workflow in the cloud environment
Wang, Yong; Wu, Junjuan; Wang, Ying
2017-08-01
Cloud computing for workflow applications provides high-efficiency computation and large storage capacity, but it also brings challenges to the protection of trade secrets and other private data. Because protecting private data increases the data transmission time, this paper presents a new data allocation algorithm based on the degree of collaborative data damage to improve the existing allocation strategy, in which safety depends on the private cloud and computation on the public cloud. In the initial stage, a static allocation method divides only the non-confidential data; in the operational stage, the data distribution scheme is dynamically adjusted as new data are generated. The experimental results show that the improved method is effective in reducing the data transmission time.
Combustor and method for distributing fuel in the combustor
Uhm, Jong Ho; Ziminsky, Willy Steve; Johnson, Thomas Edward; York, William David
2016-04-26
A combustor includes a tube bundle that extends radially across at least a portion of the combustor. The tube bundle includes an upstream surface axially separated from a downstream surface. A plurality of tubes extends from the upstream surface through the downstream surface, and each tube provides fluid communication through the tube bundle. A baffle extends axially inside the tube bundle between adjacent tubes. A method for distributing fuel in a combustor includes flowing a fuel into a fuel plenum defined at least in part by an upstream surface, a downstream surface, a shroud, and a plurality of tubes that extend from the upstream surface to the downstream surface. The method further includes impinging the fuel against a baffle that extends axially inside the fuel plenum between adjacent tubes.
The synchronization method for distributed small satellite SAR
Xing, Lei; Gong, Xiaochun; Qiu, Wenxun; Sun, Zhaowei
2007-11-01
One critical requirement for distributed small satellite SAR is the trigger time precision when all satellites turn on their radar loads. This trigger operation is controlled by a dedicated communication tool or GPS system. In this paper a hardware platform is proposed which integrates the navigation, attitude control, and data handling systems. Based on it, a probabilistic synchronization method with a ring architecture is proposed to meet the SAR time precision requirement. To simplify the design of the transceiver, half-duplex communication is used in this method. Research shows that time precision depends on the relative frequency drift rate, satellite number, retry times, read error and round delay length. Equipped with a crystal oscillator of 10^-11 magnitude short-term stability, this platform can achieve and maintain nanosecond-order time error in a typical three-satellite formation experiment during the whole operating process.
Computer program determines exact two-sided tolerance limits for normal distributions
Friedman, H. A.; Webb, S. R.
1968-01-01
Computer program determines by numerical integration the exact statistical two-sided tolerance limits, when the proportion between the limits is at least a specified number. The program is limited to situations in which the underlying probability distribution for the population sampled is the normal distribution with unknown mean and variance.
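The exact factors require numerical integration, as the program performs; a widely used closed-form companion is Howe's approximation for the two-sided tolerance factor k, sketched here (this is an approximation, not the exact method of the program).

```python
import math
from scipy import stats

# Approximate (not exact) two-sided tolerance factor via Howe's formula:
# the interval xbar +/- k*s contains at least proportion p of a normal
# population with confidence gamma. The referenced program computes k
# exactly by numerical integration; this closed form is a common stand-in.
def howe_k(n, p=0.95, gamma=0.95):
    z = stats.norm.ppf((1 + p) / 2)          # normal quantile for coverage p
    nu = n - 1
    chi2 = stats.chi2.ppf(1 - gamma, nu)     # lower chi-square quantile
    return z * math.sqrt(nu * (1 + 1 / n) / chi2)

print(round(howe_k(20), 3))  # close to the exact tabulated factor for n=20
```

For n = 20, p = 0.95, gamma = 0.95 the approximation is close to the exact tabulated value of about 2.75.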
Crovelli, R.A.; Balay, R.H.
1991-01-01
A general risk-analysis method was developed for petroleum-resource assessment and other applications. The triangular probability distribution is used as a model with an analytic aggregation methodology based on probability theory rather than Monte-Carlo simulation. Among the advantages of the analytic method are its computational speed and flexibility, and the saving of time and cost on a microcomputer. The input into the model consists of a set of components (e.g. geologic provinces) and, for each component, three potential resource estimates: minimum, most likely (mode), and maximum. Assuming a triangular probability distribution, the mean, standard deviation, and seven fractiles (F100, F95, F75, F50, F25, F5, and F0) are computed for each component, where, for example, the probability of more than F95 is equal to 0.95. The components are aggregated by combining the means, standard deviations, and respective fractiles under three possible situations: (1) perfect positive correlation, (2) complete independence, and (3) any degree of dependence between these two polar situations. A package of computer programs named the TRIAGG system was written in the Turbo Pascal 4.0 language for performing the analytic probabilistic methodology. The system consists of a program for processing triangular probability distribution assessments and aggregations, and a separate aggregation routine for aggregating aggregations. The user's documentation and program diskette of the TRIAGG system are available from USGS Open File Services. TRIAGG requires an IBM-PC/XT/AT compatible microcomputer with 256 kbyte of main memory, MS-DOS 3.1 or later, either two diskette drives or a fixed disk, and a 132 column printer. A graphics adapter and color display are optional.
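The analytic aggregation of the two polar cases can be sketched directly from standard triangular-distribution moments (the formulas below are textbook results, not code from the TRIAGG system, and the province estimates are invented).

```python
import math

# Sketch of TRIAGG-style analytic aggregation using standard
# triangular-distribution moment formulas (not the TRIAGG source).
def tri_mean_var(a, m, b):
    """Mean and variance of a triangular distribution with min a, mode m, max b."""
    mean = (a + m + b) / 3.0
    var = (a * a + m * m + b * b - a * m - a * b - m * b) / 18.0
    return mean, var

def aggregate(components, dependence="independent"):
    stats = [tri_mean_var(*c) for c in components]
    mean = sum(mu for mu, _ in stats)
    sds = [math.sqrt(v) for _, v in stats]
    if dependence == "perfect":   # perfect positive correlation: sds add
        sd = sum(sds)
    else:                         # complete independence: variances add
        sd = math.sqrt(sum(s * s for s in sds))
    return mean, sd

# Two hypothetical provinces with (min, mode, max) resource estimates:
provs = [(0.0, 2.0, 10.0), (1.0, 3.0, 8.0)]
print(aggregate(provs, "independent"))
print(aggregate(provs, "perfect"))
```

Intermediate degrees of dependence interpolate the aggregated standard deviation between these two polar results.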
The QUELCE Method: Using Change Drivers to Estimate Program Costs
2016-08-01
Contents (fragment): 2.4 Assign Conditional Probabilities; 2.5 Apply Uncertainty to Cost Formula Inputs for Scenarios; 2.6 Perform Monte Carlo Simulation to... Distribution Statement A: Approved for Public Release; Distribution is Unlimited. Introduction: The Cost Estimation Challenge. Because large-scale programs... challenged [Bliss 2012]. Improvements in cost estimation that would make these assumptions more precise and reduce early lifecycle uncertainty can...
International Nuclear Information System (INIS)
Yang, Mou; Zhao, Xiangyang; Meng, Yingfeng; Li, Gao; Zhang, Lin; Xu, Haiming; Tang, Daqian
2017-01-01
Highlights: • The different wellbore conditions of heat transfer models were developed. • Drill string assembly and casing programs impact down-hole temperatures. • The thermal performance in circulation and shut-in stages was investigated in depth. • The full-scale model coincided well with the measured field data. - Abstract: Heat exchange efficiency between each region of the wellbore and formation systems is influenced by the high thermal conductivity of the drill string and casing, which further affects the temperature distribution of the wellbore. Based on the energy conservation principle, the Modified Raymond, Simplified and Full-scale models were developed, which were solved by the fully implicit finite difference method. The results indicated that wellbore and formation temperatures were significantly influenced at the connection points between the drill collar and drill pipe, as well as the casing shoe. Apart from the near surface, little change was observed in the temperature distribution in the cement section. In the open-hole section, the temperature rapidly decreased in the circulation stage and gradually increased in the shut-in stage. Most importantly, the simulated result from the full-scale model coincided with the measured field data better than the other numerical models. These findings not only confirm the effect of the drill string assembly and casing programs on the wellbore and formation temperature distribution, but also contribute to resource exploration, drilling safety and reduced drilling costs.
Semi-definite Programming: methods and algorithms for energy management
International Nuclear Information System (INIS)
Gorge, Agnes
2013-01-01
The present thesis explores the potential of a powerful optimization technique, namely Semi-definite Programming (SDP), for addressing some difficult problems of energy management. We pursue two main objectives. The first consists of using SDP to provide tight relaxations of combinatorial and quadratic problems. A first relaxation, called 'standard', can be derived in a generic way, but it is generally desirable to reinforce it, by means of tailor-made tools or in a systematic fashion. These two approaches are implemented on different models of the Nuclear Outages Scheduling Problem, a famous combinatorial problem. We conclude this topic by experimenting with Lasserre's hierarchy on this problem, leading to a sequence of semi-definite relaxations whose optimal values tend to the optimal value of the initial problem. The second objective deals with the use of SDP for the treatment of uncertainty. We investigate an original approach called 'distributionally robust optimization', which can be seen as a compromise between stochastic and robust optimization and admits approximations in the form of an SDP. We compare the benefits of this method with respect to classical approaches on a demand/supply equilibrium problem. Finally, we propose a scheme for deriving SDP relaxations of MISOCP and we report promising computational results indicating that the semi-definite relaxation improves significantly on the continuous relaxation, while requiring a reasonable computational effort. SDP therefore proves to be a promising optimization method that offers great opportunities for innovation in energy management. (author)
A simple nodal force distribution method in refined finite element meshes
Energy Technology Data Exchange (ETDEWEB)
Park, Jai Hak [Chungbuk National University, Chungju (Korea, Republic of); Shin, Kyu In [Gentec Co., Daejeon (Korea, Republic of); Lee, Dong Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Cho, Seungyon [National Fusion Research Institute, Daejeon (Korea, Republic of)
2017-05-15
In finite element analyses, mesh refinement is frequently performed to obtain accurate stress or strain values or to accurately define the geometry. After mesh refinement, equivalent nodal forces should be calculated at the nodes in the refined mesh. If field variables and material properties are available at the integration points in each element, then the accurate equivalent nodal forces can be calculated using an adequate numerical integration. However, in certain circumstances, equivalent nodal forces cannot be calculated because field variable data are not available. In this study, a very simple nodal force distribution method was proposed. Nodal forces of the original finite element mesh are distributed to the nodes of refined meshes to satisfy the equilibrium conditions. The effect of element size should also be considered in determining the magnitude of the distributed nodal forces. A program was developed based on the proposed method, and several example problems were solved to verify the accuracy and effectiveness of the proposed method. The results show that an accurate stress field can be obtained from refined meshes using the proposed nodal force distribution method. In the example problems, the difference between the obtained maximum stress and the target stress value was less than 6 % in models with 8-node hexahedral elements and less than 1 % in models with 20-node hexahedral elements or 10-node tetrahedral elements.
Bayesian analysis of general failure data from an ageing distribution: advances in numerical methods
International Nuclear Information System (INIS)
Procaccia, H.; Villain, B.; Clarotti, C.A.
1996-01-01
EDF and ENEA carried out a joint research program for developing the numerical methods and computer codes needed for Bayesian analysis of component lives in the case of ageing. Early results of this study were presented at ESREL'94. Since then the following further steps have been taken: input data have been generalized to the case where observed lives are censored both on the right and on the left; allowable life distributions are Weibull and gamma, whose parameters are both unknown and can be statistically dependent; allowable priors are histograms relative to different parametrizations of the life distribution of concern; first- and second-order moments of the posterior distributions can be computed. In particular, the covariance gives important information about the degree of statistical dependence between the parameters of interest. An application of the code to the appearance of stress corrosion cracking in a tube of the PWR Steam Generator system is presented. (authors)
2012-02-24
... Conservation Program: Energy Conservation Standards for Distribution Transformers; Correction AGENCY: Office of... standards for distribution transformers. It was recently discovered that values in certain tables of the...,'' including distribution transformers. The Energy Policy Act of 1992 (EPACT 1992), Public Law 102-486, amended...
Singh, Sunita; Sylvia, Monica R.; Ridzi, Frank
2015-01-01
This ethnographic study presents findings of the literacy practices of Burmese refugee families and their interaction with a book distribution program paired with an intergenerational family literacy program. The project was organized at the level of Bronfenbrenner's exosystem (in "Ecology of human development". Cambridge, Harvard…
A FORTRAN program for numerical solution of the Altarelli-Parisi equations by the Laguerre method
International Nuclear Information System (INIS)
Kumano, S.; Londergan, J.T.
1992-01-01
We review the Laguerre method for solving the Altarelli-Parisi equations. The Laguerre method allows one to expand quark/parton distributions and splitting functions in orthonormal polynomials. The desired quark distributions are themselves expanded in terms of evolution operators, and we derive the integrodifferential equations satisfied by the evolution operators. We give relevant equations for both flavor nonsinglet and singlet distributions, for both spin-independent and spin-dependent distributions. We discuss stability and accuracy of the results using this method. For intermediate values of Bjorken x (0.03< x<0.7), one can obtain accurate results with a modest number of Laguerre polynomials (N≅20); we discuss requirements for convergence also for the regions of large or small x. A FORTRAN program is provided which implements the Laguerre method; test results are given for both the spin-independent and spin-dependent cases. (orig.)
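The expansion step can be illustrated on a toy function (not the actual parton distributions or splitting functions): project f onto Laguerre polynomials, which are orthonormal with weight exp(-x) on [0, inf), and evaluate the truncated series.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_laguerre

# Illustration of a Laguerre expansion on a toy function f(x) = exp(-x/2):
#   c_n = int_0^inf f(x) L_n(x) exp(-x) dx,   f(x) ~ sum_n c_n L_n(x).
# The Altarelli-Parisi evolution expands distributions the same way, but the
# functions and evolution operators there are far more involved.
f = lambda x: np.exp(-x / 2)
N = 25  # a modest number of polynomials, as the abstract suggests
c = np.array([quad(lambda x, n=n: f(x) * eval_laguerre(n, x) * np.exp(-x),
                   0, np.inf)[0] for n in range(N)])

x0 = 1.0
approx = sum(c[n] * eval_laguerre(n, x0) for n in range(N))
print(approx, f(x0))  # the truncated series reproduces f(1) closely
```

For this smooth test function the coefficients decay geometrically, mirroring the abstract's observation that roughly 20 polynomials suffice at intermediate x.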
Simple method of generating and distributing frequency-entangled qudits
Jin, Rui-Bo; Shimizu, Ryosuke; Fujiwara, Mikio; Takeoka, Masahiro; Wakabayashi, Ryota; Yamashita, Taro; Miki, Shigehito; Terai, Hirotaka; Gerrits, Thomas; Sasaki, Masahide
2016-11-01
High-dimensional, frequency-entangled photonic quantum bits (qudits for d dimensions) are promising resources for quantum information processing in an optical fiber network and can also be used to improve channel capacity and security for quantum communication. However, up to now, it is still challenging to prepare high-dimensional frequency-entangled qudits in experiments, due to technical limitations. Here we propose and experimentally implement a novel method for the simple generation of frequency-entangled qudits with d > 10 without the use of any spectral filters or cavities. The generated state is distributed over 15 km in total length. This scheme combines the technique of spectral engineering of biphotons generated by spontaneous parametric down-conversion and the technique of spectrally resolved Hong-Ou-Mandel interference. Our frequency-entangled qudits will enable quantum cryptographic experiments with enhanced performance. This distribution of distinct entangled frequency modes may also be useful for improved metrology, quantum remote synchronization, as well as for fundamental tests of stronger violation of local realism.
Dirichlet and Related Distributions Theory, Methods and Applications
Ng, Kai Wang; Tang, Man-Lai
2011-01-01
The Dirichlet distribution appears in many areas of application, which include modelling of compositional data, Bayesian analysis, statistical genetics, and nonparametric inference. This book provides a comprehensive review of the Dirichlet distribution and two extended versions, the Grouped Dirichlet Distribution (GDD) and the Nested Dirichlet Distribution (NDD), arising from likelihood and Bayesian analysis of incomplete categorical data and survey data with non-response. The theoretical properties and applications are also reviewed in detail for other related distributions, such as the inve
Development of advanced methods for planning electric energy distribution systems. Final report
Energy Technology Data Exchange (ETDEWEB)
Goenen, T.; Foote, B.L.; Thompson, J.C.; Fagan, J.E.
1979-10-01
An extensive search was made to identify and collect reports published in the open literature that describe distribution planning methods and techniques. In addition, a questionnaire was prepared and sent to a large number of electric power utility companies. A large number of these companies were visited and/or their distribution planners interviewed to identify and describe the distribution system planning methods and techniques used by these electric power utility companies and other commercial entities. Distribution system planning models were reviewed and a set of new mixed-integer programming models was developed for the optimal expansion of distribution systems. The models help the planner to select: (1) optimum substation locations; (2) optimum substation expansions; (3) optimum substation transformer sizes; (4) optimum load transfers between substations; (5) optimum feeder routes and sizes, subject to a set of specified constraints. The models permit following existing rights-of-way and avoid areas where feeders and substations cannot be constructed. The results of computer runs were analyzed for adequacy in serving projected loads within regulation limits for both normal and emergency operation.
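The flavor of such siting-and-assignment models can be shown on a deliberately tiny instance. All sites, capacities, loads and costs below are invented, and plain enumeration stands in for the report's mixed-integer programming solver, which is what makes realistic instances tractable.

```python
from itertools import combinations, product

# Toy substation-siting problem in the spirit of the report's mixed-integer
# models (all data invented): choose which substations to open and assign
# loads to them so that fixed costs plus feeder (assignment) costs are
# minimized, subject to substation capacity.
fixed = {"A": 100.0, "B": 120.0, "C": 90.0}    # fixed cost of opening a site
cap = {"A": 60.0, "B": 80.0, "C": 50.0}        # substation capacity (MVA)
loads = {"L1": 30.0, "L2": 40.0, "L3": 25.0}   # load demands (MVA)
feeder = {("A", "L1"): 4, ("A", "L2"): 9, ("A", "L3"): 5,
          ("B", "L1"): 7, ("B", "L2"): 3, ("B", "L3"): 6,
          ("C", "L1"): 5, ("C", "L2"): 8, ("C", "L3"): 2}  # cost per MVA

best = (float("inf"), None, None)
sites = list(fixed)
for r in range(1, len(sites) + 1):
    for open_sites in combinations(sites, r):
        for assign in product(open_sites, repeat=len(loads)):
            used = {s: 0.0 for s in open_sites}
            cost = sum(fixed[s] for s in open_sites)
            for (name, demand), s in zip(loads.items(), assign):
                used[s] += demand
                cost += feeder[(s, name)] * demand
            if all(used[s] <= cap[s] for s in open_sites) and cost < best[0]:
                best = (cost, open_sites, assign)
print(best)  # cheapest feasible siting and load assignment
```

A real planning model adds feeder routing, voltage-regulation constraints and multi-period expansion, which is exactly where the MIP formulation pays off.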
Pair Programming as a Modern Method of Teaching Computer Science
Directory of Open Access Journals (Sweden)
Irena Nančovska Šerbec
2008-10-01
Full Text Available At the Faculty of Education, University of Ljubljana we educate future computer science teachers. Besides didactic, pedagogical, mathematical and other interdisciplinary knowledge, students gain knowledge and skills of programming that are crucial for computer science teachers. For all courses, the main emphasis is the absorption of professional competences related to the teaching profession and the programming profile. The latter are selected according to the well-known document, the ACM Computing Curricula. The professional knowledge is therefore associated and combined with the teaching knowledge and skills. In the paper we present how to achieve competences related to programming by using different didactical models (semiotic ladder, cognitive objectives taxonomy, problem solving) and the modern teaching method "pair programming". Pair programming differs from standard methods (individual work, seminars, projects, etc.). It belongs to extreme programming as a discipline of software development and is known to have positive effects on teaching a first programming language. We have experimentally observed pair programming in the introductory programming course. The paper presents and analyzes the results of using this method: the aspects of satisfaction during programming and the level of gained knowledge. The results are in general positive and demonstrate the promise of this teaching method.
Method to render second order beam optics programs symplectic
International Nuclear Information System (INIS)
Douglas, D.; Servranckx, R.V.
1984-10-01
We present evidence that second order matrix-based beam optics programs violate the symplectic condition. A simple method to avoid this difficulty, based on a generating function approach to evaluating transfer maps, is described. A simple example illustrating the non-symplecticity of second order matrix methods, and the effectiveness of our solution to the problem, is provided. We conclude that it is in fact possible to bring second order matrix optics methods to a canonical form. The procedure for doing so has been implemented in the program DIMAT, and could be implemented in programs such as TRANSPORT and TURTLE, making them useful in multiturn applications. 15 refs
Population Estimation with Mark and Recapture Method Program
International Nuclear Information System (INIS)
Limohpasmanee, W.; Kaewchoung, W.
1998-01-01
Population estimates provide important information required for insect control planning, especially for control with the sterile insect technique (SIT). Moreover, they can be used to evaluate the efficiency of a control method. Because of the complexity of the calculations, population estimation with mark-and-recapture methods has not been widely used. This program was therefore developed in QBasic to make the estimates accurate and easier to obtain. The program implements six methods, following Seber's, Jolly-Seber's, Jackson's, Ito's, Hamada's and Yamamura's methods. The results were compared with those of the original methods and found to be accurate and easier to apply.
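The simplest member of this family of estimators can be written in a few lines. This is Chapman's bias-corrected form of the Lincoln-Petersen estimator, one of the classical formulas discussed by Seber; the program's six methods are more elaborate, and the survey numbers below are invented.

```python
# Chapman's bias-corrected Lincoln-Petersen estimator: with M insects marked
# and released, C captured in a second sample, and R of those recaptured
# (i.e. already marked), the population size is estimated as
#   N_hat = (M + 1)(C + 1) / (R + 1) - 1.
def chapman_estimate(M, C, R):
    return (M + 1) * (C + 1) / (R + 1) - 1

# Hypothetical survey: 200 insects marked, 150 captured later, 30 recaptured.
print(round(chapman_estimate(200, 150, 30)))  # ~978 individuals
```

Jolly-Seber and the other multi-sample methods extend this idea to repeated capture occasions with births, deaths and migration.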
Equitably Distributing Quality of Marine Security Guards Using Integer Programming
2013-03-01
[Acronym list omitted.] MSG regions (fragment): ...and Eurasia; 2 Abu Dhabi, United Arab Emirates (India and the Middle East); 3 Bangkok, Thailand (East Asia and Pacific); 4 Fort Lauderdale, Florida (South...). ...integer, goal, and quadratic programming. LP models and nonlinear programming (NLP) models are very similar in model development for both maximizing
An overview of solution methods for multi-objective mixed integer linear programming programs
DEFF Research Database (Denmark)
Andersen, Kim Allan; Stidsen, Thomas Riis
Multiple objective mixed integer linear programming (MOMIP) problems are notoriously hard to solve to optimality, i.e. to find the complete set of non-dominated solutions. We give an overview of existing methods, among them interactive methods, the two-phases method and enumeration methods. In particular we discuss the existing branch and bound approaches for solving multiple objective integer programming problems. Despite the fact that branch and bound methods have been applied successfully to integer programming problems with one criterion, only a few attempts have been made...
Systems and methods for interpolation-based dynamic programming
Rockwood, Alyn
2013-01-03
Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.
Systems and methods for interpolation-based dynamic programming
Rockwood, Alyn
2013-01-01
Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.
Simple Calculation Programs for Biology Methods in Molecular ...
Indian Academy of Sciences (India)
Simple Calculation Programs for Biology Methods in Molecular Biology. GMAP: A program for mapping potential restriction sites. RE sites in ambiguous and non-ambiguous DNA sequence; minimum number of silent mutations required for introducing a RE site; set ...
Application of the simplex method of linear programming model to ...
African Journals Online (AJOL)
This work discussed how the simplex method of linear programming can be used to maximize the profit of any business firm, using Saclux Paint Company as a case study. It also elucidated the effect that variation in the optimal result obtained from the linear programming model will have on any given firm. It was demonstrated ...
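A profit-maximization LP of the kind described can be sketched with SciPy's linprog, whose HiGHS backend includes a simplex-type solver. The two products, coefficients and resource limits below are invented for illustration, not the Saclux Paint data.

```python
from scipy.optimize import linprog

# Illustrative profit-maximization LP (numbers invented): maximize
# profit 5x + 4y subject to resource constraints. linprog minimizes,
# so the objective is negated.
c = [-5.0, -4.0]            # per-unit profit of paints x and y (negated)
A_ub = [[6.0, 4.0],         # raw material usage (tons per unit)
        [1.0, 2.0]]         # labour usage (hours per unit)
b_ub = [24.0, 6.0]          # available raw material and labour
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimum x=3, y=1.5, profit 21
```

Re-solving after perturbing b_ub or c shows the sensitivity of the optimal result that the paper discusses: small changes in resource availability shift both the production plan and the achievable profit.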
99Tc in the environment. Sources, distribution and methods
International Nuclear Information System (INIS)
Garcia-Leon, Manuel
2005-01-01
99Tc is a β-emitter (E_max = 294 keV) with a very long half-life (T_1/2 = 2.11 x 10^5 y). It is mainly produced in the fission of 235U and 239Pu at a yield of about 6%. This yield, together with its long half-life, makes it a significant nuclide in the whole nuclear fuel cycle, from which it can be introduced into the environment at different rates depending on the cycle step. A gross estimation shows that, adding all the possible sources, at least 2000 TBq had been released into the environment up to 2000 and that up to the mid-nineties of the last century some 64000 TBq had been produced worldwide. Nuclear explosions have liberated some 160 TBq into the environment. In this work, the environmental distribution of 99Tc as well as the methods for its determination will be discussed. Emphasis is put on the environmental relevance of 99Tc, mainly with regard to the future committed radiation dose received by the population and to the problem of nuclear waste management. Its determination at environmental levels is a challenging task; for that, special mention is made of the mass spectrometric methods for its measurement. (author)
Moisture distribution in sludges based on different testing methods
Institute of Scientific and Technical Information of China (English)
Wenyi Deng; Xiaodong Li; Jianhua Yan; Fei Wang; Yong Chi; Kefa Cen
2011-01-01
Moisture distributions in municipal sewage sludge, printing and dyeing sludge and paper mill sludge were experimentally studied based on four different methods, i.e., a drying test, a thermogravimetric-differential thermal analysis (TG-DTA) test, a thermogravimetric-differential scanning calorimetry (TG-DSC) test and a water activity test. The results indicated that the moisture in the mechanically dewatered sludges comprised interstitial water, surface water and bound water. The interstitial water accounted for more than 50% wet basis (wb) of the total moisture content. The bond strength of sludge moisture increased with decreasing moisture content, especially when the moisture content was lower than 50% wb. Furthermore, a comparison among the four testing methods was presented. The drying test was distinguished by its ability to quantify free water, interstitial water, surface water and bound water, while the TG-DSC, TG-DTA and water activity tests were capable of determining the bond strength of moisture in sludge. The results from the TG-DSC and TG-DTA tests were found to be more persuasive than those of the water activity test.
Development of methods for DSM and distribution automation planning
International Nuclear Information System (INIS)
Kaerkkaeinen, S.; Kekkonen, V.; Rissanen, P.
1998-01-01
Demand-Side Management (DSM) is usually a utility (or sometimes governmental) activity designed to influence the energy demand of customers (both level and load variation). It includes basic options like strategic conservation or load growth, peak clipping, load shifting and fuel switching. Typical ways to realize DSM are direct load control, innovative tariffs, different types of campaigns, etc. Restructuring of utilities in Finland and increased competition in the electricity market have had a dramatic influence on DSM. Traditional ways are impossible due to the conflicting interests of the generation, network and supply businesses and increased competition between different actors in the market. Costs and benefits of DSM are divided among different companies, and different types of utilities are interested only in those activities which are beneficial to them. On the other hand, due to the increased competition, suppliers are diversifying into different types of products, and an increasing number of customer services partly based on DSM are available. The aim of this project was to develop and assess methods for DSM and distribution automation planning from the utility point of view. The methods were also applied to case studies at utilities.
Development of methods for DSM and distribution automation planning
Energy Technology Data Exchange (ETDEWEB)
Kaerkkaeinen, S; Kekkonen, V [VTT Energy, Espoo (Finland); Rissanen, P [Tietosavo Oy (Finland)
1998-08-01
Demand-Side Management (DSM) is usually a utility (or sometimes governmental) activity designed to influence the energy demand of customers (both level and load variation). It includes basic options such as strategic conservation, load growth, peak clipping, load shifting and fuel switching. Typical ways to realize DSM are direct load control, innovative tariffs, various types of campaigns, etc. The restructuring of utilities in Finland and increased competition in the electricity market have had a dramatic influence on DSM. Traditional approaches are impossible due to the conflicting interests of the generation, network and supply businesses and the increased competition between different actors in the market. The costs and benefits of DSM are divided among different companies, and different types of utilities are interested only in those activities which are beneficial to them. On the other hand, due to the increased competition, suppliers are diversifying into different types of products, and an increasing number of customer services partly based on DSM are available. The aim of this project was to develop and assess methods for DSM and distribution automation planning from the utility point of view. The methods were also applied to case studies at utilities.
Review of islanding detection methods for distributed generation
DEFF Research Database (Denmark)
Chen, Zhe; Mahat, Pukar; Bak-Jensen, Birgitte
2008-01-01
This paper presents an overview of power system islanding and islanding detection techniques. Islanding detection techniques, for a distribution system with distributed generation (DG), can broadly be divided into remote and local techniques. A remote islanding detection technique is associated...
Fast crawling methods of exploring content distributed over large graphs
Wang, Pinghui
2018-03-15
Despite recent efforts to estimate topology characteristics of large graphs (e.g., online social networks and peer-to-peer networks), little attention has been given to developing a formal crawling methodology to characterize the vast amount of content distributed over these networks. Due to the large-scale nature of these networks and the limited query rate imposed by network service providers, exhaustively crawling and enumerating the content maintained by each vertex is computationally prohibitive. In this paper, we show how one can obtain content properties by crawling only a small fraction of vertices and collecting their content. We first show that naively applied sampling can produce a huge bias in content statistics (i.e., the average number of content replicas). To remove this bias, one may use maximum likelihood estimation to estimate content characteristics. However, our experimental results show that this straightforward method requires sampling most vertices to obtain accurate estimates. To address this challenge, we propose two efficient estimators: the special copy estimator (SCE) and the weighted copy estimator (WCE), which estimate content characteristics using the information available in sampled content. SCE uses a special content copy indicator to compute the estimate, while WCE derives the estimate based on meta-information in sampled vertices. We conduct experiments on a variety of real-world and synthetic datasets, and the results show that WCE and SCE are cost effective and also “asymptotically unbiased”. Our methodology provides a new tool for researchers to efficiently query content distributed in large-scale networks.
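The size bias the authors describe, and the flavor of correction behind a weighted copy estimator, can be illustrated with a toy sketch. The data, function names, and the 1/c weighting below are illustrative assumptions for a full crawl, not the paper's SCE/WCE estimators:

```python
# Sketch: why naive per-copy averaging biases replica-count statistics,
# and a 1/c weighting (in the spirit of a weighted copy estimator)
# that removes the bias. Data and names are illustrative.

def naive_mean_replicas(copies):
    # Averaging over observed copies overweights heavily replicated content.
    return sum(copies) / len(copies)

def weighted_mean_replicas(copies):
    # Weight each observed copy by 1/c so each content item counts once.
    w = [1.0 / c for c in copies]
    return sum(wi * c for wi, c in zip(w, copies)) / sum(w)

# Five content items with replica counts 1, 1, 1, 5, 10; a full crawl
# observes one record per copy.
counts = [1, 1, 1, 5, 10]
observed = [c for c in counts for _ in range(c)]  # 18 copy observations

true_mean = sum(counts) / len(counts)             # 3.6
print(naive_mean_replicas(observed))              # 7.11..., biased upward
print(weighted_mean_replicas(observed))           # 3.6, unbiased
```

With partial crawls the same weighting idea applies, but the inclusion probabilities must also be estimated, which is where the paper's estimators do the real work.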
Improving Distribution of Military Programs’ Technical Criteria
1993-08-01
Vacuum System ETL 1110-3-380 01/29/88; Std Distribution of Military Airfield Pavement Dsg ETL 1110-3-381 01/29/88; Airfield Pavement Design ETL 1110-3-...; Army Airfield O&M Facilities TM 5-825-2 08/01/78; Flexible Pavement Design for Airfields TM 5-825-2-1 11/01/89; Army Airfields Pavements, Flex (Appendix
Making Improvements to The Army Distributed Learning Program
2012-01-01
ing focuses on leadership and management as well as technical skills, and involves the creation of global virtual teams. The training often deals...develop and distribute knowledge via a dynamic, global knowledge network called the Battle Command Knowledge System with a purpose of providing... “Levels of Interactivity,” paper presented at 2006 dL Workshop, March 14, 2006. Wexler, S., et al., E-Learning 2.0, Santa Rosa, Calif.: The eLearning
Checking for Circular Dependencies in Distributed Stream Programs
2011-08-29
extensions to express new complexities more convenient. Teleport messaging (TMG) in the StreamIt language [30] is an example. 1.1 StreamIt Language...dynamicities to an FIR computation. Thies et al. in [30] give a TMG model for distributed stream programs. TMG is a mechanism that implements control...messages for stream graphs. The TMG mechanism is designed not to interfere with the original dataflow graphs' structures and scheduling; therefore a key
Blanchard, Philippe
2015-01-01
The second edition of this textbook presents the basic mathematical knowledge and skills that are needed for courses on modern theoretical physics, such as those on quantum mechanics, classical and quantum field theory, and related areas. The authors stress that learning mathematical physics is not a passive process and include numerous detailed proofs, examples, and over 200 exercises, as well as hints linking mathematical concepts and results to the relevant physical concepts and theories. All of the material from the first edition has been updated, and five new chapters have been added on such topics as distributions, Hilbert space operators, and variational methods. The text is divided into three main parts. Part I is a brief introduction to distribution theory, in which elements from the theories of ultradistributions and hyperfunctions are considered in addition to some deeper results for Schwartz distributions, thus providing a comprehensive introduction to the theory of generalized functions. P...
A two-step method for developing a control rod program for boiling water reactors
International Nuclear Information System (INIS)
Taner, M.S.; Levine, S.H.; Hsiao, M.Y.
1992-01-01
This paper reports on a two-step method established for the generation of a long-term control rod program for boiling water reactors (BWRs). The new method assumes a time-variant target power distribution in core depletion. In the new method, BWR control rod programming is divided into two steps. In step 1, a sequence of optimal, exposure-dependent Haling power distribution profiles is generated, utilizing the spectral shift concept. In step 2, a set of exposure-dependent control rod patterns is developed by using the Haling profiles generated in step 1 as a target. The new method is implemented in a computer program named OCTOPUS. The optimization procedure of OCTOPUS is based on the method of approximation programming, in which the SIMULATE-E code is used to determine the nucleonics characteristics of the reactor core state. In a test, the method gained in cycle length over a time-invariant target Haling power distribution case because of a moderate application of spectral shift. No thermal limits of the core were violated. The gain in cycle length could be increased further by broadening the extent of the spectral shift.
No-signaling quantum key distribution: solution by linear programming
Hwang, Won-Young; Bae, Joonwoo; Killoran, Nathan
2015-02-01
We outline a straightforward approach for obtaining a secret key rate using only no-signaling constraints and linear programming. Assuming an individual attack, we consider all possible joint probabilities. Initially, we study only the case where Eve has binary outcomes, and we impose constraints due to the no-signaling principle and the given measurement outcomes. Within the remaining space of joint probabilities, by using linear programming, we obtain a bound on the probability of Eve correctly guessing Bob's bit. We then make use of an inequality that relates this guessing probability to the mutual information between Bob and a more general Eve, who is not binary-restricted. Putting our computed bound together with the Csiszár-Körner formula, we obtain a positive key generation rate. The optimal value of this rate agrees with known results, but was calculated in a more straightforward way, offering the potential of generalization to different scenarios.
Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri
2016-01-01
This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitor MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality.
Matlab and C programming for Trefftz finite element methods
Qin, Qing-Hua
2008-01-01
Although the Trefftz finite element method (FEM) has become a powerful computational tool in the analysis of plane elasticity, thin and thick plate bending, Poisson's equation, heat conduction, and piezoelectric materials, there are few books that offer a comprehensive computer programming treatment of the subject. Collecting results scattered in the literature, MATLAB® and C Programming for Trefftz Finite Element Methods provides the detailed MATLAB® and C programming processes in applications of the Trefftz FEM to potential and elastic problems. The book begins with an introduction to th
A mathematical method for boiling water reactor control rod programming
International Nuclear Information System (INIS)
Tokumasu, S.; Hiranuma, H.; Ozawa, M.; Yokomi, M.
1985-01-01
A new mathematical programming method has been developed and utilized in OPROD, an existing computer code for automatic generation of control rod programs, as an alternative inner-loop routine to the method of approximate programming. The new routine is constructed from a dual feasible direction algorithm and consists essentially of two stages of iterative optimization procedures, Optimization Procedures I and II. Both follow almost the same algorithm: Optimization Procedure I searches for feasible solutions, and Optimization Procedure II optimizes the objective function. Optimization theory and computer simulations have demonstrated that the new routine can find optimum solutions even when deteriorated initial control rod patterns are given.
Step by step parallel programming method for molecular dynamics code
International Nuclear Information System (INIS)
Orii, Shigeo; Ohta, Toshio
1996-07-01
Parallel programming of a numerical simulation program for molecular dynamics was carried out with a step-by-step programming technique using the two-phase method. As a result, within a certain range of computing parameters, parallel performance was obtained by using a level of parallel programming that decomposes the calculation according to do-loop indices onto each processor, on the vector parallel computer VPP500 and the scalar parallel computer Paragon. It was also found that VPP500 shows parallel performance over a wider range of computing parameters. The reason is that the time cost of the program parts that cannot be reduced by do-loop-level parallel programming can be reduced to a negligible level by vectorization. After that, the time-consuming parts of the program are concentrated in fewer parts that can be accelerated by do-loop-level parallel programming. This report shows the step-by-step parallel programming method and the parallel performance of the molecular dynamics code on VPP500 and Paragon. (author)
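The do-loop-level decomposition described above can be sketched in miniature: split the outer-loop index range into chunks, evaluate each chunk on a worker, and combine the partial sums. The chunking, the toy pair potential, and the use of threads are illustrative assumptions, not the report's VPP500/Paragon implementation:

```python
# Minimal sketch of do-loop-level decomposition: the index range of a
# pairwise interaction loop is split into chunks, each chunk is handled
# by a worker, and partial results are combined. Names are illustrative.
from concurrent.futures import ThreadPoolExecutor

def pair_energy_chunk(positions, i_start, i_end):
    # Partial sum of a toy pair potential over outer rows i_start..i_end-1.
    e = 0.0
    for i in range(i_start, i_end):
        for j in range(i + 1, len(positions)):
            r = abs(positions[i] - positions[j])
            e += 1.0 / r
    return e

positions = [0.5 * k for k in range(1, 9)]

# Serial reference: the whole outer loop in one call.
serial = pair_energy_chunk(positions, 0, len(positions))

# Decompose the outer loop into chunks and run them on worker threads.
chunks = [(0, 2), (2, 4), (4, 8)]
with ThreadPoolExecutor(max_workers=3) as pool:
    parts = pool.map(lambda c: pair_energy_chunk(positions, *c), chunks)
parallel = sum(parts)

assert abs(serial - parallel) < 1e-12  # decomposition preserves the result
```

Because each pair (i, j) with j > i is counted exactly once, the chunked sums reproduce the serial result regardless of how the outer index range is split.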
Directory of Open Access Journals (Sweden)
Qingwu Gong
2017-03-01
Full Text Available The intermittency and variability of the distributed generators (DGs) permeating distribution systems could cause many critical security and economy risks. This paper applied a certain mathematical distribution to imitate the output variability and uncertainty of DGs. Then, four risk indices, EENS (expected energy not supplied), PLC (probability of load curtailment), EFLC (expected frequency of load curtailment), and SI (severity index), were established to reflect the system risk level of the distribution system. For the given mathematical distribution of the DGs' output power, an improved PEM (point estimate method)-based method was proposed to calculate these four system risk indices. In this improved PEM-based method, an enumeration method was used to list the states of distribution systems, an improved PEM was developed to deal with the uncertainties of DGs, and the value of load curtailment in distribution systems was calculated by an optimal power flow algorithm. Finally, the effectiveness and advantages of this proposed PEM-based method for distribution system assessment were verified by testing a modified IEEE 30-bus system. Simulation results have shown that this proposed PEM-based method has high computational accuracy and greatly reduced computational costs compared with other risk assessment methods, and is very effective for risk assessments.
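The point estimate idea can be sketched with a common PEM variant, Hong's 2m scheme for inputs with zero skewness: each of the m random inputs is perturbed to two points while the others stay at their means, and the 2m weighted model evaluations approximate output moments. The toy "model" and parameter values below are assumptions for illustration, not the paper's improved PEM or its power-flow model:

```python
# Hedged sketch of a 2m-point estimate method (Hong's scheme, zero-skew
# case) for propagating input uncertainty through a model with far
# fewer evaluations than Monte Carlo. The quadratic model is a toy.
import math

def pem_2m_mean(model, means, stds):
    m = len(means)
    est = 0.0
    for k in range(m):
        for sign in (+1.0, -1.0):
            # Zero-skewness locations mu_k +/- sqrt(m)*sigma_k,
            # each carrying weight 1/(2m).
            x = list(means)
            x[k] = means[k] + sign * math.sqrt(m) * stds[k]
            est += model(x) / (2 * m)
    return est

# Toy model: output depends quadratically on two uncertain injections.
def model(x):
    return x[0] ** 2 + 3.0 * x[1] ** 2 + 2.0 * x[0]

means, stds = [0.0, 1.0], [1.0, 0.5]
print(pem_2m_mean(model, means, stds))
# Exact mean: E[x0^2] + 3*E[x1^2] + 2*E[x0] = 1 + 3*(1 + 0.25) + 0 = 4.75
```

For this separable quadratic the 2m scheme reproduces the exact mean with only 4 model evaluations, which is the cost advantage the abstract claims over sampling-based risk assessment.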
International Nuclear Information System (INIS)
Fazekas, A.; Posch, E.; Harsing, L.
1979-01-01
The blood circulation of the incisors, dental pulp and tongue was determined using the measurement of 86Rb distribution in rats. The results were compared with those obtained by a simultaneous micropearl method. It was found that 37 per cent of the 86Rb in dental tissues is localized in the hard periodontium, with a high proportion diffusing from the periodontium. The 86Rb fraction localized in the tongue represents its blood circulation. (author)
A Novel Method of Clock Synchronization in Distributed Systems
Li, Gun; Niu, Meng-jie; Chai, Yang-shun; Chen, Xin; Ren, Yan-qiu
2017-04-01
Time synchronization plays an important role in spacecraft formation flight, constellation autonomous navigation, etc. For the application of clock synchronization in a network system, it is not always true that all the observed nodes in the network are interconnected; therefore, it is difficult to achieve high-precision time synchronization of a network system when a certain node can only obtain clock measurement information from a single neighboring node and cannot obtain it from other nodes. Aiming at this problem, a novel method of high-precision time synchronization in a network system is proposed. In this paper, each clock is regarded as a node in the network system, and based on the definition of different topological structures of a distributed system, three control algorithms for time synchronization are designed for the following three cases: without a master clock (reference clock), with a master clock (reference clock), and with a fixed communication delay in the network system. The validity of the designed clock synchronization protocol is proved by both stability analysis and numerical simulation.
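The masterless case can be sketched with a standard consensus-style update, in which each node nudges its offset toward the average of its neighbors' offsets using only pairwise clock differences. The topology, gain, and update rule below are illustrative assumptions, not the paper's control algorithms:

```python
# Hedged sketch of consensus clock synchronization without a master:
# each node repeatedly corrects its offset toward its neighbors'
# offsets, using only measured clock differences. Gains and topology
# are illustrative.

def sync_step(offsets, neighbors, gain=0.4):
    new = []
    for i, x in enumerate(offsets):
        # Each node only sees clock differences to its direct neighbors.
        correction = sum(offsets[j] - x for j in neighbors[i])
        new.append(x + gain * correction / len(neighbors[i]))
    return new

# Ring of four nodes with initial clock offsets (in ms).
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
offsets = [0.0, 3.0, -2.0, 5.0]

for _ in range(60):
    offsets = sync_step(offsets, neighbors)

spread = max(offsets) - min(offsets)
print(spread)  # close to 0: all clocks agree on a common time
```

With a symmetric topology this update preserves the network-average offset, so the nodes converge to a common virtual time (here 1.5 ms) rather than to any single node's clock; adding a master clock amounts to pinning one node's offset.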
An experiment with content distribution methods in touchscreen mobile devices.
Garcia-Lopez, Eva; Garcia-Cabot, Antonio; de-Marcos, Luis
2015-09-01
This paper compares the usability of three different content distribution methods (scrolling, paging and internal links) in touchscreen mobile devices as means to display web documents. Usability is operationalized in terms of effectiveness, efficiency and user satisfaction. These dimensions are then measured in an experiment (N = 23) in which users are required to find words in regular-length web documents. Results suggest that scrolling is statistically better in terms of efficiency and user satisfaction. It is also found to be more effective but results were not significant. Our findings are also compared with existing literature to propose the following guideline: "try to use vertical scrolling in web pages for mobile devices instead of paging or internal links, except when the content is too large, then paging is recommended". With an ever increasing number of touchscreen web-enabled mobile devices, this new guideline can be relevant for content developers targeting the mobile web as well as institutions trying to improve the usability of their content for mobile platforms. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
CDFMC: a program that calculates the fixed neutron source distribution for a BWR using Monte Carlo
International Nuclear Information System (INIS)
Gomez T, A.M.; Xolocostli M, J.V.; Palacios H, J.C.
2006-01-01
The three-dimensional neutron flux calculation using the synthesis method requires the determination of the neutron flux in two two-dimensional configurations as well as in a one-dimensional one. Most of the standard guides for the calculation of the neutron flux or fluences in the vessel of a nuclear reactor place special emphasis on the appropriate calculation of the fixed neutron source that should be provided to the transport code used, with the purpose of finding sufficiently accurate flux values. The reactor core assembly configuration is based on X-Y geometry; however, the considered problem is solved in R-θ geometry, so an appropriate mapping is necessary to find the source term associated with the R-θ intervals starting from a source distribution in rectangular coordinates. To develop the CDFMC computer program (source distribution calculation using Monte Carlo), it was necessary to develop a mapping approach independent of those in the literature. The mesh-overlapping method used here is based on a technique of random point generation, commonly known as the Monte Carlo technique. Although the 'randomness' of this technique implies errors in the calculations, it is well known that increasing the number of randomly generated points used to measure an area or some other quantity of interest increases the precision of the method. In the particular case of the CDFMC computer program, the developed technique shows good general behavior when a considerably high number of points is used (greater than or equal to one hundred thousand), which ensures calculation errors of the order of 1%. (Author)
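The mesh-overlapping idea can be sketched as follows: throw random points into each X-Y source cell and bin them into R-θ cells, so each rectangular cell's source strength is split in proportion to the overlapped areas. The grids and source values are illustrative assumptions, not CDFMC's actual meshes:

```python
# Hedged sketch of Monte Carlo mesh overlapping: random points sampled
# in each rectangular (X-Y) source cell are binned into R-theta cells,
# apportioning the cell's source strength by overlapped area.
import math, random

random.seed(1)

def map_xy_to_rtheta(xy_cells, r_edges, th_edges, n_pts=100_000):
    nr, nth = len(r_edges) - 1, len(th_edges) - 1
    rtheta = [[0.0] * nth for _ in range(nr)]
    for (x0, x1, y0, y1, strength) in xy_cells:
        share = strength / n_pts  # each sampled point carries this weight
        for _ in range(n_pts):
            x = random.uniform(x0, x1)
            y = random.uniform(y0, y1)
            r = math.hypot(x, y)
            th = math.atan2(y, x) % (2 * math.pi)
            # Find the R-theta cell containing the sampled point.
            for i in range(nr):
                if r_edges[i] <= r < r_edges[i + 1]:
                    for j in range(nth):
                        if th_edges[j] <= th < th_edges[j + 1]:
                            rtheta[i][j] += share
                            break
                    break
    return rtheta

# One rectangular source cell of unit strength in the first quadrant.
xy_cells = [(0.0, 1.0, 0.0, 1.0, 1.0)]
r_edges = [0.0, 0.5, 1.0, 1.5]
th_edges = [0.0, math.pi / 4, math.pi / 2, 2 * math.pi]

rt = map_xy_to_rtheta(xy_cells, r_edges, th_edges, n_pts=20_000)
total = sum(sum(row) for row in rt)
print(total)  # ~1.0: total source strength is conserved by the mapping
```

Because every sampled point lands in exactly one R-θ cell, the total source is conserved by construction, while the per-cell split converges at the usual Monte Carlo rate as the point count grows, consistent with the abstract's observation about using at least a hundred thousand points.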
Evaluation of the League General Insurance Company child safety seat distribution program
1982-05-01
This report presents an evaluation of the child safety seat distribution initiated by the League General Insurance Company in June 1979. The program provides child safety seats as a benefit under the company's auto insurance policies to policy-holder...
A Review of Distributed Parameter Groundwater Management Modeling Methods
Gorelick, Steven M.
1983-04-01
Models which solve the governing groundwater flow or solute transport equations in conjunction with optimization techniques, such as linear and quadratic programming, are powerful aquifer management tools. Groundwater management models fall into two general categories: hydraulics or policy evaluation, and water allocation. Groundwater hydraulic management models enable the determination of optimal locations and pumping rates of numerous wells under a variety of restrictions placed upon local drawdown, hydraulic gradients, and water production targets. Groundwater policy evaluation and allocation models can be used to study the influence upon regional groundwater use of institutional policies such as taxes and quotas. Furthermore, fairly complex groundwater-surface water allocation problems can be handled using system decomposition and multilevel optimization. Experience from the few real-world applications of groundwater optimization-management techniques is summarized. Classified separately are methods for groundwater quality management aimed at optimal waste disposal in the subsurface. This classification is composed of steady-state and transient management models that determine disposal patterns in such a way that water quality is protected at supply locations. Classes of research missing from the literature are groundwater quality management models involving nonlinear constraints, models which join groundwater hydraulic and quality simulations with political-economic management considerations, and management models that include parameter uncertainty.
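A hydraulic management model of the kind described can be sketched as a small linear program: maximize total pumping subject to drawdown limits expressed through response coefficients. The coefficients and limits are made-up illustrative values (a real study would derive them from a flow simulation), and the tiny LP is solved by enumerating constraint-line intersections, which is adequate for two wells:

```python
# Hedged sketch of a groundwater hydraulic management LP: choose pumping
# rates q1, q2 maximizing total withdrawal while keeping drawdown at two
# control points below limits. drawdown_i = sum_j A[i][j] * q_j, with
# illustrative response coefficients. Solved by vertex enumeration.
from itertools import combinations

# Constraints of the form a1*q1 + a2*q2 <= b.
cons = [
    (0.8, 0.3, 2.0),   # drawdown limit at control point 1
    (0.3, 0.9, 2.0),   # drawdown limit at control point 2
    (1.0, 0.0, 5.0),   # well 1 capacity
    (0.0, 1.0, 5.0),   # well 2 capacity
    (-1.0, 0.0, 0.0),  # q1 >= 0
    (0.0, -1.0, 0.0),  # q2 >= 0
]

def feasible(q1, q2, tol=1e-9):
    return all(a1 * q1 + a2 * q2 <= b + tol for a1, a2, b in cons)

# An LP optimum lies at a vertex: intersect every pair of constraint
# lines, keep the feasible points, and pick the best objective value.
best = None
for (a1, a2, b), (c1, c2, d) in combinations(cons, 2):
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:
        continue  # parallel constraint lines, no intersection
    q1 = (b * c2 - a2 * d) / det
    q2 = (a1 * d - b * c1) / det
    if feasible(q1, q2):
        total = q1 + q2
        if best is None or total > best[0]:
            best = (total, q1, q2)

print(best)  # optimal total pumping and per-well rates
```

Here the optimum sits where both drawdown constraints bind, which is the typical outcome in such hydraulic management models; for many wells one would hand the same formulation to a proper LP solver.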
Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto
2016-04-01
Estimation of extreme rainfall from data constitutes one of the most important issues in statistical hydrology, as it is associated with the design of hydraulic structures and flood water management. To that end, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a generalized Pareto (GP) distribution model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as non-parametric methods intended to locate the changing point between the extreme and non-extreme regions of the data, graphical methods where one studies the dependence of the GP distribution parameters (or related metrics) on the threshold level u, and goodness-of-fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u at which a GP distribution model is applicable. In this work, we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 daily rainfall records from the NOAA-NCDC open-access database, each with more than 110 years of data. We find that non-parametric methods intended to locate the changing point between the extreme and non-extreme regions of the data are generally not reliable, while methods based on asymptotic properties of the upper distribution tail lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low, i.e. on the order of 0.1 to 0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on pre-asymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2 and 12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the
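One of the graphical methods the abstract refers to is the mean residual life (mean excess) plot: if excesses above u follow a GP distribution, the mean excess is linear in u, and its slope reflects the shape parameter. A minimal sketch on synthetic exponential "rainfall" (a GP with shape 0, scale chosen arbitrarily at 6.5 mm/d to echo the abstract's mean threshold) follows; the data and numbers are illustrative assumptions:

```python
# Hedged sketch of a mean-excess threshold diagnostic: for GP-distributed
# excesses, e(u) = E[X - u | X > u] is linear in u; a flat plot indicates
# shape ~ 0. Synthetic exponential data stand in for daily rainfall.
import random

random.seed(7)
rain = [random.expovariate(1 / 6.5) for _ in range(20000)]  # mm/d scale

def mean_excess(data, u):
    excesses = [x - u for x in data if x > u]
    return sum(excesses) / len(excesses) if excesses else float("nan")

for u in [2.0, 6.0, 10.0, 14.0]:
    print(u, round(mean_excess(rain, u), 2))
# For exponential data the mean excess stays near 6.5 at every u,
# i.e. GP shape ~ 0; a systematic upward slope would instead indicate
# a positive shape parameter.
```

In practice one looks for the lowest u beyond which the empirical mean-excess curve is approximately linear, and takes that as the GP threshold; the scatter at high u (few excesses) is what makes automated versions of this diagnostic delicate.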
Interior Point Methods for Large-Scale Nonlinear Programming
Czech Academy of Sciences Publication Activity Database
Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan
2005-01-01
Roč. 20, č. 4-5 (2005), s. 569-582 ISSN 1055-6788 R&D Projects: GA AV ČR IAA1030405 Institutional research plan: CEZ:AV0Z10300504 Keywords : nonlinear programming * interior point methods * KKT systems * indefinite preconditioners * filter methods * algorithms Subject RIV: BA - General Mathematics Impact factor: 0.477, year: 2005
Pyrochemical and Dry Processing Methods Program. A selected bibliography
International Nuclear Information System (INIS)
McDuffie, H.F.; Smith, D.H.; Owen, P.T.
1979-03-01
This selected bibliography with abstracts was compiled to provide information support to the Pyrochemical and Dry Processing Methods (PDPM) Program sponsored by DOE and administered by the Argonne National Laboratory. Objectives of the PDPM Program are to evaluate nonaqueous methods of reprocessing spent fuel as a route to the development of proliferation-resistant and diversion-resistant methods for widespread use in the nuclear industry. Emphasis was placed on the literature indexed in the ERDA--DOE Energy Data Base (EDB). The bibliography includes indexes to authors, subject descriptors, EDB subject categories, and titles
Pyrochemical and Dry Processing Methods Program. A selected bibliography
Energy Technology Data Exchange (ETDEWEB)
McDuffie, H.F.; Smith, D.H.; Owen, P.T.
1979-03-01
This selected bibliography with abstracts was compiled to provide information support to the Pyrochemical and Dry Processing Methods (PDPM) Program sponsored by DOE and administered by the Argonne National Laboratory. Objectives of the PDPM Program are to evaluate nonaqueous methods of reprocessing spent fuel as a route to the development of proliferation-resistant and diversion-resistant methods for widespread use in the nuclear industry. Emphasis was placed on the literature indexed in the ERDA--DOE Energy Data Base (EDB). The bibliography includes indexes to authors, subject descriptors, EDB subject categories, and titles.
Program for searching for semiempirical parameters by the MNDO method
International Nuclear Information System (INIS)
Bliznyuk, A.A.; Voityuk, A.A.
1987-01-01
The authors describe a program for optimizing atomic models constructed using the MNDO method; it varies not only the parameters but also allows simple changes in the calculation scheme. The target function covers properties such as formation enthalpies, dipole moments, ionization potentials, and geometrical parameters. The software used to minimize the target function is based on the simplex method with the Nelder-Mead algorithm and on the Fletcher variable-metric method. The program is written in FORTRAN IV and implemented on the ES computer.
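The structure of such a parameter search, a weighted least-squares target function over reference properties, minimized by a derivative-free method, can be sketched as follows. The "model", reference values, and weights are invented for illustration, and a crude coordinate pattern search stands in for the paper's simplex (Nelder-Mead) and variable-metric routines:

```python
# Hedged sketch of semiempirical parameter fitting: a weighted
# least-squares target over reference properties, minimized by a simple
# derivative-free pattern search (standing in for Nelder-Mead / Fletcher
# routines). The toy "model" replaces an actual MNDO calculation.

# Reference data: (property name, reference value, weight).
refs = [("heat_of_formation", -17.9, 1.0),
        ("dipole_moment", 1.85, 10.0)]

def model(params):
    # Toy stand-in for a semiempirical calculation: properties as
    # simple functions of two adjustable parameters a, b.
    a, b = params
    return {"heat_of_formation": -20.0 + 2.0 * a - b,
            "dipole_moment": 1.0 + 0.5 * b}

def target(params):
    calc = model(params)
    return sum(w * (calc[name] - ref) ** 2 for name, ref, w in refs)

# Pattern search: try +/- steps along each parameter, halve the step
# whenever no move improves the target.
params, step = [0.0, 0.0], 1.0
while step > 1e-6:
    improved = False
    for i in range(len(params)):
        for delta in (step, -step):
            trial = list(params)
            trial[i] += delta
            if target(trial) < target(params):
                params, improved = trial, True
    if not improved:
        step *= 0.5

print([round(p, 3) for p in params], round(target(params), 6))
```

For this toy target the search settles near a = 1.9, b = 1.7, where both reference properties are reproduced exactly; in the real program the expensive part is that each target evaluation is a full quantum-chemical calculation.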
Numerical methods of mathematical optimization with Algol and Fortran programs
Künzi, Hans P; Zehnder, C A; Rheinboldt, Werner
1971-01-01
Numerical Methods of Mathematical Optimization: With ALGOL and FORTRAN Programs reviews the theory and the practical application of the numerical methods of mathematical optimization. An ALGOL and a FORTRAN program was developed for each one of the algorithms described in the theoretical section. This should result in easy access to the application of the different optimization methods.Comprised of four chapters, this volume begins with a discussion on the theory of linear and nonlinear optimization, with the main stress on an easily understood, mathematically precise presentation. In addition
Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru
2016-03-30
As an extension of the Outlier FLOODing (OFLOOD) method [Harada et al., J. Comput. Chem. 2015, 36, 763], the sparsity of the outliers defined by a hierarchical clustering algorithm, FlexDice, was considered to achieve an efficient conformational search as sparsity-weighted "OFLOOD." In OFLOOD, FlexDice detects areas of sparse distribution as outliers. The outliers are regarded as candidates that have high potential to promote conformational transitions and are employed as initial structures for conformational resampling by restarting molecular dynamics simulations. When detecting outliers, FlexDice assigns each outlier a rank in the hierarchy, which relates to its sparsity in the distribution. In this study, we define lower-rank (first-ranked), medium-rank (second-ranked), and highest-rank (third-ranked) outliers, respectively. For instance, the first-ranked outliers are located in a given conformational space away from the clusters (a highly sparse distribution), whereas the third-ranked outliers are near the clusters (a moderately sparse distribution). To achieve the conformational search efficiently, resampling from the outliers with a given rank is performed. As demonstrations, this method was applied to several model systems: alanine dipeptide, Met-enkephalin, Trp-cage, T4 lysozyme, and the glutamine binding protein. In each demonstration, the present method successfully reproduced transitions among metastable states. In particular, the first-ranked OFLOOD highly accelerated the exploration of conformational space by expanding its edges. In contrast, the third-ranked OFLOOD intensively reproduced local transitions among neighboring metastable states. For quantitative evaluation of the sampled snapshots, free energy calculations were performed with a combination of umbrella samplings, providing rigorous landscapes of the biomolecules. © 2015 Wiley Periodicals, Inc.
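The ranking-by-sparsity idea can be sketched with a simple proxy: score each sampled structure by its distance to its k-th nearest neighbor and bucket the scores into ranks, with the most isolated points playing the role of the first-ranked outliers used to restart sampling. The scoring rule and data are illustrative assumptions, not FlexDice's hierarchical clustering:

```python
# Hedged sketch of sparsity-ranked outlier selection: distance to the
# k-th nearest neighbor stands in for FlexDice's hierarchy; rank 1 marks
# the most isolated ("first-ranked") points, rank 3 the least isolated.
import math

def sparsity_ranks(points, k=3, n_ranks=3):
    scores = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(dists[k - 1])  # distance to k-th nearest neighbor
    order = sorted(range(len(points)), key=lambda i: -scores[i])
    # Bucket positions in the sparsity ordering into n_ranks ranks.
    ranks = [0] * len(points)
    bucket = max(1, len(points) // n_ranks)
    for pos, i in enumerate(order):
        ranks[i] = min(pos // bucket + 1, n_ranks)
    return ranks

# Two tight clusters plus one isolated point in a 2-D "conformation" space.
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1), (10, 0)]
ranks = sparsity_ranks(pts, k=2)
print(ranks)  # the isolated point at (10, 0) gets rank 1
```

Restarting simulations from rank-1 points pushes the search toward unexplored edges of the space, while restarting from rank-3 points refines transitions between neighboring basins, mirroring the behavior the abstract reports for first- and third-ranked OFLOOD.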
FURNACE; a toroidal geometry neutronic program system method description and users manual
International Nuclear Information System (INIS)
Verschuur, K.A.
1984-12-01
The FURNACE program system performs neutronic and photonic calculations in 3D toroidal geometry for application to fusion reactors. The geometry description is quite general, allowing any torus cross section and any neutron source density distribution for the plasma, as well as simple parametric representations of circular, elliptic and D-shaped tori and plasmas. The numerical method is based on an approximate transport model that produces results with sufficient accuracy for reactor-design purposes, at acceptable calculational costs. A short description is given of the numerical method, and a user manual for the programs of the system: FURNACE, ANISN-PT, LIBRA, TAPEMA and DRAWER is presented
Comparison of estimation methods for fitting the Weibull distribution
African Journals Online (AJOL)
Tersor
Tree diameter characterisation using probability distribution functions is essential for determining the structure of forest stands. This has been an intrinsic part of forest management planning, decision-making and research in recent times. The distribution of species and tree size in a forest area gives the structure of the stand.
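Two of the estimation methods commonly compared for this task, maximum likelihood and the method of moments, can be sketched on synthetic "tree diameter" data. The parameter values are illustrative assumptions, and both shape equations are solved by bisection:

```python
# Hedged sketch comparing Weibull estimation methods on synthetic tree
# diameters: maximum likelihood (solving the shape equation) versus the
# method of moments (matching the coefficient of variation).
import math, random

random.seed(3)
k_true, lam_true = 2.2, 18.0  # shape, scale (cm), illustrative
data = [lam_true * (-math.log(random.random())) ** (1 / k_true)
        for _ in range(5000)]

def mle_shape(xs, lo=0.1, hi=20.0):
    # Root of g(k) = sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x), which
    # increases from negative to positive over (lo, hi).
    mean_ln = sum(math.log(x) for x in xs) / len(xs)
    def g(k):
        sk = sum(x ** k for x in xs)
        skl = sum(x ** k * math.log(x) for x in xs)
        return skl / sk - 1 / k - mean_ln
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if g(mid) > 0 else (mid, hi)
    return (lo + hi) / 2

def moments_shape(xs, lo=0.1, hi=20.0):
    # Match CV^2 = Gamma(1+2/k)/Gamma(1+1/k)^2 - 1, decreasing in k.
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    cv2 = v / m ** 2
    def h(k):
        return math.gamma(1 + 2 / k) / math.gamma(1 + 1 / k) ** 2 - 1 - cv2
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

print(round(mle_shape(data), 2), round(moments_shape(data), 2))
# Both estimates should land near the true shape of 2.2.
```

On large samples the two estimators agree closely; comparisons of the kind the abstract describes typically contrast their bias and variance on small forest plots, where the differences become visible.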
Marketing and Distribution: New Minimum Wage Legislation: Impact on Co-Op DE Programs.
Husted, Stewart W.
1978-01-01
Impact on distributive education cooperative programs due to the legislation increasing the minimum wage effective January 1, 1978, indicates that the change could greatly restrict future cooperative placements, thereby reducing distributive education enrollments. Employer strategies (for example, reducing student work hours) to overcome wage…
Log-normal spray drop distribution...analyzed by two new computer programs
Gerald S. Walton
1968-01-01
Results of U.S. Forest Service research on chemical insecticides suggest that large drops are not as effective as small drops in carrying insecticides to target insects. Two new computer programs have been written to analyze size distribution properties of drops from spray nozzles. Coded in Fortran IV, the programs have been tested on both the CDC 6400 and the IBM 7094...
Evolution of a Family Nurse Practitioner Program to Improve Primary Care Distribution
Andrus, Len Hughes; Fenley, Mary D.
1976-01-01
Describes a Family Nurse Practitioner Program that has effectively improved the distribution of primary health care manpower in rural areas. Program characteristics include selection of personnel from areas of need, decentralization of clinical and didactic training sites, competency-based portable curriculum, and circuit-riding institutionally…
DepositScan, a Scanning Program to Measure Spray Deposition Distributions
DepositScan, a scanning program, was developed to quickly measure spray deposit distributions on water-sensitive papers or Kromekote cards, which are widely used for determining pesticide spray deposition quality on target areas. The program is installed on a portable computer and works with a ...
2012-08-14
... Office 37 CFR Part 42 Transitional Program for Covered Business Method Patents--Definitions of Covered... Business Method Patents-- Definitions of Covered Business Method Patent and Technological Invention AGENCY... forth in detail the definitions of the terms ``covered business method patent'' and ``technological...
Nuclear power reactor analysis, methods, algorithms and computer programs
International Nuclear Information System (INIS)
Matausek, M.V.
1981-01-01
Full text: For a developing country buying its first nuclear power plants from a foreign supplier, regardless of the type and scope of the contract, there is a certain number of activities which have to be performed by local staff and domestic organizations. This particularly applies to the choice of the nuclear fuel cycle strategy and the choice of the type and size of the reactors, to bid parameter specification, bid evaluation and final safety analysis report evaluation, as well as to in-core fuel management activities. In the Nuclear Engineering Department of the Boris Kidric Institute of Nuclear Sciences (NET IBK), continual work is going on related to the following topics: cross section and resonance integral calculations, spectrum calculations, generation of group constants, lattice and cell problems, criticality and global power distribution search, fuel burnup analysis, in-core fuel management procedures, cost analysis and power plant economics, safety and accident analysis, shielding problems and environmental impact studies, etc. The present paper gives the details of the methods developed and the results achieved, with particular emphasis on the NET IBK computer program package for the needs of planning, construction and operation of nuclear power plants. The main problems encountered so far were related to a small working team, lack of large and powerful computers, absence of reliable basic nuclear data and shortage of experimental and empirical results for testing theoretical models. Some of these difficulties have been overcome thanks to bilateral and multilateral cooperation with developed countries, mostly through the IAEA. It is the author's opinion, however, that mutual cooperation of developing countries, having similar problems and similar goals, could lead to significant results. Some activities of this kind are suggested and discussed. (author)
Method of preparing mercury with an arbitrary isotopic distribution
Grossman, M.W.; George, W.A.
1986-12-16
This invention provides for a process for preparing mercury with a predetermined, arbitrary, isotopic distribution. In one embodiment, different isotopic types of Hg₂Cl₂, corresponding to the predetermined isotopic distribution of Hg desired, are placed in an electrolyte solution of HCl and H₂O. The resulting mercurous ions are then electrolytically plated onto a cathode wire, producing mercury containing the predetermined isotopic distribution. In a similar fashion, Hg with a predetermined isotopic distribution is obtained from different isotopic types of HgO. In this embodiment, the HgO is dissolved in an electrolytic solution of glacial acetic acid and H₂O. The isotopically specific Hg is then electrolytically plated onto a cathode and then recovered. 1 fig.
Problem-Solving Methods for the Prospective Development of Urban Power Distribution Network
Directory of Open Access Journals (Sweden)
A. P. Karpenko
2014-01-01
Full Text Available This article succeeds the earlier publication by A. P. Karpenko and A. I. Kuzmina titled "A mathematical model of urban distribution electro-network considering its future development" (electronic scientific and technical magazine "Science and education" No. 5, 2014). The article offers a model of an urban power distribution network as a set of transformer and distribution substations and cable lines. All elements of the network and new consumers are described by vectors of parameters associated with them. The problem of urban power distribution network design, taking into account the prospective development of the city, is posed as a problem of discrete programming: deciding on the optimal option to connect new consumers to the power supply network, on the number of new substations and the sites at which to build them, and on the option to include them in the power supply network. Two methods are offered to solve the problem: a reduction method, which reduces it to a set of nested global minimization tasks, and a decomposition method. In the reduction method, the problem of prospective development of the power supply network breaks into three subtasks of smaller dimension: a subtask to define the number and sites of new transformer and distribution substations, a subtask to define the option to connect new consumers to the power supply network, and a subtask to include new substations in the power supply network. The vector of varied parameters is broken into three subvectors consistent with the subtasks. Each subtask is solved over the admissible region of its own subvector, with the components of the subvectors obtained from the higher-level subtasks held fixed. In the decomposition method, the task is presented as a set of three subtasks, similar to those of the reduction method, and a coordination problem. The coordination problem specifies the sequence in which the subtasks are solved and defines the moment at which the calculation terminates. Coordination is realized by
ROTAX: a nonlinear optimization program by axes rotation method
International Nuclear Information System (INIS)
Suzuki, Tadakazu
1977-09-01
A nonlinear optimization program employing the axes rotation method has been developed for solving nonlinear problems subject to nonlinear inequality constraints, and its stability and convergence efficiency were examined. The axes rotation method is a direct search for the optimum point that rotates the orthogonal coordinate system in the direction giving the minimum objective. The search direction is rotated freely in multi-dimensional space, so the method is effective for problems whose contours have deep curved valleys. In applying the axes rotation method to optimization problems subject to nonlinear inequality constraints, an improved version of R.R. Allran and S.E.J. Johnsen's method is used, which deals with a new objective function composed of the original objective and a penalty term accounting for the inequality constraints. The program is incorporated in the optimization code system SCOOP. (auth.)
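The abstract does not spell out the search itself; as a rough illustration, a Rosenbrock-style rotating-coordinate direct search (a close relative of the axes rotation idea, not the ROTAX code) can be sketched as follows. The objective, step sizes, and stopping rule are illustrative assumptions.

```python
import numpy as np

def rotating_axes_search(f, x0, step=0.5, cycles=500, tol=1e-9):
    # Rosenbrock-style direct search: advance along orthogonal directions,
    # then rotate the coordinate system toward the accumulated descent.
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    D = np.eye(n)                 # rows are the current search directions
    h = np.full(n, step)          # per-direction step lengths
    fx = f(x)
    for _ in range(cycles):
        d = np.zeros(n)           # net signed displacement per direction
        for i in range(n):
            for s in (h[i], -h[i]):
                trial = x + s * D[i]
                ft = f(trial)
                if ft < fx:
                    x, fx, d[i] = trial, ft, d[i] + s
                    break
        if not d.any():           # stalled cycle: shrink every step
            h *= 0.5
            if h.max() < tol:
                break
            continue
        # Gram-Schmidt rotation: the new first axis follows the total move;
        # old axes pad out any directions that degenerate to zero.
        A = [sum(d[j] * D[j] for j in range(i, n)) for i in range(n)]
        new_D = []
        for a in A + list(D):
            for b in new_D:
                a = a - (a @ b) * b
            norm = np.linalg.norm(a)
            if norm > tol and len(new_D) < n:
                new_D.append(a / norm)
        D = np.array(new_D)
    return x, fx
```

On a quadratic with an elongated valley, the rotation step aligns the leading axis with the valley floor, which is what makes this family of methods effective on curved valleys.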
Bateev, A. B.; Filippov, V. P.
2017-01-01
The article shows the possibility, in principle, of using the computer program Univem MS for Mössbauer spectrum fitting as demonstration material when teaching students disciplines such as atomic and nuclear physics and numerical methods. The program works with nuclear-physical parameters such as the isomer (or chemical) shift of a nuclear energy level, the interaction of the nuclear quadrupole moment with the electric field, and that of the magnetic moment with the surrounding magnetic field. The basic processing algorithm in such programs is the Least Square Method. The deviation of the experimental points of a spectrum from the theoretical dependence is determined on concrete examples; in numerical methods this value is characterized as the mean square deviation. The shape of the theoretical lines in the program is defined by Gaussian and Lorentzian distributions. The visualization of the studied material on atomic and nuclear physics can be improved by similar programs for Mössbauer spectroscopy, X-ray fluorescence analysis or X-ray diffraction analysis.
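The mean square deviation the abstract refers to can be illustrated with a small numeric sketch (not Univem MS itself): a synthetic absorption line with Lorentzian shape is generated, and the line position is recovered by brute-force least squares. All numbers (velocities, line width, depth, noise level) are invented for illustration.

```python
import numpy as np

def lorentzian_line(v, v0, gamma, depth, baseline):
    # Absorption dip of half-width gamma centred at velocity v0.
    return baseline - depth * (gamma / 2) ** 2 / ((v - v0) ** 2 + (gamma / 2) ** 2)

def mean_square_deviation(observed, model):
    # The least-squares figure of merit mentioned in the abstract.
    return np.mean((observed - model) ** 2)

# Hypothetical synthetic spectrum: velocity channels and noisy counts.
v = np.linspace(-4.0, 4.0, 201)
rng = np.random.default_rng(0)
counts = lorentzian_line(v, 0.35, 0.6, 800.0, 10000.0) + rng.normal(0.0, 5.0, v.size)

# Brute-force least-squares scan over the line position v0 alone,
# holding the other parameters at their generating values.
shifts = np.linspace(-1.0, 1.0, 2001)
msd = [mean_square_deviation(counts, lorentzian_line(v, s, 0.6, 800.0, 10000.0))
       for s in shifts]
best_shift = shifts[int(np.argmin(msd))]
```

A real fitter would vary all line parameters simultaneously (typically by a Gauss-Newton or Levenberg-Marquardt iteration), but the figure of merit being minimised is the same mean square deviation.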
DeFosset, Amelia R; Gase, Lauren N; Webber, Eliza; Kuo, Tony
2017-10-01
Healthy food distribution programs that allow small retailers to purchase fresh fruits and vegetables at wholesale prices may increase the profitability of selling produce. While promising, little is known about how these programs affect the availability of fresh fruits and vegetables in underserved communities. This study examined the impacts of a healthy food distribution program in Los Angeles County over its first year of operation (August 2015-2016). Assessment methods included: (1) a brief survey examining the characteristics, purchasing habits, and attitudes of stores entering the program; (2) longitudinal tracking of sales data examining changes in the volume and variety of fruits and vegetables distributed through the program; and (3) the collection of comparison price data from wholesale market databases and local grocery stores. Seventeen stores participated in the program over the study period. One-fourth of survey respondents reported no recent experience selling produce. Analysis of sales data showed that, on average, the total volume of produce distributed through the program increased by six pounds per week over the study period (95% confidence limit: 4.50, 7.50); trends varied by store and produce type. Produce prices offered through the program approximated those at wholesale markets, and were lower than prices at full-service grocers. Results suggest that healthy food distribution programs may reduce certain supply-side barriers to offering fresh produce in small retail venues. While promising, more work is needed to understand the impacts of such programs on in-store environments and consumer behaviors.
Evaluating a physician leadership development program - a mixed methods approach.
Throgmorton, Cheryl; Mitchell, Trey; Morley, Tom; Snyder, Marijo
2016-05-16
Purpose - With the extent of change in healthcare today, organizations need strong physician leaders. To compensate for the lack of physician leadership education, many organizations are sending physicians to external leadership programs or developing in-house leadership programs targeted specifically to physicians. The purpose of this paper is to outline the evaluation strategy and outcomes of the inaugural year of a Physician Leadership Academy (PLA) developed and implemented at a Michigan-based regional healthcare system. Design/methodology/approach - The authors applied the theoretical framework of Kirkpatrick's four levels of evaluation and used surveys, observations, activity tracking, and interviews to evaluate the program outcomes. The authors applied grounded theory techniques to the interview data. Findings - The program met targeted outcomes across all four levels of evaluation. Interview themes focused on the significance of increasing self-awareness, building relationships, applying new skills, and building confidence. Research limitations/implications - While only one example, this study illustrates the importance of developing the evaluation strategy as part of the program design. Qualitative research methods, often lacking from learning evaluation design, uncover rich themes of impact. The study supports how a PLA program can enhance physician learning, engagement, and relationship building throughout and after the program. Physician leaders' partnership with organization development and learning professionals yield results with impact to individuals, groups, and the organization. Originality/value - Few studies provide an in-depth review of evaluation methods and outcomes of physician leadership development programs. Healthcare organizations seeking to develop similar in-house programs may benefit applying the evaluation strategy outlined in this study.
International Nuclear Information System (INIS)
Huang, P.H.
1995-01-01
Taiwan Power Company's (TPC's) power distribution analysis and fuel thermal margin verification methods for pressurized water reactors (PWRs) are examined. The TPC and the Institute of Nuclear Energy Research started a joint 5-yr project in 1989 to establish independent capabilities to perform reload design and transient analysis utilizing state-of-the-art computer programs. As part of the effort, these methods were developed to allow TPC to independently perform verifications of the local power density and departure from nucleate boiling design bases, which are required by the reload safety evaluation for the Maanshan PWR plant. The computer codes utilized were extensively validated for the intended applications. Sample calculations were performed for up to six reload cycles of the Maanshan plant, and the results were found to be quite consistent with the vendor's calculational results
A Capacity Dimensioning Method for Broadband Distribution Networks
DEFF Research Database (Denmark)
Shawky, Ahmed; Pedersen, Jens Myrup; Bergheim, Hans
2010-01-01
This paper presents capacity dimensioning for a hypothetical distribution network in the Danish municipality of Aalborg. The number of customers in need of a better service level and the continuous increase in network traffic make it harder for ISPs to deliver high levels of service to their customers. The paper starts by defining three levels of service, together with traffic demands based on research on traffic distribution and generation in networks. The network dimensions are then calculated. The results from the dimensioning are used to compare different network topologies...
A new kind of droplet space distribution measuring method
International Nuclear Information System (INIS)
Ma Chao; Bo Hanliang
2012-01-01
A new kind of droplet space distribution measuring technique is introduced, together with the experimental device designed for measuring the space distribution and traces of the flying film droplets produced by bubbles breaking up near the free surface of water. The experiment used a kind of water-sensitivity test paper (rice paper) which could record the position and size of the colored scattering droplets precisely. The rice papers were rolled into cylinders of different diameters. The bubbles broke up exactly in the center of the cylinder, and the space distribution and traces of the droplets were obtained by analysing the positions of the droplets produced by same-size bubbles on the rice papers. (authors)
Power operation, measurement and methods of calculation of power distribution
International Nuclear Information System (INIS)
Lindahl, S.O.; Bernander, O.; Olsson, S.
1982-01-01
During the initial fuel loading of a BWR core, extensive checks and measurements of the fuel are performed. The measurements are designed to verify that the reactor can always be safely operated in compliance with the regulatory constraints. The power distribution within the reactor core is evaluated by means of instrumentation and elaborate computer calculations. The power distribution forms the basis for the evaluation of thermal limits. The behaviour of the reactor during the ordinary modes of operation as well as during transients shall be well understood and such that the integrity of the fuel and the reactor systems is always well preserved. (author)
International Nuclear Information System (INIS)
Badenhop, C.T.
1983-01-01
Presented here is a method for the determination of the pore size distribution of a membrane microfilter. Existing test methods are either cumbersome, as is the Erbe method; time-consuming, as is the evaluation of electron microscope photographs; do not really measure the pore distribution, as with the mercury intrusion method; or do not satisfactorily evaluate the large-pore range of the filter, as is the case with the automated ASTM method. The new method described in this paper is based upon the solution of the integral flow equation for the pore distribution function. A computer program evaluates the flow test data and calculates the numerical pore distribution, water-flow distribution, air-flow distribution and capillary area distribution as a function of the pore size. (orig./RW)
Method and computer program product for maintenance and modernization backlogging
Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M
2013-02-19
According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
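The claimed computation is a straightforward sum of three time-period-specific terms; a minimal sketch (function and figures invented for illustration):

```python
def future_facility_conditions(maintenance_cost, modernization_factor, backlog_factor):
    # Per the abstract: future facility conditions equal the time-period-
    # specific maintenance cost plus the modernization factor plus the
    # backlog factor.
    return maintenance_cost + modernization_factor + backlog_factor

# Example with made-up figures for one time period:
projected = future_facility_conditions(120000.0, 35000.0, 18500.0)
```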
Mathematical programming methods for large-scale topology optimization problems
DEFF Research Database (Denmark)
Rojas Labanda, Susana
for mechanical problems, but has rapidly extended to many other disciplines, such as fluid dynamics and biomechanical problems. However, the novelty and improvement of optimization methods have been very limited. It is, indeed, necessary to develop new optimization methods to improve the final designs, and at the same time, reduce the number of function evaluations. Nonlinear optimization methods, such as sequential quadratic programming and interior point solvers, have almost not been embraced by the topology optimization community. Thus, this work is focused on the introduction of this kind of second... for the classical minimum compliance problem. Two of the state-of-the-art optimization algorithms are investigated and implemented for this structural topology optimization problem. A Sequential Quadratic Programming (TopSQP) and an interior point method (TopIP) are developed, exploiting the specific mathematical...
Relaxation and decomposition methods for mixed integer nonlinear programming
Nowak, Ivo; Bank, RE
2005-01-01
This book presents a comprehensive description of efficient methods for solving nonconvex mixed integer nonlinear programs, including several numerical and theoretical results, which are presented here for the first time. It contains many illustrations and an up-to-date bibliography. Because of the emphasis on practical methods, as well as the introduction to the basic theory, the book is accessible to a wide audience. It can be used both as a research text and as a graduate text.
El-Zawawy, Mohamed A.
2014-01-01
This paper introduces new approaches for the analysis of frequent statement and dereference elimination for imperative and object-oriented distributed programs running on parallel machines equipped with hierarchical memories. The paper uses languages whose address spaces are globally partitioned. Distributed programs allow defining data layout and threads writing to and reading from other thread memories. Three type systems (for imperative distributed programs) are the tools of the proposed techniques. The first type system defines for every program point a set of calculated (ready) statements and memory accesses. The second type system uses an enriched version of the types of the first type system and determines which of the ready statements and memory accesses are used later in the program. The third type system uses the information gathered so far to eliminate unnecessary statement computations and memory accesses (the analysis of frequent statement and dereference elimination). Extensions to these type systems are also presented to cover object-oriented distributed programs. Two advantages of our work over related work are the following. The hierarchical style of concurrent parallel computers is similar to the memory model used in this paper. In our approach, each analysis result is assigned a type derivation (which serves as a correctness proof). PMID:24892098
DISTRIBUTED ELECTRICAL POWER PRODUCTION SYSTEM AND METHOD OF CONTROL THEREOF
DEFF Research Database (Denmark)
2010-01-01
The present invention relates to a distributed electrical power production system wherein two or more electrical power units comprise respective sets of power supply attributes. Each set of power supply attributes is associated with a dynamic operating state of a particular electrical power unit....
The effects of different irrigation methods on root distribution ...
African Journals Online (AJOL)
drip, subsurface drip, surface and under-tree micro sprinkler) on the root distribution, intensity and effective root depth of “Williams Pride” and “Jersey Mac” apple cultivars budded on M9, rapidly grown in Isparta Region. The rootstocks were ...
Study program for constant current capacitor charging method
Energy Technology Data Exchange (ETDEWEB)
Pugh, C.
1978-10-04
The objective of the study program was to determine the best method of charging 20,000 to 132,000 microfarads of capacitance to 22 kVdc in 14 to 15 sec. Component costs, sizes, weights, line current graphs, copies of calculations and manufacturer's data are included.
Dynamic Frames Based Verification Method for Concurrent Java Programs
Mostowski, Wojciech
2016-01-01
In this paper we discuss a verification method for concurrent Java programs based on the concept of dynamic frames. We build on our earlier work that proposes a new, symbolic permission system for concurrent reasoning and we provide the following new contributions. First, we describe our approach
Heuristic Methods of Integer Programming and Its Applications in Economics
Directory of Open Access Journals (Sweden)
Dominika Crnjac Milić
2010-12-01
Full Text Available A short overview of the results related to integer programming is described in the introductory part of this paper. Furthermore, there is a list of literature related to this field. The main part of the paper analyses the Heuristic method which yields a very fast result without the use of significant mathematical tools.
Path Following in the Exact Penalty Method of Convex Programming.
Zhou, Hua; Lange, Kenneth
2015-07-01
Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.
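The mechanics described above are easy to see in one dimension. The sketch below (problem and grid chosen purely for illustration) traces the minimiser of the absolute-value penalty for min (x - 2)^2 subject to x <= 1; the solution path follows x(rho) = 2 - rho/2 until rho = 2, where it hits the constraint and the exact penalty recovers the constrained solution at a finite penalty constant.

```python
import numpy as np

def exact_penalty_path(rhos, grid):
    # Trace the minimiser of  phi_rho(x) = (x - 2)**2 + rho * max(0, x - 1)
    # as the penalty constant rho grows. The constrained problem is
    # min (x - 2)**2  subject to  x <= 1, whose solution is x = 1.
    path = []
    for rho in rhos:
        phi = (grid - 2.0) ** 2 + rho * np.maximum(0.0, grid - 1.0)
        path.append(grid[int(np.argmin(phi))])
    return path

grid = np.linspace(-1.0, 3.0, 40001)
path = exact_penalty_path([0.0, 1.0, 2.0, 4.0], grid)
# The unconstrained solution x = 2 slides toward the constraint as rho
# grows, and stays at the kink x = 1 for every rho >= 2.
```

The kink in max(0, x - 1) is exactly the non-smoothness that, in higher dimensions, makes the article's path-following strategy (hitting, sliding along, and exiting constraints) preferable to optimising at a single penalty constant.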
Directory of Open Access Journals (Sweden)
Nandi Vijay
2007-01-01
Full Text Available Abstract Background Fatal heroin overdose is a significant cause of mortality for injection drug users (IDUs). Many of these deaths are preventable because opiate overdoses can be quickly and safely reversed through the injection of Naloxone [brand name Narcan], a prescription drug used to revive persons who have overdosed on heroin or other opioids. Currently, in several cities in the United States, drug users are being trained in naloxone administration and given naloxone for immediate and successful reversals of opiate overdoses. There has been very little formal description of the challenges faced in the development and implementation of large-scale IDU naloxone administration training and distribution programs and the lessons learned during this process. Methods During a one-year period, over 1,000 participants were trained in SKOOP (Skills and Knowledge on Opiate Prevention) and received a prescription for naloxone by a medical doctor on site at a syringe exchange program (SEP) in New York City. Participants in SKOOP were over the age of 18, current participants of SEPs, and current or former drug users. We present details about program design and lessons learned during the development and implementation of SKOOP. Lessons learned described in the manuscript are collectively articulated by the evaluators and implementers of the project. Results There were six primary challenges and lessons learned in developing, implementing, and evaluating SKOOP. These include: (a) the political climate surrounding naloxone distribution; (b) extant prescription drug laws; (c) initial low levels of recruitment into the program; (d) development of participant-appropriate training methodology; (e) challenges in the design of a suitable formal evaluation; and (f) evolution of program response to naloxone. Conclusion Other naloxone distribution programs may anticipate similar challenges to SKOOP and we identify mechanisms to address them. Strategies include being flexible in
International Nuclear Information System (INIS)
Hubicki, W.; Hubicka, H.
1980-01-01
The method of basic precipitation of lanthanons was combined with the ion exchange distribution method using ammonium acetate. As a result of chromatogram development 1:2, good results for the distribution of Sm-Nd were obtained, with fractions of 99.9% Nd₂O₃ and Pr₆O₁₁ and 99.5% La₂O₃. It was found that the way of packing the column greatly influenced the efficiency of ion distribution. (author)
A robust fusion method for multiview distributed video coding
DEFF Research Database (Denmark)
Salmistraro, Matteo; Ascenso, Joao; Brites, Catarina
2014-01-01
Distributed video coding (DVC) is a coding paradigm which exploits the redundancy of the source (video) at the decoder side, as opposed to predictive coding, where the encoder leverages the redundancy. To exploit the correlation between views, multiview predictive video codecs require the encoder... with a robust fusion system able to improve the quality of the fused SI along the decoding process through a learning process using already decoded data. We here take the approach of fusing the estimated distributions of the SIs, as opposed to a conventional fusion algorithm based on the fusion of pixel values. The proposed solution is able to achieve gains up to 0.9 dB in Bjøntegaard difference when compared with the best-performing (in an RD sense) single-SI DVC decoder, chosen as the best of an inter-view and a temporal SI-based decoder...
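The idea of fusing estimated distributions rather than pixel values can be illustrated with a toy rule (the paper's actual fusion rule is not given in the abstract): combine two side-information pmfs over the pixel range by pointwise product and renormalisation, so that values plausible under both SIs dominate.

```python
import numpy as np

def fuse_si_distributions(p1, p2):
    # Toy fusion of two side-information pmfs over the 8-bit pixel range:
    # pointwise product, renormalised. This is an illustrative stand-in,
    # not the fusion rule of the cited paper.
    fused = p1 * p2
    return fused / fused.sum()

x = np.arange(256, dtype=float)

def gaussian_pmf(centre, sigma=2.0):
    # Discretised bell-shaped pmf around a predicted pixel value.
    p = np.exp(-((x - centre) ** 2) / (2.0 * sigma ** 2))
    return p / p.sum()

# Inter-view SI predicts pixel value 100, temporal SI predicts 104;
# the fused distribution concentrates between the two predictions.
fused = fuse_si_distributions(gaussian_pmf(100.0), gaussian_pmf(104.0))
```

Fusing whole distributions preserves each SI's uncertainty, which a simple average of pixel values would discard.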
Visual Method for Spectral Energy Distribution Calculation of ...
Indian Academy of Sciences (India)
Abstract. In this work, we propose to use 'The Geometer's Sketchpad' for fitting the spectral energy distribution of a blazar based on three effective spectral indices, αRO, αOX, and αRX, and the flux density in the radio band. It lets us see the fitting in detail, with both the peak frequency and peak luminosity given ...
Nuclear methods - an integral part of the NBS certification program
International Nuclear Information System (INIS)
Gills, T.E.
1984-01-01
Within the past twenty years, new techniques and methods have emerged in response to new technologies that are based upon the performance of high-purity and well-characterized materials. The National Bureau of Standards, through its Standard Reference Materials (SRMs) Program, provides standards in the form of many of these materials to ensure accuracy and the compatibility of measurements throughout the US and the world. These standards are developed by using state-of-the-art methods and procedures for both preparation and analysis. Nuclear methods, in particular activation analysis, constitute an integral part of that analysis process
New Jersey's residential radon remediation program - methods and experience
International Nuclear Information System (INIS)
Pluta, T.A.; Cosolita, F.J.; Rothfuss, E.
1986-01-01
As part of a remedial action program to decontaminate over 200 residential properties, 12 typical properties were selected and a demonstration program was initiated in the spring of 1985. The residences selected represented a range of contamination levels and configurations and differing architectural styles representative of the age of construction. The physical limitations of the sites and the overall nature of a decontamination project in active residential communities imposed a number of severe restrictions on work methods and equipment. Regulations governing transportation and disposal set virtually zero defect standards for the condition of containers. The intrusive nature of the work in residential neighborhoods required continual interaction with local residents, public officials and citizen task forces. Media coverage was very high. Numerous briefings were held to allay fears and promote public understanding. Numerous issues ranging in content from public health and safety to engineering and construction methods arose during the remedial action program. These issues were resolved by a multi-disciplined management team which was knowledgeable in public administration, radiation physics, and engineering design and construction. This paper discusses the nature of the problem, the methods applied to resolve the problem and the experience gained as a result of a remedial action program
Polyhedral and semidefinite programming methods in combinatorial optimization
Tunçel, Levent
2010-01-01
Since the early 1960s, polyhedral methods have played a central role in both the theory and practice of combinatorial optimization. Since the early 1990s, a new technique, semidefinite programming, has been increasingly applied to some combinatorial optimization problems. The semidefinite programming problem is the problem of optimizing a linear function of matrix variables, subject to finitely many linear inequalities and the positive semidefiniteness condition on some of the matrix variables. On certain problems, such as maximum cut, maximum satisfiability, maximum stable set and geometric r
Response Matrix Method Development Program at Savannah River Laboratory
International Nuclear Information System (INIS)
Sicilian, J.M.
1976-01-01
The Response Matrix Method Development Program at Savannah River Laboratory (SRL) has concentrated on the development of an effective system of computer codes for the analysis of Savannah River Plant (SRP) reactors. The most significant contribution of this program to date has been the verification of the accuracy of diffusion theory codes as used for routine analysis of SRP reactor operation. This paper documents the two steps carried out in achieving this verification: confirmation of the accuracy of the response matrix technique through comparison with experiment and Monte Carlo calculations; and establishment of agreement between diffusion theory and response matrix codes in situations which realistically approximate actual operating conditions
A fully distributed method for dynamic spectrum sharing in femtocells
DEFF Research Database (Denmark)
Da Costa, Gustavo Wagner Oliveira; Cattoni, Andrea Fabio; Kovacs, Istvan
2012-01-01
...when such characteristics are combined, the traditional network planning and optimization of cellular networks fails to be cost effective. Therefore, a greater degree of automation is needed in femtocells. In particular, this paper proposes a novel method for autonomous selection of spectrum/channels in femtocells... This method effectively mitigates co-tier interference with no signaling at all across different femtocells. Still, the method has a remarkably simple implementation. The efficiency of the proposed method was evaluated by system-level simulations. The results show large throughput gains for the cells...
P3T+: A Performance Estimator for Distributed and Parallel Programs
Directory of Open Access Journals (Sweden)
T. Fahringer
2000-01-01
Full Text Available Developing distributed and parallel programs on today's multiprocessor architectures is still a challenging task. Particularly distressing is the lack of effective performance tools that support the programmer in evaluating changes in code, problem and machine sizes, and target architectures. In this paper we introduce P3T+, a performance estimator for mostly regular HPF (High Performance Fortran) programs that also partially covers message passing (MPI) programs. P3T+ is unique in modeling programs, compiler code transformations, and parallel and distributed architectures. It computes at compile-time a variety of performance parameters including work distribution, number of transfers, amount of data transferred, transfer times, computation times, and number of cache misses. Several novel technologies are employed to compute these parameters: loop iteration spaces, array access patterns, and data distributions are modeled by employing highly effective symbolic analysis. Communication is estimated by simulating the behavior of the communication library used by the underlying compiler. Computation times are predicted through pre-measured kernels on every target architecture of interest. We carefully model the most critical architecture-specific factors such as cache line sizes, number of cache lines available, startup times, message transfer time per byte, etc. P3T+ has been implemented and is closely integrated with the Vienna High Performance Compiler (VFC) to support programmers in developing parallel and distributed applications. Experimental results for realistic kernel codes taken from real-world applications are presented to demonstrate both the accuracy and the usefulness of P3T+.
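The kinds of compile-time parameters listed above (work distribution, number of transfers, amount of data transferred) can be illustrated with a deliberately simplified model — the names and formulas below are hypothetical stand-ins, not P3T+'s actual implementation: a 1D array block-distributed over p processors, where each element update needs its left and right neighbours (a halo exchange).

```python
# Hypothetical miniature of a static performance estimator: work
# distribution and communication volume for a 1D array of n elements
# block-distributed over p processors with nearest-neighbour access.
def block_sizes(n, p):
    """Elements owned by each processor under block distribution."""
    base, rem = divmod(n, p)
    return [base + (1 if i < rem else 0) for i in range(p)]

def estimate(n, p, elem_bytes=8):
    work = block_sizes(n, p)
    # Each interior block boundary is crossed twice (one halo element
    # sent in each direction), giving 2*(p-1) transfers per sweep.
    transfers = 2 * (p - 1)
    bytes_moved = transfers * elem_bytes
    imbalance = max(work) / (sum(work) / p)   # load-imbalance factor
    return work, transfers, bytes_moved, imbalance

work, transfers, nbytes, imb = estimate(10, 4)
print(work)       # [3, 3, 2, 2]
print(transfers)  # 6
```

A real estimator like P3T+ derives such counts from symbolic analysis of loop iteration spaces rather than from a fixed formula.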
Congestion management of electric distribution networks through market based methods
DEFF Research Database (Denmark)
Huang, Shaojun
EVs and HPs. Market-based congestion management methods are the focus of the thesis. They handle the potential congestion at the energy planning stage; therefore, the aggregators can optimally plan the energy consumption and have the least impact on the customers. After reviewing and identifying...... the shortcomings of the existing methods, the thesis fully studies and improves the dynamic tariff (DT) method, and proposes two new market-based congestion management methods, namely the dynamic subsidy (DS) method and the flexible demand swap method. The thesis improves the DT method from four aspects......Rapidly increasing share of intermittent renewable energy production poses a great challenge of the management and operation of the modern power systems. Deployment of a large number of flexible demands, such as electrical vehicles (EVs) and heat pumps (HPs), is believed to be a promising solution...
Calculation of Pressure Distribution at Rotary Body Surface with the Vortex Element Method
Directory of Open Access Journals (Sweden)
S. A. Dergachev
2014-01-01
Full Text Available The vortex element method makes it possible to simulate unsteady hydrodynamic processes in an incompressible medium, taking into account the evolution of the vortex sheet, including the deformation or motion of the body or parts of the construction. To calculate hydrodynamic characteristics with this method, the software package MVE3D was developed. The vortex element (VE) used in the program is a symmetrical vorton-cut; closed vorton frames are used to satisfy the boundary conditions at the surface. With this software system, the incompressible flow around a cylindrical body of elongation L/D = 13 with a spherically blunted nose was modeled at an angle of attack of 10°. The distribution of the pressure coefficient along the upper and lower generatrices of the body surface was analyzed, and the calculated results were compared with known experimental results. Design schemes with different numbers of vorton frames were considered, and the VE radius was also varied. The calculations made it possible to establish the degree of surface discretization needed to produce results close to experiment. It was shown that adequate reproduction of the pressure distribution in the transition region between the spherical and cylindrical surfaces on the windward side requires a high degree of discretization. These results indicate that the design scheme of the body surface may need to be improved in order to describe the flow vorticity more accurately in regions where the geometry of the streamlined body changes abruptly.
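The 3D vorton machinery of MVE3D is beyond a short sketch, but its basic building block — the velocity a vortex element induces at a field point — can be illustrated with the 2D point-vortex analogue (an illustrative reduction, not MVE3D's algorithm): the induced velocity is tangential with magnitude gamma / (2*pi*r).

```python
import math

# 2D point-vortex analogue of the induced-velocity kernel: a vortex of
# circulation gamma at (xv, yv) induces a counter-clockwise tangential
# velocity of magnitude gamma / (2*pi*r) at distance r.
def induced_velocity(gamma, xv, yv, x, y):
    dx, dy = x - xv, y - yv
    r2 = dx * dx + dy * dy
    return (-gamma * dy / (2 * math.pi * r2),
             gamma * dx / (2 * math.pi * r2))

u, v = induced_velocity(2 * math.pi, 0.0, 0.0, 1.0, 0.0)
print(u, v)  # 0.0 1.0 -> unit tangential speed at unit radius
```

Given the total velocity V at a surface point from all vortex elements, the pressure coefficient then follows from Bernoulli's relation, Cp = 1 - (V/V_inf)^2.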
Distributed Cooperation Solution Method of Complex System Based on MAS
Weijin, Jiang; Yuhui, Xu
To adapt the fault-diagnosis model to dynamic environments and to fully meet the needs of solving the tasks of a complex system, this paper introduces multi-agent and related technology to complicated fault diagnosis, and an integrated intelligent control system is studied. Based on the structure of diagnostic decision and hierarchy in modeling, and on a multi-layer decomposition strategy for the diagnosis task, a multi-agent synchronous diagnosis federation integrating different knowledge-expression modes and inference mechanisms is presented; the functions of the management agent, diagnosis agent and decision agent are analyzed; the organization and evolution of agents in the system are proposed; and the corresponding conflict-resolution algorithm is given. A layered structure of abstract agents with public attributes is built. The system architecture is realized on an MAS distributed layered blackboard. A real-world application shows that the proposed control structure successfully solves the fault-diagnosis problem of a complex plant and has particular advantages in the distributed domain.
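A minimal sketch of the layered-blackboard idea (all class, agent, and fault names below are hypothetical illustrations, not the paper's system): diagnosis agents post fault hypotheses for observed symptoms to a shared blackboard, and a decision agent resolves conflicts by keeping the highest-belief hypothesis per symptom.

```python
# Toy blackboard architecture: agents communicate only through the
# shared blackboard, mirroring the distributed layered-blackboard idea.
class Blackboard:
    def __init__(self):
        self.hypotheses = []   # (symptom, fault, belief) entries

    def post(self, symptom, fault, belief):
        self.hypotheses.append((symptom, fault, belief))

def diagnosis_agent(board, symptom, knowledge):
    """Post every fault hypothesis this agent's knowledge supports."""
    for fault, belief in knowledge.get(symptom, []):
        board.post(symptom, fault, belief)

def decision_agent(board):
    """Conflict resolution: keep the highest-belief fault per symptom."""
    best = {}
    for symptom, fault, belief in board.hypotheses:
        if symptom not in best or belief > best[symptom][1]:
            best[symptom] = (fault, belief)
    return best

board = Blackboard()
diagnosis_agent(board, "high_temp", {"high_temp": [("pump", 0.6), ("valve", 0.8)]})
diagnosis_agent(board, "high_temp", {"high_temp": [("sensor", 0.3)]})
print(decision_agent(board))  # {'high_temp': ('valve', 0.8)}
```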
Integrated Data Collection Analysis (IDCA) Program - SSST Testing Methods
Energy Technology Data Exchange (ETDEWEB)
Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Whinnery, LeRoy L. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Phillips, Jason J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Shelley, Timothy J. [Bureau of Alcohol, Tobacco and Firearms (ATF), Huntsville, AL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2013-03-25
The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small- Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the methods used for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis during the IDCA program. These methods changed throughout the Proficiency Test and the reasons for these changes are documented in this report. The most significant modifications in standard testing methods are: 1) including one specified sandpaper in impact testing among all the participants, 2) diversifying liquid test methods for selected participants, and 3) including sealed sample holders for thermal testing by at least one participant. This effort, funded by the Department of Homeland Security (DHS), is putting the issues of safe handling of these materials in perspective with standard military explosives. The study is adding SSST testing results for a broad suite of different HMEs to the literature. Ultimately the study will suggest new guidelines and methods and possibly establish the SSST testing accuracies needed to develop safe handling practices for HMEs. Each participating testing laboratory uses identical test materials and preparation methods wherever possible. The testing performers involved are Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Indian Head Division, Naval Surface Warfare Center, (NSWC IHD), Sandia National Laboratories (SNL), and Air Force Research Laboratory (AFRL/RXQL). These tests are conducted as a proficiency study in order to establish some consistency in test protocols, procedures, and experiments and to compare results when these testing variables cannot be made consistent.
MORSEC-SP, Step Function Angular Distribution for Cross-Sections Calculation by Program MORSE
International Nuclear Information System (INIS)
1980-01-01
1 - Description of problem or function: MORSEC-SP allows one to utilize a step distribution to describe the angular dependence of the multi-group function in the MORSEC cross-section module of the MORSE Monte Carlo code. The step distribution is always non-negative and may be used in the random walk and for making point detector estimates. 2 - Method of solution: MORSEC-SP utilizes a table look-up procedure to provide the probability of scattering when making point detector estimates for a given incident energy group and scattering angle. In the random walk, the step distributions are converted to cumulative distributions and an angle of scatter is selected from the cumulative distributions. The step distributions are obtained by calculation using moments converted from the given Legendre coefficients of the scattering distributions. 3 - Restrictions on the complexity of the problem: The additional coding in the MORSEC module is variably dimensioned and fully incorporated into blank common
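The sampling step in the method of solution — converting a step distribution to a cumulative distribution and selecting a scattering angle from it — can be sketched as follows (the bin edges and step heights are illustrative values, not MORSE data):

```python
import random
import bisect

# A non-negative step distribution over the scattering cosine mu in
# [-1, 1] is converted to a cumulative distribution; an angle is then
# selected by inverse-CDF table look-up.
edges = [-1.0, -0.5, 0.0, 0.5, 1.0]    # bin boundaries in mu
heights = [0.1, 0.2, 0.3, 0.4]         # relative step heights

weights = [h * (edges[i + 1] - edges[i]) for i, h in enumerate(heights)]
total = sum(weights)
cdf, acc = [], 0.0
for w in weights:
    acc += w / total
    cdf.append(acc)
cdf[-1] = 1.0                          # guard against round-off

def sample_mu(rng=random):
    xi = rng.random()
    i = bisect.bisect_left(cdf, xi)    # table look-up: locate the bin
    lo = cdf[i - 1] if i > 0 else 0.0
    frac = (xi - lo) / (cdf[i] - lo)   # uniform position within the bin
    return edges[i] + frac * (edges[i + 1] - edges[i])

random.seed(0)
samples = [sample_mu() for _ in range(100000)]
print(-1.0 <= min(samples) <= max(samples) <= 1.0)  # True
```

For these illustrative heights the expected value of mu is 0.25, which the sample mean should approach.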
Extension of the pseudo dynamic method to test structures with distributed mass
International Nuclear Information System (INIS)
Renda, V.; Papa, L.; Bellorini, S.
1993-01-01
The PsD method is a mixed numerical and experimental procedure. At each time step the dynamic deformation of the structure, computed by solving the equation of motion for a given input signal, is reproduced in the laboratory by means of actuators attached to the sample at specific points. The reaction forces at those points are measured and used to compute the deformation for the next time step. Since the reaction forces are measured, knowledge of the stiffness of the structure is not needed, so the method can be effective even for deformations leading to strongly nonlinear behaviour of the structure. In contrast, the mass matrix and the applied forces must be well known. For this reason the PsD method can be applied without approximation when the masses can be considered as lumped at the testing points of the sample. The present work investigates the possibility of extending the PsD method to test structures with distributed mass. A standard procedure is proposed to provide an equivalent mass matrix and force vector reduced to the testing points and to verify the reliability of the model. The verification is obtained by comparing the results of a multi-degree-of-freedom dynamic analysis, done by means of a Finite Element (FE) numerical program, with a simulation of the PsD method based on the reduced-degrees-of-freedom mass matrix and external forces, using in place of the experimental reactions those computed with the general FE model. The method has been applied to a numerical simulation of the behaviour of a realistic and complex structure with distributed mass consisting of a masonry building of two floors. The FE model consists of about two thousand degrees of freedom and the condensation has been made for four testing points. A dynamic analysis has been performed with the general FE model and the reactions of the structure have been recorded in a file and used as input for the PsD simulation with the four-degree-of-freedom model. The comparison between
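The PsD loop itself can be sketched for a single degree of freedom, with a linear spring standing in for the laboratory measurement (an illustrative reduction: in the real method the restoring force comes from actuators and load cells on a physical sample, and the stiffness is never used by the algorithm):

```python
# One-DOF pseudo-dynamic simulation: the next displacement is computed
# from the equation of motion m*a = f - r using the *measured* reaction
# r(d), so the structural stiffness need not be known.
def measured_reaction(d, k=4.0):
    return k * d   # stands in for the actuator/load-cell measurement

def psd_simulate(m, f, dt, steps):
    d_prev, d = 0.0, 0.0
    history = [d]
    for _ in range(steps):
        r = measured_reaction(d)
        # Central-difference time integration of m*a_n = f_n - r_n.
        d_next = 2 * d - d_prev + dt * dt * (f - r) / m
        d_prev, d = d, d_next
        history.append(d)
    return history

hist = psd_simulate(m=1.0, f=1.0, dt=0.01, steps=5000)
# Undamped response oscillates about the static solution f/k = 0.25.
print(abs(sum(hist[-1000:]) / 1000 - 0.25) < 0.05)  # True
```

Extending this to distributed mass is exactly the condensation problem the paper addresses: the scalar m becomes a mass matrix reduced to the testing points.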
A Sequential Quadratically Constrained Quadratic Programming Method of Feasible Directions
International Nuclear Information System (INIS)
Jian Jinbao; Hu Qingjie; Tang Chunming; Zheng Haiyan
2007-01-01
In this paper, a sequential quadratically constrained quadratic programming method of feasible directions is proposed for optimization problems with nonlinear inequality constraints. At each iteration of the proposed algorithm, a feasible direction of descent is obtained by solving only one subproblem, which consists of a convex quadratic objective function and simple quadratic inequality constraints and does not require the second derivatives of the functions of the problem; such a subproblem can be formulated as a second-order cone program, which can be solved by interior point methods. To overcome the Maratos effect, an efficient higher-order correction direction is obtained by a single explicit computation formula. The algorithm is proved to be globally convergent and superlinearly convergent under some mild conditions without strict complementarity. Finally, some preliminary numerical results are reported
International Nuclear Information System (INIS)
Sharma, R.B.; Ghildyal, B.P.
1976-01-01
The root distribution of wheat variety UP 301 was obtained by determining the 32P activity in soil-root cores by two methods, viz. ignition and triacid digestion. The root distribution obtained by these two methods was compared with that given by the standard root-core washing procedure. The percent error in root distribution as determined by the triacid digestion method was within ±2.1 to ±9.0, as against ±5.5 to ±21.2 for the ignition method. The triacid digestion method thus proved superior to the ignition method. (author)
Method for measuring the size distribution of airborne rhinovirus
International Nuclear Information System (INIS)
Russell, M.L.; Goth-Goldstein, R.; Apte, M.G.; Fisk, W.J.
2002-01-01
About 50% of viral-induced respiratory illnesses are caused by the human rhinovirus (HRV). Measurements of the concentrations and sizes of bioaerosols are critical for research on building characteristics, aerosol transport, and mitigation measures. We developed a quantitative reverse transcription-coupled polymerase chain reaction (RT-PCR) assay for HRV and verified that this assay detects HRV in nasal lavage samples. A quantitation standard was used to determine a detection limit of 5 fg of HRV RNA with a linear range over 1000-fold. To measure the size distribution of HRV aerosols, volunteers with a head cold spent two hours in a ventilated research chamber. Airborne particles from the chamber were collected using an Andersen Six-Stage Cascade Impactor. Each stage of the impactor was analyzed by quantitative RT-PCR for HRV. For the first two volunteers with confirmed HRV infection, but with mild symptoms, we were unable to detect HRV on any stage of the impactor
Development of methods for DSM and distribution automation planning
Energy Technology Data Exchange (ETDEWEB)
Lehtonen, M.; Seppaelae, A.; Kekkonen, V.; Koreneff, G. [VTT Energy, Espoo (Finland)
1996-12-31
In the de-regulated electricity market, the power trading companies have to face new problems. The biggest challenges are caused by the uncertainty in the load magnitudes. In order to minimize the risks in power purchase and also in retail sales, the power traders should have as reliable and accurate estimates for hourly demands of their customers as possible. New tools have been developed for the distribution load estimation and for the management of energy balances of the trading companies. These tools are based on the flexible combination of the information available from several sources, like direct customer measurements, network measurements, load models and statistical data. These functions also serve as an information source for higher level activities of the electricity selling companies. These activities and the associated functions have been studied in the prototype system called DEM, which is now being developed for the operation of Finnish utilities in the newly de-regulated power market
Development of methods for DSM and distribution automation planning
International Nuclear Information System (INIS)
Lehtonen, M.; Seppaelae, A.; Kekkonen, V.; Koreneff, G.
1996-01-01
In the de-regulated electricity market, the power trading companies have to face new problems. The biggest challenges are caused by the uncertainty in the load magnitudes. In order to minimize the risks in power purchase and also in retail sales, the power traders should have as reliable and accurate estimates for hourly demands of their customers as possible. New tools have been developed for the distribution load estimation and for the management of energy balances of the trading companies. These tools are based on the flexible combination of the information available from several sources, like direct customer measurements, network measurements, load models and statistical data. These functions also serve as an information source for higher level activities of the electricity selling companies. These activities and the associated functions have been studied in the prototype system called DEM, which is now being developed for the operation of Finnish utilities in the newly de-regulated power market
Method for measuring the size distribution of airborne rhinovirus
Energy Technology Data Exchange (ETDEWEB)
Russell, M.L.; Goth-Goldstein, R.; Apte, M.G.; Fisk, W.J.
2002-01-01
About 50% of viral-induced respiratory illnesses are caused by the human rhinovirus (HRV). Measurements of the concentrations and sizes of bioaerosols are critical for research on building characteristics, aerosol transport, and mitigation measures. We developed a quantitative reverse transcription-coupled polymerase chain reaction (RT-PCR) assay for HRV and verified that this assay detects HRV in nasal lavage samples. A quantitation standard was used to determine a detection limit of 5 fg of HRV RNA with a linear range over 1000-fold. To measure the size distribution of HRV aerosols, volunteers with a head cold spent two hours in a ventilated research chamber. Airborne particles from the chamber were collected using an Andersen Six-Stage Cascade Impactor. Each stage of the impactor was analyzed by quantitative RT-PCR for HRV. For the first two volunteers with confirmed HRV infection, but with mild symptoms, we were unable to detect HRV on any stage of the impactor.
Development of methods for DSM and distribution automation planning
Energy Technology Data Exchange (ETDEWEB)
Lehtonen, M; Seppaelae, A; Kekkonen, V; Koreneff, G [VTT Energy, Espoo (Finland)
1997-12-31
In the de-regulated electricity market, the power trading companies have to face new problems. The biggest challenges are caused by the uncertainty in the load magnitudes. In order to minimize the risks in power purchase and also in retail sales, the power traders should have as reliable and accurate estimates for hourly demands of their customers as possible. New tools have been developed for the distribution load estimation and for the management of energy balances of the trading companies. These tools are based on the flexible combination of the information available from several sources, like direct customer measurements, network measurements, load models and statistical data. These functions also serve as an information source for higher level activities of the electricity selling companies. These activities and the associated functions have been studied in the prototype system called DEM, which is now being developed for the operation of Finnish utilities in the newly de-regulated power market
Iterative methods for distributed parameter estimation in parabolic PDE
Energy Technology Data Exchange (ETDEWEB)
Vogel, C.R. [Montana State Univ., Bozeman, MT (United States); Wade, J.G. [Bowling Green State Univ., OH (United States)
1994-12-31
The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the 'forward problem' is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.
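A toy instance of such a parameter estimation problem (illustrative, not from the abstract): the forward model is the heat equation u_t = k*u_xx on [0,1] with u(x,0) = sin(pi*x), whose exact solution is u(x,t) = exp(-k*pi^2*t)*sin(pi*x), and the unknown diffusivity k is recovered from a few observations by an iterative method — here plain gradient descent on the data misfit.

```python
import math

# Forward model: exact solution of the heat equation for one mode.
def forward(k, x, t):
    return math.exp(-k * math.pi ** 2 * t) * math.sin(math.pi * x)

obs_points = [(0.25, 0.1), (0.5, 0.1), (0.75, 0.2)]
k_true = 0.8
data = [forward(k_true, x, t) for x, t in obs_points]  # synthetic data

def misfit_grad(k):
    """Gradient of 0.5 * sum (forward - data)^2 with respect to k."""
    g = 0.0
    for (x, t), d in zip(obs_points, data):
        u = forward(k, x, t)
        du_dk = -math.pi ** 2 * t * u      # sensitivity of the model
        g += (u - d) * du_dk
    return g

# Plain gradient descent as the iterative method (step size hand-tuned).
k = 0.1
for _ in range(2000):
    k -= 0.3 * misfit_grad(k)
print(round(k, 4))  # 0.8
```

Large-scale versions of this problem replace the exact solution with a PDE solve per iteration, which is why efficient iterative algorithms are the focus of the abstract.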
Next Generation Nuclear Plant Methods Technical Program Plan
Energy Technology Data Exchange (ETDEWEB)
Richard R. Schultz; Abderrafi M. Ougouag; David W. Nigg; Hans D. Gougar; Richard W. Johnson; William K. Terry; Chang H. Oh; Donald W. McEligot; Gary W. Johnsen; Glenn E. McCreery; Woo Y. Yoon; James W. Sterbentz; J. Steve Herring; Temitope A. Taiwo; Thomas Y. C. Wei; William D. Pointer; Won S. Yang; Michael T. Farmer; Hussein S. Khalil; Madeline A. Feltus
2010-12-01
One of the great challenges of designing and licensing the Very High Temperature Reactor (VHTR) is to confirm that the intended VHTR analysis tools can be used confidently to make decisions and to assure all that the reactor systems are safe and meet the performance objectives of the Generation IV Program. The research and development (R&D) projects defined in the Next Generation Nuclear Plant (NGNP) Design Methods Development and Validation Program will ensure that the tools used to perform the required calculations and analyses can be trusted. The Methods R&D tasks are designed to ensure that the calculational envelope of the tools used to analyze the VHTR reactor systems encompasses, or is larger than, the operational and transient envelope of the VHTR itself. The Methods R&D focuses on the development of tools to assess the neutronic and thermal fluid behavior of the plant. The fuel behavior and fission product transport models are discussed in the Advanced Gas Reactor (AGR) program plan. Various stress analysis and mechanical design tools will also need to be developed and validated and will ultimately also be included in the Methods R&D Program Plan. The calculational envelope of the neutronics and thermal-fluids software tools intended to be used on the NGNP is defined by the scenarios and phenomena that these tools can calculate with confidence. The software tools can only be used confidently when the results they produce have been shown to be in reasonable agreement with first-principle results, thought-problems, and data that describe the “highly ranked” phenomena inherent in all operational conditions and important accident scenarios for the VHTR.
Next Generation Nuclear Plant Methods Technical Program Plan -- PLN-2498
Energy Technology Data Exchange (ETDEWEB)
Richard R. Schultz; Abderrafi M. Ougouag; David W. Nigg; Hans D. Gougar; Richard W. Johnson; William K. Terry; Chang H. Oh; Donald W. McEligot; Gary W. Johnsen; Glenn E. McCreery; Woo Y. Yoon; James W. Sterbentz; J. Steve Herring; Temitope A. Taiwo; Thomas Y. C. Wei; William D. Pointer; Won S. Yang; Michael T. Farmer; Hussein S. Khalil; Madeline A. Feltus
2010-09-01
One of the great challenges of designing and licensing the Very High Temperature Reactor (VHTR) is to confirm that the intended VHTR analysis tools can be used confidently to make decisions and to assure all that the reactor systems are safe and meet the performance objectives of the Generation IV Program. The research and development (R&D) projects defined in the Next Generation Nuclear Plant (NGNP) Design Methods Development and Validation Program will ensure that the tools used to perform the required calculations and analyses can be trusted. The Methods R&D tasks are designed to ensure that the calculational envelope of the tools used to analyze the VHTR reactor systems encompasses, or is larger than, the operational and transient envelope of the VHTR itself. The Methods R&D focuses on the development of tools to assess the neutronic and thermal fluid behavior of the plant. The fuel behavior and fission product transport models are discussed in the Advanced Gas Reactor (AGR) program plan. Various stress analysis and mechanical design tools will also need to be developed and validated and will ultimately also be included in the Methods R&D Program Plan. The calculational envelope of the neutronics and thermal-fluids software tools intended to be used on the NGNP is defined by the scenarios and phenomena that these tools can calculate with confidence. The software tools can only be used confidently when the results they produce have been shown to be in reasonable agreement with first-principle results, thought-problems, and data that describe the “highly ranked” phenomena inherent in all operational conditions and important accident scenarios for the VHTR.
Load forecasting method considering temperature effect for distribution network
Directory of Open Access Journals (Sweden)
Meng Xiao Fang
2016-01-01
Full Text Available To improve the accuracy of load forecasting, the temperature factor was introduced into the load forecasting in this paper. The paper analyzed the characteristics of power load variation and studied how the load varies with temperature. Based on linear regression analysis, a mathematical model of load forecasting that takes the temperature effect into account was presented, and the steps of load forecasting were given. The temperature regression coefficient was calculated using MATLAB. Using the load forecasting model, full-day load forecasting and time-sharing load forecasting were carried out. Comparison and analysis of the forecast errors showed that the error of the time-sharing load forecasting method was small. The forecasting method is an effective way to improve the accuracy of load forecasting.
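The regression step described above can be sketched as follows (the temperature/load numbers are made up for illustration; the paper uses MATLAB, but the least-squares fit is the same):

```python
import numpy as np

# Fit load = a + b * temperature by ordinary least squares; the slope b
# is the temperature regression coefficient, and the fitted line serves
# as the forecast.
temp = np.array([18.0, 22.0, 26.0, 30.0, 34.0])   # degrees C
load = np.array([50.0, 58.0, 66.0, 74.0, 82.0])   # MW (exactly linear here)

A = np.vstack([np.ones_like(temp), temp]).T
(a, b), *_ = np.linalg.lstsq(A, load, rcond=None)

def forecast(t):
    return a + b * t

print(round(b, 3))               # 2.0 (MW per degree)
print(round(forecast(28.0), 1))  # 70.0
```

Time-sharing forecasting, as in the paper, would fit a separate regression of this form for each period of the day.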
A method for scientific code coupling in a distributed environment
International Nuclear Information System (INIS)
Caremoli, C.; Beaucourt, D.; Chen, O.; Nicolas, G.; Peniguel, C.; Rascle, P.; Richard, N.; Thai Van, D.; Yessayan, A.
1994-12-01
This guide book deals with the coupling of large scientific codes. First, the context is introduced: large scientific codes devoted to a specific discipline are coming to maturity, while there are more and more needs in terms of multi-disciplinary studies. We then describe different kinds of code coupling and an example: the 3D thermal-hydraulic code THYC coupled with the 3D neutronics code COCCINELLE. With this example we identify the problems to be solved to realize a coupling. We present the different numerical methods usable for the resolution of coupling terms. This leads us to define two kinds of coupling: with weak coupling, explicit methods can be used, while strong coupling requires implicit methods. In both cases, we analyze the link with the way the codes are parallelized. For the translation of data from one code to another, we define the notion of a Standard Coupling Interface based on a general structure for data. This general structure constitutes an intermediary between the codes, thus allowing a relative independence of the codes from a specific coupling. The proposed method for the implementation of a coupling leads to a simultaneous run of the different codes while they exchange data. Two kinds of data communication with message exchange are proposed: direct communication between codes using the PVM product (Parallel Virtual Machine), and indirect communication through a coupling tool. This second way, with a general code-coupling tool, is based on a coupling method whose use we strongly recommend. The method rests on the two following principles: re-usability, meaning few modifications to existing codes, and the definition of a code usable for coupling, which separates the design of a code usable for coupling from the realization of a specific coupling. This coupling tool, available from the beginning of 1994, is described in general terms. (authors). figs., tabs
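The weak (explicit) coupling scheme described above can be sketched with two toy "codes" that advance in turn and exchange a single interface value per time step (the models are invented stand-ins, not THYC or COCCINELLE):

```python
# Explicit (weak) coupling loop: each code advances using the field the
# other code exchanged at the previous step.
def thermal_step(T, power, dt=0.1):
    """Toy thermal code: temperature relaxes toward the imposed power."""
    return T + dt * (power - T)

def neutronics_step(T):
    """Toy neutronics code: power feedback decreases with temperature."""
    return 10.0 - 0.5 * T

T, power = 0.0, 10.0
for step in range(200):
    T = thermal_step(T, power)    # code 1 advances
    power = neutronics_step(T)    # code 2 uses the exchanged field
# Fixed point: T = power and power = 10 - 0.5*T  =>  T = 20/3.
print(round(T, 3))  # 6.667
```

A strong (implicit) coupling would instead iterate the two solves within each time step until the exchanged fields agree, which is why it needs implicit methods.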
Relaxation Methods for Strictly Convex Regularizations of Piecewise Linear Programs
International Nuclear Information System (INIS)
Kiwiel, K. C.
1998-01-01
We give an algorithm for minimizing the sum of a strictly convex function and a convex piecewise linear function. It extends several dual coordinate ascent methods for large-scale linearly constrained problems that occur in entropy maximization, quadratic programming, and network flows. In particular, it may solve exact penalty versions of such (possibly inconsistent) problems, and subproblems of bundle methods for nondifferentiable optimization. It is simple, can exploit sparsity, and in certain cases is highly parallelizable. Its global convergence is established in the recent framework of B-functions (generalized Bregman functions)
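A toy member of the problem class (not the paper's algorithm in its full generality): dual coordinate ascent for minimizing a strictly convex quadratic subject to linear equality constraints, where each multiplier update has a closed form.

```python
import numpy as np

# Minimize 0.5*||x - a||^2 subject to A x = b.  The primal minimizer is
# x(lam) = a - A^T lam; exact maximization of the dual over one
# multiplier lam_i at a time gives the closed-form update below.
def dual_coordinate_ascent(A, b, a, sweeps=100):
    m, n = A.shape
    x = a.copy()                 # x(lam), maintained incrementally
    lam = np.zeros(m)
    for _ in range(sweeps):
        for i in range(m):
            # Zero the dual derivative in coordinate i: A_i.x = b_i.
            delta = (A[i] @ x - b[i]) / (A[i] @ A[i])
            lam[i] += delta
            x -= delta * A[i]
    return x

A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])
a = np.zeros(3)
x = dual_coordinate_ascent(A, b, a)
print(np.allclose(A @ x, b, atol=1e-8))  # True: constraints satisfied
```

With a = 0 this converges to the minimum-norm solution of Ax = b (here x = (0, 1, 1)); the paper's framework covers far more general strictly convex terms via B-functions.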
Quantifying Carbon and distributional benefits of solar home system programs in Bangladesh
Wang, Limin; Bandyopadhyay, Sushenjit; Cosgrove-Davies, Mac; Samad, Hussain
2011-01-01
Scaling-up adoption of renewable energy technology, such as solar home systems, to expand electricity access in developing countries can accelerate the transition to low-carbon economic development. Using a purposely collected national household survey, this study quantifies the carbon and distributional benefits of solar home system programs in Bangladesh. Three key findings are generated...
Distributed Semidefinite Programming with Application to Large-scale System Analysis
DEFF Research Database (Denmark)
Khoshfetrat Pakazad, Sina; Hansson, Anders; Andersen, Martin S.
2017-01-01
Distributed algorithms for solving coupled semidefinite programs (SDPs) commonly require many iterations to converge. They also put high computational demand on the computational agents. In this paper we show that in case the coupled problem has an inherent tree structure, it is possible to devis...
AspectKE*: Security Aspects with Program Analysis for Distributed Systems
DEFF Research Database (Denmark)
2010-01-01
AspectKE* is the first distributed AOP language based on a tuple space system. It is designed to enforce security policies to applications containing untrusted processes. One of the key features is the high-level predicates that extract results of static program analysis. These predicates provide...
Methods and computer programs for PWR's fuel management: Programs Sothis and Ciclon
International Nuclear Information System (INIS)
Aragones, J.M.; Corella, M.R.; Martinez-Val, J.M.
1976-01-01
Methods and computer programs developed at JEN for fuel management in PWRs are discussed, including the scope of the model, procedures for systematic selection of the alternatives to be evaluated, the basis of the model for neutronic calculation, methods for fuel-cost calculation, procedures for equilibrium and transition cycle calculations with the Sothis and Ciclon codes, and validation of the methods by comparison of results with others of reference (author) [es]
Isotope distribution program at the Oak Ridge National Laboratory with emphasis on medical isotopes
International Nuclear Information System (INIS)
Adair, H.L.
1987-01-01
The Isotope Distribution Program (IDP) is a group of individual activities with separate and diverse DOE sponsors which share the common mission of the production and distribution of isotope products and the performance of isotope-related services. Its basic mission is to provide isotope products and associated services to the user community by utilizing government-owned facilities that are excess to the primary mission of the DOE. The IDP is in its 41st year of operation. Initially, the program provided research quantities of radioactive materials, and through the 1950's it was the major supplier of radioisotopes for both research and commercial application. Distribution of enriched stable isotopes began in 1954. This paper discusses the use of radioisotopes in medicine and the role that ORNL plays in this field
Method of trial distribution function for quantum turbulence
International Nuclear Information System (INIS)
Nemirovskii, Sergey K.
2012-01-01
In studying quantum turbulence, the need arises to calculate various characteristics of the vortex tangle (VT). Some 'crude' quantities can be expressed directly via the total length of vortex lines (per unit volume), or vortex line density L(t), and the structure parameters of the VT. Other, more 'subtle' quantities require knowledge of the vortex line configurations {s(xi,t)}. Usually, the corresponding calculations are carried out using more or less plausible speculations concerning the arrangement of the VT. In this paper we review another way to solve this problem. It is based on a trial distribution functional (TDF) in the space of vortex loop configurations. The TDF is constructed on the basis of well-established properties of the vortex tangle and is designed to calculate various averages taken over stochastic vortex loop configurations. We also review several applications of this model to the calculation of important characteristics of the vortex tangle. In particular, we discuss the average superfluid mass current J induced by vortices and its dynamics, describe diffusion-like processes in the nonuniform vortex tangle, and discuss the propagation of turbulent fronts.
User-Defined Data Distributions in High-Level Programming Languages
Diaconescu, Roxana E.; Zima, Hans P.
2006-01-01
One of the characteristic features of today's high performance computing systems is a physically distributed memory. Efficient management of locality is essential for meeting key performance requirements for these architectures. The standard technique for dealing with this issue has involved the extension of traditional sequential programming languages with explicit message passing, in the context of a processor-centric view of parallel computation. This has resulted in complex and error-prone assembly-style codes in which algorithms and communication are inextricably interwoven. This paper presents a high-level approach to the design and implementation of data distributions. Our work is motivated by the need to improve the current parallel programming methodology by introducing a paradigm supporting the development of efficient and reusable parallel code. This approach is currently being implemented in the context of a new programming language called Chapel, which is designed in the HPCS project Cascade.
CDFTBL: A statistical program for generating cumulative distribution functions from data
International Nuclear Information System (INIS)
Eslinger, P.W.
1991-06-01
This document describes the theory underlying the CDFTBL code and gives details for using the code. The CDFTBL code provides an automated tool for generating a statistical cumulative distribution function that describes a set of field data. The cumulative distribution function is written in the form of a table of probabilities, which can be used in a Monte Carlo computer code. As a specific application, CDFTBL can be used to analyze field data collected for parameters required by the PORMC computer code. Section 2.0 discusses the mathematical basis of the code. Section 3.0 discusses the code structure. Section 4.0 describes the free-format input command language, while Section 5.0 describes in detail the commands to run the program. Section 6.0 provides example program runs, and Section 7.0 provides references. The Appendix provides a program source listing. 11 refs., 2 figs., 19 tabs
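The core operation the abstract describes, turning field data into a probability table that a Monte Carlo code can draw from, can be sketched with an empirical CDF and inverse-transform sampling. This is an illustrative stand-in, not the CDFTBL algorithm itself; the data values and function names are invented.

```python
import bisect
import random

def cdf_table(data):
    """Build a cumulative distribution table (value, probability) from field data."""
    xs = sorted(data)
    n = len(xs)
    # P(X <= x_i) estimated by the empirical plotting position (i+1)/n
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]

def sample(table, u):
    """Inverse-transform lookup: map a uniform deviate u in [0,1) to a data value."""
    probs = [p for _, p in table]
    i = bisect.bisect_left(probs, u)
    return table[min(i, len(table) - 1)][0]

# Hypothetical field measurements of some hydrologic parameter
data = [3.2, 1.4, 4.8, 2.9, 3.7, 2.1, 5.0, 3.3]
table = cdf_table(data)
rng = random.Random(42)
draws = [sample(table, rng.random()) for _ in range(1000)]
```

A production code would interpolate between table entries and support fitted parametric tails; this sketch simply returns the nearest tabulated value.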
Academic training: From Evolution Theory to Parallel and Distributed Genetic Programming
2007-01-01
2006-2007 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 15, 16 March From 11:00 to 12:00 - Main Auditorium, bldg. 500 From Evolution Theory to Parallel and Distributed Genetic Programming F. FERNANDEZ DE VEGA / Univ. of Extremadura, SP Lecture No. 1: From Evolution Theory to Evolutionary Computation Evolutionary computation is a subfield of artificial intelligence (more particularly computational intelligence) involving combinatorial optimization problems, which are based to some degree on the evolution of biological life in the natural world. In this tutorial we will review the source of inspiration for this metaheuristic and its capability for solving problems. We will show the main flavours within the field, and different problems that have been successfully solved employing this kind of technique. Lecture No. 2: Parallel and Distributed Genetic Programming The successful application of Genetic Programming (GP, one of the available Evolutionary Algorithms) to optimization problems has encouraged an ...
Energy Technology Data Exchange (ETDEWEB)
Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2013-10-15
Uncertainty evaluation with the statistical method is performed by repeating the transport calculation while sampling the directly perturbed nuclear data; a reliable uncertainty result can then be obtained by analyzing the results of the numerous transport calculations. One known problem of the statistical approach is that sampling cross sections from a normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors, such as sampling of negative cross sections. Some correction methods have been noted; however, these methods can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data using the lognormal distribution is proposed. Criticality calculations with the sampled nuclear data are then performed, and the results are compared with those from the normal distribution conventionally used in previous studies. The statistical sampling method with the lognormal distribution was proposed to increase the sampling accuracy without negative sampling error, and a stochastic cross section sampling and writing program was developed. For the sensitivity and uncertainty analysis, cross section sampling was pursued with both the normal and the lognormal distribution. The uncertainties caused by the covariance of the (n,.) cross sections were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution can efficiently solve the negative sampling problem referred to in previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.
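The lognormal trick described above, matching the evaluated mean and uncertainty while keeping all samples positive, can be sketched as follows. The cross-section value, its uncertainty, and the function names are hypothetical; a real implementation samples correlated multigroup data from a covariance matrix rather than a single value.

```python
import math
import random

def lognormal_params(mean, std):
    """Choose lognormal (mu, sigma) so the distribution reproduces the
    evaluated cross section `mean` and its standard deviation `std`."""
    sigma2 = math.log(1.0 + (std / mean) ** 2)
    mu = math.log(mean) - 0.5 * sigma2
    return mu, math.sqrt(sigma2)

def sample_xs(mean, std, n, seed=1):
    """Draw n perturbed cross sections; lognormal support is (0, inf),
    so no sample can be negative, unlike a wide normal distribution."""
    mu, sigma = lognormal_params(mean, std)
    rng = random.Random(seed)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

# A hypothetical 2.0 b cross section with a 50% relative standard deviation:
# a normal sample would be negative roughly 2% of the time, a lognormal never.
samples = sample_xs(2.0, 1.0, 5000)
```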
FATAL, General Experiment Fitting Program by Nonlinear Regression Method
International Nuclear Information System (INIS)
Salmon, L.; Budd, T.; Marshall, M.
1982-01-01
1 - Description of problem or function: A generalized fitting program with a free-format keyword interface to the user. It permits experimental data to be fitted by non-linear regression methods to any function describable by the user. The user requires the minimum of computer experience but needs to provide a subroutine to define his function. Some statistical output is included as well as 'best' estimates of the function's parameters. 2 - Method of solution: The regression method used is based on a minimization technique devised by Powell (Harwell Subroutine Library VA05A, 1972) which does not require the use of analytical derivatives. The method employs a quasi-Newton procedure balanced with a steepest descent correction. Experience shows this to be efficient for a very wide range of application. 3 - Restrictions on the complexity of the problem: The current version of the program permits functions to be defined with up to 20 parameters. The function may be fitted to a maximum of 400 points, preferably with estimated values of weight given
Program-target methods of management small business
Directory of Open Access Journals (Sweden)
Gurova Ekaterina
2017-01-01
Full Text Available Small businesses in Russia are just beginning their path to development, and difficulties arise in involving them in the implementation of government development programmes. In modern conditions, small business cannot secure a visible prospect of development without the implementation of state support programmes. The ways and methods of regulating the development of a market economy are diverse, and among them a huge role is played by program-target methods of regulation. The article describes the basic principles of applying the program-target approach to the development of a specific sector of the economy, small business, which is designed to play an important role in bringing the national economy out of crisis. The material in this publication is built on the need to maintain the connection between the theory of government regulation, the practice of forming development programmes at the regional level, and the needs of small businesses. Essential for the formation of entrepreneurship development programmes is preserving the flexibility of small businesses in making management decisions related to the selection and change of activities.
International Nuclear Information System (INIS)
Fernandes, Marco A.R.; Fernandes, David M.; Florentino, Helenice O.
2010-01-01
The work highlights the importance of mathematical tools and computer systems for optimization of planning in radiotherapy, seeking a distribution of radiation dose in the target volume that provides the ideal therapeutic ratio between the tumor cells and the adjacent healthy tissues, as recommended in the radiotherapy protocols. Examples of mathematically modeled target volumes are analyzed with the technique of linear programming, comparing the results obtained using the Simplex algorithm with those using the Interior Point algorithm. The Genesis II system was used to obtain the isodose curves for the outline and field geometry idealized in the computer simulations, considering the parameters of a 10 MV photon beam. Both programming methods (Simplex and Interior Point) resulted in a distribution of the integral dose in the tumor volume and allow adaptation of the dose in the critical organs within the recommended restriction limits. The choice of one or the other method should take into account ease of use and the need to limit programming time. The isodose curves obtained with the Genesis II system illustrate that the healthy tissues adjacent to the tumor receive larger doses than those reached in the computer simulations. More coincident values can be obtained by altering the weights and some minimization factors of the objective function. The prohibitive costs of the computer planning systems currently available for radiotherapy motivate research into implementing simpler yet effective methods for optimizing the treatment plan. (author)
Directory of Open Access Journals (Sweden)
Chaim Aldemir
2002-01-01
Full Text Available The main objective of this work was to compare two methods of estimating the deposition of pesticide applied by aerial spraying. One hundred and fifty pieces of water-sensitive paper were distributed over an area 50 m long by 75 m wide to sample droplets sprayed by an aircraft calibrated to apply a spray volume of 32 L/ha. The samples were analysed by a visual microscopic method using an NG 2 Porton graticule and by an image-analyser computer program. The results obtained by the visual microscopic method were: volume median diameter, 398±62 μm; number median diameter, 159±22 μm; droplet density, 22.5±7.0 droplets/cm²; and estimated deposited volume, 22.2±9.4 L/ha. The respective values obtained with the computer program were 402±58 μm, 161±32 μm, 21.9±7.5 droplets/cm², and 21.9±9.2 L/ha. Graphs of the spatial distribution of droplet density and deposited spray volume over the area were produced by the computer program.
Distributed Coordinate Descent Method for Learning with Big Data
Richtárik, Peter; Takáč, Martin
2013-01-01
In this paper we develop and analyze Hydra: HYbriD cooRdinAte descent method for solving loss minimization problems with big data. We initially partition the coordinates (features) and assign each partition to a different node of a cluster. At every iteration, each node picks a random subset of the coordinates from those it owns, independently from the other computers, and in parallel computes and applies updates to the selected coordinates based on a simple closed-form formula. We give bound...
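A serial sketch of the partitioned coordinate-descent scheme can illustrate the idea: each "node" owns a block of features and, per iteration, applies a closed-form update to a randomly chosen coordinate it owns. The data are synthetic, the parallelism is only simulated, and Hydra's actual step-size rules for concurrent updates are not reproduced.

```python
import random

# Tiny least-squares problem 0.5 * ||A x - b||^2 on 4 features, with the
# features partitioned across two simulated nodes.
A = [[1.0, 0.0, 2.0, 0.0],
     [0.0, 1.0, 0.0, 1.0],
     [1.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 2.0]]
b = [3.0, 2.0, 1.0, 4.0]
partitions = [[0, 1], [2, 3]]  # node 0 owns features 0-1, node 1 owns 2-3

def loss(x):
    return 0.5 * sum((sum(aij * xj for aij, xj in zip(row, x)) - bi) ** 2
                     for row, bi in zip(A, b))

def cd_step(x, j):
    """Exact minimizer of the loss along coordinate j (all others fixed):
    x_j <- x_j - A_j . r / (A_j . A_j), where r = A x - b."""
    col = [row[j] for row in A]
    r = [sum(aij * xj for aij, xj in zip(row, x)) - bi for row, bi in zip(A, b)]
    x[j] -= sum(c * ri for c, ri in zip(col, r)) / sum(c * c for c in col)

x = [0.0] * 4
rng = random.Random(0)
for _ in range(2000):        # each outer iteration: every node updates one
    for part in partitions:  # of its own coordinates (serialized here)
        cd_step(x, rng.choice(part))
```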
Seasonal comparison of two spatially distributed evapotranspiration mapping methods
Kisfaludi, Balázs; Csáki, Péter; Péterfalvi, József; Primusz, Péter
2017-04-01
More rainfall is disposed of through evapotranspiration (ET) on a global scale than through runoff and storage combined. In Hungary, about 90% of the precipitation evapotranspires from the land and only 10% goes to surface runoff and groundwater recharge. Therefore, evapotranspiration is a very important element of the water balance, so it is a suitable parameter for the calibration of hydrological models. Monthly ET values of two MODIS-data based ET products were compared for the area of Hungary and for the vegetation period of the year 2008. The differences were assessed by land cover types and by elevation zones. One ET map was the MOD16, aiming at global coverage and provided by the MODIS Global Evaporation Project. The other method is called CREMAP; it was developed at the Budapest University of Technology and Economics for regional scale ET mapping. CREMAP was validated for the area of Hungary with good results, but ET maps were produced only for the period of 2000-2008. The aim of this research was to evaluate the performance of the MOD16 product compared to the CREMAP method. The average difference between the two products was the highest during summer, CREMAP estimating higher ET values by about 25 mm/month. In the spring and autumn, MOD16 ET values were higher by an average of 6 mm/month. The differences by land cover types showed a similar seasonal pattern to the average differences, and they correlated strongly with each other. Practically the same difference values could be calculated for arable lands and forests, which together cover nearly 75% of the area of the country. Therefore, it can be said that the seasonal changes had the same effect on the two methods' ET estimates in each land cover type. The analysis by elevation zones showed that on elevations lower than 200 m AMSL the trends of the difference values were similar to the average differences. The correlation between the values of these elevation zones was also strong. However weaker
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks
Chaoyang Shi; Bi Yu Chen; William H. K. Lam; Qingquan Li
2017-01-01
Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are f...
International Nuclear Information System (INIS)
Han Jingru; Chen Yixue; Yuan Longjun
2013-01-01
The Monte Carlo (MC) and discrete ordinates (SN) methods are commonly used in the design of radiation shielding. The Monte Carlo method treats the geometry exactly but is time-consuming for deep-penetration problems. The discrete ordinates method has great computational efficiency, but it is costly in computer memory and suffers from ray effects. Neither the discrete ordinates method nor the Monte Carlo method alone is sufficient for shielding calculations of large, complex nuclear facilities. To solve this problem, a bidirectional Monte Carlo and discrete ordinates coupling method was developed. The bidirectional coupling is implemented in an interface program that transfers the particle probability distribution of MC and the angular flux of discrete ordinates, combining the advantages of MC and SN. Test problems in Cartesian and cylindrical coordinates have been calculated with the coupling method. The calculation results agree satisfactorily with MCNP and TORT, proving the correctness of the program. (authors)
Computerized method for X-ray angular distribution simulation in radiological systems
International Nuclear Information System (INIS)
Marques, Marcio A.; Oliveira, Henrique J.Q. de; Frere, Annie F.; Schiabel, Homero; Marques, Paulo M.A.
1996-01-01
A method to simulate the changes in the X-ray angular distribution (the Heel effect) in radiologic imaging systems is presented. The simulation method predicts images for any exposure technique, considering that this distribution is the cause of the intensity variation along the radiation field
Finite element method programs to analyze irradiation behavior of fuel pellets
International Nuclear Information System (INIS)
Yamada, Rayji; Harayama, Yasuo; Ishibashi, Akihiro; Ono, Masao.
1979-09-01
For the safety assessment of reactor fuel, it is important to grasp local changes of fuel pins due to irradiation in a reactor. Such changes of fuel result mostly from irradiation of fuel pellets. Elasto-plastic analysis programs based on the finite element method were developed to analyze these local changes. In the programs, emphasis is placed on the analysis of cracks in pellets; the interaction between cracked-pellets and cladding is not taken into consideration. The two programs developed are FEMF3 based on a two-dimensional axially symmetric model (r-z system) and FREB4 on a two-dimensional plane model (r-theta system). It is discussed in this report how the occurrence and distribution of cracks depend on heat rate of the fuel pin. (author)
International Nuclear Information System (INIS)
Zakaria, G.A.; Schuette, W.
2002-01-01
The purpose of this investigation was to compare the commercial 3D treatment planning system Helax-TMS with a simple 2D program, ASYMM, for the calculation of dose distributions of asymmetric fields. The dose calculation algorithm in Helax-TMS is based on the polyenergetic pencil-beam model of Ahnesjoe. Our own 2D treatment planning program ASYMM, based on the Thomas and Thomas method for asymmetric open fields, has been extended to calculate dose distributions for open and wedged fields. Using both methods, dose distributions for various asymmetric open and wedged fields of a 4-MV linear accelerator were calculated and compared with data measured in water. The agreement of both Helax-TMS and ASYMM with experiment was good, with ASYMM showing better accuracy for larger asymmetry angles. This result is explained by the ASYMM algorithm's consideration of beam hardening within the flattening filter and wedges for different asymmetric settings. Helax-TMS, however, offers the diverse possibilities that 3D calculation and the corresponding representation provide, and holds better application opportunities in clinical routine. (orig.) [de
Methods for obtaining distributions of uranium occurrence from estimates of geologic features
International Nuclear Information System (INIS)
Ford, C.E.; McLaren, R.A.
1980-04-01
The problem addressed in this paper is the determination of a quantitative estimate of a resource from estimates of fundamental variables which describe the resource. Due to uncertainty about the estimates, these basic variables are stochastic. The evaluation of random equations involving these variables is the core of the analysis process. The basic variables are originally described in terms of a low and a high percentile (the 5th and 95th, for example) and a central value (the mode, mean or median). The variable thus described is then generally assumed to be represented by a three-parameter lognormal distribution. Expressions involving these variables are evaluated by computing the first four central moments of the random functions (which are usually products and sums of variables). Stochastic independence is discussed. From the final set of moments a Pearson distribution is obtained; the high values of skewness and kurtosis resulting from uranium data require obtaining Pearson curves beyond those described in published tables. A cubic spline solution to the Pearson differential equation accomplishes this task. A sample problem is used to illustrate the application of the process; sensitivity to the estimated values of the basic variables is discussed. Appendices contain details of the methods and descriptions of computer programs
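The moment-based pipeline the report describes (stochastic inputs, then the first four central moments of their random functions, then a Pearson curve) can be illustrated at its middle step; the Pearson fit itself is beyond a short sketch, and the sample data below are invented. Products of lognormal variables, like those in resource estimation, show the high skewness the report mentions.

```python
import random

def central_moments(data):
    """Mean, 2nd-4th central moments, and the derived skewness and kurtosis
    that select a curve within the Pearson system."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    m4 = sum((x - mean) ** 4 for x in data) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return mean, m2, m3, m4, skew, kurt

# Monte Carlo stand-in for a random product of two lognormal basic variables
rng = random.Random(7)
prod = [rng.lognormvariate(0.0, 0.5) * rng.lognormvariate(0.0, 0.5)
        for _ in range(20000)]
mean_p, m2_p, m3_p, m4_p, skew_prod, kurt_prod = central_moments(prod)
# skew_prod is strongly positive: exactly the regime where tabulated
# Pearson curves run out and an extended solution is needed.
```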
Energy Technology Data Exchange (ETDEWEB)
Yi Luo; Jian-wei Cheng [West Virginia University, Morgantown, WV (United States). Department of Mining Engineering
2009-09-15
The distribution of the final surface subsidence basin induced by longwall operations in an inclined coal seam can be significantly different from that in a flat coal seam and demands special prediction methods. Though many empirical prediction methods have been developed, these methods are inflexible for varying geological and mining conditions. An influence function method has been developed to take advantage of its fundamentally sound nature and flexibility. In developing this method, significant modifications have been made to the original Knothe function to produce an asymmetrical influence function. The empirical equations for final subsidence parameters derived from US subsidence data and Chinese empirical values have been incorporated into the mathematical models to improve the prediction accuracy. A corresponding computer program is developed. A number of subsidence cases for longwall mining operations in coal seams with varying inclination angles have been used to demonstrate the applicability of the developed subsidence prediction model. 9 refs., 8 figs.
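As a rough illustration of the influence-function idea, the sketch below superposes the classic symmetric Knothe function over a 2D panel cross-section by numerical integration. The paper's asymmetric modification and its empirical parameter equations are not reproduced, and the panel geometry, maximum subsidence, and radius of influence below are made-up values.

```python
import math

def knothe(t, r):
    """Classic (symmetric) Knothe influence function with radius of influence r."""
    return math.exp(-math.pi * (t / r) ** 2) / r

def subsidence(x, panel=(-200.0, 200.0), s_max=1.5, r=100.0, n=2000):
    """Superpose the influence of extraction elements across the panel
    (trapezoid rule over the panel interval)."""
    a, b = panel
    h = (b - a) / n
    total = 0.5 * (knothe(x - a, r) + knothe(x - b, r))
    for i in range(1, n):
        total += knothe(x - (a + i * h), r)
    return s_max * total * h

# Subsidence profile across the panel at 50 m spacing
profile = [subsidence(x) for x in range(-400, 401, 50)]
```

Above the panel center the full maximum subsidence develops, while the profile tails off smoothly beyond the panel edges, which is the qualitative behavior an influence function method is built to capture.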
International Nuclear Information System (INIS)
Gao, Li-Na; Liu, Fu-Hu; Lacey, Roy A.
2016-01-01
Experimental results of the transverse-momentum distributions of φ mesons and Ω hyperons produced in gold-gold (Au-Au) collisions with different centrality intervals, measured by the STAR Collaboration at different energies (7.7, 11.5, 19.6, 27, and 39 GeV) in the beam energy scan (BES) program at the relativistic heavy-ion collider (RHIC), are approximately described by the single Erlang distribution and the two-component Schwinger mechanism. Moreover, the STAR experimental transverse-momentum distributions of negatively charged particles, produced in Au-Au collisions at RHIC BES energies, are approximately described by the two-component Erlang distribution and the single Tsallis statistics. The excitation functions of free parameters are obtained from the fit to the experimental data. A weak softest point in the string tension in Ω hyperon spectra is observed at 7.7 GeV. (orig.)
Recommendations for scale-up of community-based misoprostol distribution programs.
Robinson, Nuriya; Kapungu, Chisina; Carnahan, Leslie; Geller, Stacie
2014-06-01
Community-based distribution of misoprostol for prevention of postpartum hemorrhage (PPH) in resource-poor settings has been shown to be safe and effective. However, global recommendations for prenatal distribution and monitoring within a community setting are not yet available. In order to successfully translate misoprostol and PPH research into policy and practice, several critical points must be considered. A focus on engaging the community, emphasizing the safe nature of community-based misoprostol distribution, supply chain management, effective distribution, coverage, and monitoring plans are essential elements to community-based misoprostol program introduction, expansion, or scale-up. Copyright © 2014 International Federation of Gynecology and Obstetrics. Published by Elsevier Ireland Ltd. All rights reserved.
Purpose and methods of a Pollution Prevention Awareness Program
Energy Technology Data Exchange (ETDEWEB)
Flowers, P.A.; Irwin, E.F.; Poligone, S.E.
1994-08-15
The purpose of the Pollution Prevention Awareness Program (PPAP), which is required by DOE Order 5400.1, is to foster the philosophy that prevention is superior to remediation. The goal of the program is to incorporate pollution prevention into the decision-making process at every level throughout the organization. The objectives are to instill awareness, disseminate information, provide training and rewards for identifying the true source or cause of wastes, and encourage employee participation in solving environmental issues and preventing pollution. PPAP at the Oak Ridge Y-12 Plant was created several years ago and continues to grow. We believe that we have implemented several unique methods of communicating environmental awareness to promote a more active work force in identifying ways of reducing pollution.
The Accident Sequence Precursor program: Methods improvements and current results
International Nuclear Information System (INIS)
Minarick, J.W.; Manning, F.M.; Harris, J.D.
1987-01-01
Changes in the US NRC Accident Sequence Precursor program methods since the initial program evaluations of 1969-81 operational events are described, along with insights from the review of 1984-85 events. For 1984-85, the number of significant precursors was consistent with the number observed in 1980-81, dominant sequences associated with significant events were reasonably consistent with PRA estimates for BWRs, but lacked the contribution due to small-break LOCAs previously observed and predicted in PWRs, and the frequency of initiating events and non-recoverable system failures exhibited some reduction compared to 1980-81. Operational events which provide information concerning additional PRA modeling needs are also described
Agent-based method for distributed clustering of textual information
Potok, Thomas E [Oak Ridge, TN; Reed, Joel W [Knoxville, TN; Elmore, Mark T [Oak Ridge, TN; Treadwell, Jim N [Louisville, TN
2010-09-28
A computer method and system for storing, retrieving and displaying information has a multiplexing agent (20) that calculates a new document vector (25) for a new document (21) to be added to the system and transmits the new document vector (25) to master cluster agents (22) and cluster agents (23) for evaluation. These agents (22, 23) perform the evaluation and return values upstream to the multiplexing agent (20) based on the similarity of the document to documents stored under their control. The multiplexing agent (20) then sends the document (21) and the document vector (25) to the master cluster agent (22), which then forwards it to a cluster agent (23) or creates a new cluster agent (23) to manage the document (21). The system also searches for stored documents according to a search query having at least one term and identifying the documents found in the search, and displays the documents in a clustering display (80) of similarity so as to indicate similarity of the documents to each other.
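The routing step the patent abstract describes, evaluating a new document vector against each cluster agent and either assigning the document or spawning a new agent, can be sketched with cosine similarity. The class names, threshold, and centroid policy here are assumptions for illustration, not the patent's specification.

```python
import math

def cosine(u, v):
    """Cosine similarity between two document vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

class ClusterAgent:
    def __init__(self, centroid):
        self.centroid = centroid
        self.docs = []

    def evaluate(self, vec):
        """Similarity value returned upstream to the multiplexing agent."""
        return cosine(vec, self.centroid)

def route(agents, vec, threshold=0.5):
    """Assign the document to the most similar cluster agent, or create one."""
    scores = [agent.evaluate(vec) for agent in agents]
    best = max(enumerate(scores), key=lambda t: t[1])[0] if agents else -1
    if best < 0 or scores[best] < threshold:
        agents.append(ClusterAgent(list(vec)))
        best = len(agents) - 1
    agents[best].docs.append(vec)
    return best

agents = []
route(agents, [1.0, 0.0, 0.1])  # no agents yet: creates cluster 0
route(agents, [0.9, 0.0, 0.2])  # similar: joins cluster 0
route(agents, [0.0, 1.0, 0.0])  # dissimilar: creates cluster 1
```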
Twelve tips for teaching in a provincially distributed medical education program.
Wong, Roger Y; Chen, Luke; Dhadwal, Gurbir; Fok, Mark C; Harder, Ken; Huynh, Hanh; Lunge, Ryan; Mackenzie, Mark; Mckinney, James; Ovalle, William; Rauniyar, Pooja; Tse, Luke; Villanyi, Diane
2012-01-01
As distributed undergraduate and postgraduate medical education becomes more common, the challenges with the teaching and learning process also increase. To collaboratively engage front-line teachers in improving teaching in a distributed medical program. We recently conducted a contest on teaching tips in a provincially distributed medical education program and received entries from faculty and resident teachers. Tips that are helpful for teaching around clinical cases at distributed teaching sites include: ask "what if" questions to maximize clinical teaching opportunities, try the 5-min short snapper, multitask to allow direct observation, create dedicated time for feedback, remember that there are really no stupid questions, and work with a heterogeneous group of learners. Tips that are helpful for multi-site classroom teaching include: promote teacher-learner connectivity, optimize the long-distance working relationship, use the reality-television-show model to maximize retention and captivate learners, include less teaching content if possible, tell learners what you are teaching and make it relevant, and turn on the technology tap to fill the knowledge gap. Overall, the above-mentioned tips offered by front-line teachers can be helpful in distributed medical education.
Program generator for the Incomplete Cholesky Conjugate Gradient (ICCG) method
International Nuclear Information System (INIS)
Kuo-Petravic, G.; Petravic, M.
1978-04-01
The Incomplete Cholesky Conjugate Gradient (ICCG) method has been found very effective for the solution of sparse systems of linear equations. Its implementation on a computer, however, requires a considerable amount of careful coding to achieve good machine efficiency. Furthermore, the resulting code is necessarily inflexible and cannot be easily adapted to different problems. We present in this paper a code generator, GENIC, which, given a small amount of information concerning the sparsity pattern and size of the system of equations, generates a solver package. This package, called SOLIC, is tailor-made for a particular problem and can be easily incorporated into any user program.
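A minimal sketch of the ICCG idea the generator targets — an IC(0) factorization used to precondition conjugate gradients — might look like this in NumPy. Dense storage is used for readability; GENIC's generated code would exploit the sparsity pattern directly.

```python
import numpy as np

def ichol(A):
    """IC(0): Cholesky factorization restricted to the sparsity pattern of A."""
    n = A.shape[0]
    L = np.tril(A).copy()
    for k in range(n):
        L[k, k] = np.sqrt(L[k, k])
        for i in range(k + 1, n):
            if L[i, k] != 0.0:
                L[i, k] /= L[k, k]
        for j in range(k + 1, n):
            for i in range(j, n):
                if L[i, j] != 0.0:          # skip fill-in: the "incomplete" part
                    L[i, j] -= L[i, k] * L[j, k]
    return L

def iccg(A, b, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradients with M = L L^T from ichol(A)."""
    L = ichol(A)
    solve_M = lambda r: np.linalg.solve(L.T, np.linalg.solve(L, r))
    x = np.zeros_like(b)
    r = b - A @ x
    z = solve_M(r)
    p = z.copy()
    for _ in range(maxit):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = solve_M(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x
```

For a tridiagonal matrix IC(0) coincides with the full Cholesky factor, so on the 1-D Laplacian below the preconditioned solver converges essentially in one iteration.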
International Nuclear Information System (INIS)
Hawong, Jai Sug; Lee, Dong Hun; Lee, Dong Ha; Tche, Konstantin
2004-01-01
In this research, a photoelastic experimental hybrid method using the Hooke-Jeeves numerical method has been developed. This method is more precise and stable than the photoelastic experimental hybrid method that uses the Newton-Raphson numerical method with Gaussian elimination. Using the Hooke-Jeeves variant, stress components can be separated from isochromatics alone, and stress intensity factors and stress concentration factors can be determined. The Hooke-Jeeves variant is therefore better suited to full-field experiments than the Newton-Raphson variant with Gaussian elimination.
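Hooke-Jeeves pattern search is a derivative-free method built from exploratory coordinate moves plus an extrapolating pattern move. A generic sketch follows — not the authors' photoelastic code; the initial step size and shrink factor are assumptions.

```python
import numpy as np

def explore(f, base, fbase, step):
    """Exploratory moves: probe +/- step along each coordinate, keep improvements."""
    x, fx = base.copy(), fbase
    for i in range(len(x)):
        for d in (step, -step):
            trial = x.copy()
            trial[i] += d
            ft = f(trial)
            if ft < fx:
                x, fx = trial, ft
                break
    return x, fx

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    base = np.asarray(x0, dtype=float)
    fbase = f(base)
    while step > tol:
        x, fx = explore(f, base, fbase, step)
        if fx < fbase:
            while True:
                pattern = x + (x - base)        # pattern move: extrapolate the gain
                base, fbase = x, fx
                x, fx = explore(f, pattern, f(pattern), step)
                if fx >= fbase:
                    break
        else:
            step *= shrink                      # no improvement: refine the mesh
    return base, fbase
```

On a simple quadratic such as f(x, y) = (x − 1)² + (y + 2)² the search walks from the origin to the minimizer without any gradient information.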
Toomey, Patricia; Lovato, Chris Y; Hanlon, Neil; Poole, Gary; Bates, Joanna
2013-06-01
To describe community leaders' perceptions regarding the impact of a fully distributed undergraduate medical education program on a small, medically underserved host community. The authors conducted semistructured interviews in 2007 with 23 community leaders representing, collectively, the education, health, economic, media, and political sectors. They reinterviewed six participants from a pilot study (2005) and recruited new participants using purposeful and snowball sampling. The authors employed analytic induction to organize content thematically, using the sectors as a framework, and they used open coding to identify new themes. The authors reanalyzed transcripts to identify program outcomes (e.g., increased research capacity) and construct a list of quantifiable indicators (e.g., number of grants and publications). Participants reported their perspectives on the current and anticipated impact of the program on education, health services, the economy, media, and politics. Perceptions of impact were overwhelmingly positive (e.g., increased physician recruitment), though some were negative (e.g., strains on health resources). The authors identified new outcomes and confirmed outcomes described in 2005. They identified 16 quantifiable indicators of impact, which they judged to be plausible and measurable. Participants perceive that the regional undergraduate medical education program in their community has broad, local impacts. Findings suggest that early observed outcomes have been maintained and may be expanding. Results may be applicable to medical education programs with distributed or regional sites in similar rural, remote, and/or underserved regions. The areas of impact, outcomes, and quantifiable indicators identified will be of interest to future researchers and evaluators.
Voltage profile program for the Kennedy Space Center electric power distribution system
1976-01-01
The Kennedy Space Center voltage profile program computes voltages at all busses greater than 1 kV in the network under various conditions of load. The computation is based upon power flow principles and utilizes a Newton-Raphson iterative load flow algorithm. Power flow conditions throughout the network are also provided. The computer program is designed for both steady state and transient operation. In the steady state mode, automatic tap changing of primary distribution transformers is incorporated. Under transient conditions, such as motor starts, it is assumed that tap changing is not accomplished, so the transformer secondary voltage is allowed to sag.
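The Newton-Raphson load flow iteration such a program is built on can be illustrated on a hypothetical two-bus system: a slack bus at 1.0 pu feeding a PQ load over a lossless line. The line reactance, load values, and the finite-difference Jacobian below are assumptions made for the sketch, far simpler than a full network solver.

```python
import numpy as np

def mismatch(state, x_line, p_spec, q_spec):
    """Active/reactive power mismatch at the load bus (lossless line, G = 0)."""
    th, v = state                       # load-bus angle (rad) and voltage (pu)
    p = (v / x_line) * np.sin(th)       # P injected at bus 2 (slack: V=1, angle 0)
    q = -(v / x_line) * np.cos(th) + v**2 / x_line
    return np.array([p - p_spec, q - q_spec])

def newton_load_flow(x_line=0.1, p_spec=-1.0, q_spec=-0.5, tol=1e-10, maxit=20):
    state = np.array([0.0, 1.0])        # flat start: angle 0 rad, |V| = 1 pu
    for _ in range(maxit):
        f = mismatch(state, x_line, p_spec, q_spec)
        if np.max(np.abs(f)) < tol:
            break
        # numerical Jacobian by central differences (keeps the sketch short;
        # production codes use the analytic Jacobian)
        J = np.zeros((2, 2))
        h = 1e-7
        for j in range(2):
            dp = np.zeros(2); dp[j] = h
            J[:, j] = (mismatch(state + dp, x_line, p_spec, q_spec)
                       - mismatch(state - dp, x_line, p_spec, q_spec)) / (2 * h)
        state = state - np.linalg.solve(J, f)
    return state
```

For this load the iteration converges in a handful of steps to the high-voltage solution near 0.94 pu; the voltage "sag" under heavier transient loads falls out of the same equations.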
Isotope Production and Distribution Program. Financial statements, September 30, 1994 and 1993
Energy Technology Data Exchange (ETDEWEB)
Marwick, P.
1994-11-30
The attached report presents the results of the independent certified public accountants' audit of the Isotope Production and Distribution (IP&D) Program's financial statements as of September 30, 1994. The auditors have expressed an unqualified opinion on IP&D's 1994 statements. Their reports on IP&D's internal control structure and on compliance with laws and regulations are also provided. The charter of the Isotope Program covers the production and sale of radioactive and stable isotopes, byproducts, and related isotope services. Prior to October 1, 1989, the Program was subsidized by the Department of Energy through a combination of appropriated funds and isotope sales revenue. The Fiscal Year 1990 Appropriations Act, Public Law 101-101, authorized a separate Isotope Revolving Fund account for the Program, which was to support itself solely from the proceeds of isotope sales. The initial capitalization was about $16 million plus the value of the isotope assets in inventory or on loan for research and the unexpended appropriation available at the close of FY 1989. During late FY 1994, Public Law 103-316 restructured the Program to provide for supplemental appropriations to cover costs which are impractical to incorporate into the selling price of isotopes. Additional information about the Program is provided in the notes to the financial statements.
Lv, Y; Huang, G H; Li, Y P; Yang, Z F; Sun, W
2011-03-01
A two-stage inexact joint-probabilistic programming (TIJP) method is developed for planning a regional air quality management system with multiple pollutants and multiple sources. The TIJP method incorporates the techniques of two-stage stochastic programming, joint-probabilistic constraint programming and interval mathematical programming, where uncertainties expressed as probability distributions and interval values can be addressed. Moreover, it can not only examine the risk of violating joint-probability constraints, but also account for economic penalties as corrective measures against any infeasibility. The developed TIJP method is applied to a case study of a regional air pollution control problem, where the air quality index (AQI) is introduced for evaluation of the integrated air quality management system associated with multiple pollutants. The joint-probability exists in the environmental constraints for AQI, such that individual probabilistic constraints for each pollutant can be efficiently incorporated within the TIJP model. The results indicate that useful solutions for air quality management practices have been generated; they can help decision makers to identify desired pollution abatement strategies with minimized system cost and maximized environmental efficiency. Copyright © 2010 Elsevier Ltd. All rights reserved.
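The two-stage idea — commit to a first-stage decision, then pay scenario-dependent recourse penalties for any violation — can be illustrated with a tiny deterministic-equivalent example. The abatement cost, penalty rate, and demand scenarios are invented for illustration and are far simpler than the TIJP model's interval and joint-probabilistic machinery.

```python
def expected_cost(x, c_first, penalty, scenarios):
    """First-stage cost plus expected second-stage (recourse) penalty."""
    return c_first * x + penalty * sum(p * max(d - x, 0.0) for d, p in scenarios)

def best_first_stage(c_first, penalty, scenarios):
    # the expected cost is piecewise-linear and convex in x, so an optimum
    # lies at one of the breakpoints (0 or a scenario demand)
    candidates = [0.0] + [float(d) for d, _ in scenarios]
    return min(candidates,
               key=lambda x: expected_cost(x, c_first, penalty, scenarios))
```

With first-stage cost 10 per unit, penalty 25 per unit of shortfall, and demand scenarios (1, 2, 3) with probabilities (0.3, 0.4, 0.3), committing to 2 units minimizes the expected total cost — the classic newsvendor trade-off that two-stage stochastic programs generalize.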
ARSTEC, Nonlinear Optimization Program Using Random Search Method
International Nuclear Information System (INIS)
Rasmuson, D. M.; Marshall, N. H.
1979-01-01
1 - Description of problem or function: The ARSTEC program was written to solve nonlinear, mixed-integer optimization problems. An example of such a problem in the nuclear industry is the allocation of redundant parts in the design of a nuclear power plant to minimize plant unavailability. 2 - Method of solution: The technique used in ARSTEC is the adaptive random search method. The search is started from an arbitrary point in the search region, and every time a point that improves the objective function is found, the search region is centered at that new point. 3 - Restrictions on the complexity of the problem: Presently, the maximum number of independent variables allowed is 10. This can be changed by increasing the dimension of the arrays.
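The adaptive random search technique described in item 2 can be sketched generically: sample around the current best point, re-centre on every improvement, and shrink the search region on failures. The shrink factor and iteration budget are assumptions; ARSTEC's actual implementation also handles mixed-integer variables.

```python
import random

def adaptive_random_search(f, bounds, iters=3000, shrink=0.98, seed=0):
    """Minimize f over a box by random sampling with an adaptive search region."""
    rng = random.Random(seed)
    best = [(lo + hi) / 2.0 for lo, hi in bounds]
    radius = [(hi - lo) / 2.0 for lo, hi in bounds]
    fbest = f(best)
    for _ in range(iters):
        cand = [min(max(best[i] + rng.uniform(-radius[i], radius[i]),
                        bounds[i][0]), bounds[i][1])
                for i in range(len(bounds))]
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc              # re-centre at the improving point
        else:
            radius = [r * shrink for r in radius]   # contract the search region
    return best, fbest
```

The method needs no derivatives and tolerates discontinuous objectives, which is why it suits reliability-allocation problems with integer redundancy levels.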
Development of ray tracing visualization program by Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Higuchi, Kenji; Otani, Takayuki [Japan Atomic Energy Research Inst., Tokyo (Japan); Hasegawa, Yukihiro
1997-09-01
Ray tracing is a powerful method for synthesizing three-dimensional computer graphics. In conventional ray tracing algorithms, a view point is used as the starting point of the trace, from which rays are tracked to the light sources through the center points of the pixels on the view screen to calculate the pixel intensities. This approach, however, makes it difficult to define the configuration of the light source and to strictly simulate the reflections of the rays. To resolve these problems, we have developed a new ray tracing scheme that traces rays from a light source rather than from a view point, using the Monte Carlo method widely applied in nuclear fields. Moreover, we adopted variance reduction techniques in the program, with use of the specialized machine (Monte-4) for particle-transport Monte Carlo, so that the computational time could be reduced substantially. (author)
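The light-source-to-screen idea can be sketched with a toy forward Monte Carlo: emit rays from a Lambertian disk source, propagate them to a screen plane, and bin the hits into pixels. The geometry and sampling choices below are illustrative assumptions, not the authors' code, and there are no reflections or variance reduction in this sketch.

```python
import numpy as np

def forward_trace(n_rays=100_000, seed=1):
    """Trace rays from a Lambertian disk source at z=0 onto a screen at z=1."""
    rng = np.random.default_rng(seed)
    # uniform sample points on a disk light source of radius 0.2
    r = 0.2 * np.sqrt(rng.random(n_rays))
    phi = 2.0 * np.pi * rng.random(n_rays)
    ox, oy = r * np.cos(phi), r * np.sin(phi)
    # cosine-weighted emission directions (Lambertian emitter)
    u1, u2 = rng.random(n_rays), rng.random(n_rays)
    st, dz = np.sqrt(u1), np.sqrt(1.0 - u1)
    dx, dy = st * np.cos(2.0 * np.pi * u2), st * np.sin(2.0 * np.pi * u2)
    # propagate each ray to the screen plane z = 1 and bin the hits into pixels
    t = 1.0 / dz
    hx, hy = ox + t * dx, oy + t * dy
    img, _, _ = np.histogram2d(hx, hy, bins=32, range=[[-1.0, 1.0], [-1.0, 1.0]])
    return img / n_rays
```

The resulting image is brightest above the source and falls off toward the edges, exactly the irradiance pattern a backward tracer would have to integrate over the source to reproduce.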
Bayesian methods for hackers probabilistic programming and Bayesian inference
Davidson-Pilon, Cameron
2016-01-01
Bayesian methods of inference are deeply natural and extremely powerful. However, most discussions of Bayesian inference rely on intensely complex mathematical analyses and artificial examples, making it inaccessible to anyone without a strong mathematical background. Now, though, Cameron Davidson-Pilon introduces Bayesian inference from a computational perspective, bridging theory to practice–freeing you to get results using computing power. Bayesian Methods for Hackers illuminates Bayesian inference through probabilistic programming with the powerful PyMC language and the closely related Python tools NumPy, SciPy, and Matplotlib. Using this approach, you can reach effective solutions in small increments, without extensive mathematical intervention. Davidson-Pilon begins by introducing the concepts underlying Bayesian inference, comparing it with other techniques and guiding you through building and training your first Bayesian model. Next, he introduces PyMC through a series of detailed examples a...
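As a flavour of the book's computational approach, here is a minimal grid-approximation example of Bayesian inference in plain NumPy (not PyMC; the coin-flip data are invented): the posterior over a coin's bias is computed numerically rather than through conjugate-prior algebra.

```python
import numpy as np

# Grid-approximation posterior for a coin's bias theta after 9 heads in 12 flips
grid = np.linspace(0.0, 1.0, 1001)
prior = np.ones_like(grid)                       # uniform prior over theta
heads, flips = 9, 12
likelihood = grid**heads * (1.0 - grid)**(flips - heads)
posterior = prior * likelihood
posterior /= posterior.sum()                     # normalize on the grid
post_mean = float((grid * posterior).sum())      # ≈ (heads+1)/(flips+2) analytically
```

The grid result matches the analytic Beta-posterior mean to within the grid resolution, which is the book's central point: compute power can substitute for closed-form mathematics.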
International Nuclear Information System (INIS)
Sanchez de Alsina, O.L.; Scaricabarozzi, R.A.
1982-01-01
A matrix non-iterative method to calculate the periodic temperature distribution in reactors with thermal regeneration is presented. In the case of an exothermic reaction, a source term is included. A computer code was developed to calculate the final temperature distribution in the solids and the outlet temperatures of the gases. The results obtained from the calculation of ethane oxidation in air, using the Dietrich kinetic data, are presented. This method is more advantageous than iterative methods. (E.G.) [pt
Test of methods for retrospective activity size distribution determination from filter samples
International Nuclear Information System (INIS)
Meisenberg, Oliver; Tschiersch, Jochen
2015-01-01
Determining the activity size distribution of radioactive aerosol particles requires sophisticated and heavy equipment, which makes measurements at a large number of sites difficult and expensive. Therefore, three methods for retrospective determination of size distributions from aerosol filter samples in the laboratory were tested for their applicability. Extraction into a carrier liquid with subsequent nebulisation showed size distributions with a slight but correctable bias towards larger diameters compared with the original size distribution. Yields on the order of magnitude of 1% could be achieved. Sonication-assisted extraction into a carrier liquid caused a coagulation mode to appear in the size distribution. Sonication-assisted extraction into the air did not show acceptable results due to small yields. The method of extraction into a carrier liquid without sonication was applied to aerosol samples from Chernobyl in order to calculate inhalation dose coefficients for 137Cs based on the individual size distribution. The effective dose coefficient is about half of that calculated with a default reference size distribution. - Highlights: • Activity size distributions can be recovered after aerosol sampling on filters. • Extraction into a carrier liquid and subsequent nebulisation is appropriate. • This facilitates the determination of activity size distributions for individuals. • Size distributions from this method can be used for individual dose coefficients. • Dose coefficients were calculated for the workers at the new Chernobyl shelter
The HACMS program: using formal methods to eliminate exploitable bugs.
Fisher, Kathleen; Launchbury, John; Richards, Raymond
2017-10-13
For decades, formal methods have offered the promise of verified software that does not have exploitable bugs. Until recently, however, it has not been possible to verify software of sufficient complexity to be useful. Recently, that situation has changed. SeL4 is an open-source operating system microkernel efficient enough to be used in a wide range of practical applications. Its designers proved it to be fully functionally correct, ensuring the absence of buffer overflows, null pointer exceptions, use-after-free errors, etc., and guaranteeing integrity and confidentiality. The CompCert Verifying C Compiler maps source C programs to provably equivalent assembly language, ensuring the absence of exploitable bugs in the compiler. A number of factors have enabled this revolution, including faster processors, increased automation, more extensive infrastructure, specialized logics and the decision to co-develop code and correctness proofs rather than verify existing artefacts. In this paper, we explore the promise and limitations of current formal-methods techniques. We discuss these issues in the context of DARPA's HACMS program, which had as its goal the creation of high-assurance software for vehicles, including quadcopters, helicopters and automobiles.This article is part of the themed issue 'Verified trustworthy software systems'. © 2017 The Authors.
Implementing an overdose education and naloxone distribution program in a health system.
Devries, Jennifer; Rafie, Sally; Polston, Gregory
To design and implement a health-system-wide program to increase provision of take-home naloxone to patients at risk for opioid overdose, with the downstream aim of reducing fatalities. The program includes health care professional education and guidelines; development and dissemination of patient education materials; electronic health record changes to promote naloxone prescriptions; and availability of naloxone in pharmacies. Academic health system, San Diego, California. University of California, San Diego Health (UCSDH), offers both inpatient and outpatient primary care and specialty services with 563 beds spanning 2 hospitals and 6 pharmacies. UCSDH is part of the University of California health system, and it serves as the county's safety net hospital. In January 2016, a multisite academic health system initiated a system-wide overdose education and naloxone distribution program to prevent opioid overdose and opioid overdose-related deaths. An interdisciplinary, interdepartmental team came together to develop and implement the program. To strengthen institutional support, naloxone prescribing guidelines were developed and approved for the health system. Education on naloxone for physicians, pharmacists, and nurses was provided through departmental trainings, bulletins, and e-mail notifications. Alerts in the electronic health record and preset naloxone orders facilitated co-prescribing of naloxone with opioid prescriptions. Electronic health record reports captured naloxone prescriptions ordered. Summary reports on the electronic health record measured naloxone reminder alerts and response rates. Since the start of the program, the health system has trained 252 physicians, pharmacists, and nurses in overdose education and take-home naloxone. There has been an increase in the number of prescriptions for naloxone from a baseline of 4.5 per month to an average of 46 per month during the 3 months following full implementation of the program including
45 CFR 2516.600 - How are funds for school-based service-learning programs distributed?
2010-10-01
... 45 Public Welfare 4 2010-10-01 2010-10-01 false How are funds for school-based service-learning... (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE SCHOOL-BASED SERVICE-LEARNING PROGRAMS Distribution of Funds § 2516.600 How are funds for school-based service-learning programs distributed? (a) Of...
45 CFR 2517.600 - How are funds for community-based service-learning programs distributed?
2010-10-01
... 45 Public Welfare 4 2010-10-01 2010-10-01 false How are funds for community-based service-learning... (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE COMMUNITY-BASED SERVICE-LEARNING PROGRAMS Distribution of Funds § 2517.600 How are funds for community-based service-learning programs distributed? All...
Vi D. Nguyen; Lara A. Roman; Dexter H. Locke; Sarah K. Mincey; Jessica R. Sanders; Erica Smith Fichman; Mike Duran-Mitchell; Sarah Lumban Tobing
2017-01-01
Residential lands constitute a major component of existing and possible tree canopy in many cities in the United States. To expand the urban forest on these lands, some municipalities and nonprofit organizations have launched residential yard tree distribution programs, also known as tree giveaway programs. This paper describes the operations of five tree distribution...
Research on Optimized Torque-Distribution Control Method for Front/Rear Axle Electric Wheel Loader
Directory of Open Access Journals (Sweden)
Zhiyu Yang
2017-01-01
Full Text Available Optimized torque-distribution control method (OTCM) is a critical technology for the front/rear-axle electric wheel loader (FREWL) to improve operation performance and energy efficiency. In this paper, a longitudinal dynamics model of the FREWL is created. Based on the model, the objective functions are that the weighted sum of the variance and mean of the tire workload is minimal and the total motor efficiency is maximal. Four nonlinear constrained optimization algorithms, the quasi-Newton Lagrangian multiplier method, sequential quadratic programming, adaptive genetic algorithms, and particle swarm optimization with random weighting and natural selection, all of which converge quickly at modest computational cost, are used to solve the objective functions. The simulation results show that, compared to the uncontrolled FREWL, the controlled FREWL utilizes the adhesion ability better and slips less. It is evident that the controlled FREWL attains better operation performance and higher energy efficiency. The energy efficiency of the FREWL in the equipment-transferring condition is increased by 13–29%. In addition, this paper discusses the applicability of OTCM and analyzes the reasons for the different simulation results of the four algorithms.
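Of the four solvers compared, particle swarm optimization is the most compact to sketch. Below is a generic box-constrained PSO in the standard inertia/cognitive/social form; all coefficients are conventional defaults, not the paper's tuning, and the paper's variant adds random weighting and natural selection on top of this skeleton.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization for box-constrained minimization."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g, gval = pbest[pval.argmin()].copy(), pval.min()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + pull toward personal best + pull toward global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        if fx.min() < gval:
            g, gval = x[fx.argmin()].copy(), fx.min()
    return g, gval
```

In the paper's setting, f would evaluate the weighted tire-workload and motor-efficiency objective for a candidate torque split; here a simple sphere function stands in.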
Analysis of calculating methods for failure distribution function based on maximal entropy principle
International Nuclear Information System (INIS)
Guo Chunying; Lin Yuangen; Jiang Meng; Wu Changli
2009-01-01
The computation of failure distribution functions of electronic devices exposed to gamma rays is discussed here. First, the possible device failure distribution models are determined through tests of statistical hypotheses using the test data. The results show that the failure behaviour can be fitted by several distribution models when the test data are few. In order to decide the optimum failure distribution model, the maximal entropy principle is used and the elementary failure models are determined. Then, the Bootstrap estimation method is used to simulate the interval estimation of the mean and the standard deviation. On this basis, the maximal entropy principle is used again and the simulated annealing method is applied to find the optimum values of the mean and the standard deviation. Accordingly, the electronic devices' optimum failure distributions are finally determined and the survival probabilities are calculated. (authors)
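The Bootstrap interval-estimation step for the mean and standard deviation can be sketched generically. This uses simple percentile intervals; the sample data and resample count are assumptions, not the paper's device data.

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic of the sample."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    stats = np.array([stat(rng.choice(data, size=data.size, replace=True))
                      for _ in range(n_boot)])
    return np.quantile(stats, [alpha / 2.0, 1.0 - alpha / 2.0])
```

The same call with `stat=np.std` yields an interval for the standard deviation; the paper then feeds such intervals into the maximal-entropy and simulated-annealing steps.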
Energy Technology Data Exchange (ETDEWEB)
Bird, L.; Reger, A.; Heeter, J.
2012-12-01
Based on lessons from recent program experience, this report explores best practices for designing and implementing incentives for small and mid-sized residential and commercial distributed solar energy projects. The findings of this paper are relevant to both new incentive programs as well as those undergoing modifications. The report covers factors to consider in setting and modifying incentive levels over time, differentiating incentives to encourage various market segments, administrative issues such as providing equitable access to incentives and customer protection. It also explores how incentive programs can be designed to respond to changing market conditions while attempting to provide a longer-term and stable environment for the solar industry. The findings are based on interviews with program administrators, regulators, and industry representatives as well as data from numerous incentive programs nationally, particularly the largest and longest-running programs. These best practices consider the perspectives of various stakeholders and the broad objectives of reducing solar costs, encouraging long-term market viability, minimizing ratepayer costs, and protecting consumers.
Energy Technology Data Exchange (ETDEWEB)
Sundqvist, B; Gonczi, L; Koersner, I; Bergman, R; Lindh, U
1974-01-01
(d,p) reactions in ¹⁴N were used for probing single kernels of seed for nitrogen content and nitrogen depth distributions. Comparison with the Kjeldahl method was made on individual peas and beans. The results were found to be strongly correlated. The technique to obtain depth distributions of nitrogen was also used on high- and low-lysine varieties of barley, for which large differences in nitrogen distributions were found.
Size distributions of micro-bubbles generated by a pressurized dissolution method
Taya, C.; Maeda, Y.; Hosokawa, S.; Tomiyama, A.; Ito, Y.
2012-03-01
Size of micro-bubbles is widely distributed in the range of one to several hundred micrometers and depends on generation methods, flow conditions and elapsed times after the bubble generation. Although a size distribution of micro-bubbles should be taken into account to improve accuracy in numerical simulations of flows with micro-bubbles, the variety of size distributions makes it difficult to introduce the size distribution in the simulations. On the other hand, several models such as the Rosin-Rammler equation and the Nukiyama-Tanasawa equation have been proposed to represent the size distribution of particles or droplets. Applicability of these models to the size distribution of micro-bubbles has not been examined yet. In this study, we therefore measure size distributions of micro-bubbles generated by a pressurized dissolution method by using a phase Doppler anemometry (PDA), and investigate the applicability of the available models to the size distributions of micro-bubbles. The experimental apparatus consists of a pressurized tank in which air is dissolved in liquid under high pressure, a decompression nozzle in which micro-bubbles are generated due to pressure reduction, a rectangular duct and an upper tank. Experiments are conducted for several liquid volumetric fluxes in the decompression nozzle. Measurements are carried out at the downstream region of the decompression nozzle and in the upper tank. The experimental results indicate that (1) the Nukiyama-Tanasawa equation well represents the size distribution of micro-bubbles generated by the pressurized dissolution method, whereas the Rosin-Rammler equation fails in the representation, (2) the bubble size distribution of micro-bubbles can be evaluated by using the Nukiyama-Tanasawa equation without individual bubble diameters, when the mean bubble diameter and the skewness of the bubble distribution are given, and (3) an evaluation method of visibility based on the bubble size distribution and bubble
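The Rosin-Rammler model mentioned above has CDF F(d) = 1 − exp(−(d/d_ref)^n), which linearizes under a double-log transform. A sketch of generating and fitting it follows; the diameters are synthetic, not the PDA data, and a real fit would use measured cumulative fractions.

```python
import numpy as np

def rosin_rammler_cdf(d, d_ref, n):
    """Rosin-Rammler cumulative distribution: F(d) = 1 - exp(-(d/d_ref)**n)."""
    return 1.0 - np.exp(-(d / d_ref)**n)

def fit_rosin_rammler(diams, cdf_vals):
    # linearize: ln(-ln(1 - F)) = n*ln(d) - n*ln(d_ref), then fit a line
    y = np.log(-np.log(1.0 - cdf_vals))
    slope, intercept = np.polyfit(np.log(diams), y, 1)
    return np.exp(-intercept / slope), slope      # (d_ref, n)
```

A straight line in these transformed coordinates is the usual visual check of whether a measured bubble or droplet population follows the Rosin-Rammler form at all.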
Kilerci Eser, Ece; Vestergaard, M.
2018-02-01
We present and analyse quasi-simultaneous multi-epoch spectral energy distributions (SEDs) of seven reverberation-mapped active galactic nuclei (AGNs) for which accurate black hole mass measurements and suitable archival data are available from the 'AGN Watch' monitoring programs. We explore the potential of optical-UV and X-ray data, obtained within 2 d, to provide more accurate SED-based measurements of individual AGN and quantify the impact of source variability on key measurements typically used to characterize the black hole accretion process plus on bolometric correction factors at 5100 Å, 1350 Å and for the 2-10 keV X-ray band, respectively. The largest SED changes occur on long time-scales (≳1 year). For our small sample, the 1 μm to 10 keV integrated accretion luminosity typically changes by 10 per cent on short time-scales (over 20 d), by ~30 per cent over a year, but can change by 100 per cent or more for individual AGN. The extreme ultraviolet (EUV) gap is the most uncertain part of the intrinsic SED, introducing a ~25 per cent uncertainty in the accretion-induced luminosity, relative to the model independent interpolation method that we adopt. That aside, our analysis shows that the uncertainty in the accretion-induced luminosity, the Eddington luminosity ratio and the bolometric correction factors can be reduced (by a factor of two or more) by use of the SEDs built from data obtained within 20 d. However, the accretion rate Ṁ and efficiency η are mostly limited by the unknown EUV emission and the unknown details of the central engine and our aspect angle.
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks
Directory of Open Access Journals (Sweden)
Chaoyang Shi
2017-12-01
Full Text Available Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
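The Dempster-Shafer combination step at the heart of the fusion can be sketched with Dempster's rule over a toy two-state frame. The "fast"/"slow" travel-time states and the mass values are invented for illustration; the paper's frames are built from estimated travel-time distributions.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions over frozenset focal elements."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass falling on the empty set
    k = 1.0 - conflict                       # renormalize by the non-conflicting mass
    return {s: v / k for s, v in combined.items()}

# hypothetical evidence about a link's travel-time state from two detector types
m_point = {frozenset({"fast"}): 0.6, frozenset({"fast", "slow"}): 0.4}
m_interval = {frozenset({"fast"}): 0.5, frozenset({"slow"}): 0.3,
              frozenset({"fast", "slow"}): 0.2}
fused = dempster_combine(m_point, m_interval)
```

The fused masses again sum to one, with the conflicting point/interval evidence redistributed proportionally, which is how the two detector sources reinforce or temper each other.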
A study of the up-and-down method for non-normal distribution functions
DEFF Research Database (Denmark)
Vibholm, Svend; Thyregod, Poul
1988-01-01
The assessment of breakdown probabilities is examined by the up-and-down method. The exact maximum-likelihood estimates for a number of response patterns are calculated for three different distribution functions and are compared with the estimates corresponding to the normal distribution. Estimates...
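A Bruceton-style up-and-down run can be simulated to show where the response-pattern data come from. The logistic response curve, step size, and seed below are assumptions; the paper's focus is the maximum-likelihood estimation applied to such patterns under non-normal distribution functions.

```python
import math
import random

def prob_response(x, p50, slope=4.0):
    """Hypothetical logistic response curve with its 50% point at p50."""
    return 1.0 / (1.0 + math.exp(-slope * (x - p50)))

def up_and_down(p50, step=0.2, n_trials=500, start=2.0, seed=7):
    """Bruceton staircase: step down after a response, step up after none."""
    rng = random.Random(seed)
    x, levels = start, []
    for _ in range(n_trials):
        levels.append(x)
        if rng.random() < prob_response(x, p50):
            x -= step
        else:
            x += step
    return levels
```

The mean of the tested levels gives a crude median (50% breakdown) estimate; the paper's point is precisely that maximum-likelihood estimates computed from such response patterns can differ when the underlying distribution is not normal.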
Distributed AC power flow method for AC and AC-DC hybrid ...
African Journals Online (AJOL)
... on voltage level and R/X ratio in the formulation itself. DPFM is applied on a 10 bus, low voltage, microgrid system giving a better voltage profile. Keywords: Microgrid (MG), Distributed Energy Resources (DER), Particle Swarm Optimization (PSO), Time varying inertia weight (TVIW), Distributed power flow method (DPFM) ...
International Nuclear Information System (INIS)
Shevkunov, I A; Petrov, N V
2014-01-01
The performance of three phase-retrieval methods that use spatial intensity distributions was investigated for the task of reconstructing the amplitude characteristics of a test object. These methods differ both in their mathematical models and in the order of iteration execution. The single-beam multiple-intensity reconstruction method showed the best efficiency in terms of reconstruction quality and time consumption.
An improved in situ method for determining depth distributions of gamma-ray emitting radionuclides
International Nuclear Information System (INIS)
Benke, R.R.; Kearfott, K.J.
2001-01-01
In situ gamma-ray spectrometry determines the quantities of radionuclides in some medium with a portable detector. The main limitation of in situ gamma-ray spectrometry lies in determining the depth distribution of radionuclides. This limitation is addressed by developing an improved in situ method for determining the depth distributions of gamma-ray emitting radionuclides in large area sources. This paper implements a unique collimator design with conventional radiation detection equipment. Cylindrically symmetric collimators were fabricated to allow only those gamma-rays emitted from a selected range of polar angles (measured off the detector axis) to be detected. Positioned with its axis normal to the surface of the medium, each collimator enables the detection of gamma-rays emitted from a different range of polar angles and preferential depths. Previous in situ methods require a priori knowledge of the depth distribution shape. However, the absolute method presented in this paper determines the depth distribution as a histogram and does not rely on such assumptions. Other advantages over previous in situ methods are that this method requires only a single gamma-ray emission, provides more detailed depth information, and offers a superior ability to characterize complex depth distributions. Collimated spectrometer measurements of buried area sources demonstrated the ability of the method to yield accurate depth information. Based on the results of actual measurements, this method increases the potential of in situ gamma-ray spectrometry as an independent characterization tool in situations with unknown radionuclide depth distributions.
Energy Technology Data Exchange (ETDEWEB)
Glass, Samuel W. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fifield, Leonard S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hartman, Trenton S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2016-05-30
This Pacific Northwest National Laboratory (PNNL) milestone report describes progress to date on the investigation of nondestructive examination (NDE) methods, focusing particularly on local measurements that provide key indicators of cable aging and damage. The work includes a review of relevant literature as well as hands-on experimental verification of inspection capabilities. As nuclear power plants (NPPs) consider applying for second, or subsequent, license renewal (SLR) to extend their operating period from 60 years to 80 years, it is important to understand how the materials installed in plant systems and components will age during that time, and to develop aging management programs (AMPs) to assure continued safe operation under normal and design basis events (DBE). Normal component and system tests typically confirm that the cables can perform their normal operational function. The focus of the cable test program is directed toward the more demanding challenge of assuring cable function under accident or DBE conditions. Most utilities already have a program associated with their first life extension from 40 to 60 years. Regrettably, there is neither a clear guideline nor a single NDE test that can assure cable function and integrity for all cables. Thankfully, however, practical implementation of a broad range of tests allows utilities to develop a practical program that assures cable function to a high degree. The industry has adopted 50% elongation at break (EAB) relative to the un-aged cable condition as the acceptability standard. All tests are benchmarked against the cable EAB test. Because EAB is a destructive test, test programs must apply an array of other NDE tests to assure or infer the integrity of the overall cable system. These cable NDE programs vary in rigor and methodology. As the industry gains experience with the efficacy of these programs, it is expected that implementation practice will converge toward a more common approach. This report addresses the range of local NDE cable tests that are
A Method for Developing Standard Patient Education Program.
Lura, Carolina Bryne; Hauch, Sophie Misser Pallesgaard; Gøeg, Kirstine Rosenbeck; Pape-Haugaard, Louise
2018-01-01
In Denmark, patients being treated in haematology outpatient departments are instructed to self-manage the collection of blood samples from a central venous catheter (CVC). However, this is a complex and risky procedure that can jeopardize patient safety. The aim of the study was to suggest a method for developing standard digital patient education programs for teaching patients self-administration of blood samples drawn from a CVC. The Design Science Research paradigm was used to develop a digital patient education program, called PAVIOSY, to increase patient safety during execution of the blood sample collection procedure by using videos for teaching as well as procedural support. A step-by-step guide was developed and used as the basis for making the videos. Quality assurance through evaluation with a nurse was conducted on both the step-by-step guide and the videos. The quality assurance evaluation of the videos showed: 1) errors due to the order of the procedure can be detected by reviewing the videos even when the guide was followed; 2) videos can be used to identify errors in the procedure - important for patient safety - that are not identifiable in a written script. To ensure correct clinical content of the educational patient system, health professionals must be engaged early in the content development and design phase.
A New Method for Solving Multiobjective Bilevel Programs
Directory of Open Access Journals (Sweden)
Ying Ji
2017-01-01
Full Text Available We study a class of multiobjective bilevel programs in which the weights of the objectives are uncertain and assumed to belong to a convex and compact set. To the best of our knowledge, there is no previous study of this class of problems. We use a worst-case weighted approach to solve it. Our "worst-case weighted multiobjective bilevel programs" model supposes that each player (leader or follower) has a set of weights for their objectives and wishes to minimize their maximum weighted-sum objective, where the maximization is taken with respect to the set of weights. This new model gives rise to a new Pareto optimum concept, which we call "robust-weighted Pareto optimum". For the worst-case weighted multiobjective optimization with the weight set of each player given as a polytope, we show that a robust-weighted Pareto optimum can be obtained by solving a mathematical program with equilibrium constraints (MPEC). As an application, we illustrate the usefulness of worst-case weighted multiobjective optimization for supply chain risk management under demand uncertainty. By comparison with the existing weighted approach, we show that our method is more robust and can be more efficiently applied to real-world problems.
Analytical method for reconstruction pin to pin of the nuclear power density distribution
Energy Technology Data Exchange (ETDEWEB)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S., E-mail: ppessoa@con.ufrj.br, E-mail: fernando@con.ufrj.br, E-mail: aquilino@imp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)
2013-07-01
An accurate and efficient method for the pin-by-pin reconstruction of the nuclear power density distribution, involving the analytical solution of the two-dimensional neutron diffusion equation for two energy groups in homogeneous nodes, is presented. The boundary conditions used for the analytical solution are the four currents or fluxes on the surfaces of the node, obtained by the Nodal Expansion Method (NEM), and the four fluxes at the vertices of the node, calculated using the finite difference method. The analytical solution found is the homogeneous neutron flux distribution. Detailed pin-by-pin distributions inside a fuel assembly are estimated as the product of the homogeneous flux distribution and a local heterogeneous form function; form functions for both flux and power are used. The results obtained with this method show good accuracy when compared with reference values. (author)
Analytical method for reconstruction pin to pin of the nuclear power density distribution
International Nuclear Information System (INIS)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.
2013-01-01
An accurate and efficient method for the pin-by-pin reconstruction of the nuclear power density distribution, involving the analytical solution of the two-dimensional neutron diffusion equation for two energy groups in homogeneous nodes, is presented. The boundary conditions used for the analytical solution are the four currents or fluxes on the surfaces of the node, obtained by the Nodal Expansion Method (NEM), and the four fluxes at the vertices of the node, calculated using the finite difference method. The analytical solution found is the homogeneous neutron flux distribution. Detailed pin-by-pin distributions inside a fuel assembly are estimated as the product of the homogeneous flux distribution and a local heterogeneous form function; form functions for both flux and power are used. The results obtained with this method show good accuracy when compared with reference values. (author)
Proposal for a new method of reactor neutron flux distribution determination
Energy Technology Data Exchange (ETDEWEB)
Popic, V R [Institute of nuclear sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)
1964-01-15
A method for determining the neutron flux distribution inside a reactor, based on measurements of the activity produced in a medium flowing with variable velocity through the reactor, is considered theoretically. (author)
Advanced airflow distribution methods for reduction of personal exposure to indoor pollutants
DEFF Research Database (Denmark)
Cao, Guangyu; Kosonen, Risto; Melikov, Arsen
2016-01-01
The main objective of this study is to identify possible airflow distribution methods to protect occupants from exposure to various indoor pollutants. The increasing exposure of occupants to various indoor pollutants shows that there is an urgent need to develop advanced airflow distribution methods to reduce indoor exposure to these pollutants. This article presents some of the latest developments in advanced airflow distribution methods for reducing indoor exposure in various types of buildings.
Distribution Route Planning of Clean Coal Based on Nearest Insertion Method
Wang, Yunrui
2018-01-01
Clean coal technology has made considerable progress over several decades, but research on its distribution is scarce. Distribution efficiency directly affects the overall development of clean coal technology, and rational planning of distribution routes is the key to improving it. The object of this paper is a clean coal distribution system built in a county. A survey of customer demand, distribution routes, and vehicle use in previous years showed that vehicles had been deployed purely by experience and that the number of vehicles used each day varied, resulting in wasted transport capacity and increased energy consumption. A mathematical model was therefore established with the shortest path as the objective function, and the distribution routes were re-planned using an improved nearest-insertion method. The results show that the transportation distance was reduced by 37 km and the number of vehicles used per day decreased from an average of 5 to a fixed 4, while the real loading rate of vehicles increased by 16.25% at the current distribution volume. This realizes efficient distribution of clean coal and achieves the goal of saving energy and reducing consumption.
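A generic sketch of the nearest-insertion heuristic named in the abstract follows; the paper's improved variant is not reproduced, and the distance matrix and function name are illustrative assumptions.

```python
def nearest_insertion(dist):
    """Nearest-insertion tour construction over a symmetric distance
    matrix: repeatedly pick the unvisited city nearest to the tour and
    insert it where it lengthens the tour the least."""
    n = len(dist)
    tour = [0, 1]  # start from an arbitrary edge
    remaining = set(range(2, n))
    while remaining:
        # city closest to any city already in the tour
        c = min(remaining, key=lambda j: min(dist[j][t] for t in tour))
        best_pos, best_inc = 0, float("inf")
        for i in range(len(tour)):
            a, b = tour[i], tour[(i + 1) % len(tour)]
            inc = dist[a][c] + dist[c][b] - dist[a][b]  # cheapest insertion
            if inc < best_inc:
                best_pos, best_inc = i + 1, inc
        tour.insert(best_pos, c)
        remaining.remove(c)
    return tour

# four hypothetical depots placed on a line, one unit apart
d = [[0, 1, 2, 3],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [3, 2, 1, 0]]
tour = nearest_insertion(d)
```

For points on a line the heuristic recovers a tour of optimal length (6 units here); on general instances it is only an approximation.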
International Nuclear Information System (INIS)
Colombo, A.G.; Jaarsma, R.J.
1982-01-01
This report describes a conversational computer program which, via Bayes' theorem, numerically combines the prior distribution of a parameter with a likelihood function. Any type of prior and likelihood function can be considered. The present version of the program includes six types of prior and employs the binomial likelihood. As input, the program requires the law and parameters of the prior distribution and the sample data. As output, it gives the posterior distribution as a histogram. The use of the program for estimating the constant failure rate of an item is briefly described.
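The prior-times-likelihood combination the program performs can be sketched on a discretized prior; the function name, the prior bins, and the sample data below are illustrative assumptions, not the program's interface.

```python
import math

def posterior_histogram(prior_bins, failures, trials):
    """Bayes' theorem on a discretized prior over a failure
    probability p, with a binomial likelihood; returns the posterior
    as a normalized histogram of (p, weight) pairs."""
    comb = math.comb(trials, failures)
    unnorm = [(p, w * comb * p ** failures * (1 - p) ** (trials - failures))
              for p, w in prior_bins]
    total = sum(w for _, w in unnorm)
    return [(p, w / total) for p, w in unnorm]

# uniform prior over three candidate failure probabilities,
# updated with 1 observed failure in 10 trials
prior = [(0.1, 1 / 3), (0.2, 1 / 3), (0.5, 1 / 3)]
post = posterior_histogram(prior, failures=1, trials=10)
```

With only 1 failure in 10 trials, the posterior mass shifts toward the smallest candidate failure probability.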
New method for exact measurement of thermal neutron distribution in elementary cell
International Nuclear Information System (INIS)
Takac, S.M.; Krcevinac, S.B.
1966-06-01
Exact measurement of the thermal neutron density distribution in an elementary cell requires knowledge of the perturbations introduced into the cell by the measuring device. A new method has been developed in which special emphasis is placed on evaluating these perturbations by measuring the response to perturbations introduced into the elementary cell. The unperturbed distribution was obtained by extrapolation to zero perturbation. The final distributions for different lattice pitches were compared with a THERMOS-type calculation. Very good agreement was reached, which resolves the long-standing disagreement between THERMOS calculations and measured density distributions. (author)
International Nuclear Information System (INIS)
Wada, Hiroshi; Igari, Toshihide; Kitade, Shoji.
1989-01-01
A prediction method was proposed for the plastic ratcheting of a cylinder subjected to an axially moving temperature distribution without primary stress. First, a mechanism for this ratcheting was proposed that considers the movement of the temperature distribution as the driving force of the phenomenon. Predictive equations for the ratcheting strain under two representative temperature distributions were derived from this mechanism, assuming elastic-perfectly-plastic material behavior. Second, an elastic-plastic analysis was performed on a cylinder subjected to the two representative temperature distributions. The analytical results coincided well with the predictions, confirming the applicability of the proposed equations. (author)
New method for exact measurement of thermal neutron distribution in elementary cell
Energy Technology Data Exchange (ETDEWEB)
Takac, S M; Krcevinac, S B [Institute of nuclear sciences Boris Kidric, Vinca, Beograd (Yugoslavia)
1966-06-15
Exact measurement of the thermal neutron density distribution in an elementary cell requires knowledge of the perturbations introduced into the cell by the measuring device. A new method has been developed in which special emphasis is placed on evaluating these perturbations by measuring the response to perturbations introduced into the elementary cell. The unperturbed distribution was obtained by extrapolation to zero perturbation. The final distributions for different lattice pitches were compared with a THERMOS-type calculation. Very good agreement was reached, which resolves the long-standing disagreement between THERMOS calculations and measured density distributions. (author)
Confidence Limits for the Indirect Effect: Distribution of the Product and Resampling Methods
MacKinnon, David P.; Lockwood, Chondra M.; Williams, Jason
2010-01-01
The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal distribution. This article uses a simulation study to demonstrate that confidence limits are imbalanced because the distribution of the indirect effect is normal only in special cases. Two alternatives for improving the performance of confidence limits for the indirect effect are evaluated: (a) a method based on the distribution of the product of two normal random variables, and (b) resampling methods. In Study 1, confidence limits based on the distribution of the product are more accurate than methods based on an assumed normal distribution but confidence limits are still imbalanced. Study 2 demonstrates that more accurate confidence limits are obtained using resampling methods, with the bias-corrected bootstrap the best method overall. PMID:20157642
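The percentile-bootstrap alternative evaluated in the article can be sketched as follows; the toy data, helper names, and default settings are our own, and the bias-corrected variant found best in Study 2 would further adjust the chosen percentiles.

```python
import random

def bootstrap_ci_indirect(x, m, y, n_boot=2000, alpha=0.05, seed=7):
    """Percentile-bootstrap confidence limits for the indirect effect
    a*b (x -> m -> y), with a and b estimated as least-squares slopes
    on each resample."""
    def slope(u, v):
        mu, mv = sum(u) / len(u), sum(v) / len(v)
        num = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
        return num / sum((ui - mu) ** 2 for ui in u)

    rng = random.Random(seed)
    n, stats = len(x), []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]     # resample cases
        a = slope([x[i] for i in idx], [m[i] for i in idx])
        b = slope([m[i] for i in idx], [y[i] for i in idx])
        stats.append(a * b)
    stats.sort()
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

# toy mediated data: m tracks x with a small disturbance, y = 3*m,
# so the indirect effect is close to 2 * 3 = 6
x = list(range(20))
m = [2 * xi + (-1) ** i for i, xi in enumerate(x)]
y = [3 * mi for mi in m]
lo, hi = bootstrap_ci_indirect(x, m, y)
```

Because the bootstrap distribution of a*b need not be symmetric, the resulting limits can be imbalanced around the point estimate, which is exactly the behavior the article exploits.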
A Recourse-Based Type-2 Fuzzy Programming Method for Water Pollution Control under Uncertainty
Directory of Open Access Journals (Sweden)
Jing Liu
2017-11-01
Full Text Available In this study, a recourse-based type-2 fuzzy programming (RTFP) method is developed for supporting water pollution control of basin systems under uncertainty. The RTFP method incorporates type-2 fuzzy programming (TFP) within a two-stage stochastic programming with recourse (TSP) framework to handle uncertainties expressed as type-2 fuzzy sets (i.e., fuzzy sets in which the membership function is also fuzzy) and probability distributions, as well as to reflect the trade-offs between conflicting economic benefits and penalties due to violated policies. The RTFP method is then applied to a real case of water pollution control in the Heshui River Basin (a rural area of China), where chemical oxygen demand (COD), total nitrogen (TN), total phosphorus (TP), and soil loss are selected as major indicators to identify the water pollution control strategies. Solutions of optimal production plans of economic activities under each probabilistic pollutant discharge allowance level and membership grade are obtained. The results are helpful for the authorities in exploring the trade-off between economic objectives and pollutant-discharge decision-making for river water pollution control.
International Nuclear Information System (INIS)
Tan, Cheng-Yang; Fermilab
2006-01-01
One common way of measuring the emittance of an electron beam is the slits method. The usual approach to analyzing the data is to calculate an emittance that is a subset of the parent emittance. This paper shows an alternative: the method of correlations, which ties the parameters derived from the beamlets to the actual parameters of the parent emittance. For parent distributions that are Gaussian, this method yields exact results. For non-Gaussian beam distributions, it yields an effective emittance that can serve as a yardstick for emittance comparisons.
Predictive Distribution of the Dirichlet Mixture Model by the Local Variational Inference Method
DEFF Research Database (Denmark)
Ma, Zhanyu; Leijon, Arne; Tan, Zheng-Hua
2014-01-01
Bayesian estimation can improve the predictive likelihood of new upcoming data, especially when the amount of training data is small, but the Bayesian estimation of a Dirichlet mixture model (DMM) is, in general, not analytically tractable. In our previous work, we proposed a global variational inference-based method for approximately calculating the posterior distributions of the parameters in the DMM analytically. In this paper, we extend our previous study of the DMM and propose an algorithm to calculate its predictive distribution with the local variational inference (LVI) method. The true predictive distribution of the DMM is analytically intractable. By considering the concave property of the multivariate inverse beta function, we introduce an upper bound to the true predictive distribution. As the global minimum of this upper bound exists, the problem is reduced to seeking an approximation to the true predictive distribution...
Uncertainty Management of Dynamic Tariff Method for Congestion Management in Distribution Networks
DEFF Research Database (Denmark)
Huang, Shaojun; Wu, Qiuwei; Cheng, Lin
2016-01-01
The dynamic tariff (DT) method is designed for the distribution system operator (DSO) to alleviate congestions that might occur in a distribution network with high penetration of distributed energy resources (DERs). Uncertainty management is required for the decentralized DT method because the DT is determined based on optimal day-ahead energy planning with forecasted parameters, such as day-ahead energy prices and energy needs, which might be different from the parameters used by aggregators. The uncertainty management is to quantify and mitigate the risk of the congestion when employing...
Ludvík Friebel; Jana Friebelová
2006-01-01
This article deals with the approximation of an empirical distribution to the standard normal distribution using the Johnson transformation. This transformation enables us to approximate a wide spectrum of continuous distributions with a normal distribution. The estimation of the parameters of the transformation formulas is based on percentiles of the empirical distribution. Theoretical probability distribution functions of the random variable are derived on the basis of the backward transformation of the standard normal ...
Li, Q; He, Y L; Wang, Y; Tao, W Q
2007-11-01
A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.
SSRscanner: a program for reporting distribution and exact location of simple sequence repeats.
Anwar, Tamanna; Khan, Asad U
2006-02-20
Simple sequence repeats (SSRs) have become important molecular markers for a broad range of applications, such as genome mapping and characterization, phenotype mapping, marker-assisted selection of crop plants, and a range of molecular ecology and diversity studies. These repeated DNA sequences are found in both prokaryotes and eukaryotes. They are distributed almost at random throughout the genome, ranging from mononucleotide to trinucleotide repeats, and are also found as longer tracts (> 6 repeating units). Most computer programs that find SSRs do not report their exact positions. The computer program SSRscanner was written to find the distribution, frequency, and exact location of each SSR in a genome. SSRscanner is user friendly. It can search for repeats of any length and produce output with their exact positions on the chromosome and their frequency of occurrence in the sequence. The program has been written in PERL and is freely available for non-commercial users by request from the authors. Please contact the authors by E-mail: huzzi99@hotmail.com.
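The core behaviour the abstract describes (reporting each SSR with its exact position) can be sketched with a backreference regex; SSRscanner itself is written in PERL, and the function name, parameters, and output format below are our own assumptions.

```python
import re

def find_ssrs(seq, min_unit=1, max_unit=3, min_repeats=3):
    """Report perfect simple sequence repeats as (unit, start, end)
    tuples with exact 0-based positions, keeping only maximal tracts."""
    hits = []
    for unit_len in range(min_unit, max_unit + 1):
        # lookahead so overlapping candidate tracts are all examined
        pattern = re.compile(r"(?=((\w{%d})\2{%d,}))" % (unit_len, min_repeats - 1))
        for match in pattern.finditer(seq):
            tract, unit = match.group(1), match.group(2)
            # skip units that are themselves repeats of a shorter motif
            if any(unit == unit[:d] * (unit_len // d)
                   for d in range(1, unit_len) if unit_len % d == 0):
                continue
            hits.append((unit, match.start(1), match.start(1) + len(tract)))
    # drop tracts fully contained inside a longer reported tract
    return [h for h in hits
            if not any(g is not h and g[1] <= h[1] and h[2] <= g[2] for g in hits)]

hits = find_ssrs("GGATATATATCC", min_unit=2, max_unit=2, min_repeats=3)
```

For the toy sequence above, the dinucleotide tract "ATATATAT" is reported once, with its exact start and end offsets.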
New method for extracting tumors in PET/CT images based on the probability distribution
International Nuclear Information System (INIS)
Nitta, Shuhei; Hontani, Hidekata; Hukami, Tadanori
2006-01-01
In this report, we propose a method for extracting tumors from PET/CT images by referring to the probability distribution of pixel values in the PET image. In the proposed method, first, the organs that normally take up fluorodeoxyglucose (FDG) (e.g., the liver, kidneys, and brain) are extracted. Then, the tumors are extracted from the images. The distribution of pixel values in PET images differs in each region of the body. Therefore, the threshold for detecting tumors is adaptively determined by referring to the distribution. We applied the proposed method to 37 cases and evaluated its performance. This report also presents the results of experiments comparing the proposed method and another method in which the pixel values are normalized for extracting tumors. (author)
Benke, R R
2002-01-01
In situ gamma-ray spectrometry uses a portable detector to quantify radionuclides in materials. Its main shortcoming has been its inability to determine radionuclide depth distributions. Novel collimator designs were paired with a commercial in situ gamma-ray spectrometry system to overcome this limitation for large area sources. Positioned with their axes normal to the material surface, the cylindrically symmetric collimators limited the detection of unattenuated gamma-rays to a selected range of polar angles (measured off the detector axis). Although this approach does not alleviate the need for some knowledge of the gamma-ray attenuation characteristics of the materials being measured, the collimation method presented in this paper represents an absolute method that determines the depth distribution as a histogram, while other in situ methods require a priori knowledge of the depth distribution shape. Other advantages over previous in situ methods are that this method d...
Directory of Open Access Journals (Sweden)
R. Aversa
2008-01-01
Full Text Available Parallel programming effort can be reduced by using high-level constructs such as algorithmic skeletons. Within the MAGDA toolset, which supports programming and execution of mobile agent based distributed applications, we provide a skeleton-based parallel programming environment based on specialization of algorithmic skeleton Java interfaces and classes. Their implementations include mobile agent features for execution on heterogeneous systems, such as clusters of workstations and PCs, and support reliability and dynamic workload balancing. The user can thus develop a parallel, mobile agent based application by simply specialising a given set of classes and methods and using a set of added functionalities.
International Nuclear Information System (INIS)
Ravindra, M.K.; Banon, H.
1992-07-01
In this report, the scoping quantification procedures for external events in probabilistic risk assessments (PRAs) of nuclear power plants are described. External event analysis in a PRA has three important goals: (1) the analysis should be complete in that all events are considered; (2) by following selected screening criteria, the more significant events are identified for detailed analysis; and (3) the selected events are analyzed in depth, taking into account the unique features of the events: hazard, fragility of structures and equipment, external-event-initiated accident sequences, etc. Based on these goals, external event analysis may be considered a three-stage process: Stage I, identification and initial screening of external events; Stage II, bounding analysis; Stage III, detailed risk analysis. In the present report, first, a review of published PRAs is given to focus on the significance and treatment of external events in full-scope PRAs. Except for seismic, flooding, fire, and extreme wind events, the contributions of other external events to plant risk have been found to be negligible. Second, scoping methods are provided for external events not covered in detail in the NRC's PRA Procedures Guide. For this purpose, bounding analyses for transportation accidents, extreme winds and tornadoes, aircraft impacts, turbine missiles, and chemical releases are described.
A method of numerically controlled machine part programming
1970-01-01
A computer program is designed for automatically programmed tools. A preprocessor computes the desired tool path, and a postprocessor computes the actual commands causing the machine tool to follow that path. It is used on a Cincinnati ATC-430 numerically controlled machine tool.
Tunjo Perić; Željko Mandić
2017-01-01
This paper presents production plan optimization in the metal industry, considered as a multi-criteria programming problem. We first provide the definition of the multi-criteria programming problem and a classification of multi-criteria programming methods. We then apply two multi-criteria programming methods (the STEM method and the PROMETHEE method) to solving a multi-criteria production plan optimization problem in a company from the metal industry. The obtained resul...
Learning Based Approach for Optimal Clustering of Distributed Program's Call Flow Graph
Abofathi, Yousef; Zarei, Bager; Parsa, Saeed
Optimal clustering of a call flow graph for reaching maximum concurrency in the execution of distributable components is an NP-complete problem. Learning automata (LAs) are search tools that are used for solving many NP-complete problems. In this paper, a learning-based algorithm is proposed for optimal clustering of the call flow graph and appropriate distribution of programs at the network level. The algorithm uses the learning feature of LAs to search the state space. It is shown that the speed of reaching a solution increases remarkably when LAs are used in the search process, and that they also prevent the algorithm from being trapped in local minima. Experimental results show the superiority of the proposed algorithm over others.
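A minimal sketch of the learning-automaton building block the algorithm relies on is given below: a two-action linear reward-inaction (L_RI) scheme interacting with a stationary random environment. This is not the clustering algorithm itself, and the function name, parameters, and environment are illustrative assumptions.

```python
import random

def l_ri_automaton(reward_prob, steps=5000, a=0.05, seed=1):
    """Two-action linear reward-inaction (L_RI) learning automaton:
    on reward, shift probability toward the chosen action; on penalty,
    leave the probabilities unchanged."""
    rng = random.Random(seed)
    p = [0.5, 0.5]                              # action probabilities
    for _ in range(steps):
        action = 0 if rng.random() < p[0] else 1
        if rng.random() < reward_prob[action]:  # environment rewards
            p[action] += a * (1 - p[action])    # reinforce chosen action
            p[1 - action] = 1 - p[action]
        # on penalty, L_RI leaves the probabilities unchanged
    return p

# the automaton should come to favor action 0 (reward probability 0.8)
p = l_ri_automaton([0.8, 0.3])
```

In the clustering setting, each automaton's actions would correspond to candidate cluster assignments, with the environment's reward reflecting the concurrency achieved.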
International Nuclear Information System (INIS)
Pitchford, P.; Brown, T.
2001-01-01
This four-page fact sheet describes distributed energy resources for Federal facilities, which are being supported by the U.S. Department of Energy's (DOE's) Federal Energy Management Program (FEMP). Distributed energy resources include both existing and emerging energy technologies: advanced industrial turbines and microturbines; combined heat and power (CHP) systems; fuel cells; geothermal systems; natural gas reciprocating engines; photovoltaics and other solar systems; wind turbines; small, modular biopower; energy storage systems; and hybrid systems. DOE FEMP is investigating ways to use these alternative energy systems in government facilities to meet greater demand, to increase the reliability of the power-generation system, and to reduce the greenhouse gases associated with burning fossil fuels
International Nuclear Information System (INIS)
Mohammadi, Kasra; Alavi, Omid; Mostafaeipour, Ali; Goudarzi, Navid; Jalilvand, Mahdi
2016-01-01
Highlights: • The effectiveness of six numerical methods for determining wind power density is evaluated. • The more appropriate method for computing the daily wind power density is identified. • Four windy stations located in the southern part of Alberta, Canada are investigated. • The more appropriate parameter estimation method was not identical across the examined stations. - Abstract: In this study, the effectiveness of six numerical methods is evaluated to determine the shape (k) and scale (c) parameters of the Weibull distribution function for the purpose of calculating the wind power density. The selected methods are the graphical method (GP), empirical method of Justus (EMJ), empirical method of Lysen (EML), energy pattern factor method (EPF), maximum likelihood method (ML) and modified maximum likelihood method (MML). The purpose of this study is to identify the more appropriate method for computing the wind power density at four stations distributed in the Alberta province of Canada, namely Edmonton City Center Awos, Grande Prairie A, Lethbridge A and Waterton Park Gate. To provide a complete analysis, the evaluations are performed on both daily and monthly scales. The results indicate that the precision of the computed wind power density values changes when different parameter estimation methods are used to determine the k and c parameters. The four methods EMJ, EML, EPF and ML perform very favorably, while the GP method shows weak ability for all stations. However, the most effective method is not the same across stations, owing to differences in the wind characteristics.
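As a sketch of one of the six estimators, the empirical method of Justus (EMJ) derives k from the coefficient of variation and c from the mean and the gamma function, after which the mean wind power density follows from the Weibull moments. The check data below are idealized Weibull quantiles, not the Alberta measurements:

```python
import math

def weibull_emj(speeds):
    """Empirical method of Justus (EMJ): estimate Weibull k and c
    from the sample mean and standard deviation of wind speeds."""
    n = len(speeds)
    mean = sum(speeds) / n
    std = (sum((v - mean) ** 2 for v in speeds) / n) ** 0.5
    k = (std / mean) ** -1.086              # shape parameter
    c = mean / math.gamma(1 + 1 / k)        # scale parameter (m/s)
    return k, c

def wind_power_density(k, c, rho=1.225):
    """Mean wind power density (W/m^2) of a Weibull wind regime."""
    return 0.5 * rho * c ** 3 * math.gamma(1 + 3 / k)

# Idealized check data: dense quantiles of a Weibull(k=2, c=6) distribution.
u = [(i + 0.5) / 10000 for i in range(10000)]
speeds = [6.0 * (-math.log(1 - ui)) ** 0.5 for ui in u]
k_hat, c_hat = weibull_emj(speeds)
```

On these synthetic speeds the estimator recovers k ≈ 2 and c ≈ 6 m/s, giving a power density of roughly 175 W/m².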
Neutron distribution modeling based on integro-probabilistic approach of discrete ordinates method
International Nuclear Information System (INIS)
Khromov, V.V.; Kryuchkov, E.F.; Tikhomirov, G.V.
1992-01-01
In this paper a universal nodal method is described for neutron distribution calculations in reactor and shielding problems, based on the use of influence functions and factors of local-integrated volume and surface neutron sources in phase subregions. This method avoids the limited capabilities of the collision-probability method with respect to the detailed calculation of the angular neutron flux dependence, scattering anisotropy and empty channels. The proposed method may be considered a modification of the Sn method with the advantage of eliminating ray effects. The theory and algorithm of the method are described, followed by examples of its application to the calculation of the neutron distribution in a three-dimensional model of a fusion reactor blanket and in a highly heterogeneous reactor with an empty channel
International Nuclear Information System (INIS)
Ahmadi, Abdollah; Charwand, Mansour; Siano, Pierluigi; Nezhad, Ali Esmaeel; Sarno, Debora; Gitizadeh, Mohsen; Raeisi, Fatima
2016-01-01
In order to supply the demands of end users in a competitive market, a distribution company purchases energy from the wholesale market; further options are available if it possesses distributed generation units and interruptible loads. In this regard, this study presents a two-stage stochastic programming model for a distribution company's energy acquisition, to manage the involvement of different electric energy resources characterized by uncertainties at minimum cost. In particular, the distribution company's operations planning over a day-ahead horizon is modeled as a stochastic mathematical optimization with the objective of minimizing costs. By this, the distribution company's decisions on grid purchases, owned distributed generation units and interruptible load scheduling are determined. These decisions are then treated as boundary constraints in a second stage, which deals with the distribution company's operations in the hour-ahead market with the objective of minimizing the short-term cost. The uncertainties in spot market prices and wind speed are modeled by means of probability distribution functions of their forecast errors, and the roulette wheel mechanism and lattice Monte Carlo simulation are used to generate scenarios. Numerical results show the capability of the proposed method. - Highlights: • Proposing a new stochastic-based two-stage operations framework in retail competitive markets. • Proposing a mixed-integer non-linear stochastic programming model. • Employing the roulette wheel mechanism and lattice Monte Carlo simulation.
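The roulette wheel mechanism mentioned in the highlights can be sketched as a cumulative-probability wheel sampled with uniform draws; the discretized forecast-error values and probabilities below are hypothetical placeholders, not the paper's data:

```python
import bisect
import random

def roulette_wheel(values, probs, n_scenarios, rng):
    """Sample discrete scenarios: spin a uniform 'pointer' against the
    cumulative-probability wheel built from the forecast-error PDF."""
    cum, acc = [], 0.0
    for p in probs:
        acc += p
        cum.append(acc)
    return [values[bisect.bisect_left(cum, rng.random())]
            for _ in range(n_scenarios)]

# Hypothetical discretized price forecast errors ($/MWh) and probabilities.
errors = [-10.0, -5.0, 0.0, 5.0, 10.0]
probs = [0.10, 0.20, 0.40, 0.20, 0.10]
rng = random.Random(42)
scenarios = roulette_wheel(errors, probs, 10000, rng)
```

In the paper's setting each draw would select one bin of the price or wind-speed forecast-error distribution, and the sampled bins are then combined into scenarios for the two-stage optimization.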
Katano, Izumi; Harada, Ken; Doi, Hideyuki; Souma, Rio; Minamoto, Toshifumi
2017-01-01
Environmental DNA (eDNA) has recently been used for detecting the distribution of macroorganisms in various aquatic habitats. In this study, we applied an eDNA method to estimate the distribution of the Japanese clawed salamander, Onychodactylus japonicus, in headwater streams. Additionally, we compared the detection of eDNA and hand-capturing methods used for determining the distribution of O. japonicus. For eDNA detection, we designed a qPCR primer/probe set for O. japonicus using the 12S rRNA region. We detected the eDNA of O. japonicus at all sites (with the exception of one), where we also observed them by hand-capturing. Additionally, we detected eDNA at two sites where we were unable to observe individuals using the hand-capturing method. Moreover, we found that eDNA concentrations and detection rates of the two water sampling areas (stream surface and under stones) were not significantly different, although the eDNA concentration in the water under stones was more varied than that on the surface. We, therefore, conclude that eDNA methods could be used to determine the distribution of macroorganisms inhabiting headwater systems by using samples collected from the surface of the water.
Estimation of the distribution coefficient by combined application of two different methods
International Nuclear Information System (INIS)
Vogl, G.; Gerstenbrand, F.
1982-01-01
A simple, non-invasive method is presented which permits determination of the rCBF and, in addition, of the distribution coefficient of the grey matter. The latter, which is closely correlated with cerebral metabolism, has so far only been determined in vitro. The new method will be a means to check its accuracy. (orig.) [de
Uniform distribution and quasi-Monte Carlo methods discrepancy, integration and applications
Kritzer, Peter; Pillichshammer, Friedrich; Winterhof, Arne
2014-01-01
The survey articles in this book focus on number theoretic point constructions, uniform distribution theory, and quasi-Monte Carlo methods. As deterministic versions of the Monte Carlo method, quasi-Monte Carlo rules enjoy increasing popularity, with many fruitful applications in mathematical practice, as for example in finance, computer graphics, and biology.
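A quick numerical illustration of why quasi-Monte Carlo points help: integrating x² over [0, 1] with a van der Corput low-discrepancy sequence versus plain pseudo-random sampling (the integrand and sample size are arbitrary choices for the demonstration):

```python
import random

def van_der_corput(i, base=2):
    """i-th point of the van der Corput sequence (radical inverse in `base`)."""
    q, denom = 0.0, 1.0
    while i:
        denom *= base
        i, rem = divmod(i, base)
        q += rem / denom
    return q

def f(x):
    return x * x          # integrand; the true integral over [0, 1] is 1/3

n = 1024
qmc = sum(f(van_der_corput(i)) for i in range(n)) / n   # quasi-Monte Carlo
rng = random.Random(0)
mc = sum(f(rng.random()) for _ in range(n)) / n          # plain Monte Carlo
qmc_err = abs(qmc - 1.0 / 3.0)
mc_err = abs(mc - 1.0 / 3.0)
```

Because the first 2^k van der Corput points in base 2 are exactly the equispaced grid {i/2^k} (in shuffled order), the QMC error here decays like O(1/n), versus the O(1/√n) typical of Monte Carlo.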
Why liquid displacement methods are sometimes wrong in estimating the pore-size distribution
Gijsbertsen-Abrahamse, A.J.; Boom, R.M.; Padt, van der A.
2004-01-01
The liquid displacement method is a commonly used method to determine the pore size distribution of micro- and ultrafiltration membranes. One of the assumptions for the calculation of the pore sizes is that the pores are parallel and thus are not interconnected. To show that the estimated pore size
International Nuclear Information System (INIS)
Gao Gan
2015-01-01
Song [Song D 2004 Phys. Rev. A 69 034301] first proposed two key distribution schemes with a symmetry feature. We find that, in these schemes, the private channels through which Alice and Bob publicly announce the initial Bell state or the measurement result are not needed for discovering keys, and that Song's encoding methods are not optimal. Here, an optimized encoding method is given, improving the efficiency of Song's schemes by a factor of 7/3. Interestingly, this optimized encoding method can be extended to a key distribution scheme composed of generalized Bell states. (paper)
Research on distributed optical fiber sensing data processing method based on LabVIEW
Li, Zhonghu; Yang, Meifang; Wang, Luling; Wang, Jinming; Yan, Junhong; Zuo, Jing
2018-01-01
The pipeline leak detection and leak location problems have received extensive attention in industry. In this paper, a distributed optical fiber sensing system is designed for a heat supply pipeline. The data processing method for distributed optical fiber sensing based on LabVIEW is studied in detail. The hardware system includes the laser, sensing optical fiber, wavelength division multiplexer, photoelectric detector, data acquisition card and computer. The software system is developed in LabVIEW and adopts a wavelet denoising method to process the temperature information, which improves the SNR. By extracting characteristic values from the fiber temperature information, the system realizes temperature measurement, leak location, and measurement signal storage and query. Compared with the traditional negative pressure wave or acoustic signal methods, the distributed optical fiber temperature measuring system can measure several temperatures in one measurement and locate the leak point accurately. It has broad application prospects.
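The paper does not state which wavelet family or threshold rule its LabVIEW code uses; as an illustration of the denoising step, here is a pure-NumPy multi-level Haar transform with soft thresholding (production code would more likely use PyWavelets and a richer wavelet), applied to a toy temperature trace with a step change buried in sensor noise:

```python
import numpy as np

def haar_denoise(signal, threshold, levels=3):
    """Multi-level Haar wavelet decomposition, soft-threshold the detail
    coefficients, then reconstruct. Signal length must be divisible by 2**levels."""
    s = np.asarray(signal, dtype=float)
    r2 = np.sqrt(2.0)
    details = []
    for _ in range(levels):                      # forward Haar transform
        approx = (s[0::2] + s[1::2]) / r2
        detail = (s[0::2] - s[1::2]) / r2
        details.append(detail)
        s = approx
    for i, d in enumerate(details):              # soft-threshold the details
        details[i] = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    for d in reversed(details):                  # inverse transform
        out = np.empty(2 * len(s))
        out[0::2] = (s + d) / r2
        out[1::2] = (s - d) / r2
        s = out
    return s

# Toy "pipeline temperature trace": a step change plus Gaussian sensor noise.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(128), np.ones(128)])
noisy = clean + 0.1 * rng.normal(size=256)
denoised = haar_denoise(noisy, threshold=0.1 * np.sqrt(2 * np.log(256)))
```

The threshold follows the common universal rule sigma·sqrt(2·ln n); a sharper step (a leak signature) survives because its energy sits in a few large coefficients that the soft threshold barely shrinks.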
A calculation method for transient flow distribution of SCWR(CSR1000)
International Nuclear Information System (INIS)
Chen, Juan; Zhou, Tao; Chen, Jie; Liu, Liang; Muhammad, Ali Shahzad; Muhammad, Zeeshan Ali; Xia, Bangyang
2017-01-01
The supercritical water reactor CSR1000 is selected for this study. A transient flow distribution module for parallel channels is developed, which solves unsteady nonlinear equations. The incorporated SCAC-CSR1000 programs are executed for normal and abnormal operating conditions. The analysis shows that: 1. The transient flow distribution module can incorporate the parallel channel flow calculation with an error of less than 0.1%; 2. After a total loss of coolant flow, the flow in each channel shows a downward trend; 3. In the event of a flow accident, the coolant flow in the first channel shows an increasing trend.
Scott, Gary D.; Chapman, Alberta
The Kentucky student follow-up system was studied to identify the current status of follow-up activities in business and office education and marketing and distributive education; to identify the impact of follow-up data on these programs; to identify program components for which detailed follow-up can provide information to assist in program…
Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi
2016-02-01
Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We
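The best-performing approach in the study — fitting the parametric CDF to the cumulative distribution of observed values — can be sketched as follows; here a lognormal is assumed, a plain grid search stands in for a proper optimizer, and the "data" are idealized cumulative probabilities at the sampling-interval bounds rather than real retention-time observations:

```python
import math

def lognorm_cdf(t, mu, sigma):
    """CDF of a lognormal with log-scale mean mu and log-scale sd sigma."""
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

def fit_cumulative(bounds, cum_props, mus, sigmas):
    """Least-squares fit of the lognormal CDF to the cumulative proportion
    of propagules retained up to each sampling-interval bound."""
    best = (float("inf"), None, None)
    for mu in mus:
        for sg in sigmas:
            sse = sum((lognorm_cdf(b, mu, sg) - p) ** 2
                      for b, p in zip(bounds, cum_props))
            if sse < best[0]:
                best = (sse, mu, sg)
    return best[1], best[2]

# Idealized data: exact cumulative probabilities of a lognormal(mu=3, sigma=0.5)
# at the upper bounds of the sampling intervals (retention time in minutes).
bounds = [30, 60, 90, 120, 180, 240, 360, 480]
cum_props = [lognorm_cdf(b, 3.0, 0.5) for b in bounds]
mus = [2.5 + 0.01 * i for i in range(101)]
sigmas = [0.1 + 0.01 * i for i in range(91)]
mu_hat, sigma_hat = fit_cumulative(bounds, cum_props, mus, sigmas)
```

Fitting the cumulative curve uses every interval bound jointly, which is why it is less sensitive to the choice of lower/mid/upper bound than fitting the binned data points directly.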
2010-07-01
... LEAs for basic grants, concentration grants, targeted grants, and education finance incentive grants in... TITLE I-IMPROVING THE ACADEMIC ACHIEVEMENT OF THE DISADVANTAGED Improving Basic Programs Operated by... to the Secretary to use an alternative method to distribute basic grant, concentration grant...
PREDICTION OF MEAT PRODUCT QUALITY BY THE MATHEMATICAL PROGRAMMING METHODS
Directory of Open Access Journals (Sweden)
A. B. Lisitsyn
2016-01-01
Full Text Available Abstract Use of prediction technologies is one of the directions of the research work carried out both in Russia and abroad. Meat processing is accompanied by complex physico-chemical, biochemical and mechanical processes. To predict the behavior of meat raw material during technological processing, a complex of physico-technological and structural-mechanical indicators that objectively reflects its quality is used. Among these indicators are pH value, water-binding and fat-holding capacities, water activity, adhesiveness, viscosity, plasticity and so on. The paper demonstrates the influence of animal proteins (beef and pork) on the physico-chemical and functional properties, before and after thermal treatment, of minced meat made from meat raw material with different contents of connective and fat tissues. On the basis of the experimental data, the model (stochastic dependence) parameters linking the quantitative resultant and factor variables were obtained using regression analysis, and the degree of correlation with the experimental data was assessed. The maximum allowable levels of replacement of meat raw material with animal proteins (beef and pork) were established by the methods of mathematical programming. Use of these information technologies will significantly reduce the cost of the experimental search for, and substantiation of, the optimal level of replacement of meat raw material with animal proteins (beef, pork), and will also allow establishing a relationship between product quality indicators and the quantity and quality of minced meat ingredients.
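As a purely hypothetical illustration of the regression-plus-optimization workflow (the numbers below are invented, not the paper's measurements): fit a quality indicator against the replacement level, then read off the maximum level that keeps the predicted indicator above a required threshold:

```python
import numpy as np

# Hypothetical data: water-binding capacity (WBC, %) of minced meat
# versus the level of animal-protein replacement (%).
level = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
wbc = np.array([68.0, 69.5, 70.2, 69.8, 68.1, 65.0, 60.5])

# Quadratic regression model linking the factor variable (replacement level)
# to the resultant quality indicator.
coeffs = np.polyfit(level, wbc, 2)
model = np.poly1d(coeffs)

# Maximum allowable replacement: the highest level at which the predicted
# WBC still meets a (hypothetical) quality requirement of 65%.
grid = np.linspace(0.0, 30.0, 301)
feasible = grid[model(grid) >= 65.0]
max_level = float(feasible.max())
```

With realistic data this last step becomes a constrained mathematical programming problem over several quality indicators at once, as in the paper.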
Effects of mixing methods on phase distribution in vertical bubble flow
International Nuclear Information System (INIS)
Monji, Hideaki; Matsui, Goichi; Sugiyama, Takayuki.
1992-01-01
The mechanism of the phase distribution formation in a bubble flow is one of the most important problems in the control of two-phase flow systems. The effect of mixing methods on the phase distribution was experimentally investigated by using upward nitrogen gas-water bubble flow under the condition of fixed flow rates. The experimental results show that the diameter of the gas injection hole influences the phase distribution through the bubble size. The location of the injection hole and the direction of injection do not influence the phase distribution of fully developed bubble flow. The transition equivalent bubble size from the coring bubble flow to the sliding bubble flow corresponds to the bubble shape transition. The analytical results show that the phase distribution may be predictable if the phase profile is judged from the bubble size. (author)
Mixed-Methods Assessment of Trauma and Acute Care Surgical Quality Improvement Programs in Peru.
LaGrone, Lacey N; Fuhs, Amy K; Egoavil, Eduardo Huaman; Rodriguez Castro, Manuel J A; Valderrama, Roberto; Isquith-Dicker, Leah N; Herrera-Matta, Jaime; Mock, Charles N
2017-04-01
Evidence for the positive impact of quality improvement (QI) programs on morbidity, mortality, patient satisfaction, and cost is strong. Data regarding the status of QI programs in low- and middle-income countries, as well as in-depth examination of barriers and facilitators to their implementation, are limited. This cross-sectional, descriptive study employed a mixed-methods design, including distribution of an anonymous quantitative survey and individual interviews with healthcare providers who participate in the care of the injured at ten large hospitals in Lima, Peru. Key areas identified for improvement in morbidity and mortality (M&M) conferences were the standardization of case selection, incorporation of evidence from the medical literature into case presentation and discussion, case documentation, and the development of a clear plan for case follow-up. The key barriers to QI program implementation were a lack of prioritization of QI, lack of sufficient human and administrative resources, lack of political support, and lack of education on QI practices. A national program that makes QI a required part of all health providers' professional training and responsibilities would effectively address a majority of identified barriers to QI programs in Peru. Specifically, the presence of basic QI elements, such as M&M conferences, should be required at hospitals that train pre-graduate physicians. Alternatively, short of this national-level organization, efforts that capitalize on local examples through apprenticeships between institutions or integration of QI into continuing medical education would be expected to build on the facilitators for QI programs that exist in Peru.
Sediment spatial distribution evaluated by three methods and its relation to some soil properties
Energy Technology Data Exchange (ETDEWEB)
Bacchi, O.O.S. [Centro de Energia Nuclear na Agricultura (CENA/USP), Laboratorio de Fisica do Solo, Piracicaba, SP (Brazil)]; Reichardt, K. [Centro de Energia Nuclear na Agricultura (CENA/USP), Laboratorio de Fisica do Solo, Piracicaba, SP (Brazil); Departamento de Ciencias Exatas, Escola Superior de Agricultura 'Luiz de Queiroz' (ESALQ/USP), Piracicaba, SP (Brazil)]; Sparovek, G. [Departamento de Solos e Nutricao de Plantas, Escola Superior de Agricultura 'Luiz de Queiroz' (ESALQ/USP), Piracicaba, SP (Brazil)]
2003-02-15
An investigation of rates and spatial distribution of sediments on an agricultural field cultivated with sugarcane was undertaken using the 137Cs technique, USLE and WEPP models. The study was carried out on the Ceveiro watershed of the Piracicaba river basin, state of Sao Paulo, Brazil, experiencing severe soil degradation due to soil erosion. The objectives of the study were to compare the spatial distribution of sediments evaluated by the three methods and its relation to some soil properties. Erosion and sedimentation rates and their spatial distribution estimated by the three methods were completely different. Although not able to show sediment deposition, the spatial distribution of erosion rates evaluated by USLE presented the best correlation with other studied soil properties. (author)
A Study of Economical Incentives for Voltage Profile Control Method in Future Distribution Network
Tsuji, Takao; Sato, Noriyuki; Hashiguchi, Takuhei; Goda, Tadahiro; Tange, Seiji; Nomura, Toshio
In a future distribution network, it will be difficult to maintain the system voltage because a large number of distributed generators are introduced into the system. The authors have proposed a "voltage profile control method" using power factor control of distributed generators in previous work. However, an economic disbenefit is caused by the decrease in active power when the power factor is controlled in order to increase the reactive power. Therefore, proper incentives must be given to the customers that cooperate with the voltage profile control method. Thus, in this paper, we develop new rules which can decide the economic incentives for these customers. The method is tested on a one-feeder distribution network model and its effectiveness is shown.
Directory of Open Access Journals (Sweden)
Mert Bayram Ali
2017-12-01
Full Text Available In this study, a practical and educational geostatistical program (JeoStat) was first developed, and then an example analysis of the porosity parameter distribution, using oilfield data, was presented.
CSIR Research Space (South Africa)
Nice, Jaco A
2015-07-01
Full Text Available This paper presents a theoretical and experimental research approach on the impact of spatial planning and functional program on the microbial load, distribution and organism diversity in hospital environments. The investigation aims to identify...
Higher moments method for generalized Pareto distribution in flood frequency analysis
Zhou, C. R.; Chen, Y. F.; Huang, Q.; Gu, S. H.
2017-08-01
The generalized Pareto distribution (GPD) has proven to be the ideal distribution for fitting peak-over-threshold series in flood frequency analysis. Several moments-based estimators are applied to estimate the parameters of the GPD. Higher linear moments (LH moments) and higher probability weighted moments (HPWM) are linear combinations of probability weighted moments (PWM). In this study, the relationship between them is explored. A series of statistical experiments and a case study are used to compare their performances. The results show that if the same PWM are used in the LH moments and HPWM methods, the parameters estimated by these two methods are unbiased. In particular, when the same PWM are used, the PWM method (or the HPWM method when the order equals 0) gives identical parameter estimates to the linear moments (L-moments) method. This equivalence also holds for r ≥ 1 when the same-order PWM are used in the HPWM and LH moments methods.
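The L-moment route to GPD parameters (the order-0 case in the abstract's equivalence) can be sketched as follows, using Hosking's parametrization of the GPD; the check data are idealized GPD quantiles rather than a real flood series:

```python
def sample_l_moments(data):
    """First three sample L-moments via probability weighted moments (PWM):
    b_r = (1/n) * sum over ordered x of [(i-1)...(i-r) / ((n-1)...(n-r))] * x_(i)."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * xi for i, xi in enumerate(x)) / (n * (n - 1))
    b2 = sum(i * (i - 1) * xi for i, xi in enumerate(x)) / (n * (n - 1) * (n - 2))
    return b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0   # l1, l2, l3

def gpd_lmom(data):
    """GPD parameters (location xi, scale alpha, shape k, Hosking's convention:
    l1 = xi + alpha/(1+k), l2 = alpha/((1+k)(2+k)), tau3 = (1-k)/(3+k))."""
    l1, l2, l3 = sample_l_moments(data)
    t3 = l3 / l2                        # L-skewness
    k = (1 - 3 * t3) / (1 + t3)
    alpha = l2 * (1 + k) * (2 + k)
    xi = l1 - alpha / (1 + k)
    return xi, alpha, k

# Idealized check data: dense quantiles of a GPD with xi=0, alpha=1, k=0.2,
# via the Hosking quantile function x = xi + alpha*(1-(1-u)**k)/k.
n = 5000
u = [(i + 0.5) / n for i in range(n)]
data = [(1 - (1 - ui) ** 0.2) / 0.2 for ui in u]
xi_hat, alpha_hat, k_hat = gpd_lmom(data)
```

Note that sign conventions for the GPD shape parameter differ between references (e.g. SciPy's `genpareto` uses the opposite sign to Hosking's k).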
Projection methods for the analysis of molecular-frame photoelectron angular distributions
International Nuclear Information System (INIS)
Grum-Grzhimailo, A.N.; Lucchese, R.R.; Liu, X.-J.; Pruemper, G.; Morishita, Y.; Saito, N.; Ueda, K.
2007-01-01
A projection method is developed for extracting the nondipole contribution from the molecular frame photoelectron angular distributions of linear molecules. A corresponding convenient parametric form for the angular distributions is derived. The analysis was performed for the N 1s photoionization of the NO molecule a few eV above the ionization threshold. No detectable nondipole contribution was found for the photon energy of 412 eV
A method and programme (BREACH) for predicting the flow distribution in water cooled reactor cores
International Nuclear Information System (INIS)
Randles, J.; Roberts, H.A.
1961-03-01
The method presented here of evaluating the flow rate in individual reactor channels may be applied to any type of water cooled reactor in which boiling occurs. The flow distribution is calculated with the aid of a MERCURY autocode programme, BREACH, which is described in detail. This programme computes the steady-state longitudinal void distribution and pressure drop in a single channel on the basis of the homogeneous model of two-phase flow. (author)
Pascual Pañach, Josep
2010-01-01
Leaks are present in all water distribution systems. In this paper a method for leak detection and localisation is presented. It uses pressure measurements and simulation models. The leak localisation methodology is based on a pressure sensitivity matrix. The sensitivity is normalised and binarised using a common threshold for all nodes, so a signature matrix is obtained. A methodology for the optimal distribution of pressure sensors is also developed, but it is not used in the real test. To validate this...
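A minimal sketch of the signature-matrix idea, with invented sensitivities for a four-sensor, three-candidate-node toy network; the matching rule used here (count of agreeing binary entries) is one plausible choice, not necessarily the one in the original method:

```python
import numpy as np

def signature_matrix(sensitivity, threshold=0.5):
    """Normalise each leak column of the pressure-sensitivity matrix and
    binarise it with a single common threshold."""
    s = np.abs(sensitivity)
    s = s / s.max(axis=0, keepdims=True)       # per-leak normalisation
    return (s >= threshold).astype(int)

def localise(residuals, signatures, threshold=0.5):
    """Binarise the measured pressure-residual vector the same way, then
    return the candidate leak node whose signature agrees with it best."""
    r = (np.abs(residuals) / np.abs(residuals).max() >= threshold).astype(int)
    agreement = (signatures == r[:, None]).sum(axis=0)
    return int(np.argmax(agreement))

# Toy network: 4 pressure sensors x 3 candidate leak nodes (hypothetical values
# that a hydraulic simulation model would normally provide).
S = np.array([[0.9, 0.1, 0.2],
              [0.8, 0.2, 0.7],
              [0.1, 0.9, 0.6],
              [0.2, 0.8, 0.1]])
sig = signature_matrix(S)
measured = np.array([0.05, 0.3, 0.85, 0.75])   # residual pattern of leak node 1
leak_node = localise(measured, sig)
```

In practice the sensitivity matrix comes from perturbing a calibrated hydraulic model with a unit leak at each candidate node and recording the pressure response at each sensor.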
International Nuclear Information System (INIS)
Imae, Toshikazu; Takenaka, Shigeharu; Saotome, Naoya
2016-01-01
The purpose of this study was to evaluate a post-analysis method for the cumulative dose distribution in stereotactic body radiotherapy (SBRT) using volumetric modulated arc therapy (VMAT). VMAT is capable of acquiring respiratory signals derived from projection images and machine parameters based on machine logs during VMAT delivery. Dose distributions were reconstructed from the respiratory signals and machine parameters under conditions where the respiratory signals were undivided, or divided into 4 or 10 phases. The dose distribution of each respiratory phase was calculated on the planned four-dimensional CT (4DCT). Summation of the dose distributions was carried out using deformable image registration (DIR), and the cumulative dose distributions were compared with those of the corresponding plans. Without division, dose differences between the cumulative distribution and the plan were not significant. When the respiratory signals were divided, dose differences were observed: overdose in the cranial region and underdose in the caudal region of the planning target volume (PTV). Differences between 4 and 10 phases were not significant. The present method was feasible for evaluating the cumulative dose distribution in VMAT-SBRT using 4DCT and DIR. (author)
Kosenko, Viktor; Persiyanova, Elena; Belotskyy, Oleksiy; Malyeyeva, Olga
2017-01-01
The subject matter of the article is information and communication networks (ICN) of critical infrastructure systems (CIS). The goal of the work is to create methods for managing the data flows and resources of the ICN of CIS to improve the efficiency of information processing. The following tasks were solved in the article: the data flow model of multi-level ICN structure was developed, the method of adaptive distribution of data flows was developed, the method of network resource assignment...
Method of determining local distribution of water or aqueous solutions penetrated into plastics
International Nuclear Information System (INIS)
Krejci, M.; Joks, Z.
1983-01-01
Penetrating water is labelled with tritium and its distribution is monitored autoradiographically. The novelty consists in cooling the plastic containing the penetrating water or aqueous solution with liquid nitrogen; under a stream of liquid nitrogen the plastic is cut and then exposed on autoradiographic film in a freezer at temperatures from -15 to -30 degC. The autoradiogram shows the distribution of water over the whole area of the section. The described method may also be used to detect the water distribution in filled plastics. (J.P.)
Finite difference applied to the reconstruction method of the nuclear power density distribution
International Nuclear Information System (INIS)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.
2016-01-01
Highlights: • A method for reconstruction of the power density distribution is presented. • The method uses a finite-difference discretization of the 2D neutron diffusion equation. • The discretization is performed on homogeneous meshes with the dimensions of a fuel cell. • The discretization is combined with flux distributions on the four node surfaces. • The maximum errors in the reconstruction occur in the peripheral water region. - Abstract: In this reconstruction method the two-dimensional (2D) neutron diffusion equation is discretized by finite differences, applied to two energy groups (2G) and to meshes with fuel-pin cell dimensions. The Nodal Expansion Method (NEM) makes use of surface discontinuity factors of the node and provides the reconstruction method with the effective multiplication factor of the problem and the four surface-averaged fluxes in homogeneous nodes the size of a fuel assembly (FA). The reconstruction process combines the 2D diffusion equation discretized by finite differences with the flux distributions on the four surfaces of the nodes. These distributions are obtained, for each surface, from a fourth-order one-dimensional (1D) polynomial expansion with five coefficients to be determined. The conditions necessary for determining the coefficients are three average fluxes on consecutive surfaces of three nodes and two fluxes at the corners between these three surface fluxes. The corner fluxes of the node are determined using a third-order 1D polynomial expansion with four coefficients. This reconstruction method uses heterogeneous nuclear parameters directly, providing the heterogeneous neutron flux distribution and the detailed nuclear power density distribution within the FAs. The results obtained with this method show good accuracy and efficiency when compared with reference values.
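The paper's method is two-dimensional and two-group; as a minimal illustration of the finite-difference ingredient alone, a one-group, one-dimensional slab diffusion problem with a constant source can be discretized and solved as follows (all parameter values are arbitrary):

```python
import numpy as np

# Solve -D*phi'' + Sa*phi = S on [0, L] with phi(0) = phi(L) = 0,
# using second-order central differences on a uniform mesh.
D, Sa, S, L, n = 1.0, 0.1, 1.0, 10.0, 200
h = L / n
x = np.linspace(0.0, L, n + 1)

A = np.zeros((n - 1, n - 1))
for i in range(n - 1):
    A[i, i] = 2 * D / h**2 + Sa          # diagonal: leakage + absorption
    if i > 0:
        A[i, i - 1] = -D / h**2          # coupling to the left neighbour
    if i < n - 2:
        A[i, i + 1] = -D / h**2          # coupling to the right neighbour
phi = np.zeros(n + 1)
phi[1:n] = np.linalg.solve(A, np.full(n - 1, S))

# Analytic solution of the constant-source slab, for comparison:
# phi(x) = (S/Sa) * (1 - cosh((x - L/2)/Ld) / cosh(L/(2*Ld))), Ld = sqrt(D/Sa).
Ld = np.sqrt(D / Sa)
phi_exact = (S / Sa) * (1 - np.cosh((x - L / 2) / Ld) / np.cosh(L / (2 * Ld)))
```

In the paper's setting this finite-difference system is assembled per fuel-cell mesh and closed with the NEM surface-averaged fluxes instead of the zero boundary values used here.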
Winkler, Sabune J; Cagliero, Enrico; Witte, Elizabeth; Bierer, Barbara E
2014-08-01
The Harvard Clinical and Translational Science Center ("Harvard Catalyst") Research Subject Advocacy (RSA) Program has reengineered subject advocacy, distributing the delivery of advocacy functions through a multi-institutional, central platform rather than vesting these roles and responsibilities in a single individual functioning as a subject advocate. The program is process-oriented and output-driven, drawing on the strengths of participating institutions to engage local stakeholders both in the protection of research subjects and in advocacy for subjects' rights. The program engages stakeholder communities in the collaborative development and distributed delivery of accessible and applicable educational programming and resources. The Harvard Catalyst RSA Program identifies, develops, and supports the sharing and distribution of expertise, education, and resources for the benefit of all institutions, with a particular focus on the frontline: research subjects, researchers, research coordinators, and research nurses. © 2014 Wiley Periodicals, Inc.
Kassa, Semu Mitiku; Tsegay, Teklay Hailay
2017-08-01
Tri-level optimization problems are optimization problems with three nested hierarchical structures, where in most cases conflicting objectives are set at each level of hierarchy. Such problems are common in management, engineering designs and in decision making situations in general, and are known to be strongly NP-hard. Existing solution methods lack universality in solving these types of problems. In this paper, we investigate a tri-level programming problem with quadratic fractional objective functions at each of the three levels. A solution algorithm has been proposed by applying fuzzy goal programming approach and by reformulating the fractional constraints to equivalent but non-fractional non-linear constraints. Based on the transformed formulation, an iterative procedure is developed that can yield a satisfactory solution to the tri-level problem. The numerical results on various illustrative examples demonstrated that the proposed algorithm is very much promising and it can also be used to solve larger-sized as well as n-level problems of similar structure.
Comparison of Two Methods Used to Model Shape Parameters of Pareto Distributions
Liu, C.; Charpentier, R.R.; Su, J.
2011-01-01
Two methods are compared for estimating the shape parameters of Pareto field-size (or pool-size) distributions for petroleum resource assessment. Both methods assume mature exploration in which most of the larger fields have been discovered. Both methods use the sizes of larger discovered fields to estimate the numbers and sizes of smaller fields: (1) the tail-truncated method uses a plot of field size versus size rank, and (2) the log-geometric method uses data binned in field-size classes and the ratios of adjacent bin counts. Simulation experiments were conducted using discovered oil and gas pool-size distributions from four petroleum systems in Alberta, Canada and using Pareto distributions generated by Monte Carlo simulation. The estimates of the shape parameters of the Pareto distributions, calculated by both the tail-truncated and log-geometric methods, generally stabilize where discovered pool numbers are greater than 100. However, with fewer than 100 discoveries, these estimates can vary greatly with each new discovery. The estimated shape parameters of the tail-truncated method are more stable and larger than those of the log-geometric method where the number of discovered pools is more than 100. Both methods, however, tend to underestimate the shape parameter. Monte Carlo simulation was also used to create sequences of discovered pool sizes by sampling from a Pareto distribution with a discovery process model using a defined exploration efficiency (in order to show how biased the sampling was in favor of larger fields being discovered first). A higher (more biased) exploration efficiency gives better estimates of the Pareto shape parameters. © 2011 International Association for Mathematical Geosciences.
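The two estimators can be sketched as follows, in simplified form: the tail-truncated version here is a plain log-log rank-size regression, and the log-geometric version averages the ratios of adjacent geometric bin counts; the check data are idealized Pareto quantiles, not discovery records:

```python
import math

def shape_tail_truncated(sizes):
    """Tail-truncated estimator (simplified): regress log(rank) on log(size)
    for sizes sorted largest-first; the negative slope estimates the shape."""
    x = sorted(sizes, reverse=True)
    logs = [math.log(v) for v in x]
    logr = [math.log(r + 1) for r in range(len(x))]
    mx = sum(logs) / len(logs)
    my = sum(logr) / len(logr)
    sxy = sum((a - mx) * (b - my) for a, b in zip(logs, logr))
    sxx = sum((a - mx) ** 2 for a in logs)
    return -sxy / sxx

def shape_log_geometric(sizes, ratio=2.0, min_count=30):
    """Log-geometric estimator (simplified): bin sizes into geometric classes;
    for a Pareto, adjacent bin counts have ratio = ratio**(-shape)."""
    counts = {}
    for v in sizes:
        j = int(math.log(v) // math.log(ratio))
        counts[j] = counts.get(j, 0) + 1
    ratios = [counts[j + 1] / counts[j] for j in sorted(counts)
              if j + 1 in counts and counts[j + 1] >= min_count]
    return -math.log(sum(ratios) / len(ratios)) / math.log(ratio)

# Idealized field-size sample: quantiles of a Pareto(shape=1.5, minimum=1).
n = 10000
sizes = [(1 - (i + 0.5) / n) ** (-1 / 1.5) for i in range(n)]
```

With only the largest ~100 "discovered" sizes, both estimates begin to wander, which mirrors the stabilization threshold reported in the abstract.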
Scaling Agile Methods for Department of Defense Programs
2016-12-01
DISTRIBUTION STATEMENT A: This material has been approved for public release and unlimited distribution.
A FPGA-based identity authority method in quantum key distribution system
International Nuclear Information System (INIS)
Cui Ke; Luo Chunli; Zhang Hongfei; Lin Shengzhao; Jin Ge; Wang Jian
2012-01-01
In this article, an identity authority method realized in hardware is developed for use in quantum key distribution (QKD) systems. The method is based on an LFSR-Toeplitz hashing matrix. Its benefits lie in its easy implementation in hardware and its high security coefficient. Very high security can be gained by splitting off part of the final key generated by the QKD system for use as the seed required by the identity authority method. We propose a specific flow for the identity authority method according to the problems and features of the hardware. The proposed method can satisfy many kinds of QKD systems. (authors)
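The core of an LFSR-Toeplitz construction is a Toeplitz matrix over GF(2) whose diagonals are filled from an LFSR bit stream, so only one bit sequence (seeded from shared key material) needs to be stored. The tap positions, register length and tag length below are illustrative assumptions, not the authors' hardware parameters:

```python
def lfsr_bits(seed, taps, n):
    """Generate n bits from a simple Fibonacci LFSR (illustrative taps)."""
    state = list(seed)
    out = []
    for _ in range(n):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

def toeplitz_hash(message_bits, seed, taps, tag_len):
    """Hash a bit string to a tag_len-bit tag using a Toeplitz matrix T
    whose diagonals come from the LFSR stream; T[i][j] = diag[i - j + m - 1],
    and the product is taken over GF(2) (XOR/AND arithmetic)."""
    m = len(message_bits)
    diag = lfsr_bits(seed, taps, tag_len + m - 1)  # one value per diagonal
    tag = []
    for i in range(tag_len):
        acc = 0
        for j in range(m):
            acc ^= diag[i - j + m - 1] & message_bits[j]
        tag.append(acc)
    return tag

seed = [1, 0, 1, 1, 0, 0, 1, 0]  # would come from reserved QKD key bits
taps = (0, 2, 3, 7)
tag1 = toeplitz_hash([1, 0, 1, 1, 1, 0, 0, 1], seed, taps, 4)
```

Because the matrix is fully determined by one short seed, the scheme maps naturally onto shift registers in an FPGA, which is the implementation advantage the abstract highlights.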
Root Causes of Component Failures Program: Methods and applications
International Nuclear Information System (INIS)
Satterwhite, D.G.; Cadwallader, L.C.; Vesely, W.E.; Meale, B.M.
1986-12-01
This report contains information pertaining to definitions, methodologies, and applications of root cause analysis. Of specific interest, and highlighted throughout the discussion, are applications pertaining to current and future Nuclear Regulatory Commission (NRC) light water reactor safety programs. These applications are discussed in view of addressing specific program issues under NRC consideration and reflect current root cause analysis capabilities.
Bayuk, Milla; Bayuk, Barry S.
A program currently in use by the military that gives instruction in the so-called "sensitive" languages is based on the "Army Method" which was initiated in military language programs during World War II. Attention to the sensitive language program initiated a review of the programs, especially those conducted by the military intelligence schools…
Frontiers in economic research on petroleum allocation using mathematical programming methods
International Nuclear Information System (INIS)
Rowse, J.
1991-01-01
This paper surveys the state of the art in operations research techniques applied to petroleum allocation, namely mathematical programming methods, with principal attention directed toward linear programming and nonlinear programming (including quadratic programming). Contributions to the economics of petroleum allocation are discussed for international trade, industrial organization, regional/macro economics, public finance and natural resource/environmental economics.
Directory of Open Access Journals (Sweden)
Shujing Su
2015-01-01
Full Text Available To address the dispersion of measured parameters in large factories, storehouses, and other applications, a distributed parameter measurement system based on a ring network is designed. The structure of the system and the circuit design of the master-slave nodes are described briefly. The basic protocol architecture for transmission communication is introduced, and two kinds of distributed transmission control methods are then proposed. Finally, the reliability, extendibility, and control characteristics of these two methods are tested through a series of experiments, and the measurement results are compared and discussed.
Networked and Distributed Control Method with Optimal Power Dispatch for Islanded Microgrids
DEFF Research Database (Denmark)
Li, Qiang; Peng, Congbo; Chen, Minyou
2017-01-01
of controllable agents. The distributed control laws derived from the first subgraph guarantee the supply-demand balance, while further control laws from the second subgraph reassign the outputs of controllable distributed generators, which ensure active and reactive power are dispatched optimally. However...... according to our proposition. Finally, the method is evaluated over seven cases via simulation. The results show that the system performs as desired, even if environmental conditions and load demand fluctuate significantly. In summary, the method can rapidly respond to fluctuations resulting in optimal...
An improved method for calculating force distributions in moment-stiff timber connections
DEFF Research Database (Denmark)
Ormarsson, Sigurdur; Blond, Mette
2012-01-01
An improved method for calculating force distributions in moment-stiff metal dowel-type timber connections is presented, a method based on use of three-dimensional finite element simulations of timber connections subjected to moment action. The study that was carried out aimed at determining how...... the slip modulus varies with the angle between the direction of the dowel forces and the fibres in question, as well as how the orthotropic stiffness behaviour of the wood material affects the direction and the size of the forces. It was assumed that the force distribution generated by the moment action...
Methods to determine fast-ion distribution functions from multi-diagnostic measurements
DEFF Research Database (Denmark)
Jacobsen, Asger Schou; Salewski, Mirko
-ion diagnostic views, it is possible to infer the distribution function using a tomography approach. Several inversion methods for solving this tomography problem in velocity space are implemented and compared. It is found that the best quality is obtained when using inversion methods which penalise steep......Understanding the behaviour of fast ions in a fusion plasma is very important, since the fusion-born alpha particles are expected to be the main source of heating in a fusion power plant. Preferably, the entire fast-ion velocity-space distribution function would be measured. However, no fast...
John R. Jones
1985-01-01
Quaking aspen is the most widely distributed native North American tree species (Little 1971, Sargent 1890). It grows in a great diversity of regions, environments, and communities (Harshberger 1911). Only one deciduous tree species in the world, the closely related Eurasian aspen (Populus tremula), has a wider range (Weigle and Frothingham 1911)....
International Nuclear Information System (INIS)
Kane, V.E.
1979-10-01
The standard maximum likelihood and moment estimation procedures are shown to have some undesirable characteristics for estimating the parameters in a three-parameter lognormal distribution. A class of goodness-of-fit estimators is found which provides a useful alternative to the standard methods. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Shapiro-Francia tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted-order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Bias and robustness of the procedures are examined and example data sets analyzed, including geochemical data from the National Uranium Resource Evaluation Program.
IFNA approved Chinese Anaesthesia Nurse Education Program: A Delphi method.
Hu, Jiale; Fallacaro, Michael D; Jiang, Lili; Wu, Junyan; Jiang, Hong; Shi, Zhen; Ruan, Hong
2017-09-01
Numerous nurses work in operating rooms and recovery rooms or participate in the performance of anaesthesia in China. However, the scope of practice and the education for Chinese Anaesthesia Nurses is not standardized, varying from one geographic location to another. Furthermore, most nurses are not trained sufficiently to provide anaesthesia care. This study aimed to develop the first Anaesthesia Nurse Education Program in Mainland China based on the Educational Standards of the International Federation of Nurse Anaesthetists. The Delphi technique was applied to develop the scope of practice, competencies for Chinese Anaesthesia Nurses and education program. In 2014 the Anaesthesia Nurse Education Program established by the hospital applied for recognition by the International Federation of Nurse Anaesthetists. The Program's curriculum was evaluated against the IFNA Standards and recognition was awarded in 2015. The four-category, 50-item practice scope, and the three-domain, 45-item competency list were identified for Chinese Anaesthesia Nurses. The education program, which was established based on the International Federation of Nurse Anaesthetists educational standards and Chinese context, included nine curriculum modules. In March 2015, 13 candidates received and passed the 21-month education program. The Anaesthesia Nurse Education Program became the first program approved by the International Federation of Nurse Anaesthetists in China. Policy makers and hospital leaders can be confident that anaesthesia nurses graduating from this Chinese program will be prepared to demonstrate high level patient care as reflected in the recognition by IFNA of their adoption of international nurse anaesthesia education standards. Copyright © 2017 Elsevier Ltd. All rights reserved.
A method to describe inelastic gamma field distribution in neutron gamma density logging.
Zhang, Feng; Zhang, Quanying; Liu, Juntao; Wang, Xinguang; Wu, He; Jia, Wenbao; Ti, Yongzhou; Qiu, Fei; Zhang, Xiaoyang
2017-11-01
Pulsed neutron gamma density logging (NGD) is of great significance for radioprotection and density measurement in LWD; however, current methods have difficulty with quantitative calculation and single-factor analysis of the inelastic gamma field distribution. In order to clarify the NGD mechanism, a new method is developed to describe the inelastic gamma field distribution. Based on fast-neutron scattering and gamma attenuation, the inelastic gamma field distribution is characterized by the inelastic scattering cross section, the fast-neutron scattering free path, the formation density and other parameters, and the contribution of the formation parameters to the field distribution is quantitatively analyzed. The results show that the contribution of density attenuation is opposite to that of the inelastic scattering cross section and the fast-neutron scattering free path. As the detector spacing increases, density attenuation gradually plays a dominant role in the gamma field distribution, which means a large detector spacing is more favorable for density measurement. Besides, the relationship between density sensitivity and detector spacing was studied according to this gamma field distribution, and the spacings of the near and far gamma-ray detectors were thereby determined. The research provides theoretical guidance for tool parameter design and density determination for the pulsed neutron gamma density logging technique. Copyright © 2017 Elsevier Ltd. All rights reserved.
Design method of freeform light distribution lens for LED automotive headlamp based on DMD
Ma, Jianshe; Huang, Jianwei; Su, Ping; Cui, Yao
2018-01-01
We propose a new method to design a freeform light distribution lens for light-emitting diode (LED) automotive headlamps based on a digital micromirror device (DMD). With a parallel optical path architecture, the exit pupil of the illuminating system is set at infinity, so the principal incident rays on the micromirrors of the DMD are parallel. A DMD is a high-speed digital optical reflection array; the function of the distribution lens is to distribute the emergent parallel rays from the DMD and obtain a lighting pattern that fully complies with the national regulation GB 25991-2010. We use a DLP 4500 to design the light distribution lens, mesh the target plane regulated by GB 25991-2010 and correlate the mesh grids with the active mirror array of the DLP 4500. With the mapping relations and the law of refraction, we can build the mathematical model and obtain the parameters of the freeform light distribution lens. We then import its parameters into the three-dimensional (3D) software CATIA to construct its 3D model. Ray tracing results using TracePro demonstrate that the illumination on the target plane is easily adjustable and fully complies with the requirements of GB 25991-2010 when the exit brightness of the DMD is adjusted. The theoretical optical efficiency of the light distribution lens designed using this method can be up to 92% without any auxiliary lens.
International Nuclear Information System (INIS)
Tagesson, M.; Ljungberg, M.; Strand, S.E.
1996-01-01
In systemic radiation therapy, the absorbed dose distribution must be calculated from the individual activity distribution. A computer code has been developed for the conversion of an arbitrary activity distribution to a 3-D absorbed dose distribution. The activity distribution can be described either analytically or as a voxel-based distribution, which comes from a SPECT acquisition. Decay points are sampled according to the activity map, and particles (photons and electrons) from the decay are followed through the tissue until they either escape the patient or drop below a cut-off energy. To verify the calculated results, the mathematically defined MIRD phantom and unity-density spheres have been included in the code. Other published dosimetry data were also used for verification. Absorbed fractions and S values were calculated. A comparison of simulated data from the code with MIRD data shows good agreement. The S values are within 10-20% of published MIRD S values for most organs. Absorbed fractions for photons and electrons in spheres (masses between 1 g and 200 kg) are within 10-15% of those published. Radial absorbed dose distributions in a necrotic tumor show good agreement with published data. The application of the code in a radionuclide therapy dose planning system, based on quantitative SPECT, is discussed. (orig.)
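The first step the abstract describes — sampling decay points in proportion to a voxel activity map — amounts to drawing from a discrete distribution via its cumulative sum. A simplified sketch (a toy 1-D "map"; the real code works on a 3-D grid and then transports particles):

```python
import random

def sample_decay_points(activity, n, rng=random):
    """Sample n decay positions (voxel indices) with probability
    proportional to the voxel activity map (flattened grid).
    Sketch of the sampling step only, not the transport code."""
    total = sum(activity)
    cdf, acc = [], 0.0
    for a in activity:            # build the cumulative distribution
        acc += a / total
        cdf.append(acc)
    points = []
    for _ in range(n):
        u = rng.random()
        lo, hi = 0, len(cdf) - 1  # binary-search the CDF for u
        while lo < hi:
            mid = (lo + hi) // 2
            if cdf[mid] < u:
                lo = mid + 1
            else:
                hi = mid
        points.append(lo)
    return points

random.seed(0)
activity = [0.0, 1.0, 3.0, 0.0, 1.0]  # toy 1-D "activity map"
pts = sample_decay_points(activity, 10000)
frac_voxel2 = pts.count(2) / len(pts)  # expect roughly 3/5
```

Voxels with zero activity never produce decays, and the hottest voxel dominates, which is exactly the weighting the dose calculation needs.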
Numerical calculation of elastohydrodynamic lubrication methods and programs
Huang, Ping
2015-01-01
The book offers scientists and engineers a clear interdisciplinary introduction and orientation to all major elastohydrodynamic lubrication (EHL) problems and their solutions and, most importantly, provides numerical programs for specific applications in engineering. It serves as a one-stop reference providing equations and their solutions for the most important EHL problems, together with concise programs for practical engineering applications.
KUEBEL. A Fortran program for computation of cooling-agent-distribution within reactor fuel-elements
International Nuclear Information System (INIS)
Inhoven, H.
1984-12-01
KUEBEL is a Fortran program for computing the coolant distribution within reactor fuel elements or zones thereof. These may be assembled from up to 40 cooling channels with laminar to turbulent flow (Reynolds numbers up to 2.0E+06) at equal pressure loss. Flow velocity as well as dynamic-flow, contraction and friction losses are calculated for each channel and for the total zone. Further computations yield the mean heat-up of the coolant, the mean outlet temperature of the core, the boiling temperature and the absolute pressure at the flow outlet. All characteristic coolant values, including the safety factor against flow instability of the most heavily loaded cooling gap, are computed by KUEBEL as well. Either the absolute pressure at the flow outlet or the safety factor may be defined as the dependent or independent variable of the program. In the latter case three solution variants are available: adapted coolant flow, core inlet temperature, and thermal power. All calculations can alternatively be performed with variation of the parameters coolant flow, core inlet temperature and thermal power, which is managed by the program itself. KUEBEL distinguishes between light- and heavy-water coolant and coolant flow direction, and handles fuel elements with parallel rectangular or concentric cylindrical gap geometries. The required material properties are generated by the program. Segments of fuel elements, or gaps that are not structurally connected, can also be computed by interposing so-called 'phantom channels'. (orig.) [de
Directory of Open Access Journals (Sweden)
Maman Abdurohman
2017-12-01
Full Text Available This research proposes a new method to enhance Distributed Denial of Service (DDoS) attack detection in a Software Defined Network (SDN) environment. The OpenFlow controller of the SDN is used for DDoS attack detection with a modified, entropy-based method. The method checks whether the traffic is normal traffic or a DDoS attack by measuring the randomness of the packets, and consists of two steps: detecting the attack and checking the entropy. The results show that the new method can reduce false positives when there is a temporary and sudden increase in normal traffic; the new method succeeds in not detecting this as a DDoS attack. Compared to previous methods, the proposed method can enhance DDoS attack detection in an SDN environment.
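The entropy check the abstract refers to can be sketched as follows: under a flooding attack, packets concentrate on one victim address, so the Shannon entropy of destination addresses in a traffic window collapses. The threshold value and window construction here are illustrative assumptions, not the paper's tuned parameters:

```python
import math
from collections import Counter

def dest_ip_entropy(packets):
    """Shannon entropy (bits) of destination addresses in a traffic window."""
    counts = Counter(packets)
    n = len(packets)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def is_ddos(packets, threshold=1.0):
    """Flag a window as a suspected DDoS attack when the destination
    entropy drops below a threshold (threshold is an assumed value)."""
    return dest_ip_entropy(packets) < threshold

normal = [f"10.0.0.{i % 50}" for i in range(200)]   # traffic spread over 50 hosts
attack = ["10.0.0.7"] * 190 + ["10.0.0.3"] * 10     # flood aimed at one victim
```

A sudden surge of *normal* traffic keeps many distinct destinations and hence high entropy, which is why an entropy test avoids the false positives that pure volume thresholds produce.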
An efficient inverse radiotherapy planning method for VMAT using quadratic programming optimization.
Hoegele, W; Loeschel, R; Merkle, N; Zygmanski, P
2012-01-01
The purpose of this study is to investigate the feasibility of an inverse planning optimization approach for Volumetric Modulated Arc Therapy (VMAT) based on quadratic programming and the projection method. The performance of this method is evaluated against a reference commercial planning system (eclipse(TM) for rapidarc(TM)) for clinically relevant cases. The inverse problem is posed in terms of a linear combination of basis functions representing arclet dose contributions and their respective linear coefficients as degrees of freedom. MLC motion is decomposed into basic motion patterns in an intuitive manner, leading to a system of equations with a relatively small number of equations and unknowns. These equations are solved using quadratic programming under certain limiting physical conditions for the solution, such as the avoidance of negative dose during optimization and Monitor Unit reduction. The modeling by the projection method assures a unique treatment plan with beneficial properties, such as the explicit relation between organ weightings and the final dose distribution. Clinical cases studied include prostate and spine treatments. The optimized plans are evaluated by comparing isodose lines, DVH profiles for target and normal organs, and Monitor Units to those obtained by the clinical treatment planning system eclipse(TM). The resulting dose distributions for a prostate (with rectum and bladder as organs at risk), and for a spine case (with kidneys, liver, lung and heart as organs at risk) are presented. Overall, the results indicate that similar plan qualities for quadratic programming (QP) and rapidarc(TM) could be achieved with significantly more efficient computational and planning effort using QP. Additionally, results for the quasimodo phantom [Bohsung et al., "IMRT treatment planning: A comparative inter-system and inter-centre planning exercise of the estro quasimodo group," Radiother. Oncol. 76(3), 354-361 (2005)] are presented as an example.
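The key constraint mentioned above — forbidding negative dose (i.e. negative arclet coefficients) while minimizing a quadratic objective — can be sketched with a generic projected-gradient QP solver. The toy matrix and targets are illustrative, not the paper's VMAT formulation:

```python
def solve_nonneg_qp(A, b, iters=5000, lr=0.01):
    """Minimize ||A x - b||^2 subject to x >= 0 by projected gradient
    descent; a generic sketch, not the paper's exact algorithm.
    Columns of A play the role of arclet dose contributions."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [2 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(0.0, x[j] - lr * g[j]) for j in range(n)]  # project onto x >= 0
    return x

# toy basis: two 'arclets' contributing dose to three points
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [2.0, -1.0, 1.0]   # the -1 target forces the nonnegativity bound to bind
x = solve_nonneg_qp(A, b)  # converges to (1.5, 0): second weight clamped at 0
```

The projection step is what keeps every candidate plan physically realizable at every iteration, instead of repairing negative weights after the fact.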
Thermodynamic method for generating random stress distributions on an earthquake fault
Barall, Michael; Harris, Ruth A.
2012-01-01
This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
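The construction sketched in the abstract — a random field whose power spectral density follows a prescribed formula — is commonly realized by shaping white noise in the Fourier domain. The snippet below is a simplified 1-D analogue under an assumed power-law spectrum, not the report's derived fault-stress spectrum:

```python
import numpy as np

def random_stress_profile(n, slope=-1.0, seed=0):
    """Generate a 1-D random zero-mean 'stress' profile whose power
    spectral density falls off as k**slope, by filtering white noise
    in the Fourier domain. Simplified analogue of the report's method."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n)
    spec = np.fft.rfft(noise)
    k = np.arange(len(spec), dtype=float)
    k[0] = 1.0                       # avoid division by zero at DC
    spec *= k ** (slope / 2.0)       # amplitude filter = sqrt(target power)
    spec[0] = 0.0                    # remove the mean -> zero-mean field
    return np.fft.irfft(spec, n)

stress = random_stress_profile(1024)
```

Because the spectrum is imposed explicitly, the divergence the report discusses is avoided by choosing a spectral formula that keeps total power finite.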
Directory of Open Access Journals (Sweden)
J. Szymszal
2009-01-01
Full Text Available The study discusses the application of computer simulation based on the inverse cumulative distribution function method. The simulation refers to an elementary static case, which can also be solved by physical experiment, consisting mainly of observations of foundry production in a selected foundry plant. For the simulation and forecasting of foundry production quality in a selected cast iron grade, the random number generator of an Excel spreadsheet was chosen. The very wide potential of this type of simulation when applied to the evaluation of foundry production quality was demonstrated, using a uniform random number generator to generate a variable of an arbitrary distribution, especially of a preset empirical distribution, without any need to fit smooth theoretical distributions to this variable.
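The inverse cumulative distribution function method described above works by feeding uniform random numbers through the inverse CDF of the target distribution. A minimal sketch, using the exponential distribution as the example target (the rate parameter is an arbitrary illustration):

```python
import math
import random

def inverse_cdf_sample(inv_cdf, n, rng=random):
    """Draw n samples from a distribution given its inverse CDF,
    by transforming uniform random numbers (inverse transform method)."""
    return [inv_cdf(rng.random()) for _ in range(n)]

# example target: exponential with rate lam, so F^-1(u) = -ln(1 - u) / lam
lam = 2.0
random.seed(1)
xs = inverse_cdf_sample(lambda u: -math.log(1.0 - u) / lam, 20000)
mean = sum(xs) / len(xs)   # should approach 1/lam = 0.5
```

The same mechanism works for a preset empirical distribution: the inverse CDF is then a step function built from the observed data, which is exactly why no smooth theoretical distribution needs to be fitted.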
ON ESTIMATION AND HYPOTHESIS TESTING OF THE GRAIN SIZE DISTRIBUTION BY THE SALTYKOV METHOD
Directory of Open Access Journals (Sweden)
Yuri Gulbin
2011-05-01
Full Text Available The paper considers the problem of validity of unfolding the grain size distribution with the back-substitution method. Due to the ill-conditioned nature of unfolding matrices, it is necessary to evaluate the accuracy and precision of parameter estimation and to verify the possibility of expected grain size distribution testing on the basis of intersection size histogram data. In order to review these questions, the computer modeling was used to compare size distributions obtained stereologically with those possessed by three-dimensional model aggregates of grains with a specified shape and random size. Results of simulations are reported and ways of improving the conventional stereological techniques are suggested. It is shown that new improvements in estimating and testing procedures enable grain size distributions to be unfolded more efficiently.
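The back-substitution unfolding discussed above solves a triangular linear system: intersections in each size class receive contributions from grains of that class and all larger classes, so one solves from the largest class downward. The kernel coefficients below are made-up illustrations, not the actual Saltykov coefficient table:

```python
def unfold_back_substitution(observed, K):
    """Solve observed = K @ true for an upper-triangular kernel K by
    back-substitution, starting from the largest size class. K[i][j]
    is the contribution of grains in class j to intersections in
    class i (j >= i); the values here are generic, not Saltykov's."""
    m = len(observed)
    true = [0.0] * m
    for i in range(m - 1, -1, -1):
        s = sum(K[i][j] * true[j] for j in range(i + 1, m))
        true[i] = (observed[i] - s) / K[i][i]
    return true

# toy 3-class kernel: larger grains also produce small intersections
K = [[0.5, 0.3, 0.2],
     [0.0, 0.6, 0.3],
     [0.0, 0.0, 0.7]]
n_true = [10.0, 20.0, 5.0]
observed = [sum(K[i][j] * n_true[j] for j in range(3)) for i in range(3)]
n_est = unfold_back_substitution(observed, K)
```

The ill-conditioning the paper analyzes shows up here directly: small errors in the observed counts of the largest classes propagate and amplify through every subtraction on the way down.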
Mathematical models and methods of assisting state subsidy distribution at the regional level
Bondarenko, Yu V.; Azarnova, T. V.; Kashirina, I. L.; Goroshko, I. V.
2018-03-01
One of the most common forms of state support in the world is subsidization. By providing direct financial support to businesses, local authorities get an opportunity to set certain performance targets. Successful achievement of such targets depends not only on the amount of the budgetary allocations, but also on the distribution mechanisms adopted by the regional authorities. Analysis of the existing mechanisms of subsidies distribution in Russian regions shows that in most cases the choice of subsidy calculation formula and its parameters depends on the experts’ subjective opinion. The authors offer a new approach to assisting subsidy distribution at the regional level, which is based on mathematical models and methods, allowing to evaluate the influence of subsidy distribution on the region’s social and economic development. The results of calculations were discussed with the regional administration representatives who confirmed their significance for decision-making in the sphere of state control.
M. ZANGIABADI; H. R. MALEKI
2007-01-01
In the real-world optimization problems, coefficients of the objective function are not known precisely and can be interpreted as fuzzy numbers. In this paper we define the concepts of optimality for linear programming problems with fuzzy parameters based on those for multiobjective linear programming problems. Then by using the concept of comparison of fuzzy numbers, we transform a linear programming problem with fuzzy parameters to a multiobjective linear programming problem. To this end, w...
Pair Programming as a Modern Method of Teaching Computer Science
Irena Nančovska Šerbec; Branko Kaučič; Jože Rugelj
2008-01-01
At the Faculty of Education, University of Ljubljana we educate future computer science teachers. Beside didactical, pedagogical, mathematical and other interdisciplinary knowledge, students gain knowledge and skills of programming that are crucial for computer science teachers. For all courses, the main emphasis is the absorption of professional competences, related to the teaching profession and the programming profile. The latter are selected according to the well-known document, the ACM C...
A method for developing standard patient education program
DEFF Research Database (Denmark)
Lura, Carolina Bryne; Hauch, Sophie Misser Pallesgaard; Gøeg, Kirstine Rosenbeck
2018-01-01
for developing standard digital patient education programs for patients in self-administration of blood samples drawn from CVC. The Design Science Research Paradigm was used to develop a digital patient education program, called PAVIOSY, to increase patient safety during execution of the blood sample collection...... of the educational patient system, health professionals must be engaged early in the development of content and design phase....
2012-05-30
...-AC46 Energy Conservation Program: Alternative Efficiency Determination Methods and Alternative Rating... regulations authorizing the use of alternative methods of determining energy efficiency or energy consumption... alternative methods of determining energy efficiency or energy consumption of various consumer products and...
International Nuclear Information System (INIS)
Ahmadigorji, Masoud; Amjady, Nima
2014-01-01
Highlights: • A new dynamic distribution network expansion planning model is presented. • A Binary Enhanced Particle Swarm Optimization (BEPSO) algorithm is proposed. • A Modified Differential Evolution (MDE) algorithm is proposed. • A new bi-level optimization approach composed of BEPSO and MDE is presented. • The effectiveness of the proposed optimization approach is extensively illustrated. - Abstract: Reconstruction in the power system and appearing of new technologies for generation capacity of electrical energy has led to significant innovation in Distribution Network Expansion Planning (DNEP). Distributed Generation (DG) includes the application of small/medium generation units located in power distribution networks and/or near the load centers. Appropriate utilization of DG can affect the various technical and operational indices of the distribution network such as the feeder loading, energy losses and voltage profile. In addition, application of DG in proper size is an essential tool to achieve the DG maximum potential benefits. In this paper, a time-based (dynamic) model for DNEP is proposed to determine the optimal size, location and installation year of DG in distribution system. Also, in this model, the Optimal Power Flow (OPF) is exerted to determine the optimal generation of DGs for every potential solution in order to minimize the investment and operation costs following the load growth in a specified planning period. Besides, the reinforcement requirements of existing distribution feeders are considered, simultaneously. The proposed optimization problem is solved by the combination of evolutionary methods of a new Binary Enhanced Particle Swarm Optimization (BEPSO) and Modified Differential Evolution (MDE) to find the optimal expansion strategy and solve OPF, respectively. The proposed planning approach is applied to two typical primary distribution networks and compared with several other methods. These comparisons illustrate the
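Binary particle swarm optimization, the family of methods to which the proposed BEPSO belongs, encodes siting decisions as bit strings and maps velocities to bit-flip probabilities through a sigmoid. The sketch below is a generic textbook BPSO with an illustrative one-max "DG placement" objective, not the paper's enhanced variant or its OPF coupling:

```python
import math
import random

def binary_pso(fitness, n_bits, n_particles=20, iters=60, seed=3):
    """Minimal binary PSO (maximization): velocities become bit
    probabilities via a sigmoid, clamped to keep flips possible.
    Generic sketch, not the paper's BEPSO."""
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    pos = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = rng.random(), rng.random()
                v = vel[i][d] + 2.0 * r1 * (pbest[i][d] - pos[i][d]) \
                              + 2.0 * r2 * (gbest[d] - pos[i][d])
                vel[i][d] = max(-6.0, min(6.0, v))   # velocity clamp
                pos[i][d] = 1 if rng.random() < sig(vel[i][d]) else 0
            f = fitness(pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# toy siting problem: maximize the number of 'installed' units (one-max)
best, best_f = binary_pso(sum, 16)
```

In the paper's bi-level scheme, each such bit string (candidate DG siting/sizing plan) would be scored by an inner optimal power flow rather than a toy objective.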
International Nuclear Information System (INIS)
Caldarola, L.
1976-01-01
A method is proposed for the analytical evaluation of the cumulative failure probability distribution of complex repairable systems. The method is based on a set of integral equations each one referring to a specific minimal cut set of the system. Each integral equation links the unavailability of a minimal cut set to its failure probability density distribution and to the probability that the minimal cut set is down at the time t under the condition that it was down at time t'(t'<=t). The limitations for the applicability of the method are also discussed. It has been concluded that the method is applicable if the process describing the failure of a minimal cut set is a 'delayed semi-regenerative process'. (Auth.)
Methods of Run-Time Error Detection in Distributed Process Control Software
DEFF Research Database (Denmark)
Drejer, N.
In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...
Prestressing force monitoring method for a box girder through distributed long-gauge FBG sensors
Chen, Shi-Zhi; Wu, Gang; Xing, Tuo; Feng, De-Cheng
2018-01-01
Monitoring prestressing forces is essential for prestressed concrete box girder bridges. However, current monitoring methods for prestressing force are not applicable to a box girder, either because the sensor setup is constrained or because the shear lag effect is not properly considered. Combining with a previous analysis model of the shear lag effect in the box girder, this paper proposes an indirect monitoring method for on-site determination of the prestressing force in a concrete box girder utilizing distributed long-gauge fiber Bragg grating sensors. The performance of this method was initially verified using numerical simulation for three different distribution forms of prestressing tendons. Then, an experiment involving two concrete box girders was conducted to study the feasibility of the method under different prestressing levels. The results of both the numerical simulation and the lab experiment validated the method's practicability in a box girder.
Voltage Based Detection Method for High Impedance Fault in a Distribution System
Thomas, Mini Shaji; Bhaskar, Namrata; Prakash, Anupama
2016-09-01
High-impedance faults (HIFs) on distribution feeders cannot be detected by conventional protection schemes, as HIFs are characterized by their low fault current level and waveform distortion due to the nonlinearity of the ground return path. This paper proposes a method to identify HIFs in a distribution system and isolate the faulty section, to reduce downtime. The method is based on voltage measurements along the distribution feeder and utilizes the sequence components of the voltages. Three models of high-impedance faults have been considered, and source-side and load-side breaking of the conductor have been studied in this work to capture a wide range of scenarios. The effect of neutral grounding of the source-side transformer is also accounted for in this study. The results show that the algorithm detects HIFs accurately and rapidly. Thus, the faulty section can be isolated and service can be restored to the rest of the consumers.
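The sequence components the method relies on are obtained from the three phase voltages via the Fortescue transform; a balanced feeder yields only positive sequence, while an HIF introduces negative- and zero-sequence content. A minimal sketch of the decomposition itself (the detection thresholds are the paper's, not shown here):

```python
import cmath

A = cmath.exp(2j * cmath.pi / 3)  # the 120-degree rotation operator 'a'

def sequence_components(va, vb, vc):
    """Decompose three phase-voltage phasors into zero-, positive- and
    negative-sequence components (Fortescue transform)."""
    v0 = (va + vb + vc) / 3
    v1 = (va + A * vb + A * A * vc) / 3
    v2 = (va + A * A * vb + A * vc) / 3
    return v0, v1, v2

# balanced set: only the positive-sequence component should survive
va = 1.0 + 0j
vb = A * A * va   # phase b lags by 120 degrees
vc = A * va       # phase c leads by 120 degrees
v0, v1, v2 = sequence_components(va, vb, vc)
```

Monitoring |v2| and |v0| along the feeder then localizes the faulty section: measurement points downstream of the break see a markedly different unbalance than those upstream.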
Cellular Neural Network-Based Methods for Distributed Network Intrusion Detection
Directory of Open Access Journals (Sweden)
Kang Xie
2015-01-01
Full Text Available Addressing the problems of current distributed intrusion detection system (DIDS) architectures, a new online distributed intrusion detection model based on cellular neural networks (CNNs) is proposed, in which a discrete-time CNN (DTCNN) serves as the weak classifier in each local node and a state-controlled CNN (SCCNN) serves as the global detection method. We further propose a new method for designing the SCCNN template parameters by solving a linear matrix inequality. Experimental results on the KDD CUP 99 dataset show the model's feasibility and effectiveness. Emerging evidence indicates that this new approach is amenable to parallelism and analog very-large-scale integration (VLSI) implementation, which allows distributed intrusion detection to be performed better.
An analog computer method for solving flux distribution problems in multi region nuclear reactors
Energy Technology Data Exchange (ETDEWEB)
Radanovic, L; Bingulac, S; Lazarevic, B; Matausek, M [Boris Kidric Institute of Nuclear Sciences Vinca, Beograd (Yugoslavia)
1963-04-15
The paper describes a method developed for determining criticality conditions and plotting flux distribution curves for multi-region nuclear reactors on a standard analog computer. The method, which is based on the one-dimensional two-group treatment, avoids the iterative procedures normally used for boundary-value problems and is practically insensitive to errors in initial conditions. The amount of analog equipment required is reduced to a minimum and is independent of the number of core regions and reflectors. (author)
Directory of Open Access Journals (Sweden)
Pūle Daina
2016-12-01
Full Text Available Prevalence of Legionella in drinking water distribution systems is a widespread problem. Outbreaks of Legionella-caused diseases occur even though various disinfectants are used to control Legionella. Conventional methods such as thermal disinfection, silver/copper ionization, ultraviolet irradiation and chlorine-based disinfection have not been effective in the long term for controlling biofilm bacteria. Therefore, research to develop more effective disinfection methods is still necessary.
A method for exploring the distribution of radioelements at depth using gamma-ray spectrometric data
International Nuclear Information System (INIS)
Li Qingyang
1997-01-01
Based on the inherent relation between radioelements and terrestrial heat flow, this paper theoretically shows the possibility of exploring the distribution of radioelements at depth using gamma-ray spectrometric data, and a data-processing and synthesizing method is adopted to derive the calculation formula. Practical application in the uranium mineralized area No. 2801 in Yunnan Province proves that the method is of practical value; it has been successfully applied to data processing and good results have been obtained
Directory of Open Access Journals (Sweden)
Amanda E Links
2016-10-01
Full Text Available The National Institutes of Health Undiagnosed Diseases Program (NIH UDP) applies translational research systematically to diagnose patients with undiagnosed diseases. The challenge is to implement an information system enabling scalable translational research. The authors hypothesized that similarly complex problems are resolvable through process management and the distributed cognition of communities. The team therefore built the NIH UDP Integrated Collaboration System (UDPICS) to form virtual collaborative multidisciplinary research networks or communities. UDPICS supports these communities through integrated process management, ontology-based phenotyping, biospecimen management, cloud-based genomic analysis, and an electronic laboratory notebook. UDPICS provided a mechanism for efficient, transparent, and scalable translational research and thereby addressed many of the complex and diverse research and logistical problems of the NIH UDP. Full definition of the strengths and deficiencies of UDPICS will require formal qualitative and quantitative usability and process improvement measurement.
Links, Amanda E; Draper, David; Lee, Elizabeth; Guzman, Jessica; Valivullah, Zaheer; Maduro, Valerie; Lebedev, Vlad; Didenko, Maxim; Tomlin, Garrick; Brudno, Michael; Girdea, Marta; Dumitriu, Sergiu; Haendel, Melissa A; Mungall, Christopher J; Smedley, Damian; Hochheiser, Harry; Arnold, Andrew M; Coessens, Bert; Verhoeven, Steven; Bone, William; Adams, David; Boerkoel, Cornelius F; Gahl, William A; Sincan, Murat
2016-01-01
The National Institutes of Health Undiagnosed Diseases Program (NIH UDP) applies translational research systematically to diagnose patients with undiagnosed diseases. The challenge is to implement an information system enabling scalable translational research. The authors hypothesized that similar complex problems are resolvable through process management and the distributed cognition of communities. The team, therefore, built the NIH UDP integrated collaboration system (UDPICS) to form virtual collaborative multidisciplinary research networks or communities. UDPICS supports these communities through integrated process management, ontology-based phenotyping, biospecimen management, cloud-based genomic analysis, and an electronic laboratory notebook. UDPICS provided a mechanism for efficient, transparent, and scalable translational research and thereby addressed many of the complex and diverse research and logistical problems of the NIH UDP. Full definition of the strengths and deficiencies of UDPICS will require formal qualitative and quantitative usability and process improvement measurement.
Haroldson, Mark A.; Schwartz, Charles C.; Thompson, Daniel J.; Bjornlie, Daniel D.; Gunther, Kerry A.; Cain, Steven L.; Tyers, Daniel B.; Frey, Kevin L.; Aber, Bryan C.
2014-01-01
The distribution of the Greater Yellowstone Ecosystem grizzly bear (Ursus arctos) population has expanded into areas unoccupied since the early 20th century. Up-to-date information on the area and extent of this distribution is crucial for federal, state, and tribal wildlife and land managers to make informed decisions regarding grizzly bear management. The most recent estimate of grizzly bear distribution (2004) utilized fixed-kernel density estimators to describe distribution. This method was complex and computationally time consuming and excluded observations of unmarked bears. Our objective was to develop a technique to estimate grizzly bear distribution that would allow for the use of all verified grizzly bear location data, as well as provide the simplicity to be updated more frequently. We placed all verified grizzly bear locations from all sources from 1990 to 2004 and 1990 to 2010 onto a 3-km × 3-km grid and used zonal analysis and ordinary kriging to develop a predicted surface of grizzly bear distribution. We compared the area and extent of the 2004 kriging surface with the previous 2004 effort and evaluated changes in grizzly bear distribution from 2004 to 2010. The 2004 kriging surface was 2.4% smaller than the previous fixed-kernel estimate, but more closely represented the data. Grizzly bear distribution increased 38.3% from 2004 to 2010, with most expansion in the northern and southern regions of the range. This technique can be used to provide a current estimate of grizzly bear distribution for management and conservation applications.
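The ordinary-kriging step used to build such a predicted distribution surface can be sketched in a few lines; the linear variogram and toy coordinates below are illustrative assumptions, not the study's fitted model:

```python
import numpy as np

def ordinary_kriging(xy, z, x0, gamma=lambda h: h):
    """Ordinary kriging predictor at point x0, given sample coordinates xy,
    sample values z, and a variogram model gamma(h) (here: linear, no nugget)."""
    n = len(xy)
    # Kriging system: [[gamma_ij, 1], [1, 0]] @ [weights, mu] = [gamma_i0, 1]
    A = np.ones((n + 1, n + 1))
    A[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            A[i, j] = gamma(np.linalg.norm(xy[i] - xy[j]))
    b = np.ones(n + 1)
    b[:n] = [gamma(np.linalg.norm(p - x0)) for p in xy]
    lam = np.linalg.solve(A, b)[:n]  # kriging weights (Lagrange multiplier dropped)
    return float(lam @ z)

# Toy data: with no nugget, kriging interpolates sample points exactly.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0])
est = ordinary_kriging(pts, vals, np.array([0.0, 0.0]))
```

In the study's setting, xy would be 3-km grid-cell centers with zonal bear-location statistics, and the predictor would be evaluated over the whole grid.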
This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...
Projection methods for the analysis of molecular-frame photoelectron angular distributions
International Nuclear Information System (INIS)
Lucchese, R.R.; Montuoro, R.; Grum-Grzhimailo, A.N.; Liu, X.-J.; Pruemper, G.; Morishita, Y.; Saito, N.; Ueda, K.
2007-01-01
The analysis of molecular-frame photoelectron angular distributions (MFPADs) is discussed within the dipole approximation. The general expressions are reviewed, and strategies for extracting the maximum amount of information from different types of experimental measurements are considered. The analysis of the N 1s photoionization of NO is given to illustrate the method.
Distributed Solutions for Loosely Coupled Feasibility Problems Using Proximal Splitting Methods
DEFF Research Database (Denmark)
Pakazad, Sina Khoshfetrat; Andersen, Martin Skovgaard; Hansson, Anders
2014-01-01
In this paper, we consider convex feasibility problems (CFPs) where the underlying sets are loosely coupled, and we propose several algorithms to solve such problems in a distributed manner. These algorithms are obtained by applying proximal splitting methods to convex minimization reformulations ...
The overlapping distribution method to compute chemical potentials of chain molecules
Mooij, G.C.A.M.; Frenkel, D.
1994-01-01
The chemical potential of continuously deformable chain molecules can be estimated by measuring the average Rosenbluth weight associated with the virtual insertion of a molecule. We show how to generalize the overlapping-distribution method of Bennett to histograms of Rosenbluth weights. In this way
Analysis of the distribution of X-ray characteristic production using the Monte Carlo methods
International Nuclear Information System (INIS)
Del Giorgio, Marcelo; Brizuela, Horacio; Riveros, J.A.
1987-01-01
The Monte Carlo method has been applied to simulate electron trajectories in a bulk sample, and hence the distribution of signals produced in an electron microprobe. Results for the function φ(ρz) are compared with experimental data. Some conclusions are drawn with respect to the parameters involved in the Gaussian model. (Author) [es
International Nuclear Information System (INIS)
Snowdon, K.J.; Andresen, B.; Veje, E.
1978-01-01
The method of calculating relative initial level populations of excited states of sputtered atoms is developed in principle and compared with those in current use. The reason that the latter, although mathematically different, have generally led to similar population distributions is outlined. (Auth.)
An Empirical Method to Fuse Partially Overlapping State Vectors for Distributed State Estimation
Sijs, J.; Hanebeck, U.; Noack, B.
2013-01-01
State fusion is a method for merging multiple estimates of the same state into a single fused estimate. Dealing with multiple estimates is one of the main concerns in distributed state estimation, where an estimated value of the desired state vector is computed in each node of a networked system.
Air method measurements of apple vessel length distributions with improved apparatus and theory
Shabtal Cohen; John Bennink; Mel Tyree
2003-01-01
Studies showing that rootstock dwarfing potential is related to plant hydraulic conductance led to the hypothesis that xylem properties are also related. Vessel length distribution and other properties of apple wood from a series of varieties were measured using the 'air method' in order to test this hypothesis. Apparatus was built to measure and monitor...
A method to calculate flux distribution in reactor systems containing materials with grain structure
International Nuclear Information System (INIS)
Stepanek, J.
1980-01-01
A method is proposed to compute the neutron flux spatial distribution in slab, spherical or cylindrical systems containing zones with a close grain structure of the material. Several different types of equally distributed particles embedded in the matrix material are allowed in one or more zones. The multi-energy-group structure of the flux is considered. The collision probability method is used to compute the fluxes in the grains and in an "effective" part of the matrix material. The overall structure of the flux distribution in the zones with homogenized materials is then determined using the DPN "surface flux" method. The two computations are connected through the balance equation during the outer iterations. The proposed method is implemented in the code SURCU-DH. Two test cases are computed and discussed. The first is the computation of the eigenvalue, in simplified slab geometry, of an LWR container with one zone of boral grains equally distributed in an aluminium matrix. The second is the computation of the eigenvalue, in spherical geometry, of an HTR pebble-bed cell with spherical particles embedded in a graphite matrix. The results are compared to those obtained by repeated use of the WIMS code. (author)
Directory of Open Access Journals (Sweden)
Tunjo Perić
2017-09-01
Full Text Available This paper presents production plan optimization in the metal industry treated as a multi-criteria programming problem. We first provide the definition of the multi-criteria programming problem and a classification of multi-criteria programming methods. We then apply two multi-criteria programming methods (the STEM method and the PROMETHEE method) to a multi-criteria production plan optimization problem in a company from the metal industry. The obtained results indicate the high efficiency of the applied methods in solving the problem.
Directory of Open Access Journals (Sweden)
Jinhong Noh
2016-04-01
Full Text Available Obstacle avoidance methods require knowledge of the distance between a mobile robot and obstacles in the environment. However, in stochastic environments, distance determination is difficult because objects have position uncertainty. The purpose of this paper is to determine the distance between a robot and obstacles represented by probability distributions. Distance determination for obstacle avoidance should consider position uncertainty, computational cost and collision probability. The proposed method considers all of these conditions, unlike conventional methods. It determines the obstacle region using a collision probability density threshold. Furthermore, it defines a minimum distance function to the boundary of the obstacle region with a Lagrange multiplier method. Finally, it computes the distance numerically. Simulations were executed to compare the performance of the distance determination methods. Our method demonstrated faster and more accurate performance than conventional methods. It may help overcome position uncertainty issues pertaining to obstacle avoidance, such as low-accuracy sensors, environments with poor visibility or unpredictable obstacle motion.
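A minimal numeric version of the idea (threshold the Gaussian pdf to obtain an elliptical obstacle region, then search its boundary for the closest point) might look as follows; the axis-aligned covariance and the sampling-based boundary search are simplifying assumptions, standing in for the paper's Lagrange-multiplier solution:

```python
import math

def obstacle_boundary_distance(robot, mu, sigma_diag, tau, n=3600):
    """Minimum distance from a robot position to the region where an axis-aligned
    2D Gaussian obstacle pdf exceeds the probability density threshold tau."""
    sx, sy = sigma_diag  # variances along x and y (axis-aligned simplification)
    det = sx * sy
    # pdf(x) = tau  <=>  Mahalanobis distance squared equals c:
    c = -2.0 * math.log(tau * 2.0 * math.pi * math.sqrt(det))
    if c <= 0:
        return 0.0  # threshold above the pdf peak: obstacle region is empty
    best = float("inf")
    for k in range(n):  # sample the elliptical boundary and keep the nearest point
        th = 2.0 * math.pi * k / n
        bx = mu[0] + math.sqrt(c * sx) * math.cos(th)
        by = mu[1] + math.sqrt(c * sy) * math.sin(th)
        best = min(best, math.hypot(robot[0] - bx, robot[1] - by))
    return best

# Robot at (5, 0), unit-variance obstacle at the origin, threshold 0.05:
d = obstacle_boundary_distance((5.0, 0.0), (0.0, 0.0), (1.0, 1.0), 0.05)
```

For this isotropic case the boundary is a circle, so the answer reduces to the robot-to-center distance minus the circle radius.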
A new hydraulic regulation method on district heating system with distributed variable-speed pumps
International Nuclear Information System (INIS)
Wang, Hai; Wang, Haiying; Zhu, Tong
2017-01-01
Highlights: • A hydraulic regulation method was presented for district heating with distributed variable-speed pumps. • Information and automation technologies were utilized to support the proposed method. • A new hydraulic model was developed for distributed variable-speed pumps. • A new optimization model was developed based on a genetic algorithm. • Two scenarios of a multi-source looped system were illustrated to validate the method. - Abstract: Compared with a hydraulic configuration based on a conventional central circulating pump, a district heating system with a distributed variable-speed-pump configuration can often save 30–50% of circulating-pump power consumption by using frequency inverters. However, hydraulic regulation of a distributed variable-speed-pump configuration is more complicated, as all distributed pumps need to be adjusted to their designated flow rates. Especially in a multi-source looped heating network, where the distributed pumps have strongly coupled and severely non-linear hydraulic connections with each other, it is rather difficult to maintain hydraulic balance during regulation. In this paper, with the help of advanced automation and information technologies, a new hydraulic regulation method was proposed to achieve on-site hydraulic balance for district heating systems with a distributed variable-speed-pump configuration. The proposed method comprises a new hydraulic model, developed to suit the distributed variable-speed-pump configuration, and a calibration model with a genetic algorithm. By carrying out the proposed method step by step, the flow rates of all distributed pumps can be progressively adjusted to their designated values. A hypothetical district heating system with 2 heat sources and 10 substations was taken as a case study to illustrate the feasibility of the proposed method. Two scenarios were investigated respectively. In Scenario I, the
Ames, D. P.; Osorio-Murillo, C.; Over, M. W.; Rubin, Y.
2012-12-01
The Method of Anchored Distributions (MAD) is an inverse modeling technique that is well suited to estimating spatially varying parameter fields using limited observations and Bayesian methods. This presentation will discuss the design, development, and testing of a free software implementation of the MAD technique using the open source DotSpatial geographic information system (GIS) framework, R statistical software, and the MODFLOW groundwater model. This new tool, dubbed MAD-GIS, is built using a modular architecture that supports the integration of external analytical tools and models for key computational processes, including a forward model (e.g. MODFLOW, HYDRUS) and geostatistical analysis (e.g. R, GSLIB). The GIS-based graphical user interface provides a relatively simple way for new users of the technique to prepare the spatial domain, to identify observation and anchor points, to perform the MAD analysis using a selected forward model, and to view results. MAD-GIS uses the Managed Extensibility Framework (MEF) provided by the Microsoft .NET programming platform to support integration of different modeling and analytical tools at run time through a custom "driver." Each driver establishes a connection with external programs through a programming interface, which provides the elements for communicating with the core MAD software. This presentation gives an example of adapting MODFLOW to serve as the external forward model in MAD-GIS for inferring the distribution functions of key MODFLOW parameters. Additional drivers for other models are being developed, and it is expected that the open source nature of the project will engender the development of additional model drivers by third-party scientists.
Demand Response Programs Design and Use Considering Intensive Penetration of Distributed Generation
Directory of Open Access Journals (Sweden)
Pedro Faria
2015-06-01
Full Text Available Further improvements in the implementation of demand response programs are needed in order to take full advantage of this resource, namely for participation in energy and reserve market products, requiring adequate aggregation and remuneration of small-size resources. The present paper focuses on SPIDER, a demand response simulator that has been improved in order to simulate demand response, including realistic power system simulation. To illustrate the simulator's capabilities, the paper proposes a methodology focusing on the aggregation of consumers and generators, providing adequate tools for the adoption of demand response programs by the involved players. The proposed methodology centers on a Virtual Power Player (VPP) that manages and aggregates the available demand response and distributed generation resources in order to satisfy the required electrical energy demand and reserve. The aggregation of resources is addressed by the use of clustering algorithms, and the operation costs for the VPP are minimized. The presented case study is based on a set of 32 consumers and 66 distributed generation units, running over 180 distinct operation scenarios.
Distributed and collaborative: Experiences of local leadership of a first-year experience program
Directory of Open Access Journals (Sweden)
Jo McKenzie
2017-07-01
Full Text Available Local-level leadership of the first year experience (FYE) is critical for engaging academic and professional staff in working collaboratively on a whole-of-institution focus on student transition and success. This paper describes ways in which local informal leadership is experienced at the faculty level in an institutional FYE program, based on interviews with faculty coordinators and small grant recipients. Initial analysis using the distributed leadership tenets described by Jones, Hadgraft, Harvey, Lefoe, and Ryland (2014) revealed features that enabled success, such as collaborative communities, as well as faculty differences influenced by the strength of the external mandate for change in the FYE. More fine-grained analysis indicated further themes in engaging others and in enabling and enacting the FYE program that fostered internal mandates for change: gaining buy-in; being opportunistic; making use of evidence of success and recognition; along with the need for collegial support for coordinators and self-perceptions of leadership as being about making connections, collaboration, trust and expertise.
Matsuhashi, Saeko; Doi, Hideyuki; Fujiwara, Ayaka; Watanabe, Sonoko; Minamoto, Toshifumi
2016-01-01
The environmental DNA (eDNA) method has increasingly been recognized as a powerful tool for monitoring aquatic animal species; however, its application to monitoring aquatic plants is limited. To evaluate eDNA analysis for estimating the distribution of aquatic plants, we compared distributions estimated by eDNA analysis with visual observations and past distribution records for the submerged species Hydrilla verticillata. Moreover, we conducted aquarium experiments using H. verticillata and Egeria densa and analyzed the relationships between eDNA concentration and plant biomass to investigate the potential for biomass estimation. The occurrences estimated by eDNA analysis closely corresponded to past distribution records, and eDNA detections were more frequent than visual observations, indicating that the method is potentially more sensitive. The results of the aquarium experiments showed a positive relationship between plant biomass and eDNA concentration; however, the relationship was not always significant. The eDNA concentration peaked within three days of the start of the experiment in most cases, suggesting that plants do not release constant amounts of DNA. These results show that eDNA analysis can be used for distribution surveys and has the potential to estimate the biomass of aquatic plants.
International Nuclear Information System (INIS)
Murata, Isao; Mori, Takamasa; Nakagawa, Masayuki; Shirai, Hiroshi.
1996-03-01
High Temperature Gas-cooled Reactors (HTGRs) employ spherical fuels named coated fuel particles (CFPs), each consisting of a microsphere of low-enriched UO2 with coating layers to prevent FP release. Many such spherical fuels are distributed randomly in the core. Therefore, the nuclear design of HTGRs is generally performed on the basis of the multigroup approximation using a diffusion code, an SN transport code or a group-wise Monte Carlo code. This report summarizes a Monte Carlo hard-sphere packing simulation code that simulates the packing of equal hard spheres and evaluates the probability distributions needed by the new Monte Carlo calculation method developed to treat randomly distributed spherical fuels with the continuous-energy Monte Carlo method. The code provides various statistical quantities, namely the radial distribution function (RDF), the nearest neighbor distribution (NND), the 2-dimensional RDF and so on, for random packing as well as for the ordered close packings FCC and BCC. (author)
Graphical programming interface: A development environment for MRI methods.
Zwart, Nicholas R; Pipe, James G
2015-11-01
To introduce a multiplatform, Python language-based, development environment called graphical programming interface for prototyping MRI techniques. The interface allows developers to interact with their scientific algorithm prototypes visually in an event-driven environment making tasks such as parameterization, algorithm testing, data manipulation, and visualization an integrated part of the work-flow. Algorithm developers extend the built-in functionality through simple code interfaces designed to facilitate rapid implementation. This article shows several examples of algorithms developed in graphical programming interface including the non-Cartesian MR reconstruction algorithms for PROPELLER and spiral as well as spin simulation and trajectory visualization of a FLORET example. The graphical programming interface framework is shown to be a versatile prototyping environment for developing numeric algorithms used in the latest MR techniques. © 2014 Wiley Periodicals, Inc.
Incentive mechanisms to promote energy efficiency programs in power distribution companies
International Nuclear Information System (INIS)
Osorio, Karim; Sauma, Enzo
2015-01-01
Power distribution companies (DISCOs) play an important role in promoting energy efficiency (hereafter EE), mainly because they have detailed information on their clients' consumption patterns. However, under the traditional regulatory framework, DISCOs have a disincentive to promote EE, because a reduction in sales also means a reduction in their revenues and profits. Most regulatory policies encouraging EE have embedded payment schemes that allow financing of EE programs. In this paper, we focus on these EE-program payment schemes embedded in the regulatory policies. Specifically, this paper studies two models of the Principal–Agent bi-level type in order to analyze the economic effects of implementing different payment schemes to foster EE in DISCOs. The main difference between the models is that uncertainty in energy savings is considered by the electricity regulatory institution in only one of them. In terms of results, it is observed that, in general, it is more convenient for the regulator to adopt a performance-based incentive mechanism than a payment scheme financing only the fixed costs of implementing EE programs. However, if the electricity regulatory institution seeks a higher level of minimum expected utility, it is optimal to adopt a mixed compensation system, which combines fixed-cost compensation with performance-based incentive payments. - Highlights: • We studied different payment schemes to promote energy efficiency in DISCOs. • We propose two bi-level models based on Principal–Agent theory. • Uncertainty associated with energy savings is incorporated in one of the models. • A performance-based payment scheme is generally more convenient for the regulator. • A mixed payment scheme is optimal when a lower level of uncertainty is tolerated
Energy Technology Data Exchange (ETDEWEB)
Ruegg, Rosalie [TIA Consulting, Inc., Emeral Isle, NC (United States); Jordan, Gretchen B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2007-03-01
This document provides guidance for evaluators who conduct impact assessments to determine the “realized” economic benefits and costs, energy, environmental benefits, and other impacts of the Office of Energy Efficiency and Renewable Energy’s (EERE) R&D programs. The focus of this Guide is on realized outcomes or impacts of R&D programs actually experienced by American citizens, industry, and others.
S-curve networks and an approximate method for estimating degree distributions of complex networks
International Nuclear Information System (INIS)
Guo Jin-Li
2010-01-01
In the study of complex networks, almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S curve (logistic curve) and forecasts the growing trend of IPv4 addresses in China, which provides reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, it proposes a finite network model with bulk growth, called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási–Albert method) is not suitable for this network. The paper develops an approximate method to predict the growth dynamics of individual nodes and uses it to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with simulation, obeying an approximately power-law form. This method overcomes a shortcoming of the Barabási–Albert method commonly used in current network research. (general)
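The logistic (S-curve) fit underlying such a forecast can be sketched as follows; fixing the carrying capacity K and linearizing is a textbook simplification, not necessarily the paper's estimation procedure:

```python
import math

def fit_logistic(t, y, K):
    """Least-squares fit of y = K / (1 + exp(-r (t - t0))) with carrying capacity K
    fixed, via the linearization ln(K/y - 1) = -r t + r t0."""
    zs = [math.log(K / v - 1.0) for v in y]
    n = len(t)
    mx = sum(t) / n
    mz = sum(zs) / n
    slope = sum((x - mx) * (z - mz) for x, z in zip(t, zs)) / sum((x - mx) ** 2 for x in t)
    r = -slope                 # growth rate
    t0 = (mz + r * mx) / r     # inflection point (intercept / r)
    return r, t0

# Synthetic data generated from a known logistic curve should be recovered exactly.
K, r_true, t0_true = 100.0, 0.8, 5.0
ts = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ys = [K / (1.0 + math.exp(-r_true * (t - t0_true))) for t in ts]
r_fit, t0_fit = fit_logistic(ts, ys, K)
```

With noisy address-count data, K itself would also have to be estimated, e.g. by a nonlinear least-squares routine.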
S-curve networks and an approximate method for estimating degree distributions of complex networks
Guo, Jin-Li
2010-12-01
In the study of complex networks, almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S curve (logistic curve) and forecasts the growing trend of IPv4 addresses in China, which provides reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, it proposes a finite network model with bulk growth, called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási–Albert method) is not suitable for this network. The paper develops an approximate method to predict the growth dynamics of individual nodes and uses it to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with simulation, obeying an approximately power-law form. This method overcomes a shortcoming of the Barabási–Albert method commonly used in current network research.
Extension of the Accurate Voltage-Sag Fault Location Method in Electrical Power Distribution Systems
Directory of Open Access Journals (Sweden)
Youssef Menchafou
2016-03-01
Full Text Available Accurate fault location in an electric power distribution system (EPDS) is important for maintaining system reliability. Several methods have been proposed in the past; however, they either prove inefficient or depend on the fault type (fault classification), because they require an appropriate algorithm for each fault type. In contrast to traditional approaches, an accurate impedance-based fault location (FL) method is presented in this paper. It is based on the voltage-sag calculation between two measurement points chosen carefully from the available strategic measurement points of the line, the network topology, and current measurements at the substation. The effectiveness and accuracy of the proposed technique are demonstrated for different fault types using a radial power flow system. The test results are obtained from numerical simulation using the data of a distribution line known from the literature.
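For contrast with the paper's two-point voltage-sag approach, the classical single-ended impedance-based estimate it improves upon can be sketched in a few lines (the line data and fault values below are made up for illustration):

```python
def fault_distance_km(v_sag, i_fault, z_line_per_km):
    """Textbook single-ended impedance-based fault distance: the apparent
    impedance seen from the measurement point, divided by the per-km line
    impedance. A generic sketch, not the paper's two-point algorithm."""
    z_apparent = v_sag / i_fault         # complex phasor division
    return (z_apparent / z_line_per_km).real

# A bolted fault 4 km down a line of 0.1 + 0.3j ohm/km carrying 1 kA:
z_km = 0.1 + 0.3j
d_true = 4.0
v = 1000.0 * z_km * d_true  # substation voltage drop equals I * Z_line * d
d_est = fault_distance_km(v, 1000.0, z_km)
```

Fault resistance biases this single-ended estimate, which is one motivation for multi-point voltage-sag methods like the one proposed.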
A New Method for Calculating Fractal Dimensions of Porous Media Based on Pore Size Distribution
Xia, Yuxuan; Cai, Jianchao; Wei, Wei; Hu, Xiangyun; Wang, Xin; Ge, Xinmin
Fractal theory has been widely used to describe petrophysical properties of porous rocks over several decades, and the determination of fractal dimensions is always the focus of research and applications of fractal-based methods. In this work, a new method for calculating the pore-space fractal dimension and the tortuosity fractal dimension of porous media is derived based on a fractal capillary model assumption. The presented work establishes a relationship between the fractal dimensions and the pore size distribution, which can be used directly to calculate the fractal dimensions. Published pore size distribution data for eight sandstone samples are used to calculate the fractal dimensions and are simultaneously compared with prediction results from the analytical expression. In addition, the proposed fractal dimension method is tested on micro-CT images of three sandstone cores and compared with fractal dimensions obtained by a box-counting algorithm. The test results also prove a self-similar fractal range in sandstone when smaller pores are excluded.
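The log-log regression by which a fractal dimension is read off a power-law pore size distribution N(>r) ~ r^(-D) can be sketched as follows (synthetic data, not the paper's sandstone samples or its specific formula):

```python
import math

def fractal_dimension_from_psd(radii, counts):
    """Estimate a fractal dimension D from cumulative pore counts N(>r) ~ r^(-D)
    by a least-squares slope in log-log coordinates."""
    xs = [math.log(r) for r in radii]
    ys = [math.log(n) for n in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return -slope  # D is minus the log-log slope

# Synthetic power-law data with D = 1.6 should be recovered exactly.
D_true = 1.6
rs = [1.0, 2.0, 4.0, 8.0, 16.0]
Ns = [r ** -D_true for r in rs]
D_est = fractal_dimension_from_psd(rs, Ns)
```

Real samples only follow the power law over a limited scale range, so the regression window must be restricted to the self-similar interval, consistent with the paper's observation about excluding smaller pores.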
An introduction to meshfree methods and their programming
Liu, GR
2005-01-01
A friendly and straightforward presentation oriented to beginners. Provides the fundamentals of numerical analysis that are particularly important to meshfree methods. Wide coverage of meshfree methods: EFG, RPIM, MLPG, LRPIM, MWS and collocation methods. Detailed comparative case studies for many existing meshfree methods. Well-tested computer source codes are attached, with useful descriptions and ready-to-run test examples. Soft copies of these source codes are also available at http://www.nus.edu.sg/ACES.
Monte-Carlo Method Python Library for Dose Distribution Calculation in Brachytherapy
Energy Technology Data Exchange (ETDEWEB)
Randriantsizafy, R D; Ramanandraibe, M J [Madagascar Institut National des Sciences et Techniques Nucleaires, Antananarivo (Madagascar); Raboanary, R [Institut of astro and High-Energy Physics Madagascar, University of Antananarivo, Antananarivo (Madagascar)
2007-07-01
Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. Treatment time calculation for a prescribed dose is made manually. A Monte-Carlo method Python library written at Madagascar INSTN is used experimentally to calculate the dose distribution on the tumour and around it. The first validation of the code was done by comparing the library curves with the Nucletron company curves. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch run on each PC. The library will be used to model the dose distribution in the patient's CT scan images for individualized and more accurate treatment time calculation for a prescribed dose.
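The INSTN library itself is not public in this record, but the flavour of a Monte-Carlo dose tally around a sealed source can be sketched in a few lines. A toy model, assuming an isotropic point source, a single exponential free-path sample per photon, and local energy deposition (all simplifications; real brachytherapy codes track scatter and energy):

```python
import math, random

def radial_dose_profile(n_photons, shell_edges, mu=0.1):
    """Toy Monte-Carlo tally: photons leave an isotropic point source,
    travel an exponentially distributed free path (attenuation
    coefficient mu, 1/cm), and deposit their energy locally; tallies
    are normalized by spherical-shell volume to give dose per volume."""
    random.seed(42)                           # reproducible toy run
    tallies = [0.0] * (len(shell_edges) - 1)
    for _ in range(n_photons):
        r = -math.log(1.0 - random.random()) / mu   # interaction radius
        for i in range(len(tallies)):
            if shell_edges[i] <= r < shell_edges[i + 1]:
                volume = 4.0 / 3.0 * math.pi * (shell_edges[i + 1] ** 3
                                                - shell_edges[i] ** 3)
                tallies[i] += 1.0 / volume
                break
    return tallies

dose = radial_dose_profile(100_000, [0.0, 1.0, 2.0, 3.0, 4.0])
print(all(dose[i] > dose[i + 1] for i in range(3)))  # → True
```

The monotone fall-off with distance is the qualitative behaviour a validation against published source curves would check first.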
A practical method for in-situ thickness determination using energy distribution of beta particles
International Nuclear Information System (INIS)
Yalcin, S.; Gurler, O.; Gundogdu, O.; Bradley, D.A.
2012-01-01
This paper discusses a method to determine the thickness of an absorber using the energy distribution of beta particles. An empirical relationship was obtained between the absorber thickness and the energy distribution of the beta particles transmitted through it. The thickness of a polyethylene radioactive source cover was determined by exploiting this relationship, which has largely been left unexploited, allowing the in-situ cover thickness of beta sources to be determined in a fast, cheap and non-destructive way. - Highlights: ► Practical, in-situ determination of an unknown cover thickness. ► Cheap and readily available compared with other techniques. ► Beta energy spectrum.
Ida, Midori; Hirata, Masakazu; Hosoda, Kiminori; Nakao, Kazuwa
2013-02-01
Two novel bioelectrical impedance analysis (BIA) methods have been developed recently for evaluation of intra-abdominal fat accumulation. Both methods use electrodes that are placed on the abdominal wall and allow easy evaluation of the intra-abdominal fat area (IAFA) without radiation exposure. Of these, the "abdominal BIA" method measures impedance distribution along the abdominal anterior-posterior axis, and IAFA by the BIA method (BIA-IAFA) is calculated from the waist circumference and the voltage occurring at the flank. The dual BIA method measures the impedance of the trunk and body surface at the abdominal level and calculates BIA-IAFA from the transverse and antero-posterior diameters of the abdomen and the impedance of the trunk and abdominal surface. BIA-IAFA by these two BIA methods correlated well with IAFA measured by abdominal CT (CT-IAFA), with a correlation coefficient of 0.88 (n = 91, p abdominal adiposity in clinical studies and routine clinical practice of metabolic syndrome and obesity.
Andragogical and Pedagogical Methods for Curriculum and Program Development
Wang, Victor C. X., Ed.; Bryan, Valerie C., Ed.
2014-01-01
Today's ever-changing learning environment is characterized by the fast pace of technology that drives our society to move forward, and causes our knowledge to increase at an exponential rate. The need for in-depth research that is bound to generate new knowledge about curriculum and program development is becoming ever more relevant.…
Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics
Abe, Sumiyoshi
2014-11-01
The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.
Concise method for evaluating the probability distribution of the marginal cost of power generation
International Nuclear Information System (INIS)
Zhang, S.H.; Li, Y.Z.
2000-01-01
In the developing electricity market, many questions on electricity pricing and the risk modelling of forward contracts require the evaluation of the expected value and probability distribution of the short-run marginal cost of power generation at any given time. A concise forecasting method is provided, which is consistent with the definitions of marginal costs and the techniques of probabilistic production costing. The method embodies clear physical concepts, so that it can be easily understood theoretically and computationally realised. A numerical example has been used to test the proposed method. (author)
Directory of Open Access Journals (Sweden)
Seyed Ahmad Yazdian
2011-01-01
Full Text Available In this paper, we present a multi-objective possibilistic programming model to locate distribution centers (DCs) and allocate customers' demands in a supply chain network design (SCND) problem. The SCND problem deals with determining the locations of facilities (DCs and/or plants) and the shipment quantities between every two consecutive tiers of the supply chain. The primary objective of this study is to consider, as an objective function, the different risk factors involved in both locating DCs and shipping products. The risk consists of various components: the risks related to each potential DC location, the risk associated with each arc connecting a plant to a DC, and the risk of shipment from a DC to a customer. The proposed method considers the risk phenomenon in fuzzy form to handle the uncertainties inherent in these factors. A possibilistic programming approach is proposed to solve the resulting multi-objective problem, and a numerical example for three levels of possibility is conducted to analyze the model.
On-line reconstruction of in-core power distribution by harmonics expansion method
International Nuclear Information System (INIS)
Wang Changhui; Wu Hongchun; Cao Liangzhi; Yang Ping
2011-01-01
Highlights: → A harmonics expansion method for the on-line in-core power reconstruction is proposed. → A harmonics data library is pre-generated off-line and a code named COMS is developed. → Numerical results show that the maximum relative error of the reconstruction is less than 5.5%. → This method has a high computational speed compared to traditional methods. - Abstract: Fixed in-core detectors are most suitable in real-time response to in-core power distributions in pressurized water reactors (PWRs). In this paper, a harmonics expansion method is used to reconstruct the in-core power distribution of a PWR on-line. In this method, the in-core power distribution is expanded by the harmonics of one reference case. The expansion coefficients are calculated using signals provided by fixed in-core detectors. To conserve computing time and improve reconstruction precision, a harmonics data library containing the harmonics of different reference cases is constructed. Upon reconstruction of the in-core power distribution on-line, the two closest reference cases are searched from the harmonics data library to produce expanded harmonics by interpolation. The Unit 1 reactor of DayaBay Nuclear Power Plant (DayaBay NPP) in China is considered for verification. The maximum relative error between the measurement and reconstruction results is less than 5.5%, and the computing time is about 0.53 s for a single reconstruction, indicating that this method is suitable for the on-line monitoring of PWRs.
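The reconstruction step described above — expanding the in-core power shape in pre-computed harmonics and fitting the expansion coefficients to fixed-detector signals — can be illustrated with a schematic 1-D stand-in. The slab width, sine modes, detector positions, and coefficients below are illustrative assumptions, not the COMS code or Daya Bay data:

```python
import numpy as np

# Schematic 1-D stand-in for the harmonics expansion: the power shape
# is expanded in the first few modes (here sine harmonics of a slab),
# and the expansion coefficients are fitted to sparse detector signals.
L = 100.0                                   # slab width, cm (assumed)
x = np.linspace(0, L, 201)
harmonics = [np.sin((n + 1) * np.pi * x / L) for n in range(3)]

true_coeffs = np.array([1.0, 0.25, 0.05])
power = sum(c * h for c, h in zip(true_coeffs, harmonics))

detectors = [20, 60, 120, 180]              # indices of fixed in-core detectors
A = np.array([[h[i] for h in harmonics] for i in detectors])
signals = power[detectors]

# Least-squares fit of the expansion coefficients to the detector signals
coeffs, *_ = np.linalg.lstsq(A, signals, rcond=None)
reconstructed = sum(c * h for c, h in zip(coeffs, harmonics))
print(np.max(np.abs(reconstructed - power)) < 1e-8)  # → True
```

With noise-free signals and more detectors than retained modes, the fit recovers the shape exactly; the paper's 5.5% error bound reflects real detector noise and the interpolation between reference cases.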
Maadooliat, Mehdi; Gao, Xin; Huang, Jianhua Z.
2012-01-01
Despite considerable progress in the past decades, protein structure prediction remains one of the major unsolved problems in computational biology. Angular-sampling-based methods have been extensively studied recently due to their ability to capture the continuous conformational space of protein structures. The literature has focused on using a variety of parametric models of the sequential dependencies between angle pairs along the protein chains. In this article, we present a thorough review of angular-sampling-based methods by assessing three main questions: What is the best distribution type to model the protein angles? What is a reasonable number of components in a mixture model that should be considered to accurately parameterize the joint distribution of the angles? and What is the order of the local sequence-structure dependency that should be considered by a prediction method? We assess the model fits for different methods using bivariate lag-distributions of the dihedral/planar angles. Moreover, the main information across the lags can be extracted using a technique called Lag singular value decomposition (LagSVD), which considers the joint distribution of the dihedral/planar angles over different lags using a nonparametric approach and monitors the behavior of the lag-distribution of the angles using singular value decomposition. As a result, we developed graphical tools and numerical measurements to compare and evaluate the performance of different model fits. Furthermore, we developed a web-tool (http://www.stat.tamu.edu/~madoliat/LagSVD) that can be used to produce informative animations. © The Author 2012. Published by Oxford University Press.
Directory of Open Access Journals (Sweden)
Yerriswamy Wooluru
2016-06-01
Full Text Available Process capability indices are very important process quality assessment tools in the automotive industry. The common process capability indices (PCIs) Cp, Cpk and Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled. Indices developed under the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various surrogate process capability indices have been proposed for non-normality, but few literature sources offer a comprehensive evaluation and comparison of their ability to capture true capability in non-normal situations. In this paper, five methods are reviewed and a capability evaluation is carried out for data pertaining to the resistivity of silicon wafers. The final results revealed that the Burr-based percentile method is better than the Clements method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness <= 1.5).
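The percentile idea behind the Clements/Burr family of indices is simple to sketch: replace the normal-theory μ ± 3σ limits with the 0.135% and 99.865% percentiles of the observed distribution. A minimal illustration on synthetic skewed data (the specification limits and lognormal sample are assumptions for the demo, not the wafer-resistivity data of the paper):

```python
import numpy as np

def percentile_cpk(data, lsl, usl):
    """Percentile-based capability index in the spirit of the
    Clements/Burr approach: the 0.135 % and 99.865 % percentiles of the
    (possibly skewed) data stand in for the normal-theory mu ± 3 sigma."""
    p_lo, p_med, p_hi = np.percentile(data, [0.135, 50, 99.865])
    cpu = (usl - p_med) / (p_hi - p_med)   # upper one-sided capability
    cpl = (p_med - lsl) / (p_med - p_lo)   # lower one-sided capability
    return min(cpu, cpl)

rng = np.random.default_rng(0)
skewed = rng.lognormal(mean=0.0, sigma=0.3, size=20_000)
print(round(percentile_cpk(skewed, lsl=0.2, usl=3.5), 2))
```

Applying the conventional normal-theory Cpk to the same skewed sample would misstate the nonconformance risk, which is exactly the failure mode the paper compares across the five methods.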
Methods to Regulate Unbundled Transmission and Distribution Business on Electricity Markets
International Nuclear Information System (INIS)
Forsberg, Kaj; Fritz, Peter
2003-11-01
The regulation of distribution utilities is evolving from the traditional approach based on a cost of service or rate of return remuneration, to ways of regulation more specifically focused on providing incentives for improving efficiency, known as performance-based regulation or ratemaking. Modern regulation systems are also, to a higher degree than previously, intended to simulate competitive market conditions. The Market Design 2003 conference gathered people from 18 countries to discuss 'Methods to regulate unbundled transmission and distribution business on electricity markets'. Speakers from nine different countries and backgrounds (academic, industry and regulatory) presented their experiences and most recent work on how to make the regulation of unbundled distribution business as accurate as possible. This paper does not claim to be a fully representative summary of everything that was presented or discussed during the conference. Rather, it is a purposely restricted document in which we focus on a few central themes and experiences from different countries.
A method for atomic-level noncontact thermometry with electron energy distribution
Kinoshita, Ikuo; Tsukada, Chiharu; Ouchi, Kohei; Kobayashi, Eiichi; Ishii, Juntaro
2017-04-01
We devised a new method of determining the temperature of a material from its electron-energy distribution. The Fermi-Dirac distribution convoluted with a linear combination of Gaussian and Lorentzian distributions was fitted to the photoelectron spectrum measured for the Au(110) single-crystal surface at a liquid-N2-cooled temperature. The fit simultaneously determined the surface-local thermodynamic temperature and the energy resolution from the photoelectron spectrum, without any preliminary results from other measurements. The determined thermodynamic temperature was 99 ± 2.1 K, in good agreement with the reference temperature of 98.5 ± 0.5 K measured using a silicon diode sensor attached to the sample holder.
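The core of the method — the temperature is encoded in the width of the Fermi edge — can be sketched with a deliberately simplified fit. The sketch below omits the Gaussian/Lorentzian broadening that the paper fits simultaneously, and fits T alone by grid search on synthetic data; the energy grid and temperature are assumed values:

```python
import numpy as np

KB = 8.617e-5                      # Boltzmann constant, eV/K

def fermi_dirac(e, ef, t):
    """Occupation probability at energy e (eV) for Fermi level ef and
    temperature t (K)."""
    return 1.0 / (1.0 + np.exp((e - ef) / (KB * t)))

# Synthetic "photoelectron edge" generated at a known temperature
energies = np.linspace(-0.1, 0.1, 400)     # eV around the Fermi level
measured = fermi_dirac(energies, ef=0.0, t=99.0)

# Simplified fit: least-squares grid search on T alone
grid = np.arange(50.0, 150.0, 0.5)
errors = [np.sum((fermi_dirac(energies, 0.0, t) - measured) ** 2)
          for t in grid]
best_t = grid[int(np.argmin(errors))]
print(best_t)  # → 99.0
```

In the real measurement the instrumental resolution broadens the edge in a way that mimics higher temperature, which is why the paper's joint fit of temperature and resolution is the essential step.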
A New Method for the 2D DOA Estimation of Coherently Distributed Sources
Directory of Open Access Journals (Sweden)
Liang Zhou
2014-03-01
Full Text Available The purpose of this paper is to develop a new technique for estimating the two-dimensional (2D) directions of arrival (DOAs) of coherently distributed (CD) sources, which can effectively estimate the central azimuth and central elevation of CD sources at a lower computational cost. Using a special L-shaped array, a new approach for parametric estimation of CD sources is proposed. The proposed method is based on two rotational invariance relations under the small angular approximation, and estimates the two rotational matrices that depict these relations using the propagator technique. The central DOA estimates are then obtained from the main diagonal elements of the two rotational matrices. Simulation results indicate that the proposed method exhibits good performance under small angular spread and can be applied to multisource scenarios in which different sources may have different angular distribution shapes. Without any peak-finding search or eigendecomposition of the high-dimensional sample covariance matrix, the proposed method significantly reduces the computational cost compared with existing methods, and is thus beneficial to real-time processing and engineering realization. In addition, our approach is a robust estimator that does not depend on the angular distribution shape of the CD sources.
Directory of Open Access Journals (Sweden)
Chih-Hsueh Lin
2016-04-01
Full Text Available In wireless sensor networks, sensing information must be transmitted from sensor nodes to the base station by multiple hops. Every sensor node is both a sender and a relay that forwards sensing information sent by other nodes. Under attack, the sensing information may be intercepted, modified, interrupted, or fabricated during transmission. Accordingly, developing mutual trust so that a secure path can be established for forwarding information is an important issue. Random key pre-distribution has been proposed to establish mutual trust among sensor nodes. This article modifies random key pre-distribution to random secret pre-distribution and incorporates identity-based cryptography to establish an effective method of building mutual trust in a wireless sensor network. In the proposed method, the base station assigns an identity and embeds n secrets into the private secret keys of every sensor node. Based on the identity and private secret keys, the mutual trust method is used to explore the types of trust among neighboring sensor nodes. The novel method can resist malicious attacks and satisfy the requirements of wireless sensor networks: resistance to compromise attacks, masquerading attacks, forgery attacks and replay attacks, authentication of forwarded messages, and security of sensing information.
International Nuclear Information System (INIS)
Song, Xizi; Xu, Yanbin; Dong, Feng
2017-01-01
Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been increasing research interest in hybrid imaging techniques that couple physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, combining the electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, the power density distribution, is adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, by relating the change in power density to the change in conductivity, the Jacobian matrix is employed to turn the nonlinear problem into a linear one. The analytic formulation of this Jacobian matrix is derived and its effectiveness verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Multiple power density distributions are also combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results. (paper)
A simulation training evaluation method for distribution network fault based on radar chart
Directory of Open Access Journals (Sweden)
Yuhang Xu
2018-01-01
Full Text Available In order to solve the problem of automatically evaluating dispatcher fault simulation training in distribution networks, a simulation training evaluation method based on a radar chart is proposed for distribution network faults. A fault handling information matrix is established to record the dispatcher's fault handling operation sequence and operation information. Four situations of the dispatcher's fault isolation operation are analyzed. A fault handling anti-misoperation rule set is established to describe the rules prohibiting dispatcher operations. Based on the idea of artificial intelligence reasoning, the feasibility of dispatcher fault handling is described by a feasibility index. The relevant factors and evaluation methods are discussed from three aspects: the feasibility of the fault handling result, the correctness of anti-misoperation, and the conciseness of the operation process; detailed calculation formulas are given. Combining the independence and correlation of the three evaluation angles, a comprehensive evaluation method for distribution network fault simulation training based on a radar chart is proposed. The method can comprehensively reflect the fault handling process of dispatchers and evaluate it from various angles, which has good practical value.
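One common way to fuse several per-aspect scores through a radar chart is to take the area of the polygon they span; a larger area means a better combined evaluation. A small sketch under that assumption (the paper's exact aggregation formula is not given in the abstract, and the three scores below are illustrative):

```python
import math

def radar_area(scores):
    """Comprehensive evaluation as the area of the radar-chart polygon
    spanned by per-aspect scores placed at equal angular spacing:
    each adjacent pair contributes a triangle 0.5*a*b*sin(2π/n)."""
    n = len(scores)
    ang = 2 * math.pi / n
    return sum(0.5 * scores[i] * scores[(i + 1) % n] * math.sin(ang)
               for i in range(n))

# Three aspects: result feasibility, anti-misoperation correctness,
# operation-process conciseness (illustrative values on [0, 1])
print(round(radar_area([0.9, 0.8, 0.7]), 3))  # → 0.827
```

The area metric rewards balanced performance: because adjacent scores multiply, one very poor aspect shrinks two triangles at once.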
Directory of Open Access Journals (Sweden)
Massoud Tabesh
2011-07-01
Full Text Available Optimum operation of water distribution networks is one of the priorities of sustainable development of water resources, considering the issues of increasing efficiency and decreasing water losses. One of the key subjects in the optimal operational management of water distribution systems is preparing rehabilitation and replacement schemes, predicting pipe break rates, and evaluating their reliability. Several approaches have been presented in recent years for predicting pipe failure rates, each of which requires a particular data set. Deterministic models based on age, deterministic multivariate models, and stochastic group modeling are examples of solutions that relate pipe break rates to parameters such as age, material, and diameter. In this paper, besides the mentioned parameters, additional factors such as pipe depth and hydraulic pressure are considered as well. Then, using the multivariate regression method, intelligent approaches (artificial neural network and neuro-fuzzy models), and the evolutionary polynomial regression (EPR) method, pipe burst rates are predicted. To evaluate the results of the different approaches, a case study is carried out on part of the Mashhad water distribution network. The results show the capability and advantages of the ANN and EPR methods for predicting pipe break rates, in comparison with the neuro-fuzzy and multivariate regression methods.
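The simplest baseline in the comparison above, multivariate regression of break rate on pipe attributes, is easy to sketch. The data below are synthetic (not the Mashhad network), and the coefficient values are arbitrary assumptions chosen only to show the fit recovering them:

```python
import numpy as np

# Synthetic illustration: break rate modelled as a linear function of
# age, diameter, burial depth and hydraulic pressure.
rng = np.random.default_rng(1)
n = 300
age = rng.uniform(5, 50, n)          # years
diameter = rng.uniform(80, 400, n)   # mm
depth = rng.uniform(0.8, 2.5, n)     # m
pressure = rng.uniform(20, 60, n)    # m head

true_w = np.array([0.04, -0.002, -0.1, 0.01])
X = np.column_stack([age, diameter, depth, pressure])
breaks = X @ true_w + 0.5 + rng.normal(0, 0.05, n)   # breaks/km/year

# Multivariate regression: ordinary least squares with an intercept
A = np.column_stack([X, np.ones(n)])
w, *_ = np.linalg.lstsq(A, breaks, rcond=None)
print(np.round(w, 3))
```

The ANN and EPR methods of the paper outperform this baseline precisely because real break-rate data are not linear in the attributes; the sketch only fixes the notation of the comparison.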
Bufalo, Gennaro; Ambrosone, Luigi
2016-01-14
A method for studying the kinetics of thermal degradation of complex compounds is suggested. Although the method is applicable to any matrix whose grain size can be measured, herein we focus our investigation on thermogravimetric analysis, under a nitrogen atmosphere, of ground soft wheat and ground maize. The thermogravimetric curves reveal two well-separated jumps of mass loss, corresponding to volatilization, in the temperature range 298-433 K, and decomposition, from 450 to 1073 K. Thermal degradation is schematized as a solid-state reaction whose kinetics is analyzed separately in each of the two regions. By means of sieving analysis, different size fractions of the material are separated and studied. A quasi-Newton fitting algorithm is used to obtain the grain size distribution as a best fit to the experimental data. The individual fractions are analyzed thermogravimetrically to derive the functional relationship between the activation energy of the degradation reactions and the particle size. This functional relationship turns out to be crucial for evaluating the moments of the activation energy distribution, which is unknown, in terms of the distribution calculated by sieve analysis. From the knowledge of the moments one can reconstruct the reaction conversion. The method is applied first to the volatilization region, then to the decomposition region. Comparison with the experimental data reveals that the method reproduces the experimental conversion with an accuracy of 5-10% in the volatilization region and of 3-5% in the decomposition region.
International Nuclear Information System (INIS)
Surducan, V.; Surducan, E.; Dadarlat, D.
2013-01-01
Microwave-induced heating is widely used in medical treatments and in scientific and industrial applications. The temperature field inside a microwave-heated sample is often inhomogeneous, so multiple temperature sensors are required for an accurate result. Nowadays, non-contact methods (infrared thermography or microwave radiometry) or direct-contact temperature measurement methods (expensive and sophisticated fiber-optic temperature sensors transparent to microwave radiation) are mainly used. IR thermography gives only the surface temperature and cannot be used for measuring temperature distributions in cross sections of a sample. In this paper we present a very simple experimental method for highlighting the temperature distribution inside a cross section of a liquid sample heated by microwave radiation through a coaxial applicator. The proposed method is able to offer qualitative information about the heating distribution, using a temperature-sensitive liquid crystal sheet. Inhomogeneities as small as 1-2 °C produced by the symmetry irregularities of the microwave applicator can be easily detected by visual inspection or by computer-assisted color-to-temperature conversion. The microwave applicator is therefore tuned and verified with the described method until the temperature inhomogeneities are resolved
A "total parameter estimation" method in the verification of distributed hydrological models
Wang, M.; Qin, D.; Wang, H.
2011-12-01
Conventionally, hydrological models are used for runoff or flood forecasting, so model parameters are commonly estimated from discharge measurements at catchment outlets. With advances in hydrological science and computer technology, distributed hydrological models based on physical mechanisms, such as SWAT, MIKESHE, and WEP, have gradually become the mainstream models in the hydrological sciences. However, the assessment of distributed hydrological models and the determination of model parameters still rely on runoff and, occasionally, groundwater level measurements. It is essential in many countries, including China, to understand the local and regional water cycle: not only do we need to simulate the runoff generation process for flood forecasting in wet areas, we also need to grasp the water cycle pathways and the consumption and transformation processes in arid and semi-arid regions for conservation and integrated water resources management. As a distributed hydrological model can simulate the physical processes within a catchment, it can give a more realistic representation of the actual water cycle. Runoff is the combined result of various hydrological processes, so using runoff alone for parameter estimation is inherently problematic and makes accuracy difficult to assess. In particular, in arid areas such as the Haihe River Basin in China, runoff accounts for only 17% of rainfall and is concentrated in the rainy season from June to August each year. During other months, many of the perennial rivers within the basin dry up. Thus, a single runoff simulation does not fully exploit a distributed hydrological model in arid and semi-arid regions. This paper proposes a "total parameter estimation" method to verify distributed hydrological models across various water cycle processes, including runoff, evapotranspiration, groundwater, and soil water, and applies it to the Haihe river basin in
An intergenerational program for persons with dementia using Montessori methods.
Camp, C J; Judge, K S; Bye, C A; Fox, K M; Bowden, J; Bell, M; Valencic, K; Mattern, J M
1997-10-01
An intergenerational program bringing together older adults with dementia and preschool children in one-on-one interactions is described. Montessori activities, which have strong ties to physical and occupational therapy, as well as to theories of developmental and cognitive psychology, are used as the context for these interactions. Our experience indicates that older adults with dementia can still serve as effective mentors and teachers to children in an appropriately structured setting.
International Nuclear Information System (INIS)
Hwang, I.-M.; Lin, S.-Y.; Lee, M.-S.; Wang, C.-J.; Chuang, K.-S.; Ding, H.-J.
2002-01-01
Purpose: To smooth the staggered dose distribution that occurs in stepped leaves defined by a multi-leaf collimator (MLC). Materials and methods: The MLC Shaper program controlled the stepped leaves, which were shifted in a traveling range; the pattern of shift was from the out-bound to the in-bound position with one-segment (cross-bound), three-segment, and five-segment shifts. Film was placed at a depth of 1.5 cm and irradiated with the same dose used for the cerrobend block experiment. Four field edges, with the MLC defined at 15 deg., 30 deg., 45 deg., and 60 deg. angles relative to the jaw edge, were examined in this study. For the field edge defined by the multi-segment technique, the amplitude of the isodose lines was measured for the 50% isodose line and for both the 80% and 20% isodose lines. The effective penumbra widths, as 90-10% and 80-20% distances, were determined for the different irradiations at the four field edges. Results: Use of the five-segment technique for multi-leaf collimation at the 60 deg. field edge smooths each isodose line into an effectively straight line, similar to the pattern achieved using a cerrobend block. The separation of these lines is also important. The 80-20% effective penumbra width with the five-segment technique (8.23 mm) at a 60 deg. angle relative to the jaw edge is somewhat wider (1.9 times) than the penumbra of the cerrobend block field edge (4.23 mm). We also found that the 90-10% effective penumbra width with the five-segment technique (12.68 mm) at a 60 deg. angle relative to the jaw edge is somewhat wider (1.28 times) than the penumbra of the cerrobend block field edge (9.89 mm). Conclusion: The multi-segment technique is effective in smoothing the MLC staggered field edge. The effective penumbra width with more segments at larger angles relative to the field edge is somewhat wider than the penumbra for a
Directory of Open Access Journals (Sweden)
Zhenxiang Jiang
2016-01-01
Full Text Available Traditional methods of diagnosing dam service status are suited to a single measuring point. They reflect the local status of a dam without effectively merging multisource data, and so are not suitable for diagnosing overall service. This study proposes a new multi-point method for diagnosing dam service status based on a joint distribution function. The function, covering monitoring data from multiple points, can be established with the t-copula function. The possibility, an important fused value over the different measuring combinations, can then be calculated, and the corresponding diagnostic criterion is established with classical small-probability theory. An engineering case study indicates that the fusion diagnosis method can be run in real time and detects abnormal points, thereby providing a new early-warning method for engineering safety.
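The paper fits a t-copula; a much simpler rank-based stand-in conveys the idea of a joint small-probability criterion over two measuring points. Everything below (the synthetic series, the 95% threshold) is an illustrative assumption, not the study's model:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic, correlated displacement series for two monitoring points
z = rng.standard_normal((2000, 2))
x1 = z[:, 0]
x2 = 0.8 * z[:, 0] + 0.6 * z[:, 1]

def pseudo_obs(v):
    """Empirical-CDF ranks in (0, 1) -- the usual input to a copula fit."""
    return (np.argsort(np.argsort(v)) + 1) / (len(v) + 1)

u1, u2 = pseudo_obs(x1), pseudo_obs(x2)

# joint exceedance probability: both points beyond their own 95% level.
# A small value here is the kind of event a small-probability criterion flags.
p_joint = np.mean((u1 > 0.95) & (u2 > 0.95))
print(f"empirical joint exceedance probability: {p_joint:.4f}")
```

Because the two series are correlated, the joint tail probability is well above the 0.05 x 0.05 = 0.0025 independent-case value, which is exactly the dependence a copula model captures.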
Calculations of Neutron Flux Distributions by Means of Integral Transport Methods
Energy Technology Data Exchange (ETDEWEB)
Carlvik, I
1967-05-15
Flux distributions have been calculated, mainly in one energy group, for a number of systems representing geometries of interest for reactor calculations. Integral transport methods of two kinds were utilised: collision probabilities (CP) and the discrete method (DIT). The geometries considered comprise the three one-dimensional geometries, planar, spherical and annular, and further a square cell with a circular fuel rod and a rod-cluster cell with a circular outer boundary. For the annular cells both methods (CP and DIT) were used and the results were compared. The purpose of the work is twofold: firstly to demonstrate the versatility and efficacy of integral transport methods, and secondly to serve as a guide for anybody who wants to use them.
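For the planar case, the building block of a collision-probability calculation is the first-flight collision probability, which for a homogeneous slab with a flat isotropic source is expressible through the exponential integral E3. A small sketch of that standard slab-geometry result (the report's multi-region CP matrices are, of course, more involved):

```python
from scipy.special import expn

def slab_collision_probability(tau):
    """First-flight collision probability for a homogeneous slab of optical
    thickness tau with a flat isotropic source: P = 1 - (1 - 2*E3(tau))/(2*tau)."""
    if tau < 1e-8:
        return 0.0                      # optically thin limit: everything escapes
    p_escape = (1.0 - 2.0 * expn(3, tau)) / (2.0 * tau)
    return 1.0 - p_escape

for tau in (0.1, 1.0, 10.0):
    print(f"tau = {tau:5.1f}  P_coll = {slab_collision_probability(tau):.4f}")
```

The limits behave as expected: P_coll tends to 0 for a thin slab and to 1 for an optically thick one.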
International Nuclear Information System (INIS)
Zhao Xuefeng; Wang Chuanke; Hu Feng; Kuang Longyu; Wang Zhebin; Li Sanwei; Liu Shengye; Jiang Gang
2011-01-01
The spatial distribution of backscattered light is very important for understanding how backscatter is produced. The experimental method for measuring the spatial distribution of full-aperture backscattered light is based on a circular PIN array composed of concentric rings of PIN detectors. An image of the spatial distribution of full-aperture SBS backscatter was obtained with this method in laser-hohlraum target interaction experiments at 'Shenguang II'. A preliminary method to measure the spatial distribution of full-aperture backscattered light is thus established. (authors)
2013-07-26
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health National Toxicology Program... Alternative Methods (ICCVAM), the National Toxicology Program (NTP) Interagency Center for the Evaluation of... Director, National Toxicology Program. [FR Doc. 2013-17919 Filed 7-25-13; 8:45 am] BILLING CODE 4140-01-P ...
Integrating Program Assessment and a Career Focus into a Research Methods Course
Senter, Mary Scheuer
2017-01-01
Sociology research methods students in 2013 and 2016 implemented a series of "real world" data gathering activities that enhanced their learning while assisting the department with ongoing program assessment and program review. In addition to the explicit collection of program assessment data on both students' development of sociological…
Lamdjaya, T.; Jobiliong, E.
2017-01-01
PT Anugrah Citra Boga is a food processing company that produces meatballs as its main product. The distribution system for the products must be considered, because it needs to be more efficient in order to reduce shipment cost. The purpose of this research is to optimize distribution time by simulating the distribution channels with the capacitated vehicle routing problem method. Firstly, the distribution route is observed in order to calculate the average speed, time capacity and shipping costs. The model is then built using AIMMS software; the inputs required to simulate it are customer locations, distances, and process times. Finally, the total distribution cost obtained by the simulation is compared with the historical data. The study concludes that the company can reduce its shipping cost by around 4.1%, or Rp 529,800 per month. With this model, vehicle utilization also becomes more balanced: the rate for the first vehicle falls from 104.6% to 88.6%, while that of the second vehicle increases from 59.8% to 74.1%. The simulation model is able to produce the optimal shipping route subject to time restrictions, vehicle capacity, and the number of vehicles.
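The study builds its CVRP model in AIMMS; the core feasibility logic (routes must respect vehicle capacity) can be sketched with a simple nearest-neighbour construction heuristic. Coordinates, demands, and the capacity below are invented for illustration:

```python
import math

depot = (0.0, 0.0)
customers = {1: (2, 3), 2: (5, 1), 3: (6, 6), 4: (1, 7), 5: (8, 3)}  # assumed
demand = {1: 4, 2: 3, 3: 5, 4: 2, 5: 6}                              # assumed
capacity = 10                                                        # assumed

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

routes, unserved = [], set(customers)
while unserved:
    load, pos, route = 0, depot, []
    while True:
        # only customers that still fit in the remaining vehicle capacity
        feasible = [c for c in unserved if load + demand[c] <= capacity]
        if not feasible:
            break
        nxt = min(feasible, key=lambda c: dist(pos, customers[c]))  # nearest next stop
        route.append(nxt)
        load += demand[nxt]
        pos = customers[nxt]
        unserved.discard(nxt)
    routes.append(route)   # vehicle returns to depot, next vehicle starts

print(routes)
```

An exact solver (as in the AIMMS model) would minimize total route cost over all capacity-feasible assignments; the heuristic above only guarantees feasibility.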
International Nuclear Information System (INIS)
Soussaline, F.; Bidaut, L.; Raynaud, C.; Le Coq, G.
1983-06-01
An analytical solution to the SPECT reconstruction problem, in which the actual attenuation effect can be included, was developed using a regularizing iterative method (RIM). The potential of this approach for quantitative brain studies with a tracer for cerebrovascular disorders is now under evaluation. Mathematical simulations of a distributed activity in the brain surrounded by the skull, and physical phantom studies, were performed using a rotating-camera-based SPECT system, allowing calibration of the system and evaluation of the adapted method. In the simulation studies, the contrast obtained along a profile was less than 5%, the standard deviation 8%, and the quantitative accuracy 13%, for a uniform emission distribution of mean 100 per pixel and two attenuation coefficients of μ = 0.115 cm⁻¹ and 0.5 cm⁻¹. Clinical data obtained after injection of ¹²³I (AMPI) were reconstructed using the RIM, with and without cerebrovascular diseases or lesion defects. Contour-finding techniques were used for delineation of the brain and the skull, and measured attenuation coefficients were assumed within these two regions. Using volumes of interest selected on homogeneous regions of a hemisphere and mirrored symmetrically, the statistical uncertainty for 300 K events in the tomogram was found to be 12%, and the index of symmetry was 4% for a normal distribution. These results suggest that quantitative SPECT reconstruction of brain distributions is feasible, and that, combined with an adapted tracer and an adequate model, pathophysiological parameters could be extracted.
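The abstract does not spell out the RIM iteration, so as an illustration only, a generic regularized iterative scheme (Tikhonov-damped Landweber iteration) on a toy linear system conveys the structure of such methods; the actual SPECT system matrix and attenuation model are far more involved:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((30, 10))          # toy "projection" matrix (assumption)
x_true = rng.random(10)           # toy activity distribution
b = A @ x_true                    # noiseless toy projections

lam = 0.01                                  # regularization weight (assumed)
alpha = 1.0 / np.linalg.norm(A, 2) ** 2     # step size small enough to converge
x = np.zeros(10)
for _ in range(500):
    # gradient step on ||Ax - b||^2 + lam*||x||^2
    x = x + alpha * (A.T @ (b - A @ x) - lam * x)

print(f"residual norm: {np.linalg.norm(A @ x - b):.4f}")
```

The regularization term stabilizes the iteration against noise at the cost of a small bias, which is the usual trade-off in regularizing iterative reconstruction.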
Leontief Input-Output Method for The Fresh Milk Distribution Linkage Analysis
Directory of Open Access Journals (Sweden)
Riski Nur Istiqomah
2016-11-01
Full Text Available This research discusses linkage analysis and identifies the key sector in fresh milk distribution using the Leontief Input-Output method, one of the applications of mathematics in economics. The current fresh milk distribution chain runs dairy farmers → collectors → fresh milk processing industries → processed milk distributors → consumers; the collectors' activity and the fresh milk processing industry are then merged. The data used are primary and secondary data collected in June 2016 in Kecamatan Jabung, Kabupaten Malang, and are analysed using the Leontief Input-Output matrix and the Python (PYIO 2.1) software. The result is that the merged collector and fresh milk processing activity shows high indices of both forward and backward linkages, indicating that this merged sector is the key sector, with an important role in developing the whole set of activities in fresh milk distribution.
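The linkage computation itself is short: invert (I - A) to get the Leontief inverse, then take column sums (backward linkages) and row sums (forward linkages). The 3x3 technical coefficient matrix below is invented for illustration; the study's actual sectors and coefficients come from its survey data:

```python
import numpy as np

# hypothetical technical coefficient matrix A (rows/cols: three sectors)
A = np.array([[0.10, 0.20, 0.05],
              [0.30, 0.15, 0.25],
              [0.05, 0.30, 0.10]])

L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse (I - A)^-1

backward = L.sum(axis=0)   # column sums: backward linkages
forward  = L.sum(axis=1)   # row sums: forward linkages

# normalized indices; a sector with both indices > 1 is a "key sector"
n = 3
bwd_index = n * backward / L.sum()
fwd_index = n * forward / L.sum()
print("backward indices:", bwd_index)
print("forward indices: ", fwd_index)
```

The inverse exists because each column of A sums to less than one, i.e. the economy is productive.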
Directory of Open Access Journals (Sweden)
Олександр Павлович Кіркін
2017-06-01
Full Text Available The development of information technologies, and market demand for effective control of cargo flows, force enterprises to look for new ways and methods of automated control of technological operations. For rail transport, one of the most complicated automation tasks is the distribution of cargo flows over the loading and unloading sites. This article proposes a solution using one of the methods of artificial intelligence, fuzzy inference. Analysis of recent publications showed that the fuzzy inference method is effective for such tasks: it makes it possible to accumulate experience and is robust to temporary environmental impacts. The existing methods of distributing cargo flows over loading and unloading sites are too simplified and can lead to incorrect decisions. The purpose of the article is to create a model for distributing an enterprise's cargo flows over the loading and unloading sites based on the fuzzy inference method, and to automate the control. To achieve this, a mathematical model of the cargo flow distribution over the loading and unloading sites was built using fuzzy logic. The key input parameters of the model are «number of loading sites», «arrival of the next set of cars», and «availability of additional operations»; the output parameter is «variety of set of cars». Application of the fuzzy inference method made it possible to reduce loading time by 15% and to reduce the costs of preparatory operations before loading by 20%, so the method is an effective means of increasing railway competitiveness. Interaction between different types of transport, and its influence on the distribution of cargo flows over the loading and unloading sites, has not been considered; these sites may be busy with transshipment at the same time, which is characteristic of large enterprises.
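A single fuzzy inference step of the kind described (fuzzify inputs, fire rules with min-AND, defuzzify by weighted average) can be sketched compactly. The membership parameters and rules below are invented; only the input names mirror the article:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer(sites, arrival_min):
    """Toy fuzzy controller: inputs are 'number of loading sites' and
    'arrival of the next set of cars' (minutes); output is a set-of-cars size."""
    few_sites  = tri(sites, 0, 1, 3)          # assumed membership parameters
    many_sites = tri(sites, 2, 4, 6)
    soon       = tri(arrival_min, 0, 10, 30)
    late       = tri(arrival_min, 20, 60, 120)

    # rule strength = min of antecedent memberships; output levels are assumed
    rules = [
        (min(few_sites, soon), 1.0),   # -> small set of cars
        (min(many_sites, soon), 2.0),  # -> medium set of cars
        (min(many_sites, late), 3.0),  # -> large set of cars
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0   # weighted-average defuzzification

print(infer(3, 15))
```

A production system would tune the membership functions and rule base against operating data, which is where the method's ability to accumulate experience comes in.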
Energy Technology Data Exchange (ETDEWEB)
Senvar, O.; Sennaroglu, B.
2016-07-01
This study examines the Clements Approach (CA), Box-Cox transformation (BCT), and Johnson transformation (JT) methods for process capability assessment with Weibull-distributed data of different parameters, to determine the effect of tail behaviour on process capability, and compares their estimation performance in terms of accuracy and precision. Design/methodology/approach: The process performance index (PPI) Ppu is used for the process capability analysis (PCA), because the comparisons are performed on generated Weibull data without subgroups. Box plots, descriptive statistics, the root-mean-square deviation (RMSD), used as a measure of error, and a radar chart are employed together to evaluate the performance of the methods. In addition, the bias of the estimated values is important, as is the efficiency measured by the mean square error; in this regard, the Relative Bias (RB) and the Relative Root Mean Square Error (RRMSE) are also considered. Findings: The results reveal that the performance of a method depends on its ability to fit the tail behaviour of the Weibull distribution and on the targeted values of the PPIs. The effect of tail behaviour is more significant when the process is more capable. Research limitations/implications: Some other methods, such as the Weighted Variance method, which also gives good results, were conducted as well; however, they were omitted because comparisons across all the methods would have been difficult to interpret consistently... (Author)
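The Box-Cox route to Ppu works by transforming the skewed data toward normality, transforming the specification limit with the same lambda, and computing Ppu = (USL_t - mean) / (3 * sigma) on the transformed scale. The Weibull parameters, sample size, and USL below are assumptions for illustration, not the study's design points:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# skewed process data: Weibull with assumed shape and scale
data = stats.weibull_min.rvs(c=1.5, scale=2.0, size=500, random_state=rng)
usl = 6.0   # upper specification limit (assumed)

transformed, lam = stats.boxcox(data)   # lambda fitted by maximum likelihood
# apply the same Box-Cox transform to the spec limit
usl_t = (usl ** lam - 1.0) / lam if lam != 0 else np.log(usl)

ppu = (usl_t - transformed.mean()) / (3.0 * transformed.std(ddof=1))
print(f"lambda = {lam:.3f}, Ppu = {ppu:.3f}")
```

Clements' approach instead keeps the original scale and replaces the 3-sigma distances with fitted-distribution percentiles, which is why the two methods respond differently to tail behaviour.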
International Nuclear Information System (INIS)
Takac, S.M.
1972-01-01
The method is based on perturbing the reactor cell by from a few up to a few tens of percent. Measurements were performed for square lattice cells of the zero-power reactors Anna, NORA and RB, with metal uranium and uranium oxide fuel elements and with water, heavy water and graphite moderators. The character and functional dependence of the perturbations were obtained from the experimental results. Zero perturbation was determined by extrapolation, thus obtaining the real physical neutron flux distribution in the reactor cell. Simple diffusion theory for partial plate-cell perturbation was developed to verify the perturbation method. The results of these calculations showed that introducing the perturbation sample into the fuel flattens the thermal neutron density by an amount that depends on the amplitude of the applied perturbation. The extrapolation applied to the perturbed distributions was therefore found to be justified.
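The extrapolation-to-zero-perturbation step amounts to fitting measured flux values against perturbation amplitude and evaluating the fit at zero amplitude. A toy illustration with invented amplitudes and "measured" flux ratios:

```python
import numpy as np

# hypothetical measurements: flux ratio (perturbed / unperturbed) at several
# perturbation amplitudes; the trend flattens the flux as amplitude grows
amplitude = np.array([5.0, 10.0, 20.0, 40.0])        # perturbation, percent
flux_ratio = np.array([0.982, 0.965, 0.931, 0.868])  # invented data

coef = np.polyfit(amplitude, flux_ratio, 1)   # linear fit, as the data allow
unperturbed = np.polyval(coef, 0.0)           # extrapolate to zero perturbation
print(f"extrapolated unperturbed flux ratio: {unperturbed:.3f}")
```

With a nearly linear perturbation response, the intercept recovers the unperturbed value (close to 1 here), which is the justification the diffusion-theory check provides.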