WorldWideScience

Sample records for program distribution methods

  1. PROGRAMMING OF METHODS FOR THE NEEDS OF LOGISTICS DISTRIBUTION SOLVING PROBLEMS

    Directory of Open Access Journals (Sweden)

    Andrea Štangová

    2014-06-01

    Full Text Available Logistics has become one of the dominant factors affecting the successful management, competitiveness and mentality of the global economy. Distribution logistics materializes the connection between production and the consumer market. It uses different methodologies and methods of multicriterial evaluation and allocation. This thesis addresses the problem of the costs of securing the distribution of a product. It was therefore relevant to design a software product that would be helpful in solving the problems related to distribution logistics. Elodis, an electronic distribution logistics program, was designed on the basis of a theoretical analysis of the issue of distribution logistics and an analysis of the software products market. The program uses multicriterial evaluation methods to determine the appropriate type, and mathematical and geometrical methods to determine an appropriate allocation, of the distribution center, warehouse and company.

  2. System and Method for Providing a Climate Data Analytic Services Application Programming Interface Distribution Package

    Science.gov (United States)

    Schnase, John L. (Inventor); Duffy, Daniel Q. (Inventor); Tamkin, Glenn S. (Inventor)

    2016-01-01

    A system, method and computer-readable storage devices for providing a climate data analytic services application programming interface distribution package. The example system can provide various components. The system provides a climate data analytic services application programming interface library that enables software applications running on a client device to invoke the capabilities of a climate data analytic service. The system provides a command-line interface that provides a means of interacting with a climate data analytic service by issuing commands directly to the system's server interface. The system provides sample programs that call on the capabilities of the application programming interface library and can be used as templates for the construction of new client applications. The system can also provide test utilities, build utilities, service integration utilities, and documentation.

  3. Interactive statistical-distribution-analysis program utilizing numerical and graphical methods

    Energy Technology Data Exchange (ETDEWEB)

    Glandon, S. R.; Fields, D. E.

    1982-04-01

    The TERPED/P program is designed to facilitate the quantitative analysis of experimental data, determine the distribution function that best describes the data, and provide graphical representations of the data. This code differs from its predecessors, TEDPED and TERPED, in that a printer-plotter has been added for graphical output flexibility. The addition of the printer-plotter provides TERPED/P with a method of generating graphs that is not dependent on DISSPLA, Integrated Software Systems Corporation's confidential proprietary graphics package. This makes it possible to use TERPED/P on systems not equipped with DISSPLA. In addition, the printer plot is usually produced more rapidly than a high-resolution plot can be generated. Graphical and numerical tests are performed on the data in accordance with the user's assumption of normality or lognormality. Statistical analysis options include computation of the chi-squared statistic and its significance level and the Kolmogorov-Smirnov one-sample test confidence level for data sets of more than 80 points. Plots can be produced on a Calcomp paper plotter, a FR80 film plotter, or a graphics terminal using the high-resolution, DISSPLA-dependent plotter or on a character-type output device by the printer-plotter. The plots are of cumulative probability (abscissa) versus user-defined units (ordinate). The program was developed on a Digital Equipment Corporation (DEC) PDP-10 and consists of 1500 statements. The language used is FORTRAN-10, DEC's extended version of FORTRAN-IV.

  4. Cumulative Poisson Distribution Program

    Science.gov (United States)

    Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert

    1990-01-01

    Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for chi-square distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.

  5. Stochastic Programming with Cauchy Distribution

    Directory of Open Access Journals (Sweden)

    Manas Kumar Pal

    2015-12-01

    Full Text Available The aim of this paper is to derive a method for solving a stochastic linear programming problem with Cauchy distribution. Assuming that the coefficients are distributed as Cauchy random variables, the stochastic linear program is converted to a deterministic non-linear programming problem by a suitable transformation. An algorithm can then be used to solve the resulting deterministic problem. A numerical example is considered to illustrate the methodology.
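
    The abstract does not spell out the transformation, but one standard route exploits the stability of the Cauchy family: a linear combination a·x of independent Cauchy(loc_i, scale_i) coefficients is again Cauchy with location loc·x and scale scale·|x|, so a chance constraint P(a·x <= b) >= alpha has a closed-form deterministic equivalent. The Python sketch below solves such a problem with entirely hypothetical data; it illustrates one plausible transformation, not necessarily the paper's.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import cauchy

      # Hypothetical 2-variable problem: maximize c @ x subject to the
      # chance constraint P(a @ x <= b) >= alpha, with independent
      # Cauchy(loc_i, scale_i) coefficients a_i and x >= 0.
      loc = np.array([2.0, 3.0])    # location parameters (assumed)
      scale = np.array([0.5, 0.8])  # scale parameters (assumed)
      c = np.array([1.0, 1.2])
      b, alpha = 10.0, 0.95

      # a @ x is Cauchy(loc @ x, scale @ |x|), so the chance constraint
      # has the deterministic equivalent below (q is the alpha-quantile).
      q = cauchy.ppf(alpha)  # = tan(pi * (alpha - 0.5))
      cons = {"type": "ineq",
              "fun": lambda x: b - (loc @ x + q * (scale @ np.abs(x)))}

      res = minimize(lambda x: -c @ x, x0=[1.0, 1.0],
                     bounds=[(0.0, None)] * 2, constraints=cons)
      print(res.x, -res.fun)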

  6. NEWTPOIS- NEWTON POISSON DISTRIBUTION PROGRAM

    Science.gov (United States)

    Bowerman, P. N.

    1994-01-01

    The cumulative poisson distribution program, NEWTPOIS, is one of two programs which make calculations involving cumulative poisson distributions. Both programs, NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714), can be used independently of one another. NEWTPOIS determines percentiles for gamma distributions with integer shape parameters and calculates percentiles for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. NEWTPOIS determines the Poisson parameter (lambda), that is, the mean (or expected) number of events occurring in a given unit of time, area, or space. Given that the user already knows the cumulative probability for a specific number of occurrences (n), it is usually a simple matter of substitution into the Poisson distribution summation to arrive at lambda. However, direct calculation of the Poisson parameter becomes difficult for small positive values of n and unmanageable for large values. NEWTPOIS uses Newton's iteration method to extract lambda from the initial value condition of the Poisson distribution where n=0, taking successive estimations until some user-specified error term (epsilon) is reached. The NEWTPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting epsilon, n, and the cumulative probability of the occurrence of n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 30K. NEWTPOIS was developed in 1988.
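
    As an illustration of the approach described, the Python sketch below runs Newton's iteration on lambda for fixed n and target cumulative probability p, using the identity dF/dlambda = -e^(-lambda)·lambda^n/n!; the bracketing safeguard is an addition of this sketch, not necessarily how NEWTPOIS stabilizes its iteration.

      import math

      def poisson_cdf(lam, n):
          # Direct summation of exp(-lam) * lam**i / i!, i = 0..n
          # (adequate for the moderate n used here).
          term = total = math.exp(-lam)
          for i in range(1, n + 1):
              term *= lam / i
              total += term
          return total

      def newtpois(p, n, tol=1e-12):
          # F(lam) = P(N <= n) decreases monotonically from 1 to 0 in lam,
          # so F(lam) = p has a unique root. Newton steps use the identity
          # dF/dlam = -exp(-lam) * lam**n / n!, with a bisection fallback
          # whenever a step escapes the current bracket.
          lo, hi = 0.0, 1.0
          while poisson_cdf(hi, n) > p:   # grow bracket until F(hi) <= p
              hi *= 2.0
          lam = 0.5 * (lo + hi)
          for _ in range(200):
              f = poisson_cdf(lam, n) - p
              if abs(f) < tol:
                  break
              if f > 0:
                  lo = lam                # cdf too large: lambda must grow
              else:
                  hi = lam
              fprime = -math.exp(-lam) * lam**n / math.factorial(n)
              nxt = lam - f / fprime
              lam = nxt if lo < nxt < hi else 0.5 * (lo + hi)
          return lam

      print(newtpois(0.95, 10))  # lambda such that P(N <= 10) = 0.95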

  7. R Programs for Truncated Distributions

    Directory of Open Access Journals (Sweden)

    Saralees Nadarajah

    2006-08-01

    Full Text Available Truncated distributions arise naturally in many practical situations. In this note, we provide programs for computing six quantities of interest (probability density function, mean, variance, cumulative distribution function, quantile function and random numbers) for any truncated distribution: whether it is left truncated, right truncated or doubly truncated. The programs are written in R, a freely downloadable statistical software.
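
    The authors' R code is not reproduced here, but all six quantities follow from the standard truncation formulas g(x) = f(x)/(F(b) - F(a)) and G(x) = (F(x) - F(a))/(F(b) - F(a)); the Python/scipy sketch below is an independent illustration covering the left-, right- and doubly-truncated cases.

      import numpy as np
      from scipy import stats

      def truncated(dist, a=-np.inf, b=np.inf):
          # Standard truncation formulas: g(x) = f(x) / (F(b) - F(a)),
          # G(x) = (F(x) - F(a)) / (F(b) - F(a)), quantiles by inverting G,
          # and random numbers by inverse-transform sampling. Covers left,
          # right, and doubly truncated cases via the choice of a and b.
          Fa, Fb = dist.cdf(a), dist.cdf(b)
          Z = Fb - Fa                       # normalizing constant
          pdf = lambda x: np.where((x > a) & (x < b), dist.pdf(x) / Z, 0.0)
          cdf = lambda x: np.clip((dist.cdf(x) - Fa) / Z, 0.0, 1.0)
          ppf = lambda q: dist.ppf(Fa + q * Z)
          rvs = lambda size: ppf(np.random.uniform(size=size))
          return pdf, cdf, ppf, rvs

      # Standard normal doubly truncated to (-1, 2):
      pdf, cdf, ppf, rvs = truncated(stats.norm(), -1.0, 2.0)
      print(cdf(0.0), ppf(0.5), rvs(3))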

  8. Distributed Programming with Shared Data

    NARCIS (Netherlands)

    Bal, H.E.; Tanenbaum, A.S.

    1991-01-01

    Until recently, at least one thing was clear about parallel programming: shared-memory machines were programmed in a language based on shared variables and distributed machines were programmed using message passing. Recent research on distributed systems and their languages, however, has led to new

  9. Newton/Poisson-Distribution Program

    Science.gov (United States)

    Bowerman, Paul N.; Scheuer, Ernest M.

    1990-01-01

    NEWTPOIS, one of two computer programs making calculations involving cumulative Poisson distributions. NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714) used independently of one another. NEWTPOIS determines Poisson parameter for given cumulative probability, from which one obtains percentiles for gamma distributions with integer shape parameters and percentiles for chi-square distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Program written in C.

  10. A distributed program composition system

    Science.gov (United States)

    Brown, Robert L.

    1989-01-01

    A graphical technique for creating distributed computer programs is investigated, and a prototype implementation is described which serves as a testbed for the concepts. The type of programs under examination is restricted to those comprising relatively heavyweight parts that intercommunicate by passing messages of typed objects. Such programs are often presented visually as a directed graph with computer program parts as the nodes and communication channels as the edges. This class of programs, called parts-based programs, is not well supported by existing computer systems; much manual work is required to describe the program to the system, establish the communication paths, accommodate the heterogeneity of data types, and locate the parts of the program on the various systems involved. The work described solves most of these problems by providing an interface for describing parts-based programs in a way that closely models the way programmers think about them: using sketches of digraphs. Program parts, the computational nodes of the larger program system, are categorized in libraries and are accessed with browsers. In the process of programming, the programmer draws the program graph interactively. Heterogeneity is automatically accommodated by the insertion of type translators where necessary between the parts. Many decisions are necessary in the creation of a comprehensive tool for interactive creation of programs in this class. Possibilities are explored and the issues behind such decisions are presented. An approach to program composition is described, not a carefully implemented programming environment; however, a prototype implementation is described that can demonstrate the ideas presented.

  11. CUMPOIS- CUMULATIVE POISSON DISTRIBUTION PROGRAM

    Science.gov (United States)

    Bowerman, P. N.

    1994-01-01

    The Cumulative Poisson distribution program, CUMPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the approximate cumulative binomial distribution, evaluates the cumulative distribution function (cdf) for gamma distributions with integer shape parameters, and evaluates the cdf for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. CUMPOIS calculates the probability that n or fewer events (i.e., cumulative) will occur within any unit when the expected number of events is given as lambda. Normally, this probability is calculated by a direct summation, from i=0 to n, of terms involving the exponential function, lambda, and inverse factorials. This approach, however, eventually fails due to underflow for sufficiently large values of n. Additionally, when the exponential term is moved outside of the summation for simplification purposes, there is a risk that the terms remaining within the summation, and the summation itself, will overflow for certain values of i and lambda. CUMPOIS eliminates these possibilities by multiplying an additional exponential factor into the summation terms and the partial sum whenever overflow/underflow situations threaten. The reciprocal of this term is then multiplied into the completed sum, giving the cumulative probability. The CUMPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting lambda and n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMPOIS was developed in 1988.
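
    The rescaling idea can be reconstructed compactly. The Python sketch below is my reading of the description above, not a port of the C source: the factor e^(-lambda) is kept outside the sum and folded in piece by piece whenever the partial sum approaches overflow; the threshold and chunk size are arbitrary choices of this sketch.

      import math

      def cumpois(lam, n):
          # P(N <= n) for N ~ Poisson(lam): sum lam**i / i! with exp(-lam)
          # held outside, folding parts of that factor into the running
          # term and partial sum whenever they threaten to overflow.
          BIG = 1e280
          term, total = 1.0, 1.0   # i = 0 term of the sum
          applied = 0.0            # portion of exp(-lam) already folded in
          for i in range(1, n + 1):
              term *= lam / i
              total += term
              if total > BIG:
                  shrink = min(lam - applied, 600.0)
                  factor = math.exp(-shrink)
                  term *= factor
                  total *= factor
                  applied += shrink
          return total * math.exp(-(lam - applied))

      print(cumpois(5.0, 3))            # ~0.2650
      print(cumpois(100000.0, 100000))  # ~0.5, no overflow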

  12. Distributed Programming with Shared Data

    NARCIS (Netherlands)

    Bal, H.E.; Tanenbaum, A.S.

    1988-01-01

    Operating system primitives (e.g., problem-oriented shared memory, shared virtual memory, the Agora shared memory) and languages (e.g., Concurrent Prolog, Linda, Emerald) for programming distributed systems have been proposed that support the shared-variable paradigm without the presence of physical shared memory.

  13. Distributed antenna system and method

    Science.gov (United States)

    Fink, Patrick W. (Inventor); Dobbins, Justin A. (Inventor)

    2004-01-01

    System and methods are disclosed for employing one or more radiators having non-unique phase centers mounted to a body with respect to a plurality of transmitters to determine location characteristics of the body such as the position and/or attitude of the body. The one or more radiators may consist of a single, continuous element or of two or more discrete radiation elements whose received signals are combined. In a preferred embodiment, the location characteristics are determined using carrier phase measurements whereby phase center information may be determined or estimated. A distributed antenna having a wide angle view may be mounted to a moveable body in accord with the present invention. The distributed antenna may be utilized for maintaining signal contact with multiple spaced apart transmitters, such as a GPS constellation, as the body rotates without the need for RF switches to thereby provide continuous attitude and position determination of the body.

  14. Distributed optimization system and method

    Science.gov (United States)

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  15. A distributed programming environment for Ada

    Science.gov (United States)

    Brennan, Peter; Mcdonnell, Tom; Mcfarland, Gregory; Timmins, Lawrence J.; Litke, John D.

    1986-01-01

    Despite considerable commercial exploitation of fault-tolerant systems, significant and difficult research problems remain in such areas as fault detection and correction. A research project is described which constructs a distributed computing test bed for loosely coupled computers. The project is constructing a tool kit to support research into distributed control algorithms, including a distributed Ada compiler, distributed debugger, test harnesses, and environment monitors. The Ada compiler is being written in Ada and will implement distributed computing at the subsystem level. The design goal is to provide a variety of control mechanisms for distributed programming while retaining total transparency at the code level.

  16. Nonlinear programming analysis and methods

    CERN Document Server

    Avriel, Mordecai

    2012-01-01

    This text provides an excellent bridge between principal theories and concepts and their practical implementation. Topics include convex programming, duality, generalized convexity, analysis of selected nonlinear programs, techniques for numerical solutions, and unconstrained optimization methods.

  17. Programming the finite element method

    CERN Document Server

    Smith, I M; Margetts, L

    2013-01-01

    Many students, engineers, scientists and researchers have benefited from the practical, programming-oriented style of the previous editions of Programming the Finite Element Method, learning how to develop computer programs to solve specific engineering problems using the finite element method. This new fifth edition offers timely revisions that include programs and subroutine libraries fully updated to Fortran 2003, which are freely available online, and provides updated material on advances in parallel computing, thermal stress analysis, plasticity return algorithms, and convection boundary conditions.

  18. Methods for robustness programming

    NARCIS (Netherlands)

    Olieman, N.J.

    2008-01-01

    Robustness of an object is defined as the probability that an object will have properties as required. Robustness Programming (RP) is a mathematical approach for Robustness estimation and Robustness optimisation. An example in the context of designing a food product is finding the best composition

  19. Programming Languages for Distributed Computing Systems

    NARCIS (Netherlands)

    Bal, H.E.; Steiner, J.G.; Tanenbaum, A.S.

    1989-01-01

    When distributed systems first appeared, they were programmed in traditional sequential languages, usually with the addition of a few library procedures for sending and receiving messages. As distributed applications became more commonplace and more sophisticated, this ad hoc approach became less

  20. Cidre: Programming with Distributed Shared Arrays

    OpenAIRE

    André, Françoise; Mahéo, Yves

    1996-01-01

    A programming model that is widely approved today for large applications is parallel programming with shared variables. We propose an implementation of shared arrays on distributed memory architectures: it provides the user with a uniform addressing scheme while being efficient thanks to a logical paging technique and optimized communication mechanisms.

  1. Separable programming theory and methods

    CERN Document Server

    Stefanov, Stefan M

    2001-01-01

    In this book, the author considers separable programming and, in particular, one of its important cases: convex separable programming. Some general results are presented, and techniques for approximating the separable problem by linear programming and dynamic programming are considered. Convex separable programs subject to inequality/equality constraint(s) and bounds on variables are also studied, and iterative algorithms of polynomial complexity are proposed. As an application, these algorithms are used in the implementation of stochastic quasigradient methods for some separable stochastic programs. Numerical approximation with respect to the l1 and l-infinity norms, as a convex separable nonsmooth unconstrained minimization problem, is considered as well. Audience: advanced undergraduate and graduate students, mathematical programming and operations research specialists.

  2. Distributed Programming via Safe Closure Passing

    Directory of Open Access Journals (Sweden)

    Philipp Haller

    2016-02-01

    Full Text Available Programming systems incorporating aspects of functional programming, e.g., higher-order functions, are becoming increasingly popular for large-scale distributed programming. New frameworks such as Apache Spark leverage functional techniques to provide high-level, declarative APIs for in-memory data analytics, often outperforming traditional "big data" frameworks like Hadoop MapReduce. However, widely-used programming models remain rather ad-hoc; aspects such as implementation trade-offs, static typing, and semantics are not yet well-understood. We present a new asynchronous programming model that has at its core several principles facilitating functional processing of distributed data. The emphasis of our model is on simplicity, performance, and expressiveness. The primary means of communication is by passing functions (closures to distributed, immutable data. To ensure safe and efficient distribution of closures, our model leverages both syntactic and type-based restrictions. We report on a prototype implementation in Scala. Finally, we present preliminary experimental results evaluating the performance impact of a static, type-based optimization of serialization.

  3. Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution

    Energy Technology Data Exchange (ETDEWEB)

    Hamadameen, Abdulqader Othman [Optimization, Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia); Zainuddin, Zaitul Marlizawati [Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia)

    2014-06-19

    This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined through fuzzy assertions by ambiguous experts. The problem formulation is presented, and the two solution strategies are the fuzzy transformation via a ranking function, and the stochastic transformation in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.

  4. Support For Distributed Programming In Extreme Style

    Directory of Open Access Journals (Sweden)

    Jacek Dajda

    2005-01-01

    Full Text Available The basic limitation emerging from practising the eXtreme Programming methodology is the constraint of close physical proximity between the members of the collaborating team, including the customer. This became the main idea behind research on an XP-supporting environment for geographically distributed teams. This work presents the basic assumptions, elaborated architecture and selected implementation issues for a system of this type. Deliberations are supplied with initial results of a verification of its usability based on user tests.

  5. Generalized Analysis of a Distribution Separation Method

    Directory of Open Access Journals (Sweden)

    Peng Zhang

    2016-04-01

    Full Text Available Separating two probability distributions from a mixture model that is made up of the combinations of the two is essential to a wide range of applications. For example, in information retrieval (IR), there often exists a mixture distribution consisting of a relevance distribution that we need to estimate and an irrelevance distribution that we hope to get rid of. Recently, a distribution separation method (DSM) was proposed to approximate the relevance distribution, by separating a seed irrelevance distribution from the mixture distribution. It was successfully applied to an IR task, namely pseudo-relevance feedback (PRF), where the query expansion model is often a mixture term distribution. Although initially developed in the context of IR, DSM is indeed a general mathematical formulation for probability distribution separation. Thus, it is important to further generalize its basic analysis and to explore its connections to other related methods. In this article, we first extend DSM's theoretical analysis, which was originally based on the Pearson correlation coefficient, to entropy-related measures, including the KL-divergence (Kullback–Leibler divergence), the symmetrized KL-divergence and the JS-divergence (Jensen–Shannon divergence). Second, we investigate the distribution separation idea in a well-known method, namely the mixture model feedback (MMF) approach. We prove that MMF also complies with the linear combination assumption, and then, DSM's linear separation algorithm can largely simplify the EM algorithm in MMF. These theoretical analyses, as well as further empirical evaluation results, demonstrate the advantages of our DSM approach.

  6. Copula method for specific Burr distribution

    Science.gov (United States)

    Ismail, Nor Hidayah Binti; Khalid, Zarina Binti Mohd

    2015-02-01

    The copula method has been found to be a useful way to join two distributions; copulas are known as dependence functions. A copula is a multivariate distribution function whose one-dimensional margins are uniform on the interval (0, 1). The use of copulas has expanded into many fields of study, and copulas come in many classes and families. In this research, the Ali-Mikhail-Haq (AMH), Clayton and Gumbel copulas are applied to uncensored data to join specific Burr Type III and XII distributions, using the standard theorem and algorithm for constructing a copula. The results showed that the AMH, Clayton and Gumbel copulas fitted the Burr distributions well, since the copula values lie on the interval (0, 1).
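
    The three copulas named above have standard closed forms, so evaluating a joint cdf H(x, y) = C(F1(x), F2(y)) over two Burr XII margins takes only a few lines; in the Python sketch below all copula and Burr parameters are hypothetical.

      import numpy as np

      # Standard closed forms of the three Archimedean copulas used:
      def clayton(u, v, t):   # t > 0
          return (u**-t + v**-t - 1.0)**(-1.0 / t)

      def gumbel(u, v, t):    # t >= 1
          return np.exp(-(((-np.log(u))**t + (-np.log(v))**t)**(1.0 / t)))

      def amh(u, v, t):       # -1 <= t < 1
          return u * v / (1.0 - t * (1.0 - u) * (1.0 - v))

      # Burr XII marginal cdf: F(x) = 1 - (1 + x**c)**(-k), x > 0.
      def burr12(x, c, k):
          return 1.0 - (1.0 + x**c)**(-k)

      # Joint cdf H(x, y) = C(F1(x), F2(y)) by Sklar's theorem.
      u = burr12(1.5, c=2.0, k=3.0)
      v = burr12(2.0, c=1.5, k=2.0)
      for name, C, t in [("Clayton", clayton, 2.0),
                         ("Gumbel", gumbel, 1.5),
                         ("AMH", amh, 0.5)]:
          print(name, C(u, v, t))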

  7. Methods for Distributed Optimal Energy Management

    DEFF Research Database (Denmark)

    Brehm, Robert

    The presented research deals with the fundamental underlying methods and concepts of how the growing number of distributed generation units based on renewable energy resources and distributed storage devices can be most efficiently integrated into the existing utility grid. In contrast to conventional centralised optimal energy flow management systems, focus is set on how optimal energy management can be achieved in a decentralised distributed architecture such as a multi-agent system. Distributed optimisation methods are introduced, targeting optimisation of energy flow in virtual power plants, which can be described as a transportation problem. As a basis for usage in energy management systems, methods and scenarios for solving non-linear transportation problems in multi-agent systems are introduced and evaluated. On this premise, a method is presented to solve a generation unit dispatching problem in a decentralised distributed system. This requires extensive communication between neighbouring nodes; a layered multi-agent system is introduced to provide low-latency communication based on a software-bus system in order to efficiently solve optimisation problems.

  8. Radionuclide Inventory and Distribution Program: the Galileo area

    Energy Technology Data Exchange (ETDEWEB)

    McArthur, R.D.; Kordas, J.F.

    1983-12-28

    The Galileo area is the first region of the Nevada Test Site to be surveyed by the Radionuclide Inventory and Distribution Program (RIDP). This report describes in detail the use of soil sampling and in situ spectrometry to estimate radionuclide activities at selected sampling locations; the descriptions of these methods will be used as a reference for future RIDP reports. The data collected at Galileo were analyzed by kriging and the polygons of influence method to estimate the total inventory and the distribution of six man-made radionuclides. The results of the different statistical methods agree fairly well, although the data did not give very good estimates of the variogram for kriging, and further study showed the results of kriging to be highly dependent on the variogram parameters. The results also showed that in situ spectrometry gives better estimates of radionuclide activity than soil sampling, which tends to miss highly radioactive particles associated with vegetation. 18 references, 28 figures, 11 tables.

  9. Distributed Pair Programming Using Collaboration Scripts: An Educational System and Initial Results

    Science.gov (United States)

    Tsompanoudi, Despina; Satratzemi, Maya; Xinogalos, Stelios

    2015-01-01

    Since pair programming appeared in the literature as an effective method of teaching computer programming, many systems were developed to cover the application of pair programming over distance. Today's systems serve personal, professional and educational purposes allowing distributed teams to work together on the same programming project. The…

  10. Understanding Tools and Practices for Distributed Pair Programming

    NARCIS (Netherlands)

    Schümmer, T.; Lukosch, S.G.

    2009-01-01

    When considering the principles of eXtreme Programming, distributed eXtreme Programming, and especially distributed pair programming, appears to be a paradox predetermined to fail. However, global software development, as well as the outsourcing of software development, are integral parts of many software projects.

  11. Reconfiguration of distribution system using a binary programming model

    Directory of Open Access Journals (Sweden)

    Md Mashud Hyder

    2016-03-01

    Full Text Available Distribution system reconfiguration aims to choose a switching combination of branches that optimizes certain performance criteria of the power supply while maintaining specified constraints. The ability to reconfigure the network automatically, quickly and reliably is a key requirement of self-healing networks, an important part of the future Smart Grid. We present a unified mathematical framework that allows us to consider different objectives of the distribution system reconfiguration problem in a flexible manner, and investigate its performance. The resulting optimization problem is in quadratic form and can be solved efficiently by a quadratic mixed-integer programming (QMIP) solver. The proposed method has been applied to reconfigure different standard test distribution systems.

  12. Distributed Reconstruction via Alternating Direction Method

    Directory of Open Access Journals (Sweden)

    Linyuan Wang

    2013-01-01

    Full Text Available With the development of compressive sensing theory, image reconstruction from few-view projections has received considerable research attention in the field of computed tomography (CT). Total-variation (TV) based CT image reconstruction has been shown experimentally to be capable of producing accurate reconstructions from sparse-view data. In this study, a distributed reconstruction algorithm based on TV minimization has been developed. The algorithm is very simple, as it uses the alternating direction method. The proposed method can accelerate the alternating direction total variation minimization (ADTVM) algorithm without losing accuracy.

  13. Calculation Methods for Wallenius’ Noncentral Hypergeometric Distribution

    DEFF Research Database (Denmark)

    Fog, Agner

    2008-01-01

    Two different probability distributions are both known in the literature as "the" noncentral hypergeometric distribution. Wallenius' noncentral hypergeometric distribution can be described by an urn model without replacement with bias. Fisher's noncentral hypergeometric distribution is the conditional distribution of independent binomial variates given their sum.

  14. Parallelizing Deadlock Resolution in Symbolic Synthesis of Distributed Programs

    Directory of Open Access Journals (Sweden)

    Fuad Abujarad

    2009-12-01

    Full Text Available Previous work has shown that there are two major complexity barriers in the synthesis of fault-tolerant distributed programs: (1) generation of the fault-span, the set of states reachable in the presence of faults, and (2) resolution of deadlock states, from where the program has no outgoing transitions. Of these, the former closely resembles model checking, so techniques for efficient verification are directly applicable to it. We therefore focus on expediting the latter with the use of multi-core technology. We present two approaches for parallelization by considering different design choices. The first approach is based on the computation of equivalence classes of program transitions (called group computation) that are needed due to the issue of distribution (i.e., the inability of processes to atomically read and write all program variables). We show that in most cases the speedup of this approach is close to the ideal speedup, and in some cases it is superlinear. The second approach uses the traditional technique of partitioning deadlock states among multiple threads. However, our experiments show that the speedup for this approach is small. Consequently, our analysis demonstrates that a simple approach of parallelizing the group computation is likely to be the more effective method for using multi-core computing in the context of deadlock resolution.

  15. Simplified Distributed Programming with Micro Objects

    Directory of Open Access Journals (Sweden)

    Maarten van Steen

    2010-07-01

    Full Text Available Developing large-scale distributed applications can be a daunting task. Object-based environments have attempted to alleviate problems by providing distributed objects that look like local objects. We advocate that this approach has actually only made matters worse, as the developer needs to be aware of many intricate internal details in order to adequately handle partial failures. The result is an increase in application complexity. We present an alternative in which distribution transparency is lessened in favor of clearer semantics. In particular, we argue that a developer should always be offered the unambiguous semantics of local objects, and that distribution comes from copying those objects to where they are needed. We claim that it is often sufficient to provide only small, immutable objects, along with facilities to group objects into clusters.

  16. Neurolinguistics Programming: Method or Myth?

    Science.gov (United States)

    Gumm, W. B.; And Others

    1982-01-01

    The preferred modality by which 50 right-handed female college students encoded experience was assessed by recordings of conjugate eye movements, content analysis of the subject's verbal report, and the subject's self-report. Kappa analyses failed to reveal any agreement of the three assessment methods. (Author)

  17. Discount method for programming language evaluation

    DEFF Research Database (Denmark)

    Kurtev, Svetomir; Christensen, Tommy Aagaard; Thomsen, Bent

    2016-01-01

    This paper presents work in progress on developing a Discount Method for Programming Language Evaluation inspired by the Discount Usability Evaluation method (Benyon 2010) and the Instant Data Analysis method (Kjeldskov et al. 2004). The method is intended to bridge the gap between small scale...

  18. Sampling Methods for Wallenius' and Fisher's Noncentral Hypergeometric Distributions

    DEFF Research Database (Denmark)

    Fog, Agner

    2008-01-01

    Several methods for generating variates with univariate and multivariate Wallenius' and Fisher's noncentral hypergeometric distributions are developed. Methods for the univariate distributions include: simulation of urn experiments, inversion by binary search, inversion by chop-down search from the mode ...
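
    The first listed method, simulation of urn experiments, follows directly from the definition of Wallenius' distribution and makes a compact reference sampler; it is exact but slow for large draws, which is what motivates the faster methods. A Python sketch with hypothetical parameters:

      import random

      def wallenius_urn(m1, m2, n, w):
          # One variate by direct simulation of the biased urn: m1 "red"
          # balls with weight w, m2 "white" balls with weight 1; n balls
          # are drawn sequentially without replacement, each draw picking
          # red with probability w*red / (w*red + white). Returns the
          # number of red balls drawn.
          red, white, x = m1, m2, 0
          for _ in range(n):
              if random.random() < w * red / (w * red + white):
                  red, x = red - 1, x + 1
              else:
                  white -= 1
          return x

      # Empirical mean over many variates (hypothetical parameters):
      draws = [wallenius_urn(m1=10, m2=20, n=12, w=2.0)
               for _ in range(100000)]
      print(sum(draws) / len(draws))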

  19. Comparing four methods to estimate usual intake distributions.

    Science.gov (United States)

    Souverein, O W; Dekkers, A L; Geelen, A; Haubrock, J; de Vries, J H; Ocké, M C; Harttig, U; Boeing, H; van 't Veer, P

    2011-07-01

    The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As 'true' usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data from the European Food Consumption Validation (EFCOVAL) Study in which two 24-h dietary recalls (24-HDRs) and food frequency data were collected. The methods being compared were the Iowa State University Method (ISU), National Cancer Institute Method (NCI), Multiple Source Method (MSM) and Statistical Program for Age-adjusted Dietary Assessment (SPADE). Simulation data were constructed with varying numbers of subjects (n), different values for the Box-Cox transformation parameter (λ(BC)) and different values for the ratio of the within- and between-person variance (r(var)). All data were analyzed with the four different methods and the estimated usual mean intake and selected percentiles were obtained. Moreover, the 2-day within-person mean was estimated as an additional 'method'. These five methods were compared in terms of the mean bias, which was calculated as the mean of the differences between the estimated value and the known true value. The application of data from the EFCOVAL Project included calculations of nutrients (that is, protein, potassium, protein density) and foods (that is, vegetables, fruit and fish). Overall, the mean bias of the ISU, NCI, MSM and SPADE Methods was small. However, for all methods, the mean bias and the variation of the bias increased with smaller sample size, higher variance ratios and with more pronounced departures from normality. Serious mean bias (especially in the 95th percentile) was seen using the NCI Method when r(var) = 9, λ(BC) = 0 and n = 1000. The ISU Method and MSM showed a somewhat higher s.d. of the bias compared with NCI and SPADE Methods, indicating a larger method uncertainty. Furthermore, whereas the ISU, NCI and SPADE Methods produced
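
    To make concrete the bias that these methods correct, the short Python simulation below (my own toy, with assumed variance components and lognormal intakes, i.e. a Box-Cox parameter of zero) shows how the naive 2-day within-person mean inflates the upper percentiles of the usual intake distribution.

      import numpy as np

      rng = np.random.default_rng(1)
      n, r_var = 1000, 4.0          # persons; within/between variance ratio
      sigma_b = 0.4                 # between-person s.d. (assumed)
      sigma_w = np.sqrt(r_var) * sigma_b

      # True usual intakes, lognormal (i.e., Box-Cox parameter = 0):
      usual = np.exp(rng.normal(3.0, sigma_b, n))
      # Two 24-HDRs per person: usual intake times mean-one day noise.
      noise = rng.normal(-sigma_w**2 / 2, sigma_w, (n, 2))
      two_day_mean = (usual[:, None] * np.exp(noise)).mean(axis=1)

      # The 2-day mean over-disperses the distribution, inflating the
      # upper percentiles relative to the true usual intakes:
      for q in (50, 90, 95):
          print(q, np.percentile(usual, q), np.percentile(two_day_mean, q))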

  20. A distributed implementation of a mode switching control program

    DEFF Research Database (Denmark)

    Holdgaard, Michael; Eriksen, Thomas Juul; Ravn, Anders P.

    1995-01-01

    A distributed implementation of a mode switched control program for a robot is described. The design of the control program is given by a set of real-time automatons. One of them plans a schedule for switching between a fixed set of control functions, another dispatches the control functions...

  1. Simple Calculation Programs for Biology Immunological Methods

    Indian Academy of Sciences (India)

    Simple Calculation Programs for Biology: Immunological Methods. Computation of Ab/Ag concentration from ELISA data. Graphical method; Raghava et al., 1992, J. Immunol. Methods 153: 263. Determination of affinity of monoclonal antibody. Using non-competitive ...

  2. Distribution Locational Marginal Pricing for Optimal Electric Vehicle Charging through Chance Constrained Mixed-Integer Programming

    DEFF Research Database (Denmark)

    Liu, Zhaoxi; Wu, Qiuwei; Oren, Shmuel S.

    2017-01-01

    This paper presents a distribution locational marginal pricing (DLMP) method through chance-constrained mixed-integer programming, designed to alleviate possible congestion in the future distribution network with high penetration of electric vehicles (EVs). In order to represent the stochastic

  3. Photovoltaic subsystem marketing and distribution model: programming manual. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1982-07-01

    Complete documentation of the marketing and distribution (M and D) computer model is provided. The purpose is to estimate the costs of selling and transporting photovoltaic solar energy products from the manufacturer to the final customer. The model adjusts for inflation and regional differences in marketing and distribution costs. The model consists of three major components: the marketing submodel, the distribution submodel, and the financial submodel. The computer program is explained, including the input requirements, output reports, subprograms and operating environment. The program specifications discuss maintaining the validity of the data and potential improvements. An example for a photovoltaic concentrator collector demonstrates the application of the model.

  4. School Wellness Programs: Magnitude and Distribution in New York City Public Schools

    Science.gov (United States)

    Stiefel, Leanna; Elbel, Brian; Pflugh Prescott, Melissa; Aneja, Siddhartha; Schwartz, Amy E.

    2017-01-01

    Background: Public schools provide students with opportunities to participate in many discretionary, unmandated wellness programs. Little is known about the number of these programs, their distribution across schools, and the kinds of students served. We provide evidence on these questions for New York City (NYC) public schools. Methods: Data on…

  5. Reduction Method for Active Distribution Networks

    DEFF Research Database (Denmark)

    Raboni, Pietro; Chen, Zhe

    2013-01-01

    On-line security assessment is traditionally performed by Transmission System Operators at the transmission level, ignoring the effective response of distributed generators and small loads. On the other hand, the required computation time and amount of real-time data for including Distribution Networks ... The method is validated by comparing the results obtained in PSCAD® with the detailed network model and with the reduced one. Moreover, the control schemes of a wind turbine and a photovoltaic plant included in the detailed network model are described.

  6. Simple Calculation Programs for Biology Other Methods

    Indian Academy of Sciences (India)

    Simple Calculation Programs for Biology: Other Methods. Hemolytic potency of drugs. Raghava et al., (1994) Biotechniques 17: 1148. FPMAP: methods for classification and identification of microorganisms using 16S rRNA. Graphical display of restriction and fragment map of ...

  7. comparison of estimation methods for fitting weibull distribution

    African Journals Online (AJOL)

    Tersor

    (Quercus robur L.) stands in northwest Spain with the beta distribution. Investigación Agraria: Sistemas y Recursos Forestales 17(3): 271-281. COMPARISON OF ESTIMATION METHODS FOR FITTING WEIBULL DISTRIBUTION TO THE NATURAL ...

  8. A method for statistically comparing spatial distribution maps

    Directory of Open Access Journals (Sweden)

    Reynolds Mary G

    2009-01-01

    Full Text Available Abstract Background Ecological niche modeling is a method for estimation of species distributions based on certain ecological parameters. Thus far, empirical determination of significant differences between independently generated distribution maps for a single species (maps which are created through equivalent processes, but with different ecological input parameters) has been challenging. Results We describe a method for comparing model outcomes, which allows a statistical evaluation of whether the strength of prediction and breadth of predicted areas is measurably different between projected distributions. To create ecological niche models for statistical comparison, we utilized GARP (Genetic Algorithm for Rule-Set Production) software to generate ecological niche models of human monkeypox in Africa. We created several models, keeping constant the case location input records for each model but varying the ecological input data. In order to assess the relative importance of each ecological parameter included in the development of the individual predicted distributions, we performed pixel-to-pixel comparisons between model outcomes and calculated the mean difference in pixel scores. We used a two-sample Student's t-test (assuming as null hypothesis that both maps were identical to each other, regardless of which input parameters were used) to examine whether the mean difference in corresponding pixel scores from one map to another was greater than would be expected by chance alone. We also utilized weighted kappa statistics, frequency distributions, and percent difference to look at the disparities in pixel scores. Multiple independent statistical tests indicated precipitation as the single most important independent ecological parameter in the niche model for human monkeypox disease. Conclusion In addition to improving our understanding of the natural factors influencing the distribution of human monkeypox disease, such pixel-to-pixel comparison
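
    The pixel-to-pixel comparison logic reduces to testing whether the mean of the paired pixel differences departs from zero. The Python sketch below illustrates this on synthetic maps with a one-sample t-test on the differences; it mimics the statistical idea only, not the authors' GARP pipeline.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # Two hypothetical model outputs on a common grid: pixel scores
      # 0-10 (e.g., number of rule-sets predicting presence per pixel).
      map_a = rng.integers(0, 11, size=(200, 300)).astype(float)
      map_b = np.clip(map_a + rng.normal(0.3, 1.0, size=map_a.shape), 0, 10)

      # Paired comparison: test whether the mean pixel-to-pixel
      # difference departs from 0 (H0: the maps are identical).
      diff = (map_b - map_a).ravel()
      t, p = stats.ttest_1samp(diff, 0.0)
      print(f"mean difference {diff.mean():.3f}, t = {t:.2f}, p = {p:.3g}")
      print(f"mean percent difference {100 * np.abs(diff).mean() / 10:.1f}%")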

  9. International Review of Standards and Labeling Programs for Distribution Transformers

    Energy Technology Data Exchange (ETDEWEB)

    Letschert, Virginie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Scholand, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Carreño, Ana María [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hernandez, Carolina [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-06-20

    Transmission and distribution (T&D) losses in electricity networks represent 8.5% of final energy consumption in the world. In Latin America, T&D losses range between 6% and 20% of final energy consumption, and represent 7% in Chile. Because approximately one-third of T&D losses take place in distribution transformers alone, there is significant potential to save energy and reduce costs and carbon emissions through policy intervention to increase distribution transformer efficiency. A large number of economies around the world have recognized the significant impact of addressing distribution losses and have implemented policies to support market transformation towards more efficient distribution transformers. As a result, there is considerable international experience to be shared and leveraged to inform countries interested in reducing distribution losses through policy intervention. The report builds upon past international studies of standards and labeling (S&L) programs for distribution transformers to present the current energy efficiency programs for distribution transformers around the world.

  10. DOE-EPRI distributed wind Turbine Verification Program (TVP III)

    Energy Technology Data Exchange (ETDEWEB)

    McGowin, C.; DeMeo, E. [Electric Power Research Institute, Palo Alto, CA (United States); Calvert, S. [Dept. of Energy, Washington, DC (United States)] [and others

    1997-12-31

    In 1992, the Electric Power Research Institute (EPRI) and the U.S. Department of Energy (DOE) initiated the Utility Wind Turbine Verification Program (TVP). The goal of the program is to evaluate prototype advanced wind turbines at several sites developed by U.S. electric utility companies. Two six MW wind projects have been installed under the TVP program by Central and South West Services in Fort Davis, Texas and Green Mountain Power Corporation in Searsburg, Vermont. In early 1997, DOE and EPRI selected five more utility projects to evaluate distributed wind generation using smaller "clusters" of wind turbines connected directly to the electricity distribution system. This paper presents an overview of the objectives, scope, and status of the EPRI-DOE TVP program and the existing and planned TVP projects.

  11. The Generalized Parton Distribution Program at Jefferson Lab

    Energy Technology Data Exchange (ETDEWEB)

    C. Munoz Camacho

    2010-05-01

    Recent results on the Generalized Parton Distribution (GPD) program at Jefferson Lab (JLab) will be presented. The emphasis will be on the Hall A program, which aims at measuring the Q^2-dependence of different terms of the Deeply Virtual Compton Scattering (DVCS) cross section. This is a fundamental step before GPD information can be extracted from JLab DVCS data. The upcoming program in Hall A, using both a 6 GeV beam (2010) and an 11 GeV beam (~2015), will also be described.

  12. Distributed gas detection system and method

    Energy Technology Data Exchange (ETDEWEB)

    Challener, William Albert; Palit, Sabarni; Karp, Jason Harris; Kasten, Ansas Matthias; Choudhury, Niloy

    2017-11-21

    A distributed gas detection system includes one or more hollow core fibers disposed in different locations, one or more solid core fibers optically coupled with the one or more hollow core fibers and configured to receive light of one or more wavelengths from a light source, and an interrogator device configured to receive at least some of the light propagating through the one or more solid core fibers and the one or more hollow core fibers. The interrogator device is configured to identify a location of a presence of a gas-of-interest by examining absorption of at least one of the wavelengths of the light at least one of the hollow core fibers.

  13. A novel method for estimating distributions of body mass index.

    Science.gov (United States)

    Ng, Marie; Liu, Patrick; Thomson, Blake; Murray, Christopher J L

    2016-01-01

    Understanding trends in the distribution of body mass index (BMI) is a critical aspect of monitoring the global overweight and obesity epidemic. Conventional population health metrics often only focus on estimating and reporting the mean BMI and the prevalence of overweight and obesity, which do not fully characterize the distribution of BMI. In this study, we propose a novel method which allows for the estimation of the entire distribution. The proposed method utilizes the optimization algorithm, L-BFGS-B, to derive the distribution of BMI from three commonly available population health statistics: mean BMI, prevalence of overweight, and prevalence of obesity. We conducted a series of simulations to examine the properties, accuracy, and robustness of the method. We then illustrated the practical application of the method by applying it to the 2011-2012 US National Health and Nutrition Examination Survey (NHANES). Our method performed satisfactorily across various simulation scenarios yielding empirical (estimated) distributions which aligned closely with the true distributions. Application of the method to the NHANES data also showed a high level of consistency between the empirical and true distributions. In situations where there were considerable outliers, the method was less satisfactory at capturing the extreme values. Nevertheless, it remained accurate at estimating the central tendency and quintiles. The proposed method offers a tool that can efficiently estimate the entire distribution of BMI. The ability to track the distributions of BMI will improve our capacity to capture changes in the severity of overweight and obesity and enable us to better monitor the epidemic.
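
    A minimal Python version of the estimation step, under the simplifying assumption that BMI is lognormal (the actual method fits a more flexible family), recovers distribution parameters from the three reported statistics with scipy's L-BFGS-B; the target values below are hypothetical.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import lognorm

      # Hypothetical targets: mean BMI, prevalence of overweight
      # (BMI >= 25) and of obesity (BMI >= 30).
      mean_bmi, p_ow, p_ob = 26.0, 0.55, 0.20

      def loss(theta):
          # Squared mismatch between the statistics implied by a
          # lognormal(mu, sigma) BMI distribution and the targets.
          mu, sigma = theta
          d = lognorm(s=sigma, scale=np.exp(mu))
          return ((d.mean() - mean_bmi)**2
                  + (d.sf(25.0) - p_ow)**2
                  + (d.sf(30.0) - p_ob)**2)

      res = minimize(loss, x0=[np.log(26.0), 0.2], method="L-BFGS-B",
                     bounds=[(2.5, 4.0), (0.01, 1.0)])
      mu, sigma = res.x
      d = lognorm(s=sigma, scale=np.exp(mu))
      print(d.mean(), d.sf(25.0), d.sf(30.0))  # ~ the three targets
      print(d.ppf([0.05, 0.50, 0.95]))         # any quantile now available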

  14. Distributional theory for the DIA method

    NARCIS (Netherlands)

    Teunissen, P.J.G.

    2017-01-01

    The DIA method for the detection, identification and adaptation of model misspecifications combines estimation with testing. The aim of the present contribution is to introduce a unifying framework for the rigorous capture of this combination. By using a canonical model formulation and a

  15. Distribution Methods for Transferable Discharge Permits

    Science.gov (United States)

    Eheart, J. Wayland; Joeres, Erhard F.; David, Martin H.

    1980-10-01

    A mathematical model has been developed to simulate the operation of a single-price auction of transferable discharge permits. Permits may be sold at auction by the control authority or may be given, free of charge, to the dischargers according to some agreed-upon formula and subsequently redistributed by a similar auction. The sales method and four alternative free allocation schemes are compared through the example case of phosphorus discharge from point sources in the Wisconsin-Lake Michigan watershed.
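
    For concreteness, the clearing rule of such a single-price auction can be sketched in a few lines of Python; the demand schedule below is hypothetical, and the sketch ignores tie-breaking and pro-rata rationing at the margin.

      def clear_auction(bids, supply):
          # Uniform-price clearing: walk down the bid schedule (highest
          # willingness-to-pay first); the clearing price is the price at
          # which cumulative demand first covers the permit supply, and
          # every winning bidder pays that single price.
          sold = 0
          for price, qty in sorted(bids, reverse=True):
              sold += qty
              if sold >= supply:
                  return price
          return None  # supply exceeds total demand: no clearing price

      # Hypothetical demand schedule: (price in $/permit, permits wanted)
      bids = [(9.0, 40), (7.5, 30), (6.0, 50), (4.0, 60)]
      print(clear_auction(bids, supply=100))  # -> 6.0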

  16. Methods and Tools for Profiling and Control of Distributed Systems

    Directory of Open Access Journals (Sweden)

    Sukharev Roman

    2017-01-01

    Full Text Available The article analyzes and standardizes methods for profiling distributed systems, focusing on simulation to conduct experiments and build a graph model of the system. The theory of queueing networks is used for simulation modeling of distributed systems receiving and processing user requests. To automate the above method of profiling distributed systems, a software application was developed with a modular structure, similar to a SCADA system.

  17. 29 CFR 4041A.42 - Method of distribution.

    Science.gov (United States)

    2010-07-01

    29 CFR § 4041A.42 (2010-07-01): Regulations Relating to Labor (Continued), Pension Benefit Guaranty Corporation, Plan Terminations, Termination of Multiemployer Plans, Closeout of Sufficient Plans; Method of distribution. The plan...

  18. Cathode power distribution system and method of using the same for power distribution

    Science.gov (United States)

    Williamson, Mark A; Wiedmeyer, Stanley G; Koehl, Eugene R; Bailey, James L; Willit, James L; Barnes, Laurel A; Blaskovitz, Robert J

    2014-11-11

    Embodiments include a cathode power distribution system and/or method of using the same for power distribution. The cathode power distribution system includes a plurality of cathode assemblies. Each cathode assembly of the plurality of cathode assemblies includes a plurality of cathode rods. The system also includes a plurality of bus bars configured to distribute current to each of the plurality of cathode assemblies. The plurality of bus bars include a first bus bar configured to distribute the current to first ends of the plurality of cathode assemblies and a second bus bar configured to distribute the current to second ends of the plurality of cathode assemblies.

  19. A Game-Theoretic Model for Distributed Programming by Contract

    DEFF Research Database (Denmark)

    Henriksen, Anders Starcke; Hvitved, Tom; Filinski, Andrzej

    2009-01-01

    We present an extension of the programming-by-contract (PBC) paradigm to a concurrent and distributed environment. Classical PBC is characterized by absolute conformance of code to its specification, assigning blame in case of failures, and a hierarchical, cooperative decomposition model, none of which extend naturally to a distributed environment with multiple administrative peers. We therefore propose a more nuanced contract model based on quantifiable performance of implementations; assuming responsibility for success; and a fundamentally adversarial model of system integration, where each...

  20. A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods

    Science.gov (United States)

    Ritter, Nicola L.

    2012-01-01

    Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…

  1. Regularized Primal-Dual Subgradient Method for Distributed Constrained Optimization.

    Science.gov (United States)

    Yuan, Deming; Ho, Daniel W C; Xu, Shengyuan

    2016-09-01

    In this paper, we study the distributed constrained optimization problem where the objective function is the sum of local convex cost functions of distributed nodes in a network, subject to a global inequality constraint. To solve this problem, we propose a consensus-based distributed regularized primal-dual subgradient method. In contrast to the existing methods, most of which require projecting the estimates onto the constraint set at every iteration, only one projection at the last iteration is needed for our proposed method. We establish the convergence of the method by showing that it achieves an O(K^(-1/4)) convergence rate for general distributed constrained optimization, where K is the iteration counter. Finally, a numerical example is provided to validate the convergence of the proposed method.
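
    The overall structure, consensus iterations with the projection deferred to the end, can be illustrated on a toy problem. The Python sketch below runs a plain consensus subgradient scheme and projects only the final averaged iterate; it conveys the single-projection idea but is not the paper's regularized primal-dual algorithm itself.

      import numpy as np

      # Four nodes cooperatively minimize sum_i ||x - a_i||^2 subject to
      # the global constraint x >= 0; the projection onto the constraint
      # set is applied once, at the very end.
      A = np.array([[1.0, -2.0], [3.0, 0.5], [-1.0, 1.0], [2.0, 2.0]])
      W = np.full((4, 4), 0.25)   # doubly stochastic mixing matrix
      X = np.zeros((4, 2))        # per-node local estimates

      for k in range(1, 2001):
          grads = 2.0 * (X - A)           # local (sub)gradients
          X = W @ X - (1.0 / k) * grads   # consensus + subgradient step

      x = np.maximum(X.mean(axis=0), 0.0)  # the single, final projection
      print(x)  # close to the projected average of the a_i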

  2. Review of Congestion Management Methods for Distribution Networks with High Penetration of Distributed Energy Resources

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Liu, Zhaoxi

    2014-01-01

    The congestion management methods for distribution networks with high penetration of distributed energy resources reviewed here fall into two categories: market methods and direct control methods. The market methods consist of dynamic tariff, distribution capacity market, shadow price and flexible service market. The direct control methods are comprised of network reconfiguration, reactive power control and active power control. Based on the review of the existing methods ...

  3. Comparing four methods to estimate usual intake distributions

    NARCIS (Netherlands)

    Souverein, O.W.; Dekkers, A.L.; Geelen, A.; Haubrock, J.; Vries, de J.H.M.; Ocke, M.C.; Harttig, U.; Boeing, H.; Veer, van 't P.

    2011-01-01

    Background/Objectives: The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As ‘true’ usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data

  4. Programming by Numbers -- A Programming Method for Complete Novices

    NARCIS (Netherlands)

    Glaser, Hugh; Hartel, Pieter H.

    2000-01-01

    Students often have difficulty with the minutiae of program construction. We introduce the idea of `Programming by Numbers', which breaks some of the programming process down into smaller steps, giving such students a way into the process of Programming in the Small. Programming by Numbers does not

  5. Dynamic Subsidy Method for Congestion Management in Distribution Networks

    OpenAIRE

    Huang, Shaojun; Wu, Qiuwei

    2016-01-01

    Dynamic subsidy (DS) is a locational price paid by the distribution system operator (DSO) to its customers in order to shift energy consumption to designated hours and nodes. It is promising for demand side management and congestion management. This paper proposes a new DS method for congestion management in distribution networks, including the market mechanism, the mathematical formulation through a two-level optimization, and the method solving the optimization by tightening the constraints...

  6. Methods of assessing grain-size distribution during grain growth

    DEFF Research Database (Denmark)

    Tweed, Cherry J.; Hansen, Niels; Ralph, Brian

    1985-01-01

    This paper considers methods of obtaining grain-size distributions and ways of describing them. In order to collect statistically useful amounts of data, an automatic image analyzer is used, and the resulting data are subjected to a series of tests that evaluate the differences between two related...... distributions (before and after grain growth). The distributions are measured from two-dimensional sections, and both the data and the corresponding true three-dimensional grain-size distributions (obtained by stereological analysis) are collected. The techniques described here are illustrated by reference...

  7. Scalable Optimization Methods for Distribution Networks With High PV Integration

    Energy Technology Data Exchange (ETDEWEB)

    Guggilam, Swaroop S.; Dall' Anese, Emiliano; Chen, Yu Christine; Dhople, Sairaj V.; Giannakis, Georgios B.

    2016-07-01

    This paper proposes a suite of algorithms to determine the active- and reactive-power setpoints for photovoltaic (PV) inverters in distribution networks. The objective is to optimize the operation of the distribution feeder according to a variety of performance objectives and ensure voltage regulation. In general, these algorithms take the form of the widely studied ac optimal power flow (OPF) problem. For the envisioned application domain, nonlinear power-flow constraints render pertinent OPF problems nonconvex and computationally intensive for large systems. To address these concerns, we formulate a quadratically constrained quadratic program (QCQP) by leveraging a linear approximation of the algebraic power-flow equations. Furthermore, simplification from QCQP to a linearly constrained quadratic program is provided under certain conditions. The merits of the proposed approach are demonstrated with simulation results that utilize realistic PV-generation and load-profile data for illustrative distribution-system test feeders.
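
    As a hedged illustration of the linearization step, the sketch below solves a box-constrained least-squares problem for inverter reactive setpoints under an assumed linear voltage-sensitivity model; the matrix Xq and base voltages are placeholders for values a real feeder model would supply, and this is a simplified stand-in for the paper's QCQP, not its implementation.

        import numpy as np
        from scipy.optimize import lsq_linear

        # Assumed linear model: v ~= v_base + Xq @ q, with Xq the sensitivity of
        # bus voltages to inverter reactive setpoints (both are illustrative).
        rng = np.random.default_rng(1)
        n_bus, n_inv = 6, 3
        Xq = np.abs(rng.normal(0.05, 0.02, (n_bus, n_inv)))   # dv/dq (p.u.)
        v_base = 1.0 + rng.normal(0.0, 0.02, n_bus)           # voltages at q = 0

        # Pull voltages toward 1.0 p.u. within inverter limits:
        #   min_q || Xq q - (1 - v_base) ||^2   s.t.  -q_max <= q <= q_max
        q_max = 0.1
        res = lsq_linear(Xq, 1.0 - v_base, bounds=(-q_max, q_max))
        v_opt = v_base + Xq @ res.x
        print("setpoints (p.u.):", np.round(res.x, 4))
        print("max |v - 1| before/after:",
              round(float(np.abs(v_base - 1.0).max()), 4),
              round(float(np.abs(v_opt - 1.0).max()), 4))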

  8. Inspection Methods in Programming: Cliches and Plans.

    Science.gov (United States)

    1987-12-01

    Rich, C. Massachusetts Institute of Technology, Artificial Intelligence Laboratory, A.I. Memo No. 1005, December 1987. (Only these bibliographic details are recoverable from the scanned DTIC record.)

  9. Determining on-fault earthquake magnitude distributions from integer programming

    Science.gov (United States)

    Geist, Eric L.; Parsons, Tom

    2018-02-01

    Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >10^6 variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.
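
    A small, self-contained sketch of this formulation type (binary placement variables, L1 rate misfit via auxiliary continuous variables, capacity limits as variable bounds) can be written with SciPy's generic mixed-integer solver. All event counts, targets, and capacities below are illustrative, and the solver is a generic one rather than the study's.

        import numpy as np
        from scipy.optimize import Bounds, LinearConstraint, milp

        # Toy instance (not the California study); requires SciPy >= 1.9.
        rng = np.random.default_rng(2)
        E, F, T = 40, 3, 4000.0                            # events, faults, years
        mags = 6.0 + rng.exponential(1.0 / np.log(10), E)  # G-R sample, b = 1
        moment = 10.0 ** (1.5 * mags + 9.1)                # seismic moment (N m)
        target = np.array([3e15, 1e15, 5e14])              # target moment rates
        m_max = np.array([8.5, 7.5, 7.0])                  # per-fault magnitude caps

        # Decision vector: E*F binaries x[e,f] (event e placed on fault f),
        # then F continuous d[f] bounding each fault's absolute rate misfit.
        nx = E * F
        c = np.concatenate([np.zeros(nx), np.ones(F)])     # minimize sum of misfits
        integrality = np.concatenate([np.ones(nx), np.zeros(F)])

        rows, lo, hi = [], [], []
        for f in range(F):                 # |rate_f - target_f| <= d_f, linearized
            rate = np.zeros(nx + F)
            rate[f::F] = moment / T        # x is laid out as x[e*F + f]
            d = np.zeros(nx + F)
            d[nx + f] = 1.0
            rows.append(rate - d); lo.append(-np.inf); hi.append(target[f])
            rows.append(rate + d); lo.append(target[f]); hi.append(np.inf)
        for e in range(E):                 # each event on at most one fault
            r = np.zeros(nx + F)
            r[e * F:(e + 1) * F] = 1.0
            rows.append(r); lo.append(0.0); hi.append(1.0)

        x_ub = np.ones(nx)                 # implicit constraint: a fault must be
        for e in range(E):                 # able to contain the earthquake
            x_ub[e * F:(e + 1) * F] = (mags[e] <= m_max).astype(float)
        bounds = Bounds(np.zeros(nx + F), np.concatenate([x_ub, np.full(F, np.inf)]))

        res = milp(c=c, constraints=LinearConstraint(np.array(rows), lo, hi),
                   integrality=integrality, bounds=bounds)
        assign = res.x[:nx].round().reshape(E, F)
        print("events per fault:", assign.sum(axis=0))
        print("achieved rates:", (assign * moment[:, None]).sum(axis=0) / T)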

  10. Distributed Interior-point Method for Loosely Coupled Problems

    DEFF Research Database (Denmark)

    Pakazad, Sina Khoshfetrat; Hansson, Anders; Andersen, Martin Skovgaard

    2014-01-01

    In this paper, we put forth distributed algorithms for solving loosely coupled unconstrained and constrained optimization problems. Such problems are usually solved using algorithms that are based on a combination of decomposition and first order methods. These algorithms are commonly very slow...... and require many iterations to converge. In order to alleviate this issue, we propose algorithms that combine the Newton and interior-point methods with proximal splitting methods for solving such problems. Particularly, the algorithm for solving unconstrained loosely coupled problems, is based on Newton......’s method and utilizes proximal splitting to distribute the computations for calculating the Newton step at each iteration. A combination of this algorithm and the interior-point method is then used to introduce a distributed algorithm for solving constrained loosely coupled problems. We also provide...

  11. A Comparative Study of Distribution System Parameter Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.

  12. Entropy Methods For Univariate Distributions in Decision Analysis

    Science.gov (United States)

    Abbas, Ali E.

    2003-03-01

    One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision-makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss the problems with the FMED in that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution when, in addition to its fractiles, we also know it is continuous, and work through full examples to illustrate the approach.
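
    The fractile-constrained maximum entropy distribution the abstract discusses is piecewise uniform, flat over each fractile interval, which a few lines make concrete (the elicited fractiles here are invented for illustration):

        import numpy as np

        # Elicited fractiles: P(X <= x_k) = p_k (illustrative values).
        xs = np.array([0.0, 2.0, 3.0, 5.0, 10.0])
        ps = np.array([0.0, 0.25, 0.50, 0.75, 1.0])

        # Maxent subject to these fractile constraints: piecewise-uniform density,
        # flat over each interval (the "flatness" the text discusses).
        dens = np.diff(ps) / np.diff(xs)

        def pdf(x):
            """Evaluate the fractile-constrained maxent density at x."""
            i = np.clip(np.searchsorted(xs, x, side="right") - 1, 0, len(dens) - 1)
            inside = (x >= xs[0]) & (x <= xs[-1])
            return np.where(inside, dens[i], 0.0)

        print(dens)                              # [0.125 0.25  0.125 0.05 ]
        print(pdf(np.array([1.0, 2.5, 7.0])))    # [0.125 0.25  0.05 ]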

  13. Dedicated Programming Language for Small Distributed Control Devices

    DEFF Research Database (Denmark)

    Madsen, Per Printz; Borch, Ole

    2007-01-01

    Small control computers are used more and more in modern households. These computers are found, for instance, in washing machines, heating systems, security systems, televisions and stereos. In the future all these computers will communicate with each other to implement the intelligent house. This can only....... This paper describes a new, flexible and simple language for programming distributed control tasks. The compiler for this language generates a target code that is very easy to interpret. An interpreter that can be easily ported to different hardware is described. The new language is simple and easy to learn...

  14. Score Function of Distribution and Revival of the Moment Method

    Czech Academy of Sciences Publication Activity Database

    Fabián, Zdeněk

    2016-01-01

    Vol. 45, No. 4 (2016), pp. 1118-1136 ISSN 0361-0926 R&D Projects: GA MŠk(CZ) LG12020 Institutional support: RVO:67985807 Keywords: characteristics of distributions * data characteristics * general moment method * Huber moment estimator * parametric methods * score function Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.311, year: 2016

  15. Dynamic Subsidy Method for Congestion Management in Distribution Networks

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei

    2016-01-01

    Dynamic subsidy (DS) is a locational price paid by the distribution system operator (DSO) to its customers in order to shift energy consumption to designated hours and nodes. It is promising for demand side management and congestion management. This paper proposes a new DS method for congestion...... of the Roy Billinton Test System (RBTS) with high penetration of electric vehicles (EVs) and heat pumps (HPs). The case studies demonstrate the efficacy of the DS method for congestion management in distribution networks. Studies in this paper show that the DS method offers the customers a fair opportunity...

  16. Governance and assessment in a widely distributed medical education program in Australia.

    Science.gov (United States)

    Solarsh, Geoff; Lindley, Jennifer; Whyte, Gordon; Fahey, Michael; Walker, Amanda

    2012-06-01

    The learning objectives, curriculum content, and assessment standards for distributed medical education programs must be aligned across the health care systems and community contexts in which their students train. In this article, the authors describe their experiences at Monash University implementing a distributed medical education program at metropolitan, regional, and rural Australian sites and an offshore Malaysian site, using four different implementation models. Standardizing learning objectives, curriculum content, and assessment standards across all sites while allowing for site-specific implementation models created challenges for educational alignment. At the same time, this diversity created opportunities to customize the curriculum to fit a variety of settings and for innovations that have enriched the educational system as a whole. Developing these distributed medical education programs required a detailed review of Monash's learning objectives and curriculum content and their relevance to the four different sites. It also required a review of assessment methods to ensure an identical and equitable system of assessment for students at all sites. It additionally demanded changes to the systems of governance and the management of the educational program away from a centrally constructed and mandated curriculum to more collaborative approaches to curriculum design and implementation involving discipline leaders at multiple sites. Distributed medical education programs, like that at Monash, in which cohorts of students undertake the same curriculum in different contexts, provide potentially powerful research platforms to compare different pedagogical approaches to medical education and the impact of context on learning outcomes.

  17. Distributed MIMO-ISAR Sub-image Fusion Method

    Directory of Open Access Journals (Sweden)

    Gu Wenkun

    2017-02-01

    Full Text Available The fast fluctuation associated with maneuvering a target’s radar cross-section often affects the imaging performance stability of traditional monostatic Inverse Synthetic Aperture Radar (ISAR). To address this problem, in this study, we propose an imaging method based on the fusion of sub-images of frequency-diversity-distributed Multiple-Input Multiple-Output Inverse Synthetic Aperture Radar (MIMO-ISAR). First, we establish the analytic expression of a two-dimensional ISAR sub-image acquired by different channels of distributed MIMO-ISAR. Then, we derive the distance and azimuth distortion factors of the image acquired by the different channels. By compensating for the distortion of the ISAR image, we ultimately realize distributed MIMO-ISAR fusion imaging. Simulations verify the validity of this imaging method using distributed MIMO-ISAR.

  18. Mathematical methods linear algebra normed spaces distributions integration

    CERN Document Server

    Korevaar, Jacob

    1968-01-01

    Mathematical Methods, Volume I: Linear Algebra, Normed Spaces, Distributions, Integration focuses on advanced mathematical tools used in applications and the basic concepts of algebra, normed spaces, integration, and distributions.The publication first offers information on algebraic theory of vector spaces and introduction to functional analysis. Discussions focus on linear transformations and functionals, rectangular matrices, systems of linear equations, eigenvalue problems, use of eigenvectors and generalized eigenvectors in the representation of linear operators, metric and normed vector

  19. Information-theoretic methods for estimating of complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing various disciplines frequently produces something profound and far-reaching. Cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theory based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task for quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur...

  1. Polynomial probability distribution estimation using the method of moments.

    Science.gov (United States)

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
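
    The core computation reduces to a linear solve: matching the raw moments of a polynomial density on [a, b] gives a linear system in the coefficients. A minimal sketch follows, with an illustrative Beta sample standing in for data; note that nothing here forces the polynomial to stay non-negative.

        import numpy as np

        def poly_pdf_from_moments(moments, a, b):
            """Coefficients c_j of p(x) = sum_j c_j x^j on [a, b] whose raw
            moments match m_0..m_N (m_0 = 1 enforces normalization). Solves
            sum_j c_j (b**(k+j+1) - a**(k+j+1)) / (k+j+1) = m_k for k = 0..N."""
            N = len(moments) - 1
            k, j = np.indices((N + 1, N + 1))
            B = (b ** (k + j + 1.0) - a ** (k + j + 1.0)) / (k + j + 1.0)
            return np.linalg.solve(B, np.asarray(moments, dtype=float))

        # Illustrative use: degree-4 approximation of a Beta(2, 5) density on [0, 1].
        rng = np.random.default_rng(3)
        data = rng.beta(2.0, 5.0, 100_000)
        m = np.array([np.mean(data ** k) for k in range(5)])   # sample moments, m_0 = 1
        c = poly_pdf_from_moments(m, 0.0, 1.0)

        x = np.linspace(0.0, 1.0, 5)
        print(np.round(np.polyval(c[::-1], x), 3))   # c is ascending-degree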

  2. METHODS FOR ESTIMATING THE PARAMETERS OF THE POWER FUNCTION DISTRIBUTION.

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2013-10-01

    Full Text Available In this paper, we present some methods for estimating the parameters of the two-parameter Power function distribution: the least squares method (LSM), the relative least squares method (RELS) and the ridge regression method (RR). Sampling behavior of the estimates is examined by a Monte Carlo simulation. To identify the best estimator amongst them, we use the Total Deviation (T.D.) and the Mean Square Error (M.S.E.) as performance indices. We determine the best method for estimation using different values of the parameters and different sample sizes.

  3. GIS spatial data partitioning method for distributed data processing

    Science.gov (United States)

    Zhou, Yan; Zhu, Qing; Zhang, Yeting

    2007-11-01

    Spatial data partitioning strategy plays an important role in GIS spatial data distributed storage and processing; its key problem is how to partition spatial data across distributed nodes in a network environment. Existing spatial data partitioning methods do not consider the spatial locality and unstructured, variable-length characteristics of spatial data; they simply partition spatial data based on the values of one or more attributes, which can result in storage-capacity imbalance between distributed processing nodes. In view of this, we state two basic principles that spatial data partitioning should meet. We then propose a new spatial data partitioning method based on the hierarchical decomposition of a low-order Hilbert space-filling curve, which avoids excessively intensive space partitioning by hierarchically decomposing subspaces. The proposed method uses the Hilbert curve to impose a linear ordering on the multidimensional spatial objects and partitions the spatial objects according to this ordering. Experimental results show the proposed spatial data partitioning method not only achieves better storage load balance between distributed nodes, but also preserves the spatial locality of data objects after partitioning.
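
    A simplified sketch of the ordering-and-splitting idea (a fixed-order 2-D Hilbert index, then equal-size runs) is shown below; coordinates are assumed normalized to the unit square, and the hierarchical subspace decomposition of the actual method is not reproduced.

        import random

        def hilbert_index(order, x, y):
            """Position of integer cell (x, y) along the Hilbert curve on a
            2**order x 2**order grid (standard bit-manipulation construction)."""
            d = 0
            s = 2 ** (order - 1)
            while s > 0:
                rx = 1 if x & s else 0
                ry = 1 if y & s else 0
                d += s * s * ((3 * rx) ^ ry)
                if ry == 0:                       # rotate the quadrant
                    if rx == 1:
                        x, y = s - 1 - x, s - 1 - y
                    x, y = y, x
                s //= 2
            return d

        def partition(points, n_nodes, order=10):
            """Order 2-D points (unit square assumed) along the curve, then cut
            into equal-size runs: balanced storage, preserved spatial locality."""
            side = 2 ** order
            key = lambda p: hilbert_index(order,
                                          min(int(p[0] * side), side - 1),
                                          min(int(p[1] * side), side - 1))
            ordered = sorted(points, key=key)
            size = -(-len(ordered) // n_nodes)    # ceiling division
            return [ordered[i:i + size] for i in range(0, len(ordered), size)]

        random.seed(4)
        pts = [(random.random(), random.random()) for _ in range(1000)]
        print([len(chunk) for chunk in partition(pts, n_nodes=4)])   # [250, 250, 250, 250]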

  4. Space program management methods and tools

    CERN Document Server

    Spagnulo, Marcello; Balduccini, Mauro; Nasini, Federico

    2013-01-01

    Beginning with the basic elements that differentiate space programs from other management challenges, Space Program Management explains through theory and example of real programs from around the world, the philosophical and technical tools needed to successfully manage large, technically complex space programs both in the government and commercial environment. Chapters address both systems and configuration management, the management of risk, estimation, measurement and control of both funding and the program schedule, and the structure of the aerospace industry worldwide.

  5. 7 CFR 250.64 - Food Distribution Program in the Trust Territory of the Pacific Islands.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false Food Distribution Program in the Trust Territory of... DISTRIBUTION DONATION OF FOODS FOR USE IN THE UNITED STATES, ITS TERRITORIES AND POSSESSIONS AND AREAS UNDER ITS JURISDICTION Household Programs § 250.64 Food Distribution Program in the Trust Territory of the...

  6. Non-regularized inversion method from light scattering applied to ferrofluid magnetization curves for magnetic size distribution analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H., E-mail: B.H.Erne@uu.nl

    2014-03-15

    A numerical inversion method known from the analysis of light scattering by colloidal dispersions is now applied to magnetization curves of ferrofluids. The distribution of magnetic particle sizes or dipole moments is determined without assuming that the distribution is unimodal or of a particular shape. The inversion method enforces positive number densities via a non-negative least squares procedure. It is tested successfully on experimental and simulated data for ferrofluid samples with known multimodal size distributions. The created computer program MINORIM is made available on the web. - Highlights: • A method from light scattering is applied to analyze ferrofluid magnetization curves. • A magnetic size distribution is obtained without prior assumption of its shape. • The method is tested successfully on ferrofluids with a known size distribution. • The practical limits of the method are explored with simulated data including noise. • This method is implemented in the program MINORIM, freely available online.
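
    MINORIM itself is distributed by the authors; the sketch below is an independent toy reconstruction of the idea, building a Langevin forward model on assumed moment and field grids and inverting a synthetic magnetization curve with a non-negative least squares solver.

        import numpy as np
        from scipy.optimize import nnls

        kB, T = 1.380649e-23, 298.0                     # J/K, K

        def langevin(x):
            x = np.asarray(x, dtype=float)
            small = np.abs(x) < 1e-6
            safe = np.where(small, 1.0, x)              # avoid 0/0 in unused branch
            return np.where(small, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

        # Candidate dipole moments and applied fields (illustrative grids/units).
        mu = np.logspace(-20, -18, 40)                  # A m^2
        B = 4e-7 * np.pi * np.linspace(1e3, 1e6, 60)    # flux density, T

        # Forward model M(B) = sum_j n_j mu_j L(mu_j B / kB T); no distribution
        # shape (unimodality etc.) is assumed.
        A = mu[None, :] * langevin(np.outer(B, mu) / (kB * T))

        # Synthetic bimodal "measurement", then the non-negative inversion step:
        n_true = np.exp(-0.5 * (np.log(mu / 3e-20) / 0.3) ** 2) \
               + 0.5 * np.exp(-0.5 * (np.log(mu / 4e-19) / 0.2) ** 2)
        M = A @ n_true + np.random.default_rng(5).normal(0.0, 1e-21, B.size)

        n_est, _ = nnls(A, M)                           # enforces n_j >= 0
        print("largest recovered components near mu =", mu[np.argsort(n_est)[-2:]])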

  7. Reliability Assessment of Active Distribution System Using Monte Carlo Simulation Method

    Directory of Open Access Journals (Sweden)

    Shaoyun Ge

    2014-01-01

    Full Text Available In this paper we treat the reliability assessment problem of an active distribution system at low and high DG penetration levels using the Monte Carlo simulation method. The problem is formulated as a two-case program: a low-penetration simulation and a high-penetration simulation. The load shedding strategy and the simulation process are introduced in detail for each FMEA process. Results indicate that the integration of DG can improve the reliability of the system if the system is operated actively.
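
    A much-simplified non-sequential Monte Carlo sketch of distribution reliability indices is given below; it ignores DG islanding and load shedding, and all rates, repair times, and customer counts are invented for illustration.

        import numpy as np

        # Independent feeder-section outages, each interrupting its downstream
        # customers. One repair-time draw per section-year is used, which is
        # adequate for low failure rates.
        rng = np.random.default_rng(6)
        years = 10_000
        fail_rate = np.array([0.10, 0.15, 0.08])   # failures per section-year
        repair_h = np.array([4.0, 6.0, 3.0])       # mean repair time, hours
        customers = np.array([400, 250, 100])      # customers downstream of section
        n_total = 400                              # customers on the feeder

        interruptions = hours = 0.0
        for _ in range(years):
            n_fail = rng.poisson(fail_rate)        # outages per section this year
            interruptions += (n_fail * customers).sum()
            hours += (n_fail * rng.exponential(repair_h) * customers).sum()

        print(f"SAIFI = {interruptions / years / n_total:.3f} int./customer-year")
        print(f"SAIDI = {hours / years / n_total:.3f} hours/customer-year")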

  8. Type systems for distributed programs components and sessions

    CERN Document Server

    Dardha, Ornela

    2016-01-01

    In this book we develop powerful techniques based on formal methods for the verification of correctness, consistency and safety properties related to dynamic reconfiguration and communication in complex distributed systems. In particular, static analysis techniques based on types and type systems are an adequate methodology considering their success in guaranteeing not only basic safety properties, but also more sophisticated ones like deadlock or lock freedom in concurrent settings. The main contributions of this book are twofold. i) We design a type system for a concurrent object-oriented calculus to statically ensure consistency of dynamic reconfigurations. ii) We define an encoding of the session pi-calculus, which models communication in distributed systems, into the standard typed pi-calculus. We use this encoding to derive properties like type safety and progress in the session pi-calculus by exploiting the corresponding properties in the standard typed pi-calculus.

  9. Differential Transformation Method for Temperature Distribution in a Radiating Fin

    DEFF Research Database (Denmark)

    Rahimi, M.; Hosseini, M. J.; Barari, Amin

    2011-01-01

    Radiating extended surfaces are widely used to enhance heat transfer between a primary surface and the environment. In this paper, the differential transformation method (DTM) is proposed for solving nonlinear differential equation of temperature distribution in a heat radiating fin. The concept...... of differential transformation is briefly introduced, and then we employed it to derive solutions of two nonlinear equations. The results obtained by DTM are compared with those derived from the analytical solution to verify the accuracy of the proposed method....

  10. Method for calculating voltage distribution along lengthy insulator strings

    Energy Technology Data Exchange (ETDEWEB)

    Perelman, L.S.

    1978-01-01

    This computer method is based on the simultaneous solution of a set of equations with potential coefficients for charges of conductors, tower and insulator caps, and a set of equations for the insulator capacitance chain. The effect of various factors on the voltage distribution along strings for 750 and 1150/1500 kV lines is considered.

  11. Visual Method for Spectral Energy Distribution Calculation of ...

    Indian Academy of Sciences (India)

    © Indian Academy of Sciences. Visual Method for Spectral Energy Distribution Calculation of Blazars. Y. Huang & J. H. Fan. School of Computer Science and Education Software, Guangzhou University, Guangzhou 510006, China; Centre for Astrophysics, Guangzhou University, Guangzhou 510006, China.

  12. Electric Utility Transmission and Distribution Line Engineering Program

    Energy Technology Data Exchange (ETDEWEB)

    Peter McKenny

    2010-08-31

    Economic development in the United States depends on a reliable and affordable power supply. The nation will need well educated engineers to design a modern, safe, secure, and reliable power grid for our future needs. An anticipated shortage of qualified engineers has caused considerable concern in many professional circles, and various steps are being taken nationwide to alleviate the potential shortage and ensure the North American power system's reliability, and our world-wide economic competitiveness. To help provide a well-educated and trained workforce which can sustain and modernize the nation's power grid, Gonzaga University's School of Engineering and Applied Science has established a five-course (15-credit hour) Certificate Program in Transmission and Distribution (T&D) Engineering. The program has been specifically designed to provide working utility engineering professionals with on-line access to advanced engineering courses which cover modern design practice with an industry-focused theoretical foundation. A total of twelve courses have been developed to-date and students may select any five in their area of interest for the T&D Certificate. As each course is developed and taught by a team of experienced engineers (from public and private utilities, consultants, and industry suppliers), students are provided a unique opportunity to interact directly with different industry experts over the eight weeks of each course. Course material incorporates advanced aspects of civil, electrical, and mechanical engineering disciplines that apply to power system design and are appropriate for graduate engineers. As such, target students for the certificate program include: (1) recent graduates with a Bachelor of Science Degree in an engineering field (civil, mechanical, electrical, etc.); (2) senior engineers moving from other fields to the utility industry (i.e. paper industry to utility engineering or project management positions); and (3) regular

  13. Computationally intensive econometrics using a distributed matrix-programming language.

    Science.gov (United States)

    Doornik, Jurgen A; Hendry, David F; Shephard, Neil

    2002-06-15

    This paper reviews the need for powerful computing facilities in econometrics, focusing on concrete problems which arise in financial economics and in macroeconomics. We argue that the profession is being held back by the lack of easy-to-use generic software which is able to exploit the availability of cheap clusters of distributed computers. Our response is to extend, in a number of directions, the well-known matrix-programming interpreted language Ox developed by the first author. We note three possible levels of extensions: (i) Ox with parallelization explicit in the Ox code; (ii) Ox with a parallelized run-time library; and (iii) Ox with a parallelized interpreter. This paper studies and implements the first case, emphasizing the need for deterministic computing in science. We give examples in the context of financial economics and time-series modelling.

  14. 3D crack aperture distribution from a nuclear imaging method

    Science.gov (United States)

    Sardini, Paul; Kuva, Jukka; Siitari-Kauppi, Marja; Bonnet, Marine; Hellmuth, Karl-Heinz

    2017-04-01

    Cracks in solid rocks are multi-scale entities because of their spatial, length and aperture distributions. Aperture distributions of cracks are not well known because their full aperture range (1 mm) is not accessible using common imaging techniques, such as SEM or X-ray computed micro-tomography. Knowing the aperture distribution of cracks is, however, highly relevant to understanding flow in rocks. In crystalline rocks, the lack of knowledge about the crack aperture distribution keeps us from a clear understanding of the relationship between porosity and permeability. A nuclear imaging method based on the full saturation of connected rock porosity by a 14C-doped resin (the 14-C PMMA method) allows detecting the connected microcrack network using autoradiography. Even if cracks are detected only on 2D sections, an estimate of the 3D aperture distribution of these cracks is possible. To this end, a set of "artificial crack" standards was prepared and investigated. These standards consisted of a PMMA layer of known thickness between two glass plates. Analysis of experimental autoradiographic profiles around these artificial cracks allows determination of their aperture. This methodology was then applied to different rock samples, mainly granitic ones.

  15. Multi-level methods and approximating distribution functions

    Science.gov (United States)

    Wilson, D.; Baker, R. E.

    2016-07-01

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie's direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie's direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146-179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
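
    The telescoping-sum idea is easy to demonstrate outside the chemical-kinetics setting. The sketch below estimates a mean (not a full distribution) for a geometric Brownian motion with coupled Euler-Maruyama levels; it is a generic multi-level Monte Carlo toy, not the paper's tau-leap construction.

        import numpy as np

        rng = np.random.default_rng(7)
        r, sigma, T, S0 = 0.05, 0.2, 1.0, 1.0     # GBM parameters (illustrative)

        def level_pair(l, n):
            """n coupled samples of (P_l, P_{l-1}) with P = S_T under Euler-
            Maruyama; the coarse path reuses the fine Brownian increments."""
            M = 2 ** l
            dt = T / M
            dW = rng.normal(0.0, np.sqrt(dt), (n, M))
            Sf = np.full(n, S0)
            for m in range(M):
                Sf = Sf * (1.0 + r * dt + sigma * dW[:, m])
            if l == 0:
                return Sf, np.zeros(n)
            Sc = np.full(n, S0)
            dWc = dW[:, 0::2] + dW[:, 1::2]       # pairwise-summed increments
            for m in range(M // 2):
                Sc = Sc * (1.0 + r * 2.0 * dt + sigma * dWc[:, m])
            return Sf, Sc

        # Telescoping sum: E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}],
        # many cheap coarse samples, few expensive fine ones.
        samples = [40_000, 20_000, 10_000, 5_000, 2_500, 1_250]
        est = 0.0
        for l, n in enumerate(samples):
            Pf, Pc = level_pair(l, n)
            est += np.mean(Pf - Pc)
        print(f"MLMC estimate {est:.4f}; exact E[S_T] = {S0 * np.exp(r * T):.4f}")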

  16. Synchronization Methods for Three Phase Distributed Power Generation Systems

    DEFF Research Database (Denmark)

    Timbus, Adrian Vasile; Teodorescu, Remus; Blaabjerg, Frede

    2005-01-01

    Nowadays, it is a general trend to increase the electricity production using Distributed Power Generation Systems (DPGS) based on renewable energy resources such as wind, sun or hydrogen. If these systems are not properly controlled, their connection to the utility network can generate problems on the grid side. Therefore, considerations about power generation, safe running and grid synchronization must be made before connecting these systems to the utility network. This paper mainly deals with the grid synchronization issues of distributed systems. An overview of the synchronization methods as well as their major characteristics is given. New solutions to optimize the synchronization methods when running on distorted grid conditions are discussed. Simulation and experimental results are used to evaluate the behavior of the synchronization methods under different kinds of grid disturbances...

  17. System and Method for Monitoring Distributed Asset Data

    Science.gov (United States)

    Gorinevsky, Dimitry (Inventor)

    2015-01-01

    A computer-based monitoring system and monitoring method implemented in computer software for detecting, estimating, and reporting the condition states, their changes, and anomalies for many assets. The assets are of the same type, are operated over a period of time, and are outfitted with data collection systems. The proposed monitoring method accounts for variability of working conditions for each asset by using a regression model that characterizes asset performance. The assets are of the same type but not identical. The proposed monitoring method accounts for asset-to-asset variability; it also accounts for drifts and trends in the asset condition and data. The proposed monitoring system can perform distributed processing of massive amounts of historical data without discarding any useful information where moving all the asset data into one central computing system might be infeasible. The overall processing includes distributed preprocessing of data records from each asset to produce compressed data.

  18. The frequency-independent control method for distributed generation systems

    DEFF Research Database (Denmark)

    Naderi, Siamak; Pouresmaeil, Edris; Gao, Wenzhong David

    2012-01-01

    are controlled in the synchronously rotating orthogonal dq reference frame. The transformed variables are used in control of the voltage source inverter that connects the DG to the distribution network. Due to the importance of distributed resources in modern power systems, the development of new, practical, cost-effective and simple control strategies is obligatory. The new control method of this paper does not need a Phase Locked Loop (PLL) in the control circuit and has fast dynamic response in providing active and reactive power to the nonlinear load. From extensive simulation results, high performance of this control strategy...

  19. Method of imaging the electrical conductivity distribution of a subsurface

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Timothy C.

    2017-09-26

    A method of imaging electrical conductivity distribution of a subsurface containing metallic structures with known locations and dimensions is disclosed. Current is injected into the subsurface to measure electrical potentials using multiple sets of electrodes, thus generating electrical resistivity tomography measurements. A numeric code is applied to simulate the measured potentials in the presence of the metallic structures. An inversion code is applied that utilizes the electrical resistivity tomography measurements and the simulated measured potentials to image the subsurface electrical conductivity distribution and remove effects of the subsurface metallic structures with known locations and dimensions.

  20. A hybrid method for assessment of soil pollutants spatial distribution

    Science.gov (United States)

    Tarasov, D. A.; Medvedev, A. N.; Sergeev, A. P.; Shichkin, A. V.; Buevich, A. G.

    2017-07-01

    The authors propose a hybrid method to predict the distribution of topsoil pollutants (Cu and Cr). The method combines artificial neural networks and kriging. Corresponding computer models were built and tested on real data from subarctic regions of Russia. The network structure selection was based on minimization of the root-mean-square error between real and predicted concentrations. The constructed models show that the prognostic accuracy of the artificial neural network is higher than that of the geostatistical (kriging) and deterministic methods. The conclusion is that hybridization of models (artificial neural network and kriging) improves the total predictive accuracy.
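
    One plausible reading of such a hybrid is residual kriging: a network fits the broad trend and a Gaussian process interpolates the spatially correlated residual. The sketch below uses synthetic data and scikit-learn, and should be taken as an assumed variant rather than the authors' exact model.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Synthetic "topsoil concentration" surface over sampling coordinates.
        rng = np.random.default_rng(8)
        X = rng.uniform(0.0, 10.0, (300, 2))
        z = np.sin(X[:, 0]) + 0.1 * X[:, 1] + rng.normal(0.0, 0.05, 300)

        # Stage 1: the network learns the broad trend.
        nn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000,
                          random_state=0).fit(X, z)
        # Stage 2: kriging (a GP) models the spatially correlated residual.
        gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-3),
                                      normalize_y=True).fit(X, z - nn.predict(X))

        X_new = rng.uniform(0.0, 10.0, (5, 2))
        pred = nn.predict(X_new) + gp.predict(X_new)          # hybrid prediction
        truth = np.sin(X_new[:, 0]) + 0.1 * X_new[:, 1]
        print("RMSE at new sites:", float(np.sqrt(np.mean((pred - truth) ** 2))))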

  1. Electrical power distribution control methods, electrical energy demand monitoring methods, and power management devices

    Science.gov (United States)

    Chassin, David P.; Donnelly, Matthew K.; Dagle, Jeffery E.

    2006-12-12

    Electrical power distribution control methods, electrical energy demand monitoring methods, and power management devices are described. In one aspect, an electrical power distribution control method includes providing electrical energy from an electrical power distribution system, applying the electrical energy to a load, providing a plurality of different values for a threshold at a plurality of moments in time and corresponding to an electrical characteristic of the electrical energy, and adjusting an amount of the electrical energy applied to the load responsive to an electrical characteristic of the electrical energy triggering one of the values of the threshold at the respective moment in time.

  2. Communication Systems and Study Method for Active Distribution Power systems

    DEFF Research Database (Denmark)

    Wei, Mu; Chen, Zhe

    Due to the involvement and evolution of communication technologies in contemporary power systems, the applications of modern communication technologies in distribution power systems are becoming increasingly important. In this paper, the International Organization for Standardization (ISO......) reference seven-layer model of communication systems, and the main communication technologies and protocols on each corresponding layer, are introduced. Some newly developed communication techniques, like Ethernet, are discussed with reference to possible applications in distributed power systems...... The suitability of the communication technology to the distribution power system with active renewable energy based generation units is discussed. Subsequently, typical possible communication systems are studied by simulation. In this paper, a novel method of integrating communication system impact into power...

  3. Distribution-independent hierarchical N-body methods

    Energy Technology Data Exchange (ETDEWEB)

    Aluru, Srinivas [Iowa State Univ., Ames, IA (United States)

    1994-07-27

    The N-body problem is to simulate the motion of N particles under the influence of mutual force fields based on an inverse square law. The problem has applications in several domains including astrophysics, molecular dynamics, fluid dynamics, radiosity methods in computer graphics and numerical complex analysis. Research efforts have focused on reducing the O(N^2) time per iteration required by the naive algorithm of computing each pairwise interaction. Widely respected among these are the Barnes-Hut and Greengard methods. Greengard claims his algorithm reduces the complexity to O(N) time per iteration. Throughout this thesis, we concentrate on rigorous, distribution-independent, worst-case analysis of the N-body methods. We show that Greengard's algorithm is not O(N), as claimed. Both Barnes-Hut and Greengard's methods depend on the same data structure, which we show is distribution-dependent. For the distribution that results in the smallest running time, we show that Greengard's algorithm is Ω(N log^2 N) in two dimensions and Ω(N log^4 N) in three dimensions. We have designed a hierarchical data structure whose size depends entirely upon the number of particles and is independent of the distribution of the particles. We show that both Greengard's and Barnes-Hut algorithms can be used in conjunction with this data structure to reduce their complexity. Apart from reducing the complexity of the Barnes-Hut algorithm, the data structure also permits more accurate error estimation. We present two- and three-dimensional algorithms for creating the data structure. The multipole method designed using this data structure has a complexity of O(N log N) in two dimensions and O(N log^2 N) in three dimensions.

  4. Isotope Production and Distribution Program's Fiscal Year 1997 financial statement audit

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-27

    The Department of Energy Isotope Production and Distribution Program mission is to serve the national need for a reliable supply of isotope products and services for medicine, industry and research. The program produces and sells hundreds of stable and radioactive isotopes that are widely utilized by domestic and international customers. Isotopes are produced only where there is no U.S. private sector capability or other production capacity is insufficient to meet U.S. needs. The Department encourages private sector investment in new isotope production ventures and will sell or lease its existing facilities and inventories for commercial purposes. The Isotope Program reports to the Director of the Office of Nuclear Energy, Science and Technology. The Isotope Program operates under a revolving fund established by the Fiscal Year (FY) 1990 Energy and Water Appropriations Act and maintains financial viability by earning revenues from the sale of isotopes and services and through annual appropriations. The FY 1995 Energy and Water Appropriations Act modified predecessor acts to allow prices charged for Isotope Program products and services to be based on production costs, market value, the needs of the research community, and other factors. Although the Isotope Program functions as a business, prices set for small-volume, high-cost isotopes that are needed for research purposes may not achieve full-cost recovery. As a result, isotopes produced by the Isotope Program for research and development are priced to provide a reasonable return to the U.S. Government without discouraging their use. Commercial isotopes are sold on a cost-recovery basis. Because of its pricing structure, when selecting isotopes for production, the Isotope Program must constantly balance current isotope demand, market conditions, and societal benefits with its determination to operate at the lowest possible cost to U.S. taxpayers. Thus, this report provides a financial analysis of this situation.

  5. Resolution methods in proving the program correctness

    Directory of Open Access Journals (Sweden)

    Markoski Branko

    2007-01-01

    Full Text Available Program testing determines whether a program's behavior matches its specification, and also how the program behaves under different exploitation conditions. Proving program correctness is reduced to finding a proof for the assertion that a given sequence of formulas represents a derivation within a formal theory of a special predicate calculus. A well-known variant of this conception is described: correctness based on programming logic rules. It is shown that programming logic rules may be used in an automatic resolution procedure. Illustrative examples are given, realized in a prolog-like LP-language (with no restrictions to Horn's clauses and without finite failure). Basic information on the LP-language is also given. It is also shown how a Pascal program is executed in the LP-system.

  6. Real-Time Reactive Power Distribution in Microgrids by Dynamic Programing

    DEFF Research Database (Denmark)

    Levron, Yoash; Beck, Yuval; Katzir, Liran

    2017-01-01

    In this paper a new real-time optimization method for reactive power distribution in microgrids is proposed. The method enables location of a globally optimal distribution of reactive power under normal operating conditions. The method exploits the typical compact structure of microgrids to obtain ... as radial ones. The optimization problem is formulated with the cluster reactive powers as free variables, and the solution space is spanned by the cluster reactive power outputs. The optimal solution is then constructed by efficiently scanning the entire solution space, by scanning every possible combination of reactive powers, by means of dynamic programming. Since every single step involves a one-dimensional problem, the complexity of the solution is only linear with the number of clusters, and as a result, a globally optimal solution may be obtained in real time. The paper includes the results...
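
    The stage-by-stage scan the abstract describes can be illustrated with a small dynamic program over discretized cluster reactive outputs; the per-cluster cost functions, limits, and target below are invented, and a real implementation would use network loss and voltage models.

        # One DP stage per cluster: totals are tracked on an integer grid, so the
        # sweep is linear in the number of clusters. All data are illustrative.
        Q_TARGET = 12                                   # required total (units)
        Q_MAX = 6                                       # per-cluster limit (units)
        weights = (1.0, 0.6, 1.4, 0.9)                  # per-cluster loss proxies

        INF = float("inf")
        best = {0: (0.0, [])}                           # total -> (cost, choices)
        for w in weights:
            nxt = {}
            for total, (acc, picks) in best.items():
                for q in range(Q_MAX + 1):
                    t, cost = total + q, acc + w * q * q
                    if t <= Q_TARGET and cost < nxt.get(t, (INF,))[0]:
                        nxt[t] = (cost, picks + [q])
            best = nxt

        cost_opt, q_opt = best[Q_TARGET]
        print(f"cluster setpoints {q_opt}, loss proxy {cost_opt:.1f}")   # e.g. [3, 4, 2, 3]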

  7. Translation techniques for distributed-shared memory programming models

    Energy Technology Data Exchange (ETDEWEB)

    Fuller, Douglas James [Iowa State Univ., Ames, IA (United States)

    2005-01-01

    The high performance computing community has experienced an explosive improvement in distributed-shared memory hardware. Driven by increasing real-world problem complexity, this explosion has ushered in vast numbers of new systems. Each new system presents new challenges to programmers and application developers. Part of the challenge is adapting to new architectures with new performance characteristics. Different vendors release systems with widely varying architectures that perform differently in different situations. Furthermore, since vendors need only provide a single performance number (total MFLOPS, typically for a single benchmark), they initially have strong incentive to optimize only the API of their choice. Consequently, only a fraction of the available APIs are well optimized on most systems. This causes issues in porting and writing maintainable software, as well as issues for programmers burdened with mastering each new API as it is released. Also, programmers wishing to use a certain machine must choose their API based on the underlying hardware instead of the application. This thesis argues that a flexible, extensible translator for distributed-shared memory APIs can help address some of these issues. For example, a translator might take as input code in one API and output an equivalent program in another. Such a translator could provide instant porting for applications to new systems that do not support the application's library or language natively. While open-source APIs are abundant, they do not perform optimally everywhere. A translator would also allow performance testing using a single base code translated to a number of different APIs. Most significantly, this type of translator frees programmers to select the most appropriate API for a given application based on the application (and developer) itself instead of the underlying hardware.

  8. Distributional monte carlo methods for the boltzmann equation

    Science.gov (United States)

    Schrock, Christopher R.

    Stochastic particle methods (SPMs) for the Boltzmann equation, such as the Direct Simulation Monte Carlo (DSMC) technique, have gained popularity for the prediction of flows in which the assumptions behind the continuum equations of fluid mechanics break down; however, there are still a number of issues that make SPMs computationally challenging for practical use. In traditional SPMs, simulated particles may possess only a single velocity vector, even though they may represent an extremely large collection of actual particles. This limits the method to converge only in law to the Boltzmann solution. This document details the development of new SPMs that allow the velocity of each simulated particle to be distributed. This approach has been termed Distributional Monte Carlo (DMC). A technique is described which applies kernel density estimation to Nanbu's DSMC algorithm. It is then proven that the method converges not just in law, but also in solution for L∞(R^3) solutions of the space homogeneous Boltzmann equation. This provides for direct evaluation of the velocity density function. The derivation of a general Distributional Monte Carlo method is given which treats collision interactions between simulated particles as a relaxation problem. The framework is proven to converge in law to the solution of the space homogeneous Boltzmann equation, as well as in solution for L∞(R^3) solutions. An approach based on the BGK simplification is presented which computes collision outcomes deterministically. Each technique is applied to the well-studied Bobylev-Krook-Wu solution as a numerical test case. Accuracy and variance of the solutions are examined as functions of various simulation parameters. Significantly improved accuracy and reduced variance are observed in the normalized moments for the Distributional Monte Carlo technique employing discrete BGK collision modeling.
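
    The kernel-density ingredient mentioned above is easy to illustrate: given an ensemble of simulated particle velocities, a KDE yields a directly evaluable velocity density (1-D toy data below):

        import numpy as np
        from scipy.stats import gaussian_kde

        # Toy two-stream velocity ensemble; KDE turns the particle list into a
        # smooth, directly evaluable velocity density.
        rng = np.random.default_rng(9)
        v = np.concatenate([rng.normal(-1.0, 0.3, 2000),
                            rng.normal(+1.0, 0.3, 2000)])

        kde = gaussian_kde(v)                 # Gaussian kernels, automatic bandwidth
        grid = np.linspace(-2.5, 2.5, 7)
        print(np.round(kde(grid), 3))         # estimated PDF values on the grid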

  9. Numerical methods for computing the temperature distribution in satellite systems

    OpenAIRE

    Gómez-Valadés Maturano, Francisco José

    2012-01-01

    The present thesis has been carried out at the ASTRIUM company to find new methods to obtain temperature distributions. Current software packages such as ESATAN or ESARAD provide excellent thermal analysis solutions as well as radiative simulations in orbit scenarios, but at a high price, as they are very time consuming. Since licenses for these products are usually too limited for use by many engineers, it is important to provide new tools to do these calculations. In consequence, a dif...

  10. Program distribution for unclassified scientific and technical reports: Instructions and category scope notes: Revision 75

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, W.F. Jr.; Amburn, D. (eds.)

    1989-08-01

    The DOE Office of Scientific and Technical Information (OSTI) in conjunction with DOE Program Offices establishes distribution categories for research, development, and technological reports emanating from DOE programs. The revised category numbers and scope notes contained in this publication were coordinated between OSTI and DOE Program Offices with input from DOE Field Offices. Decisions regarding whether reports will be distributed are made by Program Offices in conjunction with DOE Field Offices and OSTI. This revision of DOE/OSTI-4500 contains the revised distribution category numbers and scope notes along with the printed copy requirement, which shows the number of copies required to make program distribution.

  11. Distributing Earthquakes Among California's Faults: A Binary Integer Programming Approach

    Science.gov (United States)

    Geist, E. L.; Parsons, T.

    2016-12-01

    Statement of the problem is simple: given regional seismicity specified by a Gutenberg-Richter (G-R) relation, how are earthquakes distributed to match observed fault-slip rates? The objective is to determine the magnitude-frequency relation on individual faults. The California statewide G-R b-value and a-value are estimated from historical seismicity, with the a-value accounting for off-fault seismicity. UCERF3 consensus slip rates are used, based on geologic and geodetic data and include estimates of coupling coefficients. The binary integer programming (BIP) problem is set up such that each earthquake from a synthetic catalog spanning millennia can occur at any location along any fault. The decision vector, therefore, consists of binary variables, with values equal to one indicating the location of each earthquake that results in an optimal match of slip rates, in an L1-norm sense. Rupture area and slip associated with each earthquake are determined from a magnitude-area scaling relation. Uncertainty bounds on the UCERF3 slip rates provide explicit minimum and maximum constraints to the BIP model, with the former more important to feasibility of the problem. There is a maximum magnitude limit associated with each fault, based on fault length, providing an implicit constraint. Solution of integer programming problems with a large number of variables (>10^5 in this study) has been possible only since the late 1990s. In addition to the classic branch-and-bound technique used for these problems, several other algorithms have been recently developed, including pre-solving, sifting, cutting planes, heuristics, and parallelization. An optimal solution is obtained using a state-of-the-art BIP solver for M≥6 earthquakes and California's faults with slip-rates > 1 mm/yr. Preliminary results indicate a surprising diversity of on-fault magnitude-frequency relations throughout the state.

  12. A Dual Method for Computing Power Transfer Distribution Factors

    OpenAIRE

    Ronellenfitsch, Henrik; Timme, Marc; Witthaut, Dirk

    2015-01-01

    Power Transfer Distribution Factors (PTDFs) play a crucial role in power grid security analysis, planning, and redispatch. Fast calculation of the PTDFs is therefore of great importance. In this paper, we present a non-approximative dual method of computing PTDFs. It uses power flows along topological cycles of the network but still relies on simple matrix algebra. At the core, our method changes the size of the matrix that needs to be inverted to calculate the PTDFs from $N\\times N$, where $...
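
    For contrast with the cycle-based dual method proposed in the paper, the standard nodal computation of DC-approximation PTDFs fits in a few lines; the 4-bus line data below are illustrative.

        import numpy as np

        # Standard nodal PTDF computation in the DC approximation. Bus 0 is the
        # slack; line data are (from, to, reactance) and purely illustrative.
        lines = [(0, 1, 0.1), (1, 2, 0.2), (2, 3, 0.1), (0, 3, 0.2), (1, 3, 0.1)]
        N, L = 4, len(lines)

        A = np.zeros((L, N))                        # oriented incidence matrix
        b = np.zeros(L)                             # line susceptances 1/x
        for k, (i, j, x) in enumerate(lines):
            A[k, i], A[k, j], b[k] = 1.0, -1.0, 1.0 / x

        Bbus = A.T @ np.diag(b) @ A                 # nodal susceptance matrix
        PTDF = np.zeros((L, N))
        PTDF[:, 1:] = np.diag(b) @ A[:, 1:] @ np.linalg.inv(Bbus[1:, 1:])

        # Entry (k, n): flow change on line k per unit injected at bus n and
        # withdrawn at the slack; the slack column is zero by convention.
        print(np.round(PTDF, 3))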

  13. Applying the Priority Distribution Method for Employee Motivation

    Directory of Open Access Journals (Sweden)

    Jonas Žaptorius

    2013-09-01

    Full Text Available In an age of increasing healthcare expenditure, the efficiency of healthcare services is a burning issue. This paper deals with the creation of a performance-related remuneration system that meets requirements for efficiency and sustainable quality. In real-world scenarios, it is difficult to create an objective and transparent employee performance evaluation model dealing with both qualitative and quantitative criteria. To achieve these goals, the use of decision support methods is suggested and analysed. A systematic approach to the practical application of the Priority Distribution Method in healthcare provider organisations is developed and described.

  14. KNET - DISTRIBUTED COMPUTING AND/OR DATA TRANSFER PROGRAM

    Science.gov (United States)

    Hui, J.

    1994-01-01

    KNET facilitates distributed computing between a UNIX compatible local host and a remote host which may or may not be UNIX compatible. It is capable of automatic remote login. That is, it performs on the user's behalf the chore of handling host selection, user name, and password to the designated host. Once the login has been successfully completed, the user may interactively communicate with the remote host. Data output from the remote host may be directed to the local screen, to a local file, and/or to a local process. Conversely, data input from the keyboard, a local file, or a local process may be directed to the remote host. KNET takes advantage of the multitasking and terminal mode control features of the UNIX operating system. A parent process is used as the upper layer for interfacing with the local user. A child process is used for a lower layer for interfacing with the remote host computer, and optionally one or more child processes can be used for the remote data output. Output may be directed to the screen and/or to the local processes under the control of a data pipe switch. In order for KNET to operate, the local and remote hosts must observe a common communications protocol. KNET is written in ANSI standard C-language for computers running UNIX. It has been successfully implemented on several Sun series computers and a DECstation 3100 and used to run programs remotely on VAX VMS and UNIX based computers. It requires 100K of RAM under SunOS and 120K of RAM under DEC RISC ULTRIX. An electronic copy of the documentation is provided on the distribution medium. The standard distribution medium for KNET is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. KNET was developed in 1991 and is a copyrighted work with all copyright vested in NASA. UNIX is a registered trademark of AT&T Bell Laboratories. Sun and SunOS are trademarks of Sun Microsystems, Inc. DECstation, VAX, VMS, and

  15. 45 CFR 2519.600 - How are funds for Higher Education programs distributed?

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 How are funds for Higher Education programs distributed? (CORPORATION FOR NATIONAL AND COMMUNITY SERVICE, HIGHER EDUCATION INNOVATIVE PROGRAMS FOR COMMUNITY SERVICE, Distribution of Funds) § 2519.600 How are funds for Higher Education programs distributed? All funds under this...

  16. Establishment of a Standard Analytical Model of Distribution Network with Distributed Generators and Development of Multi Evaluation Method for Network Configuration Candidates

    Science.gov (United States)

    Hayashi, Yasuhiro; Kawasaki, Shoji; Matsuki, Junya; Matsuda, Hiroaki; Sakai, Shigekazu; Miyazaki, Teru; Kobayashi, Naoki

    Since a distribution network has many sectionalizing switches, there is a huge number of radial network configuration candidates, determined by the states (open or closed) of the sectionalizing switches. Recently, the number of distributed generators, such as photovoltaic generation systems and wind turbine generation systems, connected to the distribution network has increased drastically. A distribution network with distributed generators must be operated so as to maintain reliability of power supply and power quality. Therefore, the many configurations of the distribution network with distributed generators must be evaluated from various viewpoints, such as distribution loss, total harmonic distortion, voltage imbalance and so on. In this paper, the authors propose a multi-evaluation method to evaluate the distribution network configuration candidates that satisfy voltage and line-current limits from three viewpoints: (1) distribution loss, (2) total harmonic distortion and (3) voltage imbalance. After establishing a standard analytical model of a three-sectionalized, three-connected distribution network configuration with distributed generators based on practical data, the multi-evaluation of the established model is carried out using the proposed method based on EMTP (the Electro-Magnetic Transients Program).

  17. State Electricity Regulatory Policy and Distributed Resources: Distributed Resource Distribution Credit Pilot Programs--Revealing the Value to Consumers and Vendors

    Energy Technology Data Exchange (ETDEWEB)

    Moskovitz, D.; Harrington, C.; Shirley, W.; Cowart, R.; Sedano, R.; Weston, F.

    2002-10-01

    Designing and implementing credit-based pilot programs for distributed resources distribution is a low-cost, low-risk opportunity to find out how these resources can help defer or avoid costly electric power system (utility grid) distribution upgrades. This report describes implementation options for deaveraged distribution credits and distributed resource development zones. Developing workable programs implementing these policies can dramatically increase the deployment of distributed resources in ways that benefit distributed resource vendors, users, and distribution utilities. This report is one in the State Electricity Regulatory Policy and Distributed Resources series developed under contract to NREL (see Annual Technical Status Report of the Regulatory Assistance Project: September 2000-September 2001, NREL/SR-560-32733). Other titles in this series are: (1) Accommodating Distributed Resources in Wholesale Markets, NREL/SR-560-32497; (2) Distributed Resources and Electric System Reliability, NREL/SR-560-32498; (3) Distribution System Cost Methodologies for Distributed Generation, NREL/SR-560-32500; (4) Distribution System Cost Methodologies for Distributed Generation Appendices, NREL/SR-560-32501.

  18. Distributed Research Project Scheduling Based on Multi-Agent Methods

    Directory of Open Access Journals (Sweden)

    Constanta Nicoleta Bodea

    2011-01-01

    Full Text Available Different project planning and scheduling approaches have been developed. Operational Research (OR) provides two major planning techniques: CPM (Critical Path Method) and PERT (Program Evaluation and Review Technique). Due to the complexity of projects and the difficulty of using classical methods, new approaches were developed. Artificial Intelligence (AI) initially promoted the automatic planner concept, but model-based planning and scheduling methods emerged later on. The paper addresses the project scheduling optimization problem, when projects are seen as Complex Adaptive Systems (CAS). Taking into consideration two different approaches for project scheduling optimization, TCPSP (Time-Constrained Project Scheduling) and RCPSP (Resource-Constrained Project Scheduling), the paper focuses on a multi-agent implementation in MATLAB for TCPSP. Using a research project as a case study, the paper includes a comparison between two multi-agent methods: Genetic Algorithm (GA) and Ant Colony Algorithm (ACO).

  19. Interior-Point Methods for Linear Programming: A Review

    Science.gov (United States)

    Singh, J. N.; Singh, D.

    2002-01-01

    The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most of the interior-point methods belong to any of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…
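
    As a concrete illustration of the first of the three families (not taken from the review itself), here is a minimal primal affine-scaling iteration for a toy LP in standard form; the data and damping rule are invented for the sketch:

        import numpy as np

        def affine_scaling(A, b, c, x, alpha=0.5, iters=100):
            """Primal affine-scaling for min c.x s.t. Ax = b, x > 0;
            x must be a strictly feasible starting point."""
            for _ in range(iters):
                D2 = np.diag(x ** 2)                           # scaling matrix D^2
                w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)  # dual estimate
                r = c - A.T @ w                                # reduced costs
                dx = -D2 @ r                                   # descent step in scaled space
                if np.all(dx >= -1e-12):                       # optimal (or unbounded)
                    break
                t = alpha * min(-x[i] / dx[i] for i in range(len(x)) if dx[i] < 0)
                x = x + t * dx                                 # damped step keeps x > 0
            return x

        # toy problem: max x1 + 2*x2 s.t. x1 + x2 <= 4, x1 + 3*x2 <= 6 (slacks added)
        A = np.array([[1., 1., 1., 0.], [1., 3., 0., 1.]])
        b = np.array([4., 6.])
        c = np.array([-1., -2., 0., 0.])
        print(affine_scaling(A, b, c, x=np.array([1., 1., 2., 2.])))  # -> near (3, 1, 0, 0)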

  20. ESADA Plutonium Program Critical Experiments: Power Distribution Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Akkurt, H.

    2001-06-12

    In 1967, a series of critical experiments were conducted at the Westinghouse Reactor Evaluation Center (WREC) using mixed-oxide (MOX) PuO{sub 2}-UO{sub 2} and/or UO{sub 2} fuels in various lattices and configurations. These experiments were performed under the joint sponsorship of Empire State Atomic Development Associates (ESADA) plutonium program and Westinghouse. The purpose of these experiments was to develop experimental data useful in validating analytical methods used in the design of plutonium-bearing replacement fuel for water reactors. Three different fuel types were used during the experimental program: two MOX fuels and a low-enriched UO{sub 2} fuel. The MOX fuels were distinguished by their {sup 240}Pu content: 8 wt % {sup 240}Pu and 24 wt % {sup 240}Pu. Both MOX fuels contained 2.0 wt % PuO{sub 2} in natural UO{sub 2}. The UO{sub 2} fuel with 2.72 wt % enrichment was used for comparison with the plutonium data and for use in multiregion experiments.

  1. Simulation of Temperature Distribution In a Rectangular Cavity using Finite Element Method

    CERN Document Server

    Naa, Christian

    2013-01-01

    This paper presents the study and implementation of the finite element method to find the temperature distribution in a rectangular cavity which contains a fluid substance. The fluid motion is driven by a sudden temperature difference applied to two opposite side walls of the cavity; the remaining walls were considered adiabatic, and the fluid was assumed incompressible. The problem was approached with a two-dimensional transient conduction model applied at the heated sidewall and a one-dimensional steady-state convection-diffusion equation applied inside the cavity. The parameters investigated are time and velocity. These parameters were computed together with the boundary conditions, which results in the temperature distribution in the cavity. The finite element implementation yields a system of algebraic equations in vector and matrix form, and MATLAB programs were used to solve this algebraic system. The final temperature distribution results were presented in contour map within the re...
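
    The steady convection-diffusion part of such a computation is easy to reproduce; the following Python sketch (not the paper's MATLAB code; all parameters invented) assembles and solves the 1-D problem u*dT/dx = alpha*d2T/dx2 with fixed wall temperatures:

        import numpy as np

        n, u, alpha = 21, 1.0, 0.1          # nodes, velocity, diffusivity
        h = 1.0 / (n - 1)                   # uniform mesh spacing
        A = np.zeros((n, n)); rhs = np.zeros(n)
        for i in range(1, n - 1):           # interior stencil (central differences;
            A[i, i - 1] = -alpha / h**2 - u / (2 * h)   # linear FEM gives the same
            A[i, i]     =  2 * alpha / h**2             # system on a uniform mesh)
            A[i, i + 1] = -alpha / h**2 + u / (2 * h)
        A[0, 0] = A[-1, -1] = 1.0           # Dirichlet rows: T(0) = 1, T(1) = 0
        rhs[0] = 1.0
        T = np.linalg.solve(A, rhs)
        print(np.round(T[::5], 4))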

  2. A Prediction Method of Higher Harmonics Resonance in Distribution System

    Science.gov (United States)

    Ikeda, Yuji; Toyama, Atushi; Takeda, Keiki; Naitoh, Tadashi; Masaki, Kazuyuki

    There are many PWM control apparatuses, which are sources of higher-harmonic currents, in distribution systems, and higher harmonics cause overcurrent through parallel resonance. To avoid the overcurrent, it is necessary to predict the frequency characteristics of the resonance. In this paper, a new prediction method is proposed. Firstly, the line model, in which the frequency characteristics of line inductance and resistance are taken into account, is approximated by R-L parallel circuits. Then, using eigenvalues and eigenvectors, the problem is decomposed into individual eigenmode problems. Moreover, to get a bird's-eye view of the phenomenon, a variable-separation-type approximation is introduced. Finally, a new index matrix, which gives the distribution of the overcurrent, is introduced.

  3. 76 FR 81015 - Notice of Public Webinar on Implementation of Distribution Integrity Management Programs

    Science.gov (United States)

    2011-12-27

    ... Distribution Integrity Management Programs AGENCY: Pipeline and Hazardous Materials Safety Administration... have prepared and implemented distribution integrity management plans (DIMP) by August 2, 2011. Federal....regulations.gov including any personal information provided. Please see the Privacy Act statement immediately...

  4. Numerical methods for integrating particle-size frequency distributions

    Science.gov (United States)

    Weltje, Gert Jan; Roberson, Sam

    2012-07-01

    This article presents a suite of numerical methods contained within a Matlab toolbox for constructing complete particle-size distributions from diverse particle-size data. These centre around the application of a constrained cubic-spline interpolation to logit-transformed cumulative percentage frequency data. This approach allows for the robust prediction of frequency values for a set of common particle-size categories. The scheme also calculates realistic, smoothly tapering tails for open-ended distributions using a non-linear extrapolation algorithm. An inversion of established graphic measures to calculate graphic cumulative percentiles is also presented. The robustness of the interpolation-extrapolation model is assessed using particle-size data from 4885 sediment samples from The Netherlands. The influence of the number, size and position of particle-size categories on the accuracy of modeled particle-size distributions was investigated by running a series of simulations using the empirical data set. Goodness-of-fit statistics between modeled distributions and input data are calculated by measuring the Euclidean distance between log-ratio transformed particle-size distributions. Technique accuracy, estimated as the mean goodness-of-fit between repeat sample measurements, was used to identify optimum model parameters. Simulations demonstrate that the data can be accurately characterized by 22 equal-width particle-size categories and 63 equiprobable particle-size categories. Optimal interpolation parameters are highly dependent on the density and position of particle-size categories in the original data set and on the overall level of technique accuracy.
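
    The core transform-and-interpolate step is easy to illustrate; the Python sketch below uses SciPy's monotone PCHIP spline as a stand-in for the paper's constrained cubic spline, on invented sieve data:

        import numpy as np
        from scipy.interpolate import PchipInterpolator

        size_um = np.array([2., 16., 63., 125., 250., 500.])   # sieve apertures
        cum_pct = np.array([5., 20., 45., 70., 90., 99.])      # cumulative % passing
        logit = lambda p: np.log(p / (100.0 - p))              # logit transform
        spline = PchipInterpolator(np.log(size_um), logit(cum_pct))
        targets = np.array([4., 31.5, 90., 180., 355.])        # common size categories
        pred = 100.0 / (1.0 + np.exp(-spline(np.log(targets))))  # inverse logit
        for d, p in zip(targets, pred):
            print(f"{d:7.1f} um : {p:5.1f} %")

    Working in logit space keeps interpolated cumulative frequencies strictly between 0 and 100 %, which is the point of the transformation.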

  5. Standard test method for distribution coefficients of inorganic species by the batch method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method covers the determination of distribution coefficients of chemical species to quantify uptake onto solid materials by a batch sorption technique. It is a laboratory method primarily intended to assess sorption of dissolved ionic species subject to migration through pores and interstices of site specific geomedia. It may also be applied to other materials such as manufactured adsorption media and construction materials. Application of the results to long-term field behavior is not addressed in this method. Distribution coefficients for radionuclides in selected geomedia are commonly determined for the purpose of assessing potential migratory behavior of contaminants in the subsurface of contaminated sites and waste disposal facilities. This test method is also applicable to studies for parametric studies of the variables and mechanisms which contribute to the measured distribution coefficient. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement a...
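
    The quantity itself is conventionally computed from a mass balance over the batch test; a one-function Python sketch with invented values (the standard itself governs the actual procedure and units):

        def distribution_coefficient(c0, c_eq, volume_ml, mass_g):
            """Batch-sorption distribution coefficient Kd = (C0 - Ceq)/Ceq * V/m, in mL/g."""
            return (c0 - c_eq) / c_eq * volume_ml / mass_g

        # hypothetical test: 100 mL of solution contacted with 5 g of geomedium
        print(distribution_coefficient(c0=10.0, c_eq=2.5, volume_ml=100.0, mass_g=5.0))  # 60.0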

  6. The distributed diagonal force decomposition method for parallelizing molecular dynamics simulations.

    Science.gov (United States)

    Borštnik, Urban; Miller, Benjamin T; Brooks, Bernard R; Janežič, Dušanka

    2011-11-15

    Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. Copyright © 2011 Wiley Periodicals, Inc.

  7. Analysis of the Spatial Distribution of Galaxies by Multiscale Methods

    Directory of Open Access Journals (Sweden)

    E. Saar

    2005-09-01

    Full Text Available Galaxies are arranged in interconnected walls and filaments forming a cosmic web encompassing huge, nearly empty regions between the structures. Many statistical methods have been proposed in the past in order to describe the galaxy distribution and discriminate between the different cosmological models. We present in this paper multiscale geometric transforms sensitive to clusters, sheets, and walls: the 3D isotropic undecimated wavelet transform, the 3D ridgelet transform, and the 3D beamlet transform. We show that statistical properties of the transform coefficients measure, in a coherent and statistically reliable way, the degree of clustering, filamentarity, sheetedness, and voidedness of a data set.

  8. A Distributed System for Learning Programming On-Line

    Science.gov (United States)

    Verdu, Elena; Regueras, Luisa M.; Verdu, Maria J.; Leal, Jose P.; de Castro, Juan P.; Queiros, Ricardo

    2012-01-01

    Several Web-based on-line judges or on-line programming trainers have been developed in order to allow students to train their programming skills. However, their pedagogical functionalities in the learning of programming have not been clearly defined. EduJudge is a project which aims to integrate the "UVA On-line Judge", an existing…

  9. Planning and Optimization Methods for Active Distribution Systems

    DEFF Research Database (Denmark)

    Abbey, Chad; Baitch, Alex; Bak-Jensen, Birgitte

    distribution planning. Active distribution networks (ADNs) have systems in place to control a combination of distributed energy resources (DERs), defined as generators, loads and storage. With these systems in place, the ADN becomes an Active Distribution System (ADS). Distribution system operators (DSOs) have...

  10. A quantitative method for clustering size distributions of elements

    Science.gov (United States)

    Dillner, Ann M.; Schauer, James J.; Christensen, William F.; Cass, Glen R.

    A quantitative method was developed to group similarly shaped size distributions of particle-phase elements in order to ascertain sources of the elements. This method was developed and applied using data from two sites in Houston, TX; one site surrounded by refineries, chemical plants and vehicular and commercial shipping traffic, and the other site, 25 miles inland, surrounded by residences, light industrial facilities and vehicular traffic. Twenty-four hour size-segregated (0.056fluid catalytic cracking unit catalysts, fuel oil burning, a coal-fired power plant, and high-temperature metal working. The clustered elements were generally attributed to different sources at the two sites during each sampling day, indicating the diversity of local sources that impact heavy metal concentrations in the region.

  11. Data distribution method of workflow in the cloud environment

    Science.gov (United States)

    Wang, Yong; Wu, Junjuan; Wang, Ying

    2017-08-01

    Cloud computing provides workflow applications with the required high-efficiency computation and large storage capacity, but it also challenges the protection of trade secrets and other private data. Because protecting private data increases the data transmission time, this paper presents a new data allocation algorithm based on the degree of collaborative data damage, improving the existing allocation strategy in which security depends on keeping confidential data in the private cloud rather than the public cloud. In the initial stage, a static allocation method partitions only the non-confidential data; in the operational phase, the data distribution scheme is adjusted dynamically as new data continue to be generated. The experimental results show that the improved method is effective in reducing the data transmission time.

  12. Granular contact dynamics using mathematical programming methods

    DEFF Research Database (Denmark)

    Krabbenhoft, K.; Lyamin, A. V.; Huang, J.

    2012-01-01

    A class of variational formulations for discrete element analysis of granular media is presented. These formulations lead naturally to convex mathematical programs that can be solved using standard and readily available tools. In contrast to traditional discrete element analysis, the present...... is developed and it is concluded that the associated sliding rule, in the context of granular contact dynamics, may be viewed as an artifact of the time discretization and that the use of an associated flow rule at the particle scale level generally is physically acceptable. (C) 2012 Elsevier Ltd. All rights...

  13. The impact of Japan's 2004 postgraduate training program on intra-prefectural distribution of pediatricians in Japan.

    Directory of Open Access Journals (Sweden)

    Rie Sakai

    Full Text Available OBJECTIVE: Inequity in physician distribution poses a challenge to many health systems. In Japan, a new postgraduate training program for all new medical graduates was introduced in 2004, and researchers have argued that this program has increased inequalities in physician distribution. We examined the trends in the geographic distribution of pediatricians as well as all physicians from 1996 to 2010 to identify the impact of the launch of the new training program. METHODS: The Gini coefficient was calculated using municipalities as the study unit within each prefecture to assess whether there were significant changes in the intra-prefectural distribution of all physicians and pediatricians before and after the launch of the new training program. The effect of the new program was quantified by estimating the difference in the slope of the time trend of the Gini coefficients before and after 2004 using a linear change-point regression design. We categorized the 47 prefectures of Japan into two groups, (1) predominantly urban and (2) others, by the OECD definition, to conduct analyses stratified by urban-rural status. RESULTS: The trends in physician distribution worsened after 2004 for all physicians (p value < .0001) and pediatricians (p value = 0.0057). For all physicians, the trends worsened after 2004 both in predominantly urban prefectures (p value = 0.0012) and others (p value < 0.0001), whereas, for pediatricians, the distribution worsened in others (p value = 0.0343), but not in predominantly urban prefectures (p value = 0.0584). CONCLUSION: The intra-prefectural distribution of physicians worsened after the launch of the new training program, which may reflect the impact of the new postgraduate program. In pediatrics, changes in the Gini trend differed significantly before and after the launch of the new training program in others, but not in predominantly urban prefectures. Further observation is needed to explore how this difference in trends affects
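
    For reference, a Gini coefficient of this kind can be computed directly from municipal counts; a minimal unweighted Python sketch with invented data (the study's exact formulation, e.g. any population weighting, may differ):

        import numpy as np

        def gini(x):
            """Unweighted Gini coefficient of a nonnegative sample."""
            x = np.sort(np.asarray(x, dtype=float))
            n = x.size
            # mean-difference form: G = sum_i (2i - n - 1) x_i / (n * sum x), i = 1..n
            return ((2 * np.arange(1, n + 1) - n - 1) @ x) / (n * x.sum())

        # hypothetical pediatrician counts across one prefecture's municipalities
        print(round(gini([1, 2, 2, 5, 12, 30]), 3))   # -> 0.571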

  14. Improvement of program-goal method in defence planning system

    Directory of Open Access Journals (Sweden)

    Р.М. Федоренко

    2005-01-01

    Full Text Available The article proposes a concept for the development of the defence planning system through application of the program-goal method, based on an analysis of the advanced experience of developed countries. The article specifies the place of the program-goal method in resolving the defence planning tasks of national security.

  15. Reduction of dimensionality in dynamic programming-based solution methods for nonlinear integer programming

    Directory of Open Access Journals (Sweden)

    Balasubramanian Ram

    1988-01-01

    Full Text Available This paper suggests a method of formulating any nonlinear integer programming problem, with any number of constraints, as an equivalent single constraint problem, thus reducing the dimensionality of the associated dynamic programming problem.
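
    One classical device of this kind (shown only as an illustration; the paper's own construction may differ) aggregates two integer equality constraints exactly into one, which a brute-force check over all binary points confirms:

        from itertools import product

        # With 0 <= a.x <= sum(a) < M and b < M:
        #   a.x = b and c.x = d   <=>   a.x + M*(c.x) = b + M*d
        a, b = [3, 5, 4, 2], 7
        c, d = [2, 1, 3, 4], 5
        M = sum(a) + 1
        agg = [ai + M * ci for ai, ci in zip(a, c)]
        rhs = b + M * d
        for x in product((0, 1), repeat=4):
            both = (sum(ai * xi for ai, xi in zip(a, x)) == b and
                    sum(ci * xi for ci, xi in zip(c, x)) == d)
            one = sum(gi * xi for gi, xi in zip(agg, x)) == rhs
            assert one == both
        print("single aggregated constraint is equivalent on all 16 binary points")

    Applied repeatedly, such an aggregation leaves a dynamic program with a single state variable, which is the dimensionality reduction at stake.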

  16. Remote Data Transfer (RDT): An Interprocess Data Transfer Method for Distributed Environments

    Science.gov (United States)

    1992-05-01

    TECHNICAL REPORT BRL-TR-3339 (AD-A250 859), May 1992. Remote Data Transfer (RDT): An Interprocess Data Transfer Method for Distributed Environments. Reporting period through 30 Sep 91; contract C-AHPCRC; 57 pages. RPC: Remote Procedure Call; RdT: Remote Data Transfer; XDR: External Data Representation; keywords: computer programs, software.

  17. USER STORY SOFTWARE ESTIMATION:A SIMPLIFICATION OF SOFTWARE ESTIMATION MODEL WITH DISTRIBUTED EXTREME PROGRAMMING ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Ridi Ferdiana

    2011-01-01

    Full Text Available Software estimation is an area of software engineering concerned with the identification, classification and measurement of features of software that affect the cost of developing and sustaining computer programs [19]. Measuring software through software estimation has the purpose of knowing the complexity of the software, estimating the human resources required, and getting better visibility of the execution and process model. There are many software estimation techniques that work sufficiently well in certain conditions or steps of software engineering, for example measuring lines of code, function points, COCOMO, or use case points. This paper proposes another estimation technique called Distributed eXtreme Programming Estimation (DXP Estimation). DXP estimation provides a basic technique for a team that uses the eXtreme Programming method in onsite or distributed development. To the writers' knowledge, this is the first estimation technique applied to an agile method in eXtreme Programming.

  18. 75 FR 22027 - Food Distribution Program on Indian Reservations: Amendments Related to the Food, Conservation...

    Science.gov (United States)

    2010-04-27

    ... Food and Nutrition Service 7 CFR Part 253 RIN 0584-AD95 Food Distribution Program on Indian Reservations: Amendments Related to the Food, Conservation, and Energy Act of 2008 AGENCY: Food and Nutrition Service, USDA. ACTION: Proposed rule. SUMMARY: This rule proposes to amend Food Distribution Program on...

  19. A microcomputer program for energy assessment and aggregation using the triangular probability distribution

    Science.gov (United States)

    Crovelli, R.A.; Balay, R.H.

    1991-01-01

    A general risk-analysis method was developed for petroleum-resource assessment and other applications. The triangular probability distribution is used as a model with an analytic aggregation methodology based on probability theory rather than Monte-Carlo simulation. Among the advantages of the analytic method are its computational speed and flexibility, and the saving of time and cost on a microcomputer. The input into the model consists of a set of components (e.g. geologic provinces) and, for each component, three potential resource estimates: minimum, most likely (mode), and maximum. Assuming a triangular probability distribution, the mean, standard deviation, and seven fractiles (F100, F95, F75, F50, F25, F5, and F0) are computed for each component, where, for example, the probability of more than F95 is equal to 0.95. The components are aggregated by combining the means, standard deviations, and respective fractiles under three possible situations: (1) perfect positive correlation, (2) complete independence, and (3) any degree of dependence between these two polar situations. A package of computer programs named the TRIAGG system was written in the Turbo Pascal 4.0 language for performing the analytic probabilistic methodology. The system consists of a program for processing triangular probability distribution assessments and aggregations, and a separate aggregation routine for aggregating aggregations. The user's documentation and program diskette of the TRIAGG system are available from USGS Open File Services. TRIAGG requires an IBM-PC/XT/AT compatible microcomputer with 256 kbytes of main memory, MS-DOS 3.1 or later, either two diskette drives or a fixed disk, and a 132-column printer. A graphics adapter and color display are optional. © 1991.
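
    The analytic core is small enough to restate; a Python sketch with invented component assessments (TRIAGG's fractile calculations are omitted here):

        import math

        def tri_moments(lo, mode, hi):
            """Mean and standard deviation of a triangular(lo, mode, hi) distribution."""
            mean = (lo + mode + hi) / 3.0
            var = (lo*lo + mode*mode + hi*hi - lo*mode - lo*hi - mode*hi) / 18.0
            return mean, math.sqrt(var)

        provinces = [(0.0, 2.0, 10.0), (1.0, 3.0, 6.0), (0.5, 1.5, 4.0)]  # (min, mode, max)
        stats = [tri_moments(*p) for p in provinces]
        mean_total = sum(m for m, _ in stats)                 # means always add
        sd_corr = sum(s for _, s in stats)                    # perfect positive correlation
        sd_indep = math.sqrt(sum(s * s for _, s in stats))    # complete independence
        print(f"aggregate mean {mean_total:.3f}; sd {sd_corr:.3f} (correlated) "
              f"vs {sd_indep:.3f} (independent)")

    Any intermediate degree of dependence yields an aggregate standard deviation between the two printed bounds, which is the third situation the text describes.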

  20. Development of advanced methods for planning electric energy distribution systems. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Goenen, T.; Foote, B.L.; Thompson, J.C.; Fagan, J.E.

    1979-10-01

    An extensive search was made for the identification and collection of reports published in the open literature which describe distribution planning methods and techniques. In addition, a questionnaire was prepared and sent to a large number of electric power utility companies. A large number of these companies were visited and/or their distribution planners interviewed for the identification and description of distribution system planning methods and techniques used by these electric power utility companies and other commercial entities. Distribution systems planning models were reviewed and a set of new mixed-integer programming models were developed for the optimal expansion of distribution systems. The models help the planner to select: (1) optimum substation locations; (2) optimum substation expansions; (3) optimum substation transformer sizes; (4) optimum load transfers between substations; (5) optimum feeder routes and sizes subject to a set of specified constraints. The models permit following existing rights-of-way and avoiding areas where feeders and substations cannot be constructed. The results of computer runs were analyzed for adequacy in serving projected loads within regulation limits for both normal and emergency operation.
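
    The flavour of such siting models can be conveyed by a deliberately tiny fixed-charge location example; the Python sketch below brute-forces it (invented data; the report's mixed-integer models add capacity, routing and regulation constraints and use proper solvers):

        from itertools import combinations

        build_cost = {"A": 90.0, "B": 120.0, "C": 100.0}   # candidate substations
        serve_cost = {"A": [10, 40, 55, 30],               # serve_cost[site][load]
                      "B": [45, 12, 20, 35],
                      "C": [50, 38, 15, 8]}
        n_loads = 4
        best = (float("inf"), None)
        for r in range(1, len(build_cost) + 1):
            for sites in combinations(build_cost, r):
                cost = sum(build_cost[s] for s in sites) + sum(
                    min(serve_cost[s][j] for s in sites) for j in range(n_loads))
                best = min(best, (cost, sites))
        print(best)   # -> (211.0, ('C',)) for this data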

  1. Simple method of generating and distributing frequency-entangled qudits

    Science.gov (United States)

    Jin, Rui-Bo; Shimizu, Ryosuke; Fujiwara, Mikio; Takeoka, Masahiro; Wakabayashi, Ryota; Yamashita, Taro; Miki, Shigehito; Terai, Hirotaka; Gerrits, Thomas; Sasaki, Masahide

    2016-11-01

    High-dimensional, frequency-entangled photonic quantum bits (qudits for d dimensions) are promising resources for quantum information processing in an optical fiber network and can also be used to improve channel capacity and security for quantum communication. However, up to now it has still been challenging to prepare high-dimensional frequency-entangled qudits in experiments, due to technical limitations. Here we propose and experimentally implement a novel method for the simple generation of frequency-entangled qudits with d > 10 without the use of any spectral filters or cavities. The generated state is distributed over 15 km in total length. This scheme combines the technique of spectral engineering of biphotons generated by spontaneous parametric down-conversion and the technique of spectrally resolved Hong-Ou-Mandel interference. Our frequency-entangled qudits will enable quantum cryptographic experiments with enhanced performance. This distribution of distinct entangled frequency modes may also be useful for improved metrology, quantum remote synchronization, as well as for fundamental tests of stronger violation of local realism.

  2. Execution time support for scientific programs on distributed memory machines

    Science.gov (United States)

    Berryman, Harry; Saltz, Joel; Scroggs, Jeffrey

    1990-01-01

    Optimizations are considered that are required for efficient execution of code segments that consist of loops over distributed data structures. The PARTI (Parallel Automated Runtime Toolkit at ICASE) execution time primitives are designed to carry out these optimizations and can be used to implement a wide range of scientific algorithms on distributed memory machines. These primitives allow the user to control array mappings in a way that gives an appearance of shared memory. Computations can be based on a global index set. Primitives are used to carry out gather and scatter operations on distributed arrays. Communication patterns are derived at runtime, and the appropriate send and receive messages are automatically generated.

  3. Dirichlet and Related Distributions Theory, Methods and Applications

    CERN Document Server

    Ng, Kai Wang; Tang, Man-Lai

    2011-01-01

    The Dirichlet distribution appears in many areas of application, which include modelling of compositional data, Bayesian analysis, statistical genetics, and nonparametric inference. This book provides a comprehensive review of the Dirichlet distribution and two extended versions, the Grouped Dirichlet Distribution (GDD) and the Nested Dirichlet Distribution (NDD), arising from likelihood and Bayesian analysis of incomplete categorical data and survey data with non-response. The theoretical properties and applications are also reviewed in detail for other related distributions, such as the inve

  4. Distributed System Resource Racing Conditions Automated Testing Method

    Directory of Open Access Journals (Sweden)

    Robertas Jasaitis

    2013-11-01

    Full Text Available Most applications today are designed to be networked, and it is natural that such applications operate on some data. Such applications are multi-threaded by nature, and it is a common situation that a few clients use the same application at the same time, so it is possible that those users operate on the same data. In such an operation some data might be treated incorrectly, or some client instance may operate on outdated data. In order to avoid such situations, proper consistent testing should be performed; however, such testing is complicated to perform manually. An automated tool which would help to solve this problem is wanted. In this paper we present an automated testing method that is able to detect problems related to resource racing conditions in a distributed system.

  5. A Suppression Method of Higher Harmonics Resonance in Distribution System

    Science.gov (United States)

    Sugimura, Shouji; Naitoh, Tadashi; Toyama, Atsushi; Ohta, Fumihiko

    There are many PWM control apparatuses, which are sources of higher-harmonic currents, in distribution systems, and higher harmonics cause overcurrent through parallel resonance. To avoid the overcurrent, it is necessary to suppress the resonance. In this paper, a new suppression method, which uses the effect of the source connection point in the resonance circuit, is proposed. Firstly, it is shown that the optimal point, which gives the minimum current amplification degree, is orthogonal to the static condenser voltage. Then, using the eigenvectors of the state equation, the participation factor is defined; when the participation factor is zero, the orthogonality condition is obtained, so the optimal point is given by means of the participation factor. Finally, numerical examination shows that multiple optimal points usually exist, so the most advantageous point can be chosen as the source connection node.

  6. Mathematical Methods Applied to Economy Optimization of an Electric Vehicle with Distributed Power Train System

    Directory of Open Access Journals (Sweden)

    Binbin Sun

    2016-01-01

    Full Text Available This research presents mathematical methods to develop a high-efficiency power train system for a microelectric vehicle (MEV). First of all, to get the optimal ratios of a two-speed gearbox, the functional relationship between energy consumption and transmission ratios is established using design of experiment (DOE) and min-max fitting distance methods. The convex characteristic of the model and the main and interactive effects of the transmissions on energy consumption are revealed, and a hill-climbing method is adopted to search for the optimal ratios. Then, to develop an efficient real-time drive strategy, an optimization program is proposed, including shift schedule, switch law, and power distribution optimization. In particular, to construct a mathematical predictive distribution model, the Latin hypercube design (LHD) method is first adopted to generate random, discrete operating points of the MEV; secondly, the optimal power distribution coefficients at the various LHD points are determined based on an offline genetic algorithm (GA); then a Gaussian radial basis function (RBF) is utilized to solve the low-precision problem of the polynomial model. Finally, simulation verifications of the optimized scheme are carried out. Results show that the proposed mathematical methods for the optimization of transmissions and drive strategy are able to establish a high-efficiency power train system.
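
    The LHD-plus-RBF surrogate step the abstract describes can be sketched with SciPy (invented objective and bounds; the paper's GA-optimized distribution coefficients are not reproduced):

        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from scipy.stats import qmc

        sampler = qmc.LatinHypercube(d=2, seed=3)                    # space-filling sample
        pts = qmc.scale(sampler.random(40), [0.1, 0.0], [1.0, 1.0])  # (speed, power split)
        consumption = lambda p: (p[:, 0] - 0.6) ** 2 + 0.5 * (p[:, 1] - 0.3) ** 2
        model = RBFInterpolator(pts, consumption(pts), kernel="gaussian", epsilon=2.0)
        test = np.array([[0.6, 0.3], [0.2, 0.9]])
        print(model(test))    # surrogate predictions at unseen operating points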

  7. Execution Models for Mapping Programs onto Distributed Memory Parallel Computers

    Science.gov (United States)

    1992-03-01

    Execution Models for Mapping Programs onto Distributed Memory Parallel Computers. Alan Sussman. Contract No. NAS1-18605, March 1992. Institute for Computer Applications in Science and Engineering, NASA Langley Research Center, Hampton, VA 23665... Computation onto Distributed Memory Parallel Computers. PhD thesis, Carnegie Mellon University, September 1991. Also available as Technical Report

  8. Exploring the Literacy Practices of Refugee Families Enrolled in a Book Distribution Program and an Intergenerational Family Literacy Program

    Science.gov (United States)

    Singh, Sunita; Sylvia, Monica R.; Ridzi, Frank

    2015-01-01

    This ethnographic study presents findings of the literacy practices of Burmese refugee families and their interaction with a book distribution program paired with an intergenerational family literacy program. The project was organized at the level of Bronfenbrenner's exosystem (in "Ecology of human development". Cambridge, Harvard…

  9. Method to render second order beam optics programs symplectic

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, D.; Servranckx, R.V.

    1984-10-01

    We present evidence that second order matrix-based beam optics programs violate the symplectic condition. A simple method to avoid this difficulty, based on a generating function approach to evaluating transfer maps, is described. A simple example illustrating the non-symplecticity of second order matrix methods, and the effectiveness of our solution to the problem, is provided. We conclude that it is in fact possible to bring second order matrix optics methods to a canonical form. The procedure for doing so has been implemented in the program DIMAT, and could be implemented in programs such as TRANSPORT and TURTLE, making them useful in multiturn applications. 15 refs.

  10. Reactors: A data-oriented synchronous/asynchronous programming model for distributed applications

    DEFF Research Database (Denmark)

    Field, John; Marinescu, Maria-Cristina; Stefansen, Christian Oskar Erik

    2009-01-01

    Our aim is to define the kernel of a simple and uniform programming model–the reactor model–which can serve as a foundation for building and evolving internet-scale programs. Such programs are characterized by collections of loosely-coupled distributed components that are assembled on the fly to ...

  11. Development of methods for DSM and distribution automation planning

    Energy Technology Data Exchange (ETDEWEB)

    Kaerkkaeinen, S.; Kekkonen, V. [VTT Energy, Espoo (Finland); Rissanen, P. [Tietosavo Oy (Finland)

    1998-08-01

    Demand-Side Management (DSM) is usually a utility (or sometimes governmental) activity designed to influence the energy demand of customers (both its level and load variation). It includes basic options like strategic conservation or load growth, peak clipping, load shifting and fuel switching. Typical ways to realize DSM are direct load control, innovative tariffs, different types of campaigns, etc. The restructuring of utilities in Finland and increased competition in the electricity market have had a dramatic influence on DSM. Traditional ways are impossible due to the conflicting interests of the generation, network and supply businesses and increased competition between different actors in the market. The costs and benefits of DSM are divided among different companies, and different types of utilities are interested only in those activities which are beneficial to them. On the other hand, due to the increased competition, suppliers are diversifying into different types of products, and an increasing number of customer services partly based on DSM are available. The aim of this project was to develop and assess methods for DSM and distribution automation planning from the utility point of view. The methods were also applied to case studies at utilities

  12. Systems and methods for interpolation-based dynamic programming

    KAUST Repository

    Rockwood, Alyn

    2013-01-03

    Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.

  13. An overview of solution methods for multi-objective mixed integer linear programming programs

    DEFF Research Database (Denmark)

    Andersen, Kim Allan; Stidsen, Thomas Riis

    Multiple objective mixed integer linear programming (MOMIP) problems are notoriously hard to solve to optimality, i.e. to find the complete set of non-dominated solutions. We will give an overview of existing methods. Among those are interactive methods, the two-phases method and enumeration...... methods. In particular we will discuss the existing branch and bound approaches for solving multiple objective integer programming problems. Despite the fact that branch and bound methods have been applied successfully to integer programming problems with one criterion, only a few attempts have been made...

  14. Primal-Dual Method of Solving Convex Quadratic Programming Problems

    Directory of Open Access Journals (Sweden)

    V. Moraru

    2000-10-01

    Full Text Available This paper presents a primal-dual method for solving quadratic programming problems. The method is based on finding an exact solution of a finite sequence of unconstrained quadratic programming problems and on finding an approximate solution of a constrained minimization problem with simple constraints. The subproblem with simple constraints is solved by the interior-reflective Newton's method [6].

  15. Conceptual evaluation of population health surveillance programs: method and example.

    Science.gov (United States)

    El Allaki, Farouk; Bigras-Poulin, Michel; Ravel, André

    2013-03-01

    Veterinary and public health surveillance programs can be evaluated to assess and improve the planning, implementation and effectiveness of these programs. Guidelines, protocols and methods have been developed for such evaluation. In general, they focus on a limited set of attributes (e.g., sensitivity and simplicity), that are assessed quantitatively whenever possible, otherwise qualitatively. Despite efforts at standardization, replication by different evaluators is difficult, making evaluation outcomes open to interpretation. This ultimately limits the usefulness of surveillance evaluations. At the same time, the growing demand to prove freedom from disease or pathogen, and the Sanitary and Phytosanitary Agreement and the International Health Regulations require stronger surveillance programs. We developed a method for evaluating veterinary and public health surveillance programs that is detailed, structured, transparent and based on surveillance concepts that are part of all types of surveillance programs. The proposed conceptual evaluation method comprises four steps: (1) text analysis, (2) extraction of the surveillance conceptual model, (3) comparison of the extracted surveillance conceptual model to a theoretical standard, and (4) validation interview with a surveillance program designer. This conceptual evaluation method was applied in 2005 to C-EnterNet, a new Canadian zoonotic disease surveillance program that encompasses laboratory based surveillance of enteric diseases in humans and active surveillance of the pathogens in food, water, and livestock. The theoretical standard used for evaluating C-EnterNet was a relevant existing structure called the "Population Health Surveillance Theory". Five out of 152 surveillance concepts were absent in the design of C-EnterNet. However, all of the surveillance concept relationships found in C-EnterNet were valid. The proposed method can be used to improve the design and documentation of surveillance programs. It

  16. Mathematical methods in physics distributions, Hilbert space operators, variational methods, and applications in quantum physics

    CERN Document Server

    Blanchard, Philippe

    2015-01-01

    The second edition of this textbook presents the basic mathematical knowledge and skills that are needed for courses on modern theoretical physics, such as those on quantum mechanics, classical and quantum field theory, and related areas.  The authors stress that learning mathematical physics is not a passive process and include numerous detailed proofs, examples, and over 200 exercises, as well as hints linking mathematical concepts and results to the relevant physical concepts and theories.  All of the material from the first edition has been updated, and five new chapters have been added on such topics as distributions, Hilbert space operators, and variational methods.   The text is divided into three main parts. Part I is a brief introduction to distribution theory, in which elements from the theories of ultradistributions and hyperfunctions are considered in addition to some deeper results for Schwartz distributions, thus providing a comprehensive introduction to the theory of generalized functions. P...

  17. Application of the simplex method of linear programming model to ...

    African Journals Online (AJOL)

    This work discussed how the simplex method of linear programming could be used to maximize the profit of any business firm, using Saclux Paint Company as a case study. It equally elucidated the effect that variation in the optimal result obtained from the linear programming model will have on any given firm. It was demonstrated ...
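
    As a worked illustration (invented two-product data in the spirit of the case study; SciPy's HiGHS solver stands in for a hand-computed simplex tableau), a Python sketch:

        from scipy.optimize import linprog

        profit = [-5.0, -4.0]            # linprog minimizes, so negate unit profits
        A_ub = [[6.0, 4.0],              # raw material M1: 6*x1 + 4*x2 <= 24
                [1.0, 2.0]]              # raw material M2:   x1 + 2*x2 <= 6
        b_ub = [24.0, 6.0]
        res = linprog(profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
                      method="highs")
        print(res.x, -res.fun)           # optimal production mix and maximum profit

    Perturbing a coefficient of A_ub or b_ub and re-solving shows exactly the kind of sensitivity of the optimal result to model variation that the paper discusses.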

  18. Data Collection Methods for Evaluating Museum Programs and Exhibitions

    Science.gov (United States)

    Nelson, Amy Crack; Cohn, Sarah

    2015-01-01

    Museums often evaluate various aspects of their audiences' experiences, be it what they learn from a program or how they react to an exhibition. Each museum program or exhibition has its own set of goals, which can drive what an evaluator studies and how an evaluation evolves. When designing an evaluation, data collection methods are purposefully…

  19. Optimal reactive power and voltage control in distribution networks with distributed generators by fuzzy adaptive hybrid particle swarm optimisation method

    DEFF Research Database (Denmark)

    Chen, Shuheng; Hu, Weihao; Su, Chi

    2015-01-01

    A new and efficient methodology for optimal reactive power and voltage control of distribution networks with distributed generators, based on fuzzy adaptive hybrid PSO (FAHPSO), is proposed. The objective is to minimize comprehensive cost, consisting of power loss and the operation cost of transformers...... that the proposed method can search a more promising control schedule of all transformers, all capacitors and all distributed generators, with less time consumption, compared with other listed artificial intelligence methods....
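
    The plain PSO kernel that such hybrid variants adapt is compact; a minimal Python sketch on a stand-in objective (the fuzzy adaptation of the w, c1, c2 parameters in FAHPSO is not reproduced):

        import numpy as np

        rng = np.random.default_rng(0)

        def pso(f, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
            """Bare-bones global-best particle swarm optimization."""
            x = rng.uniform(-5, 5, (n, dim)); v = np.zeros((n, dim))
            pbest = x.copy(); pval = np.apply_along_axis(f, 1, x)
            g = pbest[pval.argmin()]
            for _ in range(iters):
                r1, r2 = rng.random((n, dim)), rng.random((n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = x + v
                val = np.apply_along_axis(f, 1, x)
                better = val < pval
                pbest[better], pval[better] = x[better], val[better]
                g = pbest[pval.argmin()]
            return g, pval.min()

        # stand-in for the comprehensive-cost objective
        print(pso(lambda z: ((z - 1.0) ** 2).sum(), dim=3))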

  20. MULTIBASE -- A Research Program in Heterogeneous Distributed DBMS Technology.

    Science.gov (United States)

    1980-09-01

    distribution of the data and the availability of "fast access paths", the Multibase software must optimize queries so they can be executed... There are three types of sites in the breadboard Multibase: OSIS, Codasyl, and GDM. Each type of site is capable of

  1. Retrieval of spherical particle size distribution with an improved Tikhonov iteration method

    OpenAIRE

    Tang Hong

    2012-01-01

    The problem of retrieving the spherical particle size distribution in the independent mode is studied, and an improved Tikhonov iteration method is proposed. In this method, the particle size distribution is first retrieved from the light extinction data through the Phillips-Twomey method in the independent mode; the obtained inversion result is then used as the initial distribution, and the final retrieved particle size distribution is obtained. S...
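
    The non-iterated core of any such scheme is a zeroth-order Tikhonov inversion; a Python sketch with an invented smoothing kernel in place of the light-extinction kernel (the paper's Phillips-Twomey initialization and iterative refinement are not reproduced):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 40
        x = np.linspace(0.0, 1.0, n)
        K = np.exp(-80.0 * (x[:, None] - x[None, :]) ** 2)   # ill-conditioned kernel
        f_true = np.exp(-((x - 0.4) / 0.1) ** 2)             # "true" size distribution
        g = K @ f_true + 1e-3 * rng.standard_normal(n)       # noisy measurement data
        lam = 1e-2                                           # regularization weight
        f_hat = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ g)
        print(float(np.abs(f_hat - f_true).max()))           # small reconstruction error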

  2. PMDGP: A distributed Object-Oriented Genetic Programming Environment.

    NARCIS (Netherlands)

    Meulen, Pieter G.M.; Schipper, Han; Bazen, A.M.; Gerez, Sabih H.

    2001-01-01

    In this paper, an environment for using genetic programming is presented. Although not restricted to a specific domain, our intention is to apply it to image processing problems such as fingerprint recognition. The environment performs tasks like: population management, genetic operators and

  3. An experiment with content distribution methods in touchscreen mobile devices.

    Science.gov (United States)

    Garcia-Lopez, Eva; Garcia-Cabot, Antonio; de-Marcos, Luis

    2015-09-01

    This paper compares the usability of three different content distribution methods (scrolling, paging and internal links) in touchscreen mobile devices as means to display web documents. Usability is operationalized in terms of effectiveness, efficiency and user satisfaction. These dimensions are then measured in an experiment (N = 23) in which users are required to find words in regular-length web documents. Results suggest that scrolling is statistically better in terms of efficiency and user satisfaction. It is also found to be more effective but results were not significant. Our findings are also compared with existing literature to propose the following guideline: "try to use vertical scrolling in web pages for mobile devices instead of paging or internal links, except when the content is too large, then paging is recommended". With an ever increasing number of touchscreen web-enabled mobile devices, this new guideline can be relevant for content developers targeting the mobile web as well as institutions trying to improve the usability of their content for mobile platforms. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  4. A Novel Method of Clock Synchronization in Distributed Systems

    Science.gov (United States)

    Li, Gun; Niu, Meng-jie; Chai, Yang-shun; Chen, Xin; Ren, Yan-qiu

    2017-04-01

    Time synchronization plays an important role in spacecraft formation flight, constellation autonomous navigation, etc. For the application of clock synchronization in a network system, it is not always true that all the observed nodes in the network are interconnected; it is therefore difficult to achieve high-precision time synchronization of a network system when a given node can obtain clock measurement information only from a single neighboring node and not from the other nodes. Aiming at this problem, a novel method of high-precision time synchronization in a network system is proposed. In this paper, each clock is regarded as a node in the network system, and, based on the definition of different topological structures of a distributed system, three time-synchronization control algorithms are designed for the following cases: without a master clock (reference clock), with a master clock (reference clock), and with a fixed communication delay in the network system. The validity of the designed clock synchronization protocol is proved by both stability analysis and numerical simulation.
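
    The masterless case reduces, in its simplest form, to an average-consensus iteration; a Python sketch on an invented five-node ring (the paper's treatment of master clocks and communication delays is richer than this):

        import numpy as np

        offsets = np.array([0.0, 3.0, -2.0, 5.0, 1.0])       # initial clock offsets
        neighbours = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
        eps = 0.3                                            # consensus gain
        for _ in range(60):
            offsets = offsets + eps * np.array(
                [sum(offsets[j] - offsets[i] for j in neighbours[i])
                 for i in range(5)])
        print(np.round(offsets, 4))   # all nodes converge to the initial mean 1.4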

  5. Performance Appraisal Method of Logistic Distribution for Fresh Agricultural Products

    OpenAIRE

    Yu, Hang; Zhang, Kai

    2010-01-01

    Through initial selection, screening and simplification, a performance appraisal system for the logistic distribution of fresh agricultural products is established. In the process of establishing the appraisal indicators, the representative appraisal indicators of the logistic distribution of fresh agricultural products are further obtained by delivering a survey to experts and applying the ABC screening system. The distribution costs, transportation and service level belong to the first ...

  6. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation.

    Science.gov (United States)

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality.

  8. Critical review and hydrologic application of threshold detection methods for the generalized Pareto (GP) distribution

    Science.gov (United States)

    Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto

    2016-04-01

    Estimation of extreme rainfall from data constitutes one of the most important issues in statistical hydrology, as it is associated with the design of hydraulic structures and flood water management. To that extent, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing methods to fit a generalized Pareto (GP) distribution model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data, graphical methods where one studies the dependence of GP distribution parameters (or related metrics) on the threshold level u, and Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u that a GP distribution model is applicable. In this work, we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 daily rainfall records from the NOAA-NCDC open-access database, with more than 110 years of data. We find that non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while methods that are based on asymptotic properties of the upper distribution tail lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low; i.e. on the order of 0.1 ÷ 0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on pre-asymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2÷12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the
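
    The "parameter stability" style of graphical diagnosis the review covers is easy to emulate; a Python sketch on synthetic GP-distributed data (thresholds and parameters invented):

        import numpy as np
        from scipy.stats import genpareto

        rain = genpareto.rvs(c=0.15, scale=8.0, size=5000, random_state=2)
        for u in [1.0, 2.0, 4.0, 6.0, 8.0, 12.0]:
            excess = rain[rain > u] - u
            shape, loc, scale = genpareto.fit(excess, floc=0.0)
            print(f"u = {u:5.1f}  n = {excess.size:5d}  shape = {shape:6.3f}")

    Above an adequate threshold the fitted shape should stabilize near the true value (0.15 here), while too high a threshold leaves too few excesses for a reliable fit.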

  9. Pyrochemical and Dry Processing Methods Program. A selected bibliography

    Energy Technology Data Exchange (ETDEWEB)

    McDuffie, H.F.; Smith, D.H.; Owen, P.T.

    1979-03-01

    This selected bibliography with abstracts was compiled to provide information support to the Pyrochemical and Dry Processing Methods (PDPM) Program sponsored by DOE and administered by the Argonne National Laboratory. Objectives of the PDPM Program are to evaluate nonaqueous methods of reprocessing spent fuel as a route to the development of proliferation-resistant and diversion-resistant methods for widespread use in the nuclear industry. Emphasis was placed on the literature indexed in the ERDA--DOE Energy Data Base (EDB). The bibliography includes indexes to authors, subject descriptors, EDB subject categories, and titles.

  10. Numerical methods of mathematical optimization with Algol and Fortran programs

    CERN Document Server

    Künzi, Hans P; Zehnder, C A; Rheinboldt, Werner

    1971-01-01

    Numerical Methods of Mathematical Optimization: With ALGOL and FORTRAN Programs reviews the theory and the practical application of the numerical methods of mathematical optimization. An ALGOL and a FORTRAN program were developed for each of the algorithms described in the theoretical section, providing easy access to the application of the different optimization methods. Comprised of four chapters, this volume begins with a discussion of the theory of linear and nonlinear optimization, with the main stress on an easily understood, mathematically precise presentation. In addition

  11. An Application Programming Interface For Developing Distributed Algorithm Along With Proposed Meta Language Concept

    Directory of Open Access Journals (Sweden)

    Kishalay Bairagi

    2015-08-01

    Full Text Available Abstract In computer science, an application programming interface (API) is an interface that defines the ways by which an application program may request services from libraries [7]. The libraries of a programming language are the set of all classes and interfaces, along with their fields, constructors, and methods, that are part of that language. For example, Java is an object-oriented programming language with a rich set of built-in classes and interfaces packaged in an API, also known as the Java API [7]. A programmer can therefore borrow built-in classes or interfaces, use the fields, constructors, and methods of those classes and interfaces in his or her application, and remain free from the hazards of working out the implementation details of those constructors and methods and writing them into the application being developed. An API [7] also helps a programmer write short and compact code, saving program and application development time and producing code of better readability and understandability than code written without an API. Almost all modern programming languages come with a rich set of APIs. The basic difference between an API and a library lies in the fact that while the API reflects the expected behaviour, the library is an actual implementation of this set of rules [7]. The relation to a framework, on the other hand, is based on several libraries implementing several APIs; but instead of the normal use of an API, access to the behaviour built into the framework is made possible by extending its contents with new classes and interfaces [7]. This paper presents a component of a framework [4] in which the API for distributed algorithms has been plugged into the framework, so that a programmer can obtain services from the built-in classes and interfaces for easily understandable, compact, and faster program development. Here a concept of a meta language consisting of very simple constructs has been introduced

  12. Efficient immune-GA method for DNOs in sizing and placement of distributed generation units

    OpenAIRE

    Soroudi, Alireza; Ehsan, Mehdi

    2011-01-01

    This paper proposes a hybrid heuristic optimization method based on genetic algorithms and immune systems to maximize the benefits that Distribution Network Operators (DNOs) accrue from the sizing and placement of Distributed Generation (DG) units in distribution networks. The effects of DG units in reducing the reinforcement costs and active power losses of the distribution network have been investigated. In the presented method, the integration of DG units in the distribution network is done considerin...
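
    The hybrid immune-GA itself is not reproduced here, but the core loop of a heuristic search for DG siting and sizing can be sketched as follows; the candidate buses, DG sizes, toy benefit model, and GA parameters are all invented for illustration.

        import random

        BUSES = [3, 7, 12, 18, 25]          # hypothetical candidate buses
        SIZES = [0.5, 1.0, 1.5, 2.0]        # hypothetical DG sizes (MW)

        def fitness(plan):
            """Toy DNO benefit: loss reduction minus investment (invented model)."""
            loss_reduction = sum(0.3 * size / (1 + abs(bus - 12)) for bus, size in plan)
            investment = sum(0.1 * size for _, size in plan)
            return loss_reduction - investment

        def random_plan(n_units=2):
            return [(random.choice(BUSES), random.choice(SIZES)) for _ in range(n_units)]

        def mutate(plan):
            plan = list(plan)
            i = random.randrange(len(plan))
            plan[i] = (random.choice(BUSES), random.choice(SIZES))
            return plan

        population = [random_plan() for _ in range(30)]
        for generation in range(50):
            population.sort(key=fitness, reverse=True)
            elite = population[:10]                       # selection
            population = elite + [mutate(random.choice(elite)) for _ in range(20)]

        best = max(population, key=fitness)
        print("best plan (bus, MW):", best, "benefit:", round(fitness(best), 3))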

  13. ATLAS9: Model atmosphere program with opacity distribution functions

    Science.gov (United States)

    Kurucz, Robert L.

    2017-10-01

    ATLAS9 computes model atmospheres using a fixed set of pretabulated opacities, allowing one to work on huge numbers of stars and interpolate in large grids of models to determine parameters quickly. The code works with two different sets of opacity distribution functions (ODFs), one with “big” wavelength intervals covering the whole spectrum and the other with 1221 “little” wavelength intervals covering the whole spectrum. The ODFs use a 12-step representation; the radiation field is computed starting with the highest step and working down. If a lower step does not matter because the line opacity is small relative to the continuum at all depths, all the lower steps are lumped together and not computed to save time.

  14. An efficient linear programming method for Optimal Transportation

    OpenAIRE

    Oberman, Adam M.; Ruan, Yuanlong

    2015-01-01

    An efficient method for computing solutions to the Optimal Transportation (OT) problem with a wide class of cost functions is presented. The standard linear programming (LP) discretization of the continuous problem becomes intractable for moderate grid sizes. A grid refinement method results in a linear cost algorithm. Weak convergence of solutions is established. Barycentric projection of transference plans is used to improve the accuracy of solutions. The method is applied to more general pr...
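
    For reference, the standard LP discretization that the paper takes as its starting point can be written down directly; the sketch below solves a small discrete optimal transport problem with SciPy (the marginals and cost matrix are invented).

        import numpy as np
        from scipy.optimize import linprog

        # Source and target marginals (invented), and squared-distance cost.
        a = np.array([0.4, 0.3, 0.3])            # mass at source points 0, 1, 2
        b = np.array([0.5, 0.5])                 # mass at target points 0, 3
        x_src, x_tgt = np.array([0., 1., 2.]), np.array([0., 3.])
        C = (x_src[:, None] - x_tgt[None, :]) ** 2

        m, n = C.shape
        # Equality constraints: row sums of the plan equal a, column sums equal b.
        A_eq = np.zeros((m + n, m * n))
        for i in range(m):
            A_eq[i, i * n:(i + 1) * n] = 1.0     # sum_j P[i, j] = a[i]
        for j in range(n):
            A_eq[m + j, j::n] = 1.0              # sum_i P[i, j] = b[j]
        b_eq = np.concatenate([a, b])

        res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
        print("OT cost:", res.fun)
        print("transport plan:\n", res.x.reshape(m, n).round(3))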

  15. 77 FR 10997 - Energy Conservation Program: Energy Conservation Standards for Distribution Transformers; Correction

    Science.gov (United States)

    2012-02-24

    ... Part 431 RIN 1904-AC04 Energy Conservation Program: Energy Conservation Standards for Distribution... regarding energy conservation standards for distribution transformers. It was recently discovered that... the Energy Policy and Conservation Act of 1975 (EPCA or the Act), Public Law 94-163 (42 U.S.C. 6291...

  16. Problem-Solving Methods for the Prospective Development of Urban Power Distribution Network

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2014-01-01

    Full Text Available This article succeeds A. P. Karpenko's and A. I. Kuzmina's earlier publication titled "A mathematical model of urban distribution electro-network considering its future development" (electronic scientific and technical magazine "Science and education", No. 5, 2014). The article offers a model of an urban power distribution network as a set of transformer and distribution substations and cable lines. All elements of the network and all new consumers are described by vectors of parameters associated with them. The problem of urban power distribution network design, taking into account the prospective development of the city, is posed as a problem of discrete programming: deciding on the optimal option to connect new consumers to the power supply network, on the number of new substations and the sites where they are to be built, and on the option for including them in the power supply network. Two methods are offered to solve the problem: a reduction method, which reduces the problem to a set of nested global minimization tasks, and a decomposition method. In the reduction method, the problem of prospective development of the power supply network is broken into three subtasks of smaller dimension: a subtask to define the number and sites of new transformer and distribution substations, a subtask to define the option for connecting new consumers to the power supply network, and a subtask to include new substations in the power supply network. The vector of varied parameters is broken into three subvectors corresponding to the subtasks. Each subtask is solved over the set of admissible values of its subvector, with the components of the subvectors obtained from the higher-level subtasks held fixed. In the decomposition method the task is presented as a set of three subtasks, similar to those of the reduction method, plus a coordination problem. The coordination problem specifies the sequence in which the subtasks are solved and defines the moment when the calculation terminates. Coordination is realized by

  17. Distribution Locational Marginal Pricing through Quadratic Programming for Congestion Management in Distribution Networks

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Oren, Shmuel S.

    2015-01-01

    The distribution system operator (DSO) calculates dynamic tariffs and publishes them to the aggregators, who make the optimal energy plans for the flexible demands. The DLMP through QP, instead of the linear programming studied in previous literature, solves the multiple-solution issue of the aggregator optimization, which may cause
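
    The congestion-management idea, a quadratic program whose dual multiplier on a line limit acts as a locational congestion price, can be sketched generically. The sketch below is an assumption-laden toy (a single line, quadratic demand utility, invented numbers) using the CVXPY modeling library, not the paper's formulation; it reads the dual variable of the capacity constraint as the congestion component of a dynamic tariff.

        import cvxpy as cp
        import numpy as np

        # Two flexible demands behind one distribution line (all data invented).
        # Quadratic utility U(d) = u @ d - 0.5 * q * ||d||^2, line capacity F.
        u = np.array([10.0, 12.0])
        q = 1.0
        F = 8.0

        d = cp.Variable(2, nonneg=True)
        capacity = cp.sum(d) <= F
        prob = cp.Problem(cp.Maximize(u @ d - 0.5 * q * cp.sum_squares(d)), [capacity])
        prob.solve()

        print("demands:", d.value.round(3))
        # The dual of the capacity constraint plays the role of a congestion price
        # that, added to the energy price, gives a DLMP-like dynamic tariff.
        print("congestion price (dual):", round(float(capacity.dual_value), 3))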

  18. Evaluating a physician leadership development program - a mixed methods approach.

    Science.gov (United States)

    Throgmorton, Cheryl; Mitchell, Trey; Morley, Tom; Snyder, Marijo

    2016-05-16

    Purpose - With the extent of change in healthcare today, organizations need strong physician leaders. To compensate for the lack of physician leadership education, many organizations are sending physicians to external leadership programs or developing in-house leadership programs targeted specifically to physicians. The purpose of this paper is to outline the evaluation strategy and outcomes of the inaugural year of a Physician Leadership Academy (PLA) developed and implemented at a Michigan-based regional healthcare system. Design/methodology/approach - The authors applied the theoretical framework of Kirkpatrick's four levels of evaluation and used surveys, observations, activity tracking, and interviews to evaluate the program outcomes. The authors applied grounded theory techniques to the interview data. Findings - The program met targeted outcomes across all four levels of evaluation. Interview themes focused on the significance of increasing self-awareness, building relationships, applying new skills, and building confidence. Research limitations/implications - While only one example, this study illustrates the importance of developing the evaluation strategy as part of the program design. Qualitative research methods, often lacking from learning evaluation design, uncover rich themes of impact. The study supports how a PLA program can enhance physician learning, engagement, and relationship building throughout and after the program. Physician leaders' partnership with organization development and learning professionals yields results with impact for individuals, groups, and the organization. Originality/value - Few studies provide an in-depth review of evaluation methods and outcomes of physician leadership development programs. Healthcare organizations seeking to develop similar in-house programs may benefit from applying the evaluation strategy outlined in this study.

  19. Early Impacts of a Healthy Food Distribution Program on the Availability and Price of Fresh Fruits and Vegetables in Small Retail Venues in Los Angeles.

    Science.gov (United States)

    DeFosset, Amelia R; Gase, Lauren N; Webber, Eliza; Kuo, Tony

    2017-10-01

    Healthy food distribution programs that allow small retailers to purchase fresh fruits and vegetables at wholesale prices may increase the profitability of selling produce. While promising, little is known about how these programs affect the availability of fresh fruits and vegetables in underserved communities. This study examined the impacts of a healthy food distribution program in Los Angeles County over its first year of operation (August 2015-2016). Assessment methods included: (1) a brief survey examining the characteristics, purchasing habits, and attitudes of stores entering the program; (2) longitudinal tracking of sales data examining changes in the volume and variety of fruits and vegetables distributed through the program; and (3) the collection of comparison price data from wholesale market databases and local grocery stores. Seventeen stores participated in the program over the study period. One-fourth of survey respondents reported no recent experience selling produce. Analysis of sales data showed that, on average, the total volume of produce distributed through the program increased by six pounds per week over the study period (95% confidence limit: 4.50, 7.50); trends varied by store and produce type. Produce prices offered through the program approximated those at wholesale markets, and were lower than prices at full-service grocers. Results suggest that healthy food distribution programs may reduce certain supply-side barriers to offering fresh produce in small retail venues. While promising, more work is needed to understand the impacts of such programs on in-store environments and consumer behaviors.

  20. Improving Distribution of Military Programs’ Technical Criteria

    Science.gov (United States)

    1993-08-01

    [Abstract not recoverable: the source record reproduces only fragments of a table of Corps of Engineers Guide Specifications (CEGS) and Army Technical Manuals (TM) with their numbers and dates, e.g. CEGS 02237, Water-Bound Macadam Base Course; CEGS 02366, Precast Concrete Piling; TM 5-818-7, Foundations in Expansive Soils.]

  1. The effects of different irrigation methods on root distribution ...

    African Journals Online (AJOL)

    This study investigated the effects of different irrigation methods (drip, subsurface drip, surface and under-tree micro sprinkler) on the root distribution, intensity and effective root depth of “Williams Pride” and “Jersey Mac” apple cultivars budded on M9, rapidly grown in the Isparta Region. The rootstocks were ...

  2. Community Based Distribution of Child Spacing Methods at ...

    African Journals Online (AJOL)

    nity through a network of village health volunteers providing information, counselling and community-based distribution (CBD) of oral contraceptives, condoms and spermicides. • Demonstration of a three-tiered child spacing services delivery model (community, clinic/outreach sites, and hospital) which can be replicated ...

  3. DISTRIBUTED ELECTRICAL POWER PRODUCTION SYSTEM AND METHOD OF CONTROL THEREOF

    DEFF Research Database (Denmark)

    2010-01-01

    The present invention relates to a distributed electrical power production system wherein two or more electrical power units comprise respective sets of power supply attributes. Each set of power supply attributes is associated with a dynamic operating state of a particular electrical power unit....

  4. Methods of psychoeducational program evaluation in mental health settings.

    Science.gov (United States)

    Walsh, J

    1992-04-01

    Psychoeducational programs for families of the mentally ill became widespread during the 1980s as a means of providing a forum for the relevant education and mutual support of participants. While these programs are thought to be extremely useful as interventions, very little emphasis has been placed on evaluation as a means of demonstrating their effectiveness in achieving goals. There is a possibility, then, that psychoeducation will continue to flourish with little direct evidence of positive outcomes for its family participants. This article consists of a literature review of existing methods of psychoeducational program evaluation, both quantitative and qualitative, all of which may be applicable in certain circumstances. The process by which an evaluation instrument was developed for a program with families of the mentally ill is then presented in some detail.

  5. Method and computer program product for maintenance and modernization backlogging

    Science.gov (United States)

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
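
    The claimed relationship is simply additive, so a direct transcription is possible. Below is a minimal sketch (the function name and example figures are invented) of the future-facility-conditions calculation described above.

        def future_facility_conditions(maintenance_cost: float,
                                       modernization_factor: float,
                                       backlog_factor: float) -> float:
            """Future facility conditions for one time period, per the additive
            relationship in the abstract: maintenance + modernization + backlog."""
            return maintenance_cost + modernization_factor + backlog_factor

        # Example with invented figures for one fiscal year (in $K):
        print(future_facility_conditions(maintenance_cost=1200.0,
                                         modernization_factor=350.0,
                                         backlog_factor=275.0))  # -> 1825.0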

  6. Method for computing the optimal signal distribution and channel capacity.

    Science.gov (United States)

    Shapiro, E G; Shapiro, D A; Turitsyn, S K

    2015-06-15

    An iterative method for computing the channel capacity of both discrete and continuous input, continuous output channels is proposed. The efficiency of the new method is demonstrated in comparison with the classical Blahut-Arimoto algorithm for several known channels. Moreover, we also present a hybrid method combining the advantages of both the Blahut-Arimoto algorithm and our iterative approach. The new method is especially efficient for channels with an a priori unknown discrete input alphabet.
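
    The classical Blahut-Arimoto baseline mentioned above is short enough to sketch; the binary symmetric channel used here is an illustrative assumption, not one of the paper's test channels.

        import numpy as np

        def blahut_arimoto(P, tol=1e-9, max_iter=10_000):
            """Capacity of a discrete memoryless channel with transition matrix
            P[x, y] = p(y|x), via the classical Blahut-Arimoto iteration."""
            m = P.shape[0]
            p = np.full(m, 1.0 / m)                  # uniform initial input distribution
            for _ in range(max_iter):
                q = p @ P                            # induced output distribution
                ratio = np.divide(P, q, out=np.ones_like(P), where=P > 0)
                D = np.exp(np.sum(P * np.log(ratio), axis=1))
                p_new = p * D / (p @ D)
                if np.max(np.abs(p_new - p)) < tol:
                    p = p_new
                    break
                p = p_new
            capacity_bits = np.log(p @ D) / np.log(2)
            return capacity_bits, p

        # Binary symmetric channel, crossover 0.1: capacity = 1 - H(0.1) ~ 0.531 bits.
        P = np.array([[0.9, 0.1], [0.1, 0.9]])
        C, p_opt = blahut_arimoto(P)
        print(f"capacity = {C:.4f} bits, optimal input = {p_opt.round(3)}")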

  7. Relaxation and decomposition methods for mixed integer nonlinear programming

    CERN Document Server

    Nowak, Ivo; Bank, RE

    2005-01-01

    This book presents a comprehensive description of efficient methods for solving nonconvex mixed integer nonlinear programs, including several numerical and theoretical results, which are presented here for the first time. It contains many illustrations and an up-to-date bibliography. Because of the emphasis on practical methods, as well as the introduction to the basic theory, the book is accessible to a wide audience. It can be used both as a research text and as a graduate text.

  8. A method and fortran program for quantitative sampling in paleontology

    Science.gov (United States)

    Tipper, J.C.

    1976-01-01

    The Unit Sampling Method is a binomial sampling method applicable to the study of fauna preserved in rocks too well cemented to be disaggregated. Preliminary estimates of the probability of detecting each group in a single sampling unit can be converted to estimates of the group's volumetric abundance by means of correction curves obtained by a computer simulation technique. This paper describes the technique and gives the FORTRAN program. © 1976.
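
    The binomial logic can be made concrete with a short simulation. The sketch below (the detection probability and unit count are invented; this is not Tipper's FORTRAN program) estimates a group's per-unit detection probability from presence/absence counts and shows the binomial confidence reasoning behind it.

        import numpy as np

        rng = np.random.default_rng(1)

        p_true = 0.35          # hypothetical probability of detecting the group
        n_units = 50           # number of sampling units examined

        # Presence/absence of the group in each sampling unit (binomial sampling).
        detections = rng.random(n_units) < p_true
        p_hat = detections.mean()

        # Normal-approximation 95% interval for the detection probability.
        se = np.sqrt(p_hat * (1 - p_hat) / n_units)
        print(f"p_hat = {p_hat:.3f}, 95% CI ~ ({p_hat - 1.96*se:.3f}, {p_hat + 1.96*se:.3f})")
        # A correction curve (as in the paper) would then map p_hat to an estimate
        # of volumetric abundance; that mapping is simulation-derived and omitted here.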

  9. The application of the dynamic programming method in investment optimization

    Directory of Open Access Journals (Sweden)

    Petković Nina

    2016-01-01

    Full Text Available This paper deals with the problem of investment in the Measuring Transformers Factory in Zajecar and the application of the dynamic programming method, one of the methods used in business process optimization. Dynamic programming is a special case of nonlinear programming that is widely applicable to nonlinear systems in economics. The Measuring Transformers Factory in Zajecar was founded in 1969. It manufactures electrical equipment, primarily low- and medium-voltage current measuring transformers, voltage transformers, bushings, etc. The company offers a wide range of products; for this paper, the company's management selected three products, and an optimal investment costing was made for each. The purpose was to see which product would be the most profitable and thus to proceed with manufacturing and selling that particular product or products.
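
    A budget-allocation dynamic program of the kind used for such problems can be sketched in a few lines; the three products, their return tables, and the budget grid are invented for illustration, not taken from the paper.

        # Allocate an investment budget across three products to maximize return.
        # returns[p][k] = (invented) return when k budget units go to product p.
        returns = [
            [0, 3, 5, 6, 6],    # product A
            [0, 2, 4, 7, 8],    # product B
            [0, 1, 3, 4, 9],    # product C
        ]
        BUDGET = 4              # budget units available

        best = [0] * (BUDGET + 1)   # best[b]: max return with b units, products so far
        choices = []                # per product: units chosen at each budget level
        for table in returns:
            new_best = [0] * (BUDGET + 1)
            picks = [0] * (BUDGET + 1)
            for b in range(BUDGET + 1):
                for k in range(b + 1):
                    value = table[k] + best[b - k]
                    if value > new_best[b]:
                        new_best[b], picks[b] = value, k
            best = new_best
            choices.append(picks)

        # Walk back through the stored choices to recover the optimal allocation.
        allocation, b = {}, BUDGET
        for name, picks in zip(["C", "B", "A"], reversed(choices)):
            allocation[name] = picks[b]
            b -= picks[b]
        print("maximum return:", best[BUDGET], "allocation:", allocation)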

  10. Study program for constant current capacitor charging method

    Energy Technology Data Exchange (ETDEWEB)

    Pugh, C.

    1978-10-04

    The objective of the study program was to determine the best method of charging 20,000 to 132,000 microfarads of capacitance to 22 kVdc in 14 to 15 sec. Component costs, sizes, weights, line current graphs, copies of calculations and manufacturer's data are included.
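
    The sizing arithmetic behind such a study is straightforward, since a constant-current charge satisfies I = C * dV / t. Below is a quick back-of-the-envelope check with the figures quoted above; it is a sketch, not part of the original report.

        # Constant-current charging: I = C * dV / t
        for C_uF in (20_000, 132_000):          # capacitance range from the study
            C = C_uF * 1e-6                     # farads
            V, t = 22_000.0, 14.5               # 22 kVdc target, ~14-15 s charge time
            I = C * V / t
            print(f"C = {C_uF:>7} uF -> required charging current ~ {I:6.1f} A")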

  11. Reconstructing Program Theories : Methods Available and Problems to be Solved

    NARCIS (Netherlands)

    Leeuw, Frans de

    2003-01-01

    This paper discusses methods for reconstructing theories underlying programs and policies. It describes three approaches. One is empirical–analytical in nature and focuses on interviews, documents and argumentational analysis. The second has strategic assessment, group dynamics, and dialogue as its

  12. Dynamic Frames Based Verification Method for Concurrent Java Programs

    NARCIS (Netherlands)

    Mostowski, Wojciech

    2016-01-01

    In this paper we discuss a verification method for concurrent Java programs based on the concept of dynamic frames. We build on our earlier work that proposes a new, symbolic permission system for concurrent reasoning and we provide the following new contributions. First, we describe our approach

  13. Adaptation-II of the surrogate methods for linear programming ...

    African Journals Online (AJOL)

    Adaptation-II of the surrogate methods for linear programming problems. SO Oko. No abstract available. Global Journal of Mathematical Sciences Vol. 5(1) 2006: 63-71. http://dx.doi.org/10.4314/gjmas.v5i1.21381.

  14. Path Following in the Exact Penalty Method of Convex Programming.

    Science.gov (United States)

    Zhou, Hua; Lange, Kenneth

    2015-07-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.
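
    To make the exact-penalty idea concrete, here is a small numerical sketch (not the authors' path-following solver): projecting a point onto the hyperplane sum(x) = 1 via an absolute-value penalty. The point and penalty grid are invented; the solution stops changing once the penalty constant exceeds the constraint multiplier, illustrating exactness at a finite penalty.

        import numpy as np
        from scipy.optimize import minimize

        y = np.array([0.8, 0.6, -0.2])          # point to project (invented)

        def penalized(x, rho):
            # Exact (absolute-value) penalty for the constraint sum(x) = 1.
            return 0.5 * np.sum((x - y) ** 2) + rho * abs(np.sum(x) - 1.0)

        x0 = y.copy()
        for rho in [0.01, 0.05, 0.1, 0.2, 0.5, 1.0]:
            res = minimize(penalized, x0, args=(rho,), method="Nelder-Mead",
                           options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 20_000})
            x0 = res.x                           # warm start: follow the path in rho
            print(f"rho={rho:4.2f}  x={np.round(res.x, 4)}  sum={res.x.sum():+.4f}")
        # The multiplier of the equality constraint here is (sum(y) - 1) / 3 ~ 0.0667,
        # so the iterates become exactly feasible once rho exceeds that value.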

  16. A Distributed Bio-Inspired Method for Multisite Grid Mapping

    Directory of Open Access Journals (Sweden)

    I. De Falco

    2010-01-01

    Full Text Available Computational grids assemble multisite and multiowner resources and represent the most promising solutions for processing distributed computationally intensive applications, each composed of a collection of communicating tasks. The execution of an application on a grid presumes three successive steps: the localization of the available resources together with their characteristics and status; the mapping, which selects the resources that, during the estimated running time, better support this execution; and, finally, the scheduling of the tasks. These operations are very difficult both because the availability and workload of grid resources change dynamically and because, in many cases, multisite mapping must be adopted to exploit all the possible benefits. As the mapping problem in parallel systems, already known to be NP-complete, becomes even harder in distributed heterogeneous environments such as grids, evolutionary techniques can be adopted to find near-optimal solutions. In this paper an effective and efficient multisite mapping, based on a distributed Differential Evolution algorithm, is proposed. The aim is to minimize the time required to complete the execution of the application, selecting from among all the potential solutions the one which reduces the use of the grid resources. The proposed mapper is tested on different scenarios.
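
    A distributed implementation is beyond a short sketch, but the underlying Differential Evolution step is easy to demonstrate; here SciPy's stock single-population DE minimizes an invented makespan-style objective over task-to-site assignments, as a stand-in for the paper's distributed multisite mapper.

        import numpy as np
        from scipy.optimize import differential_evolution

        N_TASKS, N_SITES = 8, 3
        rng = np.random.default_rng(2)
        # Invented per-site speeds and per-task workloads.
        speed = rng.uniform(1.0, 3.0, N_SITES)
        work = rng.uniform(5.0, 20.0, N_TASKS)

        def makespan(x):
            """Objective: completion time of the most loaded site.
            x holds one continuous value per task, rounded to a site index."""
            sites = np.clip(np.round(x).astype(int), 0, N_SITES - 1)
            load = np.zeros(N_SITES)
            for task, site in enumerate(sites):
                load[site] += work[task] / speed[site]
            return load.max()

        # polish=False: the rounded landscape is piecewise constant, so the
        # default gradient-based polishing step would not help.
        result = differential_evolution(makespan, bounds=[(0, N_SITES - 1)] * N_TASKS,
                                        seed=2, maxiter=200, polish=False)
        print("best makespan:", round(result.fun, 3))
        print("task -> site:", np.round(result.x).astype(int))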

  17. Frequent statement and dereference elimination for imperative and object-oriented distributed programs.

    Science.gov (United States)

    El-Zawawy, Mohamed A

    2014-01-01

    This paper introduces new approaches for the analysis of frequent statement and dereference elimination for imperative and object-oriented distributed programs running on parallel machines equipped with hierarchical memories. The paper uses languages whose address spaces are globally partitioned. Distributed programs allow defining data layout and threads writing to and reading from other thread memories. Three type systems (for imperative distributed programs) are the tools of the proposed techniques. The first type system defines for every program point a set of calculated (ready) statements and memory accesses. The second type system uses an enriched version of the types of the first type system and determines which of the ready statements and memory accesses are used later in the program. The third type system uses the information gathered so far to eliminate unnecessary statement computations and memory accesses (the analysis of frequent statement and dereference elimination). Extensions to these type systems are also presented to cover object-oriented distributed programs. Two advantages of our work over related work are the following. The hierarchical style of concurrent parallel computers is similar to the memory model used in this paper. In our approach, each analysis result is assigned a type derivation (which serves as a correctness proof).

  19. THE EDUCATIONAL-METHODICAL “GIS-BAIKAL” PROGRAM

    Directory of Open Access Journals (Sweden)

    A. N. Beshentsev

    2017-01-01

    Full Text Available The article outlines the main types of territorial activities that need qualified personnel in the field of creating and using subject-specific and interdisciplinary geoinformation systems. Methodical problems of modern geoinformation education in the Baikal region are identified, and emphasis is placed on the need to train university students and senior schoolchildren on regional geographic material. As a means of optimizing geoinformation education and public education, the "GIS-Baikal" teaching and methodological program has been proposed and tested, and its technological and methodological elements are considered. The advantages of electronic teaching aids are substantiated, the features of visual materials are considered, and the authors' methodical development, combining multimedia presentations and printed materials, is proposed. The goals and objectives of the educational GIS, consisting of information, technological and analytical subsystems, are defined, and the structure and content of the subsystems are described. The territorial levels for the solution of spatial problems are established, and the corresponding scales and elements of the content of topographic bases are determined. The results of testing the program during summer environmental field practice with university students and senior schoolchildren at the Istomino IEOC on the shores of Lake Baikal are presented. The organization of training in time, and the procedures and operations of carrying out field and office work during the program implementation, are described. In conclusion, the educational potential of the proposed teaching and methodological program is assessed, conclusions about its effectiveness are substantiated, and fields of application are proposed.

  1. Implementation of the parametric variation method in an EMTP program

    DEFF Research Database (Denmark)

    Holdyk, Andrzej; Holbøll, Joachim

    2013-01-01

    The paper presents an algorithm for, and shows the implementation of, a method to perform parametric variation studies using electromagnetic transients programs, applied to an offshore wind farm. These kinds of studies are used to investigate the sensitivity of a given phenomenon to the variation of parameters in an electric system. The proposed method allows varying any parameter of a circuit, including the simulation settings, and exploits the specific structure of the ATP-EMTP software. In the implementation of the method, Matlab software is used to control the execution of the ATP solver. Two

  2. Overdose prevention for injection drug users: Lessons learned from naloxone training and distribution programs in New York City

    Directory of Open Access Journals (Sweden)

    Nandi Vijay

    2007-01-01

    Full Text Available Abstract Background Fatal heroin overdose is a significant cause of mortality for injection drug users (IDUs). Many of these deaths are preventable because opiate overdoses can be quickly and safely reversed through the injection of naloxone [brand name Narcan], a prescription drug used to revive persons who have overdosed on heroin or other opioids. Currently, in several cities in the United States, drug users are being trained in naloxone administration and given naloxone for immediate and successful reversals of opiate overdoses. There has been very little formal description of the challenges faced in the development and implementation of large-scale IDU naloxone administration training and distribution programs and the lessons learned during this process. Methods During a one-year period, over 1,000 participants were trained in SKOOP (Skills and Knowledge on Opiate Prevention) and received a prescription for naloxone from a medical doctor on site at a syringe exchange program (SEP) in New York City. Participants in SKOOP were over the age of 18, current participants of SEPs, and current or former drug users. We present details about the program design and lessons learned during the development and implementation of SKOOP. The lessons learned described in the manuscript are collectively articulated by the evaluators and implementers of the project. Results There were six primary challenges and lessons learned in developing, implementing, and evaluating SKOOP. These include (a) the political climate surrounding naloxone distribution; (b) extant prescription drug laws; (c) initial low levels of recruitment into the program; (d) development of participant-appropriate training methodology; (e) challenges in the design of a suitable formal evaluation; and (f) evolution of program response to naloxone. Conclusion Other naloxone distribution programs may anticipate challenges similar to SKOOP's, and we identify mechanisms to address them. Strategies include being flexible in

  3. The role of unconditioned and conditioned cumulative distribution functions in reservoir reliability programing

    Science.gov (United States)

    Simonović, Slobodan P.; Mariño, Miguel A.

    1981-07-01

    This paper presents a comparison of results obtained from the reliability program of a reservoir management problem based on the use of unconditioned and conditioned cumulative distribution functions (CDF's). The reliability program considers the reservoir releases and the reliabilities of the elements of the system as decision variables. Data from the Vardar River system in Yugoslavia are fitted with Pearson Type-III distributions for the case of unconditioned CDF's and with bivariate gamma distributions for the case of conditioned CDF's. The use of conditioned CDF's in the reliability program yields objective-function values and reliability tolerances that are greater than those obtained from the use of unconditioned CDF's. Thus, the use of conditioned CDF's represents one step forward in overcoming the conservative nature of stochastic models used for the design and/or operation of multipurpose multiple reservoir systems.

  4. Calculation of Pressure Distribution at Rotary Body Surface with the Vortex Element Method

    Directory of Open Access Journals (Sweden)

    S. A. Dergachev

    2014-01-01

    Full Text Available The vortex element method allows simulating unsteady hydrodynamic processes in an incompressible environment, taking into account the evolution of the vortex sheet, including the deformation or motion of the body or of parts of the construction. For calculating hydrodynamic characteristics by this method, the software package MVE3D was developed. The vortex element (VE) in the program is a symmetrical vorton-cut; closed vorton frames are used to satisfy the boundary conditions at the surface. With this software system, incompressible flow around a cylindrical body of elongation L/D = 13 with a spherically blunted nose at an angle of attack of 10° was modeled. The distribution of the pressure coefficient along the top and bottom generators of the body surface was analyzed, and the calculated results were compared with known experimental results. Design schemes with different numbers of vorton frames, as well as varying VE radius, were considered. The calculations made it possible to establish the degree of surface discretization needed to produce results close to experiment. It was shown that adequately reproducing the pressure distribution in the transition region between the spherical and cylindrical surfaces on the windward side requires a high degree of discretization. Based on these results, the design scheme of the body surface may need to be refined to describe more accurately the flow vorticity in areas with abrupt changes in the geometry of the streamlined body.

  5. Qualitative methods in operations research on contraceptive distribution systems: a case study from Nigeria.

    Science.gov (United States)

    Webb, G; Ladipo, O A; McNamara, R

    1991-01-01

    This article discusses the application of qualitative methods in operations research on a family planning service delivery system. Market traders in Ibadan, Nigeria were trained to sell oral contraceptives, condoms, and spermicidal foaming tablets in a collaborative research project of the Fertility Research Unit of the University College Hospital, Ibadan, and the Center for Population and Family Health of Columbia University. Focus group discussion, participant observation, and semi-structured interviews were used to investigate the cultural acceptability of distribution of contraceptives in the market places and the motivations of participating traders. The strength of the market associations was a factor influencing acceptance of the project and the number of customers for the traders' other wares were found to positively influence the volume of sales of contraceptives. Traders were motivated by the status associated with participating in a program of a well-known health institution. Findings from qualitative research suggest areas for quantitative studies and vice versa in an interactive process.

  6. Density Distributions in TATB Prepared by Various Methods

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, D M; Fontes, A T

    2008-05-13

    The density distributions of two legacy types of 1,3,5-triamino-2,4,6-trinitrobenzene (TATB) particles were compared with those of TATB synthesized by new routes and recrystallized in several different solvents, using a density gradient technique. Legacy wet-aminated (WA) and dry-aminated (DA) TATB crystalline aggregates gave average densities of 1.9157 and 1.9163 g/cc, respectively. Since the theoretical maximum density (TMD) for a perfect crystal is 1.937 g/cc, legacy TATB crystals averaged 99% of TMD, or about 1% voids. TATB synthesized from phloroglucinol (P) had particle size comparable to the legacy TATBs, but significantly lower density, 1.8340 g/cc. TATB synthesized from 3,5-dibromoanisole (BA) was very difficult to measure because it contained extremely fine particles, but had an average density of 1.8043 g/cc over a very broad range. Density distributions of TATB recrystallized from dimethylsulfoxide (DMSO), sulfolane, and an 80/20 mixture of DMSO with the ionic liquid 1-ethyl-3-methylimidazolium acetate (EMImOAc), with some exceptions, gave average densities comparable to or better than the legacy TATBs.

  7. Space charge distribution measurement methods and particle loaded insulating materials

    Energy Technology Data Exchange (ETDEWEB)

    Hole, S [Laboratoire des Instruments et Systemes d' Ile de France, Universite Pierre et Marie Curie-Paris6, 10 rue Vauquelin, 75005 Paris (France); Sylvestre, A [Laboratoire d' Electrostatique et des Materiaux Dielectriques, CNRS UMR5517, 25 avenue des Martyrs, BP 166, 38042 Grenoble cedex 9 (France); Lavallee, O Gallot [Laboratoire d' Etude Aerodynamiques, CNRS UMR6609, boulevard Marie et Pierre Curie, Teleport 2, BP 30179, 86962 Futuroscope, Chasseneuil (France); Guillermin, C [Schneider Electric Industries SAS, 22 rue Henry Tarze, 38000 Grenoble (France); Rain, P [Laboratoire d' Electrostatique et des Materiaux Dielectriques, CNRS UMR5517, 25 avenue des Martyrs, BP 166, 38042 Grenoble cedex 9 (France); Rowe, S [Schneider Electric Industries SAS, 22 rue Henry Tarze, 38000 Grenoble (France)

    2006-03-07

    In this paper the authors discuss the effects of particles (fillers) mixed into a composite polymer on space charge measurement techniques. The origin of particle-induced spurious signals is determined, and silica-filled epoxy resin is analysed using the laser-induced-pressure-pulse (LIPP) method, the pulsed-electro-acoustic (PEA) method and the laser-induced-thermal-pulse (LITP) method. A spurious signal, identified as the consequence of a piezoelectric effect in some silica particles, is visible for all the methods. Moreover, space charges are clearly detected at the epoxy/silica interface after 10 kV/mm poling at room temperature for 2 h.

  8. 77 FR 48733 - Transitional Program for Covered Business Method Patents-Definitions of Covered Business Method...

    Science.gov (United States)

    2012-08-14

    ... August 14, 2012 Part IV Department of Commerce Patent and Trademark Office 37 CFR Part 42 Transitional Program for Covered Business Method Patents--Definitions of Covered Business Method Patent and... / Rules and Regulations DEPARTMENT OF COMMERCE Patent and Trademark Office 37 CFR Part 42 RIN 0651...

  9. P3T+: A Performance Estimator for Distributed and Parallel Programs

    Directory of Open Access Journals (Sweden)

    T. Fahringer

    2000-01-01

    Full Text Available Developing distributed and parallel programs on today's multiprocessor architectures is still a challenging task. Particularly distressing is the lack of effective performance tools that support the programmer in evaluating changes in code, problem and machine sizes, and target architectures. In this paper we introduce P3T+, a performance estimator for mostly regular HPF (High Performance Fortran) programs that also partially covers message passing (MPI) programs. P3T+ is unique in modeling programs, compiler code transformations, and parallel and distributed architectures. It computes at compile-time a variety of performance parameters including work distribution, number of transfers, amount of data transferred, transfer times, computation times, and number of cache misses. Several novel technologies are employed to compute these parameters: loop iteration spaces, array access patterns, and data distributions are modeled by employing highly effective symbolic analysis. Communication is estimated by simulating the behavior of the communication library used by the underlying compiler. Computation times are predicted through pre-measured kernels on every target architecture of interest. We carefully model most critical architecture-specific factors such as cache line sizes, number of cache lines available, startup times, message transfer time per byte, etc. P3T+ has been implemented and is closely integrated with the Vienna High Performance Compiler (VFC) to support programmers in developing parallel and distributed applications. Experimental results for realistic kernel codes taken from real-world applications are presented to demonstrate both the accuracy and usefulness of P3T+.

  10. Control of water distribution networks with dynamic DMA topology using strictly feasible sequential convex programming

    Science.gov (United States)

    Wright, Robert; Abraham, Edo; Parpas, Panos; Stoianov, Ivan

    2015-12-01

    The operation of water distribution networks (WDN) with a dynamic topology is a recently pioneered approach for the advanced management of District Metered Areas (DMAs) that integrates novel developments in hydraulic modeling, monitoring, optimization, and control. A common practice for leakage management is the sectorization of WDNs into small zones, called DMAs, by permanently closing isolation valves. This facilitates water companies to identify bursts and estimate leakage levels by measuring the inlet flow for each DMA. However, by permanently closing valves, a number of problems have been created including reduced resilience to failure and suboptimal pressure management. By introducing a dynamic topology to these zones, these disadvantages can be eliminated while still retaining the DMA structure for leakage monitoring. In this paper, a novel optimization method based on sequential convex programming (SCP) is outlined for the control of a dynamic topology with the objective of reducing average zone pressure (AZP). A key attribute for control optimization is reliable convergence. To achieve this, the SCP method we propose guarantees that each optimization step is strictly feasible, resulting in improved convergence properties. By using a null space algorithm for hydraulic analyses, the computations required are also significantly reduced. The optimized control is actuated on a real WDN operated with a dynamic topology. This unique experimental program incorporates a number of technologies set up with the objective of investigating pioneering developments in WDN management. Preliminary results indicate AZP reductions for a dynamic topology of up to 6.5% over optimally controlled fixed topology DMAs.

  11. Integrated Data Collection Analysis (IDCA) Program - SSST Testing Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Whinnery, LeRoy L. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Phillips, Jason J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Shelley, Timothy J. [Bureau of Alcohol, Tobacco and Firearms (ATF), Huntsville, AL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-03-25

    The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small- Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the methods used for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis during the IDCA program. These methods changed throughout the Proficiency Test and the reasons for these changes are documented in this report. The most significant modifications in standard testing methods are: 1) including one specified sandpaper in impact testing among all the participants, 2) diversifying liquid test methods for selected participants, and 3) including sealed sample holders for thermal testing by at least one participant. This effort, funded by the Department of Homeland Security (DHS), is putting the issues of safe handling of these materials in perspective with standard military explosives. The study is adding SSST testing results for a broad suite of different HMEs to the literature. Ultimately the study will suggest new guidelines and methods and possibly establish the SSST testing accuracies needed to develop safe handling practices for HMEs. Each participating testing laboratory uses identical test materials and preparation methods wherever possible. The testing performers involved are Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Indian Head Division, Naval Surface Warfare Center, (NSWC IHD), Sandia National Laboratories (SNL), and Air Force Research Laboratory (AFRL/RXQL). These tests are conducted as a proficiency study in order to establish some consistency in test protocols, procedures, and experiments and to compare results when these testing variables cannot be made consistent.

  12. Method for measuring the size distribution of airborne rhinovirus

    Energy Technology Data Exchange (ETDEWEB)

    Russell, M.L.; Goth-Goldstein, R.; Apte, M.G.; Fisk, W.J.

    2002-01-01

    About 50% of viral-induced respiratory illnesses are caused by the human rhinovirus (HRV). Measurements of the concentrations and sizes of bioaerosols are critical for research on building characteristics, aerosol transport, and mitigation measures. We developed a quantitative reverse transcription-coupled polymerase chain reaction (RT-PCR) assay for HRV and verified that this assay detects HRV in nasal lavage samples. A quantitation standard was used to determine a detection limit of 5 fg of HRV RNA with a linear range over 1000-fold. To measure the size distribution of HRV aerosols, volunteers with a head cold spent two hours in a ventilated research chamber. Airborne particles from the chamber were collected using an Andersen Six-Stage Cascade Impactor. Each stage of the impactor was analyzed by quantitative RT-PCR for HRV. For the first two volunteers with confirmed HRV infection, but with mild symptoms, we were unable to detect HRV on any stage of the impactor.

  13. Development of methods for DSM and distribution automation planning

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M.; Seppaelae, A.; Kekkonen, V.; Koreneff, G. [VTT Energy, Espoo (Finland)

    1996-12-31

    In the de-regulated electricity market, the power trading companies have to face new problems. The biggest challenges are caused by the uncertainty in the load magnitudes. In order to minimize the risks in power purchase and also in retail sales, the power traders should have as reliable and accurate estimates for the hourly demands of their customers as possible. New tools have been developed for distribution load estimation and for the management of energy balances of the trading companies. These tools are based on the flexible combination of the information available from several sources, like direct customer measurements, network measurements, load models and statistical data. These functions also serve as an information source for higher level activities of the electricity selling companies. These activities and the associated functions have been studied in the prototype system called DEM, which is now being developed for the operation of Finnish utilities in the newly de-regulated power market.

  14. Load forecasting method considering temperature effect for distribution network

    Directory of Open Access Journals (Sweden)

    Meng Xiao Fang

    2016-01-01

    Full Text Available To improve the accuracy of load forecasting, the temperature factor was introduced into load forecasting in this paper. The paper analyzed the characteristics of power load variation and investigated how the load changes with temperature. Based on linear regression analysis, a mathematical model of load forecasting considering the temperature effect was presented, and the steps of load forecasting were given. Using MATLAB, the temperature regression coefficient was calculated. Using the load forecasting model, full-day load forecasting and time-sharing load forecasting were carried out. Comparison and analysis of the forecast errors showed that the error of the time-sharing load forecasting method was small. The forecasting method is an effective way to improve the accuracy of load forecasting.
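
    A temperature-aware regression of this general shape is easy to reproduce; the sketch below (synthetic data, invented coefficients, NumPy least squares rather than the paper's MATLAB workflow) fits load against temperature and applies the fit to a temperature forecast.

        import numpy as np

        rng = np.random.default_rng(3)
        # Synthetic history: hourly temperature (C) and load (MW), invented relation.
        temp = rng.uniform(15, 35, 24 * 30)
        load = 50 + 2.4 * temp + rng.normal(0, 3, temp.size)

        # Linear regression load = b0 + b1 * T (least squares).
        A = np.column_stack([np.ones_like(temp), temp])
        (b0, b1), *_ = np.linalg.lstsq(A, load, rcond=None)
        print(f"load ~ {b0:.1f} + {b1:.2f} * T   (temperature regression coefficient b1)")

        # Forecast next day's hourly load from an assumed temperature forecast.
        temp_forecast = 22 + 6 * np.sin(np.linspace(0, 2 * np.pi, 24))
        load_forecast = b0 + b1 * temp_forecast
        print("peak forecast load: %.1f MW" % load_forecast.max())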

  15. A Method for Evaluating Physical Activity Programs in Schools.

    Science.gov (United States)

    Kelly, Cheryl; Carpenter, Dick; Tucker, Elizabeth; Luna, Carmen; Donovan, John; Behrens, Timothy K

    2017-09-14

    Providing opportunities for students to be physically active during the school day leads to increased academic performance, better focus, and fewer behavioral problems. As schools begin to incorporate more physical activity programming into the school day, evaluators need methods to measure how much physical activity students are being offered through this programming. Because classroom-based physical activity is often offered in 3-minute to 5-minute bouts at various times of the day, depending on the teachers' time to incorporate it, it is a challenge to evaluate this activity. This article describes a method to estimate the number of physical activity minutes provided before, during, and after school. The web-based tool can be used to gather data cost-effectively from a large number of schools. Strategies to increase teacher response rates and assess intensity of activity should be explored.

  16. Next Generation Nuclear Plant Methods Technical Program Plan

    Energy Technology Data Exchange (ETDEWEB)

    Richard R. Schultz; Abderrafi M. Ougouag; David W. Nigg; Hans D. Gougar; Richard W. Johnson; William K. Terry; Chang H. Oh; Donald W. McEligot; Gary W. Johnsen; Glenn E. McCreery; Woo Y. Yoon; James W. Sterbentz; J. Steve Herring; Temitope A. Taiwo; Thomas Y. C. Wei; William D. Pointer; Won S. Yang; Michael T. Farmer; Hussein S. Khalil; Madeline A. Feltus

    2010-12-01

    One of the great challenges of designing and licensing the Very High Temperature Reactor (VHTR) is to confirm that the intended VHTR analysis tools can be used confidently to make decisions and to assure all that the reactor systems are safe and meet the performance objectives of the Generation IV Program. The research and development (R&D) projects defined in the Next Generation Nuclear Plant (NGNP) Design Methods Development and Validation Program will ensure that the tools used to perform the required calculations and analyses can be trusted. The Methods R&D tasks are designed to ensure that the calculational envelope of the tools used to analyze the VHTR reactor systems encompasses, or is larger than, the operational and transient envelope of the VHTR itself. The Methods R&D focuses on the development of tools to assess the neutronic and thermal fluid behavior of the plant. The fuel behavior and fission product transport models are discussed in the Advanced Gas Reactor (AGR) program plan. Various stress analysis and mechanical design tools will also need to be developed and validated and will ultimately also be included in the Methods R&D Program Plan. The calculational envelope of the neutronics and thermal-fluids software tools intended to be used on the NGNP is defined by the scenarios and phenomena that these tools can calculate with confidence. The software tools can only be used confidently when the results they produce have been shown to be in reasonable agreement with first-principle results, thought-problems, and data that describe the “highly ranked” phenomena inherent in all operational conditions and important accident scenarios for the VHTR.

  17. Next Generation Nuclear Plant Methods Technical Program Plan -- PLN-2498

    Energy Technology Data Exchange (ETDEWEB)

    Richard R. Schultz; Abderrafi M. Ougouag; David W. Nigg; Hans D. Gougar; Richard W. Johnson; William K. Terry; Chang H. Oh; Donald W. McEligot; Gary W. Johnsen; Glenn E. McCreery; Woo Y. Yoon; James W. Sterbentz; J. Steve Herring; Temitope A. Taiwo; Thomas Y. C. Wei; William D. Pointer; Won S. Yang; Michael T. Farmer; Hussein S. Khalil; Madeline A. Feltus

    2010-09-01

    One of the great challenges of designing and licensing the Very High Temperature Reactor (VHTR) is to confirm that the intended VHTR analysis tools can be used confidently to make decisions and to assure all that the reactor systems are safe and meet the performance objectives of the Generation IV Program. The research and development (R&D) projects defined in the Next Generation Nuclear Plant (NGNP) Design Methods Development and Validation Program will ensure that the tools used to perform the required calculations and analyses can be trusted. The Methods R&D tasks are designed to ensure that the calculational envelope of the tools used to analyze the VHTR reactor systems encompasses, or is larger than, the operational and transient envelope of the VHTR itself. The Methods R&D focuses on the development of tools to assess the neutronic and thermal fluid behavior of the plant. The fuel behavior and fission product transport models are discussed in the Advanced Gas Reactor (AGR) program plan. Various stress analysis and mechanical design tools will also need to be developed and validated and will ultimately also be included in the Methods R&D Program Plan. The calculational envelope of the neutronics and thermal-fluids software tools intended to be used on the NGNP is defined by the scenarios and phenomena that these tools can calculate with confidence. The software tools can only be used confidently when the results they produce have been shown to be in reasonable agreement with first-principle results, thought-problems, and data that describe the “highly ranked” phenomena inherent in all operational conditions and important accident scenarios for the VHTR.

  18. Quasi-Newton Methods for Solving Nonlinear Programming Problems

    Directory of Open Access Journals (Sweden)

    V.Moraru

    1996-03-01

    Full Text Available In the present paper the problem of equality-constrained optimization is reduced to sequentially solving a series of quadratic programming problems. The Hessian of the Lagrangian is approximated by a sequence of symmetric positive definite matrices. The matrix approximation is updated at every iteration by a modified Gram-Schmidt algorithm. We establish that the method is locally convergent and that the sequence {x_k} converges to the solution at a two-step superlinear rate.
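
    The abstract's Gram-Schmidt-based update is not reproduced in the record; for orientation, the best-known quasi-Newton update with the same symmetric-positive-definite guarantee is the BFGS formula (notation assumed: B_k is the Hessian approximation, s_k the step, y_k the gradient difference of the Lagrangian):

        B_{k+1} = B_k - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k} + \frac{y_k y_k^{\top}}{y_k^{\top} s_k},
        \qquad s_k = x_{k+1} - x_k,

    which keeps B_{k+1} symmetric positive definite whenever y_k^{\top} s_k > 0.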

  19. A Cooperative Downloading Method for VANET Using Distributed Fountain Code.

    Science.gov (United States)

    Liu, Jianhang; Zhang, Wenbin; Wang, Qi; Li, Shibao; Chen, Haihua; Cui, Xuerong; Sun, Yi

    2016-10-12

    Cooperative downloading is one of the effective methods to improve the amount of downloaded data in vehicular ad hoc networking (VANET). However, poor channel quality and short encounter times bring about a high packet loss rate, which decreases transmission efficiency and fails to satisfy the requirement of high quality of service (QoS) for some applications. Digital fountain code (DFC) can be utilized in the field of wireless communication to increase transmission efficiency. For cooperative forwarding, however, the processing delay from frequent coding and decoding, as well as the single feedback mechanism of DFC, cannot adapt to the VANET environment. In this paper, a cooperative downloading method for VANET using concatenated DFC is proposed to solve the problems above. The source vehicle and cooperative vehicles encode the raw data using a hierarchical fountain code before sending it to the client directly or indirectly. Although some packets may be lost, the client can recover the raw data as long as it receives enough encoded packets. The method thus avoids data retransmission due to packet loss. Furthermore, the concatenated feedback mechanism in the method reduces transmission delay effectively. Simulation results indicate the benefits of the proposed scheme in terms of increased download volume and data receiving rate.
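
    To make the rateless-coding principle concrete (a generic digital-fountain sketch, not the paper's concatenated or hierarchical construction), each encoded packet below is the XOR of a random subset of source packets, and a peeling decoder recovers the source once enough packets survive; names and values are invented:

        # Toy fountain code: encode() emits XOR combinations of source blocks,
        # decode() peels degree-one packets until the source is recovered.
        # Real LT codes draw degrees from a robust soliton distribution.
        import random

        def encode(source, n_packets, seed=0):
            rng = random.Random(seed)
            out = []
            for _ in range(n_packets):
                idx = rng.sample(range(len(source)), rng.randint(1, len(source)))
                block = 0
                for i in idx:
                    block ^= source[i]
                out.append((frozenset(idx), block))
            return out

        def decode(packets, k):
            recovered, work = {}, [(set(s), b) for s, b in packets]
            progress = True
            while progress and len(recovered) < k:
                progress = False
                for s, b in work:
                    unknown = s - recovered.keys()
                    if len(unknown) == 1:
                        i = unknown.pop()
                        for j in s - {i}:
                            b ^= recovered[j]
                        recovered[i] = b
                        progress = True
            return [recovered.get(i) for i in range(k)]  # None if unrecovered

        source = [0x11, 0x22, 0x33, 0x44]
        survivors = encode(source, 16)[::2]   # simulate 50% packet loss
        print(decode(survivors, len(source)))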

  20. Making distribution of wheelchairs sustainable: A Wheels for the World program in North India, October 2015

    Directory of Open Access Journals (Sweden)

    Jubin Varghese

    2015-01-01

    Full Text Available A description of a program, carried out in October 2015 in North India, distributing wheelchairs and other assistive devices to persons with disabilities. Applying cooperative approaches through churches, NGOs and networks, outside resources were utilized to develop a sustainable approach to meeting identified disability needs in low-resource settings.

  1. 78 FR 39548 - Food Distribution Program on Indian Reservations: Amendments Related to the Food, Conservation...

    Science.gov (United States)

    2013-07-02

    ... Food and Nutrition Service 7 CFR Part 253 RIN 0584-AD95 Food Distribution Program on Indian Reservations: Amendments Related to the Food, Conservation, and Energy Act of 2008; Approval of Information... Reservations: Amendments Related to the Food, Conservation, and Energy Act of 2008 was published on April 6...

  2. Quantifying Carbon and distributional benefits of solar home system programs in Bangladesh

    OpenAIRE

    Wang, Limin; Bandyopadhyay, Sushenjit; Cosgrove-Davies, Mac; Samad, Hussain

    2011-01-01

    Scaling-up adoption of renewable energy technology, such as solar home systems, to expand electricity access in developing countries can accelerate the transition to low-carbon economic development. Using a purposely collected national household survey, this study quantifies the carbon and distributional benefits of solar home system programs in Bangladesh. Three key findings are generated...

  3. Probabilistic Fuzzy Goal Programming Problems Involving Pareto Distribution: Some Additive Approaches

    Directory of Open Access Journals (Sweden)

    S.K. Barik

    2015-06-01

    Full Text Available In many real-life decision making problems, probabilistic fuzzy goal programming is used where some of the input parameters of the problem are considered random variables with fuzzy aspiration levels. In the present paper, a linearly constrained probabilistic fuzzy goal programming problem is presented in which the right-hand-side parameters in some constraints follow a Pareto distribution with known mean and variance, and the aspiration levels are considered fuzzy. Further, simple, weighted, and preemptive additive approaches are discussed for the probabilistic fuzzy goal programming model. These additive approaches are employed to aggregate the membership values and form crisp equivalent deterministic models. The resulting models are then solved using standard linear mathematical programming techniques. The developed methodology and solution procedures are illustrated with a numerical example.
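
    As a worked illustration of the kind of chance-constraint conversion involved (the paper's own model is not reproduced in the record), let a right-hand side b follow a Pareto distribution with scale x_m and shape \alpha, so that P(b \ge t) = (x_m/t)^{\alpha} for t \ge x_m. A probabilistic constraint enforced at confidence level p then has the crisp equivalent

        P\left(a^{\top} x \le b\right) \ge p
        \;\Longleftrightarrow\; \left(\frac{x_m}{a^{\top} x}\right)^{\alpha} \ge p
        \;\Longleftrightarrow\; a^{\top} x \le x_m\, p^{-1/\alpha},

    assuming a^{\top} x \ge x_m, which turns the stochastic constraint into an ordinary linear one.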

  4. Queueing-theoretic solution methods for models of parallel and distributed systems

    NARCIS (Netherlands)

    O.J. Boxma (Onno); G.M. Koole (Ger); Z. Liu

    1994-01-01

    This paper aims to give an overview of solution methods for the performance analysis of parallel and distributed systems. After a brief review of some important general solution methods, we discuss key models of parallel and distributed systems, and optimization issues, from the

  5. Distributional Monte Carlo Methods for the Boltzmann Equation

    Science.gov (United States)

    2013-03-01

    [Fragment recovered from the source thesis: the weak form of the Boltzmann collision operator, \int_{\mathbb{R}^3} \phi(v)\, Q(f,f)\, dv = \tfrac{1}{4} \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \int_{S^+} [ f(v... (truncated), stated for any real-valued \phi over \mathbb{R}^3 for which the integral exists (Cercignani [29]), together with bibliography entries: Cercignani, C., "Existence and Uniqueness in the Large for Boundary Value Problems in Kinetic Theory," Journal of Mathematical Physics, 8(8):1653-1656, 1967; Cercignani, C., The Boltzmann Equation and Its Applications, Springer-Verlag, 1988; Cercignani, C., Mathematical Methods in... (truncated).]

  6. Seasonal comparison of two spatially distributed evapotranspiration mapping methods

    Science.gov (United States)

    Kisfaludi, Balázs; Csáki, Péter; Péterfalvi, József; Primusz, Péter

    2017-04-01

    More rainfall is disposed of through evapotranspiration (ET) on a global scale than through runoff and storage combined. In Hungary, about 90% of the precipitation evapotranspirates from the land and only 10% goes to surface runoff and groundwater recharge. Evapotranspiration is therefore a very important element of the water balance, which makes it a suitable parameter for the calibration of hydrological models. Monthly ET values of two MODIS-data-based ET products were compared for the area of Hungary and for the vegetation period of the year 2008. The differences were assessed by land cover type and by elevation zone. One ET map was the MOD16, aiming at global coverage and provided by the MODIS Global Evaporation Project. The other method, called CREMAP, was developed at the Budapest University of Technology and Economics for regional-scale ET mapping. CREMAP was validated for the area of Hungary with good results, but ET maps were produced only for the period 2000-2008. The aim of this research was to evaluate the performance of the MOD16 product compared to the CREMAP method. The average difference between the two products was highest during summer, with CREMAP estimating higher ET values by about 25 mm/month. In the spring and autumn, MOD16 ET values were higher by an average of 6 mm/month. The differences by land cover type showed a seasonal pattern similar to the average differences, and they correlated strongly with each other. Practically the same difference values could be calculated for arable lands and forests, which together cover nearly 75% of the area of the country. Therefore, it can be said that the seasonal changes had the same effect on the two methods' ET estimates in each land cover type. The analysis by elevation zones showed that at elevations lower than 200 m AMSL the trends of the difference values were similar to the average differences. The correlation between the values of these elevation zones was also strong. However weaker

  7. Program-target methods of management small business

    Directory of Open Access Journals (Sweden)

    Gurova Ekaterina

    2017-01-01

    Full Text Available Small businesses in Russia are just beginning their path to development, and difficulties arise with involving them in the implementation of government development programmes; under modern conditions, small business cannot secure a visible prospect of development without such state support programmes. The ways and methods of regulating the development of a market economy are diverse, and among them program-target methods of regulation play a huge role. The article describes the basic principles of using the program-target approach for the development of a specific sector of the economy, small business, which is designed to play an important role in getting the national economy out of crisis. The material in this publication is built on the need to maintain the connection between the theory of government regulation, the practice of forming development programmes at the regional level, and the needs of small businesses. Essential for the formation of entrepreneurship development programmes is preserving the flexibility of small businesses in making management decisions related to the selection and change of activities.

  8. Academic training: From Evolution Theory to Parallel and Distributed Genetic Programming

    CERN Multimedia

    2007-01-01

    2006-2007 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 15, 16 March From 11:00 to 12:00 - Main Auditorium, bldg. 500 From Evolution Theory to Parallel and Distributed Genetic Programming F. FERNANDEZ DE VEGA / Univ. of Extremadura, SP Lecture No. 1: From Evolution Theory to Evolutionary Computation Evolutionary computation is a subfield of artificial intelligence (more particularly computational intelligence) involving combinatorial optimization problems, which are based to some degree on the evolution of biological life in the natural world. In this tutorial we will review the source of inspiration for this metaheuristic and its capability for solving problems. We will show the main flavours within the field, and different problems that have been successfully solved employing this kind of techniques. Lecture No. 2: Parallel and Distributed Genetic Programming The successful application of Genetic Programming (GP, one of the available Evolutionary Algorithms) to optimization problems has encouraged an ...

  9. User-Defined Data Distributions in High-Level Programming Languages

    Science.gov (United States)

    Diaconescu, Roxana E.; Zima, Hans P.

    2006-01-01

    One of the characteristic features of today's high performance computing systems is a physically distributed memory. Efficient management of locality is essential for meeting key performance requirements for these architectures. The standard technique for dealing with this issue has involved the extension of traditional sequential programming languages with explicit message passing, in the context of a processor-centric view of parallel computation. This has resulted in complex and error-prone assembly-style codes in which algorithms and communication are inextricably interwoven. This paper presents a high-level approach to the design and implementation of data distributions. Our work is motivated by the need to improve the current parallel programming methodology by introducing a paradigm supporting the development of efficient and reusable parallel code. This approach is currently being implemented in the context of a new programming language called Chapel, which is designed in the HPCS project Cascade.
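
    To illustrate the kind of mapping a user-defined data distribution encapsulates (a sketch only; Chapel expresses this at the language level, and the function below is hypothetical), here is the index-to-owner arithmetic of a simple block distribution:

        # Block-distribute n elements over p locales: the first r = n % p
        # locales own one extra element. Returns (owner, local index).
        def block_owner(i, n, p):
            q, r = divmod(n, p)
            if i < r * (q + 1):
                return i // (q + 1), i % (q + 1)
            offset = i - r * (q + 1)
            return r + offset // q, offset % q

        print([block_owner(i, 10, 3) for i in range(10)])
        # locales get contiguous chunks of sizes 4, 3, 3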

  10. Evaluation of the Overdose Education and Naloxone Distribution Program of the Baltimore Student Harm Reduction Coalition.

    Science.gov (United States)

    Lewis, Dinah A; Park, Ju Nyeong; Vail, Laura; Sine, Mark; Welsh, Christopher; Sherman, Susan G

    2016-07-01

    Although historically the majority of overdose education and naloxone distribution (OEND) programs have targeted opioid users, states are increasingly passing laws that enable third-party prescriptions of naloxone to individuals who may be able to respond to an overdose, including friends and family members of individuals who use opioids. In this report, we discuss the Baltimore Student Harm Reduction Coalition (BSHRC) OEND program, Maryland's first community-based, state-authorized training program under a new law allowing third-party naloxone prescription. In an 8-month pilot period, 250 free naloxone kits were distributed, and 3 overdose reversals were reported to BSHRC. Trainings were effective in increasing self-efficacy surrounding overdose prevention and response, which appears to persist at up to 12 months following the training.

  11. Using Program Package NSPCG to Analyze the Trunk Reservation Service Protection Method

    DEFF Research Database (Denmark)

    Barker, Vincent A.; Nielsen, Bo Friis

    1994-01-01

    Unlike certain service protection methods for mixed traffic streams, such as the class-limitation method, the trunk reservation scheme cannot be based on a product form property of a stationary probability distribution vector. Rather, the analysis of the trunk reservation scheme requires solving, by purely numerical methods, a set of balance equations, Ax = 0, often of very high order. Since the coefficient matrix is typically sparse, it is natural to apply iterative methods to this task. Many such methods have been incorporated in program package NSPCG, developed at the Center for Numerical Analysis at the University of Texas at Austin. In this paper we report our experience in applying the NSPCG package to a typical system arising from the trunk reservation scheme.
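
    The underlying numerical task is the classic one of finding a stationary vector from sparse balance equations. A minimal sketch in Python (NSPCG itself is a Fortran package of iterative solvers; for brevity this invented birth-death example uses a direct sparse solve):

        # Build the generator Q of a small birth-death chain and solve
        # pi Q = 0 with sum(pi) = 1 by replacing one balance equation
        # with the normalization condition.
        import numpy as np
        from scipy.sparse import lil_matrix
        from scipy.sparse.linalg import spsolve

        n, lam, mu = 5, 1.5, 1.0
        Q = lil_matrix((n, n))
        for i in range(n):
            if i + 1 < n:
                Q[i, i + 1] = lam        # arrival
            if i > 0:
                Q[i, i - 1] = i * mu     # departure
            Q[i, i] = -Q[i].sum()        # rows of a generator sum to zero

        A = Q.T.tolil()
        A[n - 1, :] = np.ones(n)         # normalization replaces last equation
        b = np.zeros(n); b[n - 1] = 1.0
        pi = spsolve(A.tocsc(), b)
        print(pi, pi.sum())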

  12. A robust fusion method for multiview distributed video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Ascenso, Joao; Brites, Catarina

    2014-01-01

    to have the various views available simultaneously. However, in multiview DVC (M-DVC), the decoder can still exploit the redundancy between views, avoiding the need for inter-camera communication. The key element of every DVC decoder is the side information (SI), which can be generated by leveraging intra-view or inter-view redundancy of the multiview video data. In this paper, a novel learning-based fusion technique is proposed, which is able to robustly fuse an inter-view SI and an intra-view (temporal) SI. An inter-view SI generation method capable of identifying occluded areas is proposed and is coupled ... values. The proposed solution is able to achieve gains of up to 0.9 dB in Bjøntegaard difference when compared with the best-performing (in a RD sense) single-SI DVC decoder, chosen as the better of an inter-view and a temporal SI-based decoder.

  13. A fully distributed method for dynamic spectrum sharing in femtocells

    DEFF Research Database (Denmark)

    Da Costa, Gustavo Wagner Oliveira; Cattoni, Andrea Fabio; Kovacs, Istvan

    2012-01-01

    The traffic in cellular networks has been growing at an accelerated rate. In order to meet the rising demand for large data volumes, shrinking the cell size may be the only viable option. In fact, locally deployed small cells, namely picocells and femtocells, will certainly play a major role in meeting the IMT-Advanced requirements for the next generation of cellular networks. Notwithstanding, several aspects of femtocell deployment are very challenging, especially in closed subscriber group femtocells: massive deployment, user definition of access point location, and high density. When such characteristics are combined, the traditional network planning and optimization of cellular networks fails to be cost effective. Therefore, a greater degree of automation is needed in femtocells. In particular, this paper proposes a novel method for autonomous selection of spectrum/channels in femtocells...

  14. Mathematical programming methods for large-scale topology optimization problems

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana

    This thesis investigates new optimization methods for structural topology optimization problems. The aim of topology optimization is finding the optimal design of a structure. The physical problem is modelled as a nonlinear optimization problem. This powerful tool was initially developed for the classical minimum compliance problem. Two of the state-of-the-art optimization algorithms are investigated and implemented for this structural topology optimization problem. A Sequential Quadratic Programming method (TopSQP) and an interior point method (TopIP) are developed exploiting the specific mathematical...

  15. Agent-based method for distributed clustering of textual information

    Science.gov (United States)

    Potok, Thomas E [Oak Ridge, TN; Reed, Joel W [Knoxville, TN; Elmore, Mark T [Oak Ridge, TN; Treadwell, Jim N [Louisville, TN

    2010-09-28

    A computer method and system for storing, retrieving and displaying information has a multiplexing agent (20) that calculates a new document vector (25) for a new document (21) to be added to the system and transmits the new document vector (25) to master cluster agents (22) and cluster agents (23) for evaluation. These agents (22, 23) perform the evaluation and return values upstream to the multiplexing agent (20) based on the similarity of the document to documents stored under their control. The multiplexing agent (20) then sends the document (21) and the document vector (25) to the master cluster agent (22), which then forwards it to a cluster agent (23) or creates a new cluster agent (23) to manage the document (21). The system also searches for stored documents according to a search query having at least one term and identifying the documents found in the search, and displays the documents in a clustering display (80) of similarity so as to indicate similarity of the documents to each other.
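
    The routing idea in the abstract can be sketched compactly (an illustrative toy, not the patented system: bag-of-words vectors, cosine similarity, and an arbitrary 0.3 threshold for spawning a new cluster):

        # Route each new document to the most similar cluster, or create a
        # new cluster when no centroid is similar enough.
        import math
        from collections import Counter

        def doc_vector(text):
            return Counter(text.lower().split())

        def cosine(u, v):
            dot = sum(u[w] * v[w] for w in u if w in v)
            nu = math.sqrt(sum(c * c for c in u.values()))
            nv = math.sqrt(sum(c * c for c in v.values()))
            return dot / (nu * nv) if nu and nv else 0.0

        clusters = []   # each cluster: {"centroid": Counter, "docs": list}

        def route(text, threshold=0.3):
            vec, best, score = doc_vector(text), None, 0.0
            for c in clusters:
                s = cosine(vec, c["centroid"])
                if s > score:
                    best, score = c, s
            if best is None or score < threshold:
                best = {"centroid": Counter(), "docs": []}
                clusters.append(best)
            best["docs"].append(text)
            best["centroid"] += vec     # crude centroid update by accumulation
            return clusters.index(best)

        for d in ["reactor safety analysis", "neutronics safety code",
                  "wheelchair distribution program"]:
            print(route(d))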

  16. Cyber-EDA: Estimation of Distribution Algorithms with Adaptive Memory Programming

    Directory of Open Access Journals (Sweden)

    Peng-Yeng Yin

    2013-01-01

    Full Text Available The estimation of distribution algorithm (EDA) aims to explicitly model the probability distribution of the quality solutions to the underlying problem. By iteratively filtering quality solutions from competing ones, the probability model eventually approximates the distribution of globally optimal solutions. In contrast to classic evolutionary algorithms (EAs), the EDA framework is flexible and is able to handle inter-variable dependence, which usually imposes difficulties on classic EAs. The success of EDA relies on effective and efficient building of the probability model. This paper enhances EDA with techniques from the adaptive memory programming (AMP) domain, which has developed several improved forms of EAs using the Cyber-EA framework. The experimental results on benchmark TSP instances support our expectation that AMP strategies can enhance the performance of classic EDA by deriving a better approximation of the true distribution of the target solutions.
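
    For readers unfamiliar with the model-build/sample/select loop the abstract refers to, a minimal univariate EDA (UMDA) on the OneMax toy problem looks as follows; this is a generic textbook sketch, not the paper's AMP-enhanced algorithm:

        # UMDA: sample from a product-of-Bernoullis model, keep the best
        # individuals, and re-estimate the per-bit probabilities from them.
        import random

        n, pop_size, elite, gens = 20, 60, 20, 30
        p = [0.5] * n                          # model: P(bit_i = 1)

        for _ in range(gens):
            pop = [[1 if random.random() < p[i] else 0 for i in range(n)]
                   for _ in range(pop_size)]   # sample from the model
            pop.sort(key=sum, reverse=True)    # OneMax fitness = number of ones
            best = pop[:elite]                 # filter quality solutions
            p = [sum(ind[i] for ind in best) / elite for i in range(n)]

        print("best fitness:", sum(pop[0]))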

  17. Productivity of Pair Programming in a Distributed Environment - Results from Two Controlled Case Studies

    Science.gov (United States)

    Pietinen, Sami; Tenhunen, Vesa; Tukiainen, Markku

    Several methods and techniques have surfaced to address the ongoing concerns of quality and productivity in software development. Among these is the Pair Programming (PP) method, which has gained a lot of attention as an essential part of an agile software development methodology called eXtreme Programming (XP). In this paper, we present the results of two controlled case studies that investigate the possible productivity improvement from incorporating PP instead of solo programming. The main focus is on the implementation task, more specifically on programming, although PP is suitable for other tasks too. Our results show that a very high level of PP use may be difficult to achieve in a tightly scheduled software development project, but some of the benefits materialize even with proportional use of PP. In our case, PP added 13% additional effort over solo programming.

  18. Implementation of the Distributed Parallel Program for Geoid Heights Computation Using MPI and Openmp

    Science.gov (United States)

    Lee, S.; Kim, J.; Jung, Y.; Choi, J.; Choi, C.

    2012-07-01

    Much research has been carried out on optimization algorithms for developing high-performance programs in parallel computing environments, following the evolution of computer hardware technology such as dual-core processors. Studies applying parallel computing in the geodesy and surveying fields, however, are still few. The present study aims to reduce the running time of geoid height computation, and to carry out least-squares collocation to improve its accuracy, using distributed parallel technology. A distributed parallel program was developed for a multi-core CPU-based PC cluster using the MPI and OpenMP libraries. Geoid heights were calculated by spherical harmonic analysis using the earth geopotential model of the National Geospatial-Intelligence Agency (2008). The geoid heights around the Korean Peninsula were calculated and tested in a diskless PC cluster environment. As a result, for computing geoid heights from an earth geopotential model, the distributed parallel program was confirmed to be more effective in reducing the computation time than the sequential program.
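
    A hedged sketch of the row-wise decomposition such a program might use (not the paper's code; compute_geoid_row is a stand-in for the spherical harmonic synthesis, and mpi4py plays the role of the MPI layer):

        # Each rank computes geoid heights for its share of latitude rows,
        # then the rows are gathered on rank 0.
        # Run with e.g.: mpiexec -n 4 python geoid_sketch.py
        import numpy as np
        from mpi4py import MPI

        def compute_geoid_row(lat, lons):
            # placeholder for synthesis from a geopotential model
            return np.cos(np.radians(lat)) * np.sin(np.radians(lons))

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        lats = np.arange(33.0, 39.0, 0.1)    # roughly the Korean Peninsula
        lons = np.arange(124.0, 132.0, 0.1)
        my_lats = lats[rank::size]           # cyclic row decomposition

        my_rows = np.array([compute_geoid_row(lat, lons) for lat in my_lats])
        all_rows = comm.gather(my_rows, root=0)
        if rank == 0:
            print("gathered", sum(r.shape[0] for r in all_rows), "rows")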

  19. A two-stage inexact joint-probabilistic programming method for air quality management under uncertainty.

    Science.gov (United States)

    Lv, Y; Huang, G H; Li, Y P; Yang, Z F; Sun, W

    2011-03-01

    A two-stage inexact joint-probabilistic programming (TIJP) method is developed for planning a regional air quality management system with multiple pollutants and multiple sources. The TIJP method incorporates the techniques of two-stage stochastic programming, joint-probabilistic constraint programming and interval mathematical programming, where uncertainties expressed as probability distributions and interval values can be addressed. Moreover, it can not only examine the risk of violating joint-probability constraints, but also account for economic penalties as corrective measures against any infeasibility. The developed TIJP method is applied to a case study of a regional air pollution control problem, where the air quality index (AQI) is introduced for evaluation of the integrated air quality management system associated with multiple pollutants. The joint-probability exists in the environmental constraints for AQI, such that individual probabilistic constraints for each pollutant can be efficiently incorporated within the TIJP model. The results indicate that useful solutions for air quality management practices have been generated; they can help decision makers to identify desired pollution abatement strategies with minimized system cost and maximized environmental efficiency. Copyright © 2010 Elsevier Ltd. All rights reserved.

  20. Multi-objective Optimization Method for Distribution System Configuration using Pareto Optimal Solution

    Science.gov (United States)

    Hayashi, Yasuhiro; Takano, Hirotaka; Matsuki, Junya; Nishikawa, Yuji

    Distribution networks have a huge number of configuration candidates because the network configuration is determined by the states of many sectionalizing switches (opened or closed), installed to maintain power quality, reliability and so on. Since feeder current and voltage depend on the network configuration, distribution loss, voltage imbalance and bank efficiency can be controlled by changing the states of these switches. In addition, feeder current and voltage change with the output of distributed generators (DGs), such as photovoltaic and wind turbine generation systems, connected to the feeder. Recently, the total number of DGs connected to distribution networks has increased drastically. Therefore, the many configuration candidates of the distribution network must be evaluated from multiple viewpoints, such as distribution loss, voltage imbalance and bank efficiency, considering the power supplied by connected DGs. In this paper, the authors propose a multi-objective optimization method based on three evaluation viewpoints ((1) distribution loss, (2) voltage imbalance and (3) bank efficiency) using Pareto optimal solutions. In the proposed method, several high-ranking candidates with small distribution loss are first extracted by a combinatorial optimization method; each candidate is then evaluated from the viewpoints of voltage imbalance and bank efficiency using Pareto optimality, and the loss-minimum configuration is determined as the best configuration among these solutions. Numerical simulations are carried out on a real-scale system model consisting of 72 distribution feeders and 234 sectionalizing switches in order to examine the validity of the proposed method.
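
    The Pareto-filtering step can be sketched as a plain dominance test over candidate configurations (a generic sketch with invented numbers, not the authors' combinatorial search; bank efficiency is negated so that all three objectives are minimized):

        def dominates(a, b):
            """a is no worse than b everywhere and strictly better somewhere."""
            return all(x <= y for x, y in zip(a, b)) and \
                   any(x < y for x, y in zip(a, b))

        def pareto_front(cands):
            return [c for c in cands
                    if not any(dominates(o, c) for o in cands if o is not c)]

        # (distribution loss [kW], voltage imbalance [%], -bank efficiency)
        configs = [(120.0, 1.8, -0.91), (118.0, 2.2, -0.90),
                   (125.0, 1.5, -0.93), (130.0, 2.5, -0.88)]
        print(pareto_front(configs))   # the last candidate is dominated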

  1. Twelve tips for teaching in a provincially distributed medical education program.

    Science.gov (United States)

    Wong, Roger Y; Chen, Luke; Dhadwal, Gurbir; Fok, Mark C; Harder, Ken; Huynh, Hanh; Lunge, Ryan; Mackenzie, Mark; Mckinney, James; Ovalle, William; Rauniyar, Pooja; Tse, Luke; Villanyi, Diane

    2012-01-01

    As distributed undergraduate and postgraduate medical education becomes more common, the challenges with the teaching and learning process also increase. To collaboratively engage front line teachers in improving teaching in a distributed medical program. We recently conducted a contest on teaching tips in a provincially distributed medical education program and received entries from faculty and resident teachers. Tips that are helpful for teaching around clinical cases at distributed teaching sites include: ask "what if" questions to maximize clinical teaching opportunities, try the 5-min short snapper, multitask to allow direct observation, create dedicated time for feedback, there are really no stupid questions, and work with heterogeneous group of learners. Tips that are helpful for multi-site classroom teaching include: promote teacher-learner connectivity, optimize the long distance working relationship, use the reality television show model to maximize retention and captivate learners, include less teaching content if possible, tell learners what you are teaching and make it relevant and turn on the technology tap to fill the knowledge gap. Overall, the above-mentioned tips offered by front line teachers can be helpful in distributed medical education.

  2. Research on Optimized Torque-Distribution Control Method for Front/Rear Axle Electric Wheel Loader

    Directory of Open Access Journals (Sweden)

    Zhiyu Yang

    2017-01-01

    Full Text Available The optimized torque-distribution control method (OTCM) is a critical technology for the front/rear axle electric wheel loader (FREWL) to improve operating performance and energy efficiency. In this paper, a longitudinal dynamics model of the FREWL is created. Based on the model, the objective functions are to minimize the weighted sum of the variance and mean of the tire workload and to maximize total motor efficiency. Four nonlinear constrained optimization algorithms, namely the quasi-Newton Lagrange multiplier method, sequential quadratic programming, adaptive genetic algorithms, and particle swarm optimization with random weighting and natural selection, all of which converge quickly and compute cheaply, are used to solve the objective functions. The simulation results show that, compared to the uncontrolled FREWL, the controlled FREWL utilizes the adhesion ability better and slips less. The controlled FREWL clearly achieves better operating performance and higher energy efficiency; in the equipment-transferring condition, energy efficiency increased by 13–29%. In addition, this paper discusses the applicability of OTCM and analyzes the reasons for the differing simulation results of the four algorithms.

  3. Bayesian methods for hackers probabilistic programming and Bayesian inference

    CERN Document Server

    Davidson-Pilon, Cameron

    2016-01-01

    Bayesian methods of inference are deeply natural and extremely powerful. However, most discussions of Bayesian inference rely on intensely complex mathematical analyses and artificial examples, making it inaccessible to anyone without a strong mathematical background. Now, though, Cameron Davidson-Pilon introduces Bayesian inference from a computational perspective, bridging theory to practice–freeing you to get results using computing power. Bayesian Methods for Hackers illuminates Bayesian inference through probabilistic programming with the powerful PyMC language and the closely related Python tools NumPy, SciPy, and Matplotlib. Using this approach, you can reach effective solutions in small increments, without extensive mathematical intervention. Davidson-Pilon begins by introducing the concepts underlying Bayesian inference, comparing it with other techniques and guiding you through building and training your first Bayesian model. Next, he introduces PyMC through a series of detailed examples a...
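
    A tiny taste of the probabilistic-programming style the book teaches, inferring the mean of noisy data; this sketch assumes the modern PyMC package (the book was written against the earlier PyMC3 API, whose calls differ slightly):

        import numpy as np
        import pymc as pm

        data = np.random.normal(loc=2.5, scale=1.0, size=100)

        with pm.Model():
            mu = pm.Normal("mu", mu=0.0, sigma=10.0)          # prior on the mean
            pm.Normal("obs", mu=mu, sigma=1.0, observed=data)
            trace = pm.sample(1000, tune=1000, progressbar=False)

        print(trace.posterior["mu"].mean().item())            # close to 2.5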

  4. The NCI Patient Navigation Research Program Methods, Protocol and Measures

    Science.gov (United States)

    Freund, Karen M; Battaglia, Tracy A; Calhoun, Elizabeth; Dudley, Donald J.; Fiscella, Kevin; Paskett, Electra; Raich, Peter C.; Roetzheim, Richard G.

    2009-01-01

    Background Patient, provider, and systems barriers contribute to delays in cancer care, lower quality of care, and poorer outcomes in vulnerable populations, including low income, underinsured, and racial/ethnic minority populations. Patient navigation is emerging as an intervention to address this problem, but navigation requires a clear definition and a rigorous testing of its effectiveness. Pilot programs have provided some evidence of benefit, but have been limited by evaluation of single-site interventions and varying definitions of navigation. To overcome these limitations, a nine-site National Cancer Institute Patient Navigation Research Program (PNRP) was initiated. Methods The PNRP is charged with designing, implementing and evaluating a generalizable patient navigation program targeting vulnerable populations. Through a formal committee structure, the PNRP has developed a definition of patient navigation and metrics to assess the process and outcomes of patient navigation in diverse settings, compared with concurrent continuous control groups. Results The PNRP defines patient navigation as support and guidance offered to vulnerable persons with abnormal cancer screening or a cancer diagnosis, with the goal of overcoming barriers to timely, quality care. Primary outcomes of the PNRP are (1) time to diagnostic resolution, (2) time to initiation of cancer treatment, (3) patient satisfaction with care, and (4) cost effectiveness, for breast, cervical, colon/rectum, and/or prostate cancer. Conclusions The metrics to assess the processes and outcomes of patient navigation have been developed for the NCI-sponsored Patient Navigator Research Program. If the metrics are found to be valid and reliable, they may prove useful to other investigators. PMID:18951521

  5. Impact of a regional distributed medical education program on an underserved community: perceptions of community leaders.

    Science.gov (United States)

    Toomey, Patricia; Lovato, Chris Y; Hanlon, Neil; Poole, Gary; Bates, Joanna

    2013-06-01

    To describe community leaders' perceptions regarding the impact of a fully distributed undergraduate medical education program on a small, medically underserved host community. The authors conducted semistructured interviews in 2007 with 23 community leaders representing, collectively, the education, health, economic, media, and political sectors. They reinterviewed six participants from a pilot study (2005) and recruited new participants using purposeful and snowball sampling. The authors employed analytic induction to organize content thematically, using the sectors as a framework, and they used open coding to identify new themes. The authors reanalyzed transcripts to identify program outcomes (e.g., increased research capacity) and construct a list of quantifiable indicators (e.g., number of grants and publications). Participants reported their perspectives on the current and anticipated impact of the program on education, health services, the economy, media, and politics. Perceptions of impact were overwhelmingly positive (e.g., increased physician recruitment), though some were negative (e.g., strains on health resources). The authors identified new outcomes and confirmed outcomes described in 2005. They identified 16 quantifiable indicators of impact, which they judged to be plausible and measurable. Participants perceive that the regional undergraduate medical education program in their community has broad, local impacts. Findings suggest that early observed outcomes have been maintained and may be expanding. Results may be applicable to medical education programs with distributed or regional sites in similar rural, remote, and/or underserved regions. The areas of impact, outcomes, and quantifiable indicators identified will be of interest to future researchers and evaluators.

  6. Monitoring Data-Structure Evolution in Distributed Message-Passing Programs

    Science.gov (United States)

    Sarukkai, Sekhar R.; Beers, Andrew; Woodrow, Thomas S. (Technical Monitor)

    1996-01-01

    Monitoring the evolution of data structures in parallel and distributed programs is critical for debugging their semantics and performance. However, the current state of the art in tracking and presenting data-structure information in parallel and distributed environments is cumbersome and does not scale. In this paper we present a methodology that automatically tracks memory bindings (not the actual contents) of static and dynamic data structures of message-passing C programs using PVM. With the help of a number of examples we show that, in addition to determining the impact of memory allocation overheads on program performance, graphical views can help in debugging the semantics of program execution. Scalable animations of virtual address bindings of source-level data structures are used for debugging the semantics of parallel programs across all processors. In conjunction with light-weight core-files, this technique can be used to complement traditional debuggers on single processors. Detailed information (such as data-structure contents) on specific nodes can be determined using traditional debuggers after the data-structure evolution leading to the semantic error is observed graphically.

  7. Distributed Primal-Dual Subgradient Method for Multiagent Optimization via Consensus Algorithms.

    Science.gov (United States)

    Deming Yuan; Shengyuan Xu; Huanyu Zhao

    2011-12-01

    This paper studies the problem of optimizing the sum of multiple agents' local convex objective functions, subject to global convex inequality constraints and a convex state constraint set over a network. Through characterizing the primal and dual optimal solutions as the saddle points of the Lagrangian function associated with the problem, we propose a distributed algorithm, named the distributed primal-dual subgradient method, to provide approximate saddle points of the Lagrangian function, based on the distributed average consensus algorithms. Under Slater's condition, we obtain bounds on the convergence properties of the proposed method for a constant step size. Simulation examples are provided to demonstrate the effectiveness of the proposed method.
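
    A generic template consistent with the abstract (the paper's exact iteration, including how the dual variables are exchanged, may differ): each agent i averages with its neighbors through mixing weights W(k), then takes projected subgradient steps on its local Lagrangian L_i,

        v_i(k) = \sum_{j=1}^{N} [W(k)]_{ij}\, x_j(k)
        x_i(k+1) = P_X\!\left[ v_i(k) - \alpha\, s_i(k) \right], \quad s_i(k) \in \partial_x L_i\big(v_i(k), \mu_i(k)\big)
        \mu_i(k+1) = P_D\!\left[ \mu_i(k) + \alpha\, g\big(v_i(k)\big) \right]

    with P_X and P_D the projections onto the state constraint set and the dual domain, and g the inequality constraint functions.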

  8. The HACMS program: using formal methods to eliminate exploitable bugs

    Science.gov (United States)

    Launchbury, John; Richards, Raymond

    2017-01-01

    For decades, formal methods have offered the promise of verified software that does not have exploitable bugs. Until recently, however, it has not been possible to verify software of sufficient complexity to be useful. Recently, that situation has changed. SeL4 is an open-source operating system microkernel efficient enough to be used in a wide range of practical applications. Its designers proved it to be fully functionally correct, ensuring the absence of buffer overflows, null pointer exceptions, use-after-free errors, etc., and guaranteeing integrity and confidentiality. The CompCert Verifying C Compiler maps source C programs to provably equivalent assembly language, ensuring the absence of exploitable bugs in the compiler. A number of factors have enabled this revolution, including faster processors, increased automation, more extensive infrastructure, specialized logics and the decision to co-develop code and correctness proofs rather than verify existing artefacts. In this paper, we explore the promise and limitations of current formal-methods techniques. We discuss these issues in the context of DARPA’s HACMS program, which had as its goal the creation of high-assurance software for vehicles, including quadcopters, helicopters and automobiles. This article is part of the themed issue ‘Verified trustworthy software systems’. PMID:28871050

  9. Isotope Production and Distribution Program. Financial statements, September 30, 1994 and 1993

    Energy Technology Data Exchange (ETDEWEB)

    Marwick, P.

    1994-11-30

    The attached report presents the results of the independent certified public accountants' audit of the Isotope Production and Distribution (IP&D) Program's financial statements as of September 30, 1994. The auditors have expressed an unqualified opinion on IP&D's 1994 statements. Their reports on IP&D's internal control structure and on compliance with laws and regulations are also provided. The charter of the Isotope Program covers the production and sale of radioactive and stable isotopes, byproducts, and related isotope services. Prior to October 1, 1989, the Program was subsidized by the Department of Energy through a combination of appropriated funds and isotope sales revenue. The Fiscal Year 1990 Appropriations Act, Public Law 101-101, authorized a separate Isotope Revolving Fund account for the Program, which was to support itself solely from the proceeds of isotope sales. The initial capitalization was about $16 million plus the value of the isotope assets in inventory or on loan for research and the unexpended appropriation available at the close of FY 1989. During late FY 1994, Public Law 103-316 restructured the Program to provide for supplemental appropriations to cover costs which are impractical to incorporate into the selling price of isotopes. Additional information about the Program is provided in the notes to the financial statements.

  10. A Multi-level Fuzzy Evaluation Method for Smart Distribution Network Based on Entropy Weight

    Science.gov (United States)

    Li, Jianfang; Song, Xiaohui; Gao, Fei; Zhang, Yu

    2017-05-01

    Smart distribution networks are considered the future trend of distribution networks. In order to comprehensively evaluate the construction level of smart distribution networks and give guidance to the practice of smart distribution construction, a multi-level fuzzy evaluation method based on entropy weight is proposed. Firstly, focusing on both the conventional characteristics of distribution networks and new characteristics of smart distribution networks such as self-healing and interaction, a multi-level evaluation index system which contains power supply capability, power quality, economy, reliability and interaction is established. Then, a combination weighting method based on the Delphi method and the entropy weight method is put forward, which takes into account not only the importance of the evaluation indices in the experts' subjective view, but also the objective and differentiating information in the index values. Thirdly, a multi-level evaluation method based on fuzzy theory is put forward. Lastly, an example is conducted based on the statistical data of some cities' distribution networks, and the evaluation method is proved effective and rational.
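
    The entropy-weight step lends itself to a compact sketch (a generic textbook formulation with an invented 4-network by 3-index matrix; the paper's index system is larger and hierarchical):

        # Indices whose values are more dispersed across the evaluated
        # networks carry more weight: w_j is proportional to (1 - E_j).
        import numpy as np

        X = np.array([[0.82, 0.70, 0.91],   # rows: networks, cols: indices
                      [0.75, 0.88, 0.85],
                      [0.90, 0.65, 0.80],
                      [0.60, 0.95, 0.88]])

        P = X / X.sum(axis=0)                   # column-normalize each index
        k = 1.0 / np.log(X.shape[0])
        E = -k * (P * np.log(P)).sum(axis=0)    # entropy per index (no zeros here)
        w = (1 - E) / (1 - E).sum()             # entropy weights
        print(w.round(3), w.sum())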

  11. Size distributions of micro-bubbles generated by a pressurized dissolution method

    Science.gov (United States)

    Taya, C.; Maeda, Y.; Hosokawa, S.; Tomiyama, A.; Ito, Y.

    2012-03-01

    The size of micro-bubbles is widely distributed in the range of one to several hundred micrometers and depends on the generation method, flow conditions and elapsed time after bubble generation. Although a size distribution of micro-bubbles should be taken into account to improve accuracy in numerical simulations of flows with micro-bubbles, the variety of size distributions makes it difficult to introduce them into the simulations. On the other hand, several models such as the Rosin-Rammler equation and the Nukiyama-Tanasawa equation have been proposed to represent the size distribution of particles or droplets. The applicability of these models to the size distribution of micro-bubbles has not been examined yet. In this study, we therefore measure the size distribution of micro-bubbles generated by a pressurized dissolution method by using phase Doppler anemometry (PDA), and investigate the applicability of the available models to the size distributions of micro-bubbles. The experimental apparatus consists of a pressurized tank in which air is dissolved in liquid under high pressure, a decompression nozzle in which micro-bubbles are generated due to pressure reduction, a rectangular duct and an upper tank. Experiments are conducted for several liquid volumetric fluxes in the decompression nozzle. Measurements are carried out at the downstream region of the decompression nozzle and in the upper tank. The experimental results indicate that (1) the Nukiyama-Tanasawa equation well represents the size distribution of micro-bubbles generated by the pressurized dissolution method, whereas the Rosin-Rammler equation fails in the representation, (2) the bubble size distribution of micro-bubbles can be evaluated by using the Nukiyama-Tanasawa equation without individual bubble diameters, when the mean bubble diameter and the skewness of the bubble distribution are given, and (3) an evaluation method of visibility based on the bubble size distribution and bubble
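
    For orientation, the two models named in the abstract are commonly quoted in forms like the following (parameterizations vary between authors, so the paper's exact notation may differ): Rosin-Rammler as a cumulative oversize fraction and Nukiyama-Tanasawa as a number density,

        R(D) = \exp\!\left[-\left(D/D_e\right)^{n}\right],
        \qquad \frac{dN}{dD} = a\, D^{p} \exp\!\left(-b\, D^{q}\right),

    where D is the diameter and D_e, n, a, b, p, q are fitting parameters.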

  12. Branching out to residential lands: Missions and strategies of five tree distribution programs in the U.S.

    Science.gov (United States)

    Vi D. Nguyen; Lara A. Roman; Dexter H. Locke; Sarah K. Mincey; Jessica R. Sanders; Erica Smith Fichman; Mike Duran-Mitchell; Sarah Lumban Tobing

    2017-01-01

    Residential lands constitute a major component of existing and possible tree canopy in many cities in the United States. To expand the urban forest on these lands, some municipalities and nonprofit organizations have launched residential yard tree distribution programs, also known as tree giveaway programs. This paper describes the operations of five tree distribution...

  13. 45 CFR 2517.600 - How are funds for community-based service-learning programs distributed?

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false How are funds for community-based service-learning... (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE COMMUNITY-BASED SERVICE-LEARNING PROGRAMS Distribution of Funds § 2517.600 How are funds for community-based service-learning programs distributed? All...

  14. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.

    Science.gov (United States)

    Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan

    2017-12-06

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
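
    The fusion step rests on Dempster's rule of combination; a minimal sketch follows (the frame of discernment and mass values are invented, whereas the paper applies the rule to estimated link and path travel time distributions):

        # Combine two mass functions keyed by frozensets of hypotheses.
        def combine(m1, m2):
            fused, conflict = {}, 0.0
            for a, wa in m1.items():
                for b, wb in m2.items():
                    inter = a & b
                    if inter:
                        fused[inter] = fused.get(inter, 0.0) + wa * wb
                    else:
                        conflict += wa * wb
            return {k: v / (1.0 - conflict) for k, v in fused.items()}

        # evidence about a link's state from point vs. interval detectors
        m_point    = {frozenset({"fast"}): 0.6, frozenset({"fast", "slow"}): 0.4}
        m_interval = {frozenset({"slow"}): 0.3, frozenset({"fast", "slow"}): 0.7}
        print(combine(m_point, m_interval))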

  15. Implementing an overdose education and naloxone distribution program in a health system.

    Science.gov (United States)

    Devries, Jennifer; Rafie, Sally; Polston, Gregory

    To design and implement a health system-wide program increasing provision of take-home naloxone in patients at risk for opioid overdose, with the downstream aim of reducing fatalities. The program includes health care professional education and guidelines, development, and dissemination of patient education materials, electronic health record changes to promote naloxone prescriptions, and availability of naloxone in pharmacies. Academic health system, San Diego, California. University of California, San Diego Health (UCSDH), offers both inpatient and outpatient primary care and specialty services with 563 beds spanning 2 hospitals and 6 pharmacies. UCSDH is part of the University of California health system, and it serves as the county's safety net hospital. In January 2016, a multisite academic health system initiated a system-wide overdose education and naloxone distribution program to prevent opioid overdose and opioid overdose-related deaths. An interdisciplinary, interdepartmental team came together to develop and implement the program. To strengthen institutional support, naloxone prescribing guidelines were developed and approved for the health system. Education on naloxone for physicians, pharmacists, and nurses was provided through departmental trainings, bulletins, and e-mail notifications. Alerts in the electronic health record and preset naloxone orders facilitated co-prescribing of naloxone with opioid prescriptions. Electronic health record reports captured naloxone prescriptions ordered. Summary reports on the electronic health record measured naloxone reminder alerts and response rates. Since the start of the program, the health system has trained 252 physicians, pharmacists, and nurses in overdose education and take-home naloxone. There has been an increase in the number of prescriptions for naloxone from a baseline of 4.5 per month to an average of 46 per month during the 3 months following full implementation of the program including

  16. Determination of the adsorption energy distribution function with the LOGA method.

    NARCIS (Netherlands)

    Koopal, L.K.; Nederlof, M.M.; Riemsdijk, van W.H.

    1990-01-01

    A method to determine the adsorption energy distribution function from adsorption data is presented. The overall isotherm on a heterogeneous surface is the summation of local adsorption contributions. The distribution function of the adsorption energy of affinity can be calculated, provided the

  17. A simple method to ensure homogeneous drug distribution during intrarenal infusion

    DEFF Research Database (Denmark)

    Postnov, Dmitry D; Salomonsson, Max; Sorensen, Charlotte M

    2017-01-01

    Intrarenal drug infusion plays an important role in renal experimental research. Laminar flow of the blood can cause streaming and inhomogeneous intrarenal distribution of infused drugs. We suggest a simple method to achieve a homogeneous intravascular distribution of drugs infused into the renal...

  18. 5 CFR 1600.32 - Methods for transferring eligible rollover distribution to TSP.

    Science.gov (United States)

    2010-01-01

    ... rollover distribution to TSP. 1600.32 Section 1600.32 Administrative Personnel FEDERAL RETIREMENT THRIFT... Retirement Plans § 1600.32 Methods for transferring eligible rollover distribution to TSP. (a) Trustee-to... plan transfer any or all of their account directly to the TSP by executing and submitting a Form TSP-60...

  19. Advanced airflow distribution methods for reduction of personal exposure to indoor pollutants

    DEFF Research Database (Denmark)

    Cao, Guangyu; Kosonen, Risto; Melikov, Arsen

    2016-01-01

    The main objective of this study is to recognize possible airflow distribution methods to protect the occupants from exposure to various indoor pollutants. The fact of the increasing exposure of occupants to various indoor pollutants shows that there is an urgent need to develop advanced airflow distribution methods to reduce indoor exposure to various indoor pollutants. This article presents some of the latest developments of advanced airflow distribution methods to reduce indoor exposure in various types of buildings.

  20. Distribution Route Planning of Clean Coal Based on Nearest Insertion Method

    Science.gov (United States)

    Wang, Yunrui

    2018-01-01

    Clean coal technology has made considerable achievements over several decades, but research on its distribution is scarce, even though distribution efficiency directly affects the comprehensive development of clean coal technology; rational planning of distribution routes is the key to improving that efficiency. The object of this paper was a clean coal distribution system built in a county. Through a survey of customer demand, distribution routes and vehicle deployment in previous years, it was found that vehicles had been dispatched only by experience and that the number of vehicles used each day varied, resulting in wasted transport and increased energy consumption. Thus, a mathematical model taking the shortest path as the objective function was established, and the distribution route was re-planned using an improved nearest-insertion method. The results showed that the transportation distance was reduced by 37 km and the number of vehicles used each day decreased from an average of 5 to a fixed 4, while the actual vehicle loading increased by 16.25% with the distribution volume staying the same. This realized efficient distribution of clean coal and achieved the purpose of saving energy and reducing consumption.
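
    The nearest-insertion construction at the core of the route planning can be sketched compactly (a generic version with invented coordinates; the paper's improvement of the heuristic is not reproduced):

        # Start from a trivial sub-tour, then repeatedly insert the closest
        # unvisited customer at the position that lengthens the tour least.
        import math

        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        def nearest_insertion(pts):
            tour = [0, 1, 0]
            remaining = set(range(2, len(pts)))
            while remaining:
                c = min(remaining,
                        key=lambda r: min(dist(pts[r], pts[t]) for t in tour))
                j = min(range(len(tour) - 1),
                        key=lambda i: dist(pts[tour[i]], pts[c])
                                    + dist(pts[c], pts[tour[i + 1]])
                                    - dist(pts[tour[i]], pts[tour[i + 1]]))
                tour.insert(j + 1, c)
                remaining.remove(c)
            return tour

        depot_and_customers = [(0, 0), (2, 1), (5, 3), (1, 4), (4, 0), (3, 5)]
        print(nearest_insertion(depot_and_customers))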

  1. A Study on Real-Time Pricing Method of Reactive Power in Voltage Profile Control Method of Future Distribution Network

    Science.gov (United States)

    Koide, Akira; Tsuji, Takao; Oyama, Tsutomu; Hashiguchi, Takuhei; Goda, Tadahiro; Shinji, Takao; Tsujita, Shinsuke

    It is of prime importance to solve the voltage maintenance problem caused by the introduction of a large number of distributed generators. The authors have proposed a “voltage profile control method” using reactive power control of distributed generators and, in previous work, developed new systems that can give economic incentives to DG owners who cooperate in voltage profile management. However, it is difficult to apply the proposed economic systems to real-time operation because they are based on optimization and the specific amount of the incentive is announced only after the control action has finished. Therefore, in this paper, we develop a new method that can determine the amount of incentives in real time and encourage customers to cooperate with the voltage profile control method. The proposed method is tested on a one-feeder distribution network and its effectiveness is shown.

  2. Fittino, a program for determining MSSM parameters from collider observables using an iterative method

    Science.gov (United States)

    Bechtle, P.; Desch, K.; Wienemann, P.

    2006-01-01

    Provided that Supersymmetry (SUSY) is realized, the Large Hadron Collider (LHC) and the future International Linear Collider (ILC) may provide a wealth of precise data from SUSY processes. An important task will be to extract the Lagrangian parameters. On this basis the goal is to uncover the underlying symmetry breaking mechanism from the measured observables. In order to determine the SUSY parameters, the program Fittino has been developed. It uses an iterative fitting technique and a Simulated Annealing algorithm to determine the SUSY parameters directly from the observables without any a priori knowledge of the parameters, using all available loop corrections to masses and couplings. Simulated Annealing is implemented as a stable and efficient method for finding the optimal parameter values. The theoretical predictions can be provided by any program with a SUSY Les Houches Accord interface. As the fit result, a set of parameters including the full error matrix and two-dimensional uncertainty contours is obtained. Pull distributions can automatically be created and allow an independent cross-check of the fit results and possible systematic shifts in the parameter determination. A determination of the importance of the individual observables for the measurement of each parameter can be performed after the fit. A flexible user interface is implemented, allowing a wide range of different types of observables and a wide range of parameters to be used.

    Program summary
    Program title: Fittino
    Catalogue identifier: ADWN
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWN
    Licensing provisions: GNU General Public License
    Programming language: C++
    Computer: any computer
    Operating system: Linux and other Unix flavors
    RAM: ca. 22 MB
    No. of lines in distributed program, including test data, etc.: 111 962
    No. of bytes in distributed program, including test data, etc.: 1 006 727
    Distribution format: tar.gz
    Number of processors used: 1
    External routines: The ROOT data analysis

  3. Distributed Solar Incentive Programs: Recent Experience and Best Practices for Design and Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Bird, L.; Reger, A.; Heeter, J.

    2012-12-01

    Based on lessons from recent program experience, this report explores best practices for designing and implementing incentives for small and mid-sized residential and commercial distributed solar energy projects. The findings of this paper are relevant both to new incentive programs and to those undergoing modifications. The report covers factors to consider in setting and modifying incentive levels over time, differentiating incentives to encourage various market segments, and administrative issues such as providing equitable access to incentives and customer protection. It also explores how incentive programs can be designed to respond to changing market conditions while attempting to provide a longer-term and stable environment for the solar industry. The findings are based on interviews with program administrators, regulators, and industry representatives, as well as on data from numerous incentive programs nationally, particularly the largest and longest-running ones. These best practices consider the perspectives of various stakeholders and the broad objectives of reducing solar costs, encouraging long-term market viability, minimizing ratepayer costs, and protecting consumers.

  4. Hot Water Distribution System Program Documentation and Comparison to Experimental Data

    Energy Technology Data Exchange (ETDEWEB)

    Baskin, Evelyn [GE Infrastructure Energy; Craddick, William G [ORNL; Lenarduzzi, Roberto [ORNL; Wendt, Robert L [ORNL; Woodbury, Professor Keith A. [University of Alabama, Tuscaloosa

    2007-09-01

    In 2003, the California Energy Commission's (CEC's) Public Interest Energy Research (PIER) program funded Oak Ridge National Laboratory (ORNL) to create a computer program to analyze hot water distribution systems for single-family residences, and to perform such analyses for a selection of houses. This effort and its results were documented in a report provided to the CEC in March 2004 [1]. The principal objective of the effort was to compare the water and energy wasted by various possible hot water distribution systems for various house designs. It was presumed that water being provided to a user would be considered suitably warm when it reached 105 F. Therefore, what was needed was a tool that could compute the time it takes for water reaching the draw point to reach 105 F, and the energy wasted during this wait. The computer program used to perform the analyses was a combination of a calculational core, produced by Dr. Keith A. Woodbury, Professor of Mechanical Engineering and Director, Alabama Industrial Assessment Center, University of Alabama, and a user interface based on LabVIEW, created by Dr. Roberto Lenarduzzi of ORNL. At that time, the computer program was in a relatively rough and undocumented form, adequate to perform the contracted work but not in a condition where it could be readily used by those not involved in its generation. Subsequently, the CEC provided funding through Lawrence Berkeley National Laboratory (LBNL) to improve the program's documentation and user interface to facilitate use by others, and to compare the program's results to experimental data generated by Dr. Carl Hiller. This report describes the program and provides user guidance. It also summarizes the comparisons made to experimental data, along with options built into the program specifically to allow these comparisons. These options were necessitated by the fact that some of the experimental data required options and features not originally included in the program

  5. Light Water Reactor Sustainability Program: Evaluation of Localized Cable Test Methods for Nuclear Power Plant Cable Aging Management Programs

    Energy Technology Data Exchange (ETDEWEB)

    Glass, Samuel W. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fifield, Leonard S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hartman, Trenton S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-05-30

    This Pacific Northwest National Laboratory (PNNL) milestone report describes progress to date on the investigation of nondestructive examination (NDE) methods, focusing particularly on local measurements that provide key indicators of cable aging and damage. The work includes a review of relevant literature as well as hands-on experimental verification of inspection capabilities. As nuclear power plants (NPPs) consider applying for second, or subsequent, license renewal (SLR) to extend their operating period from 60 years to 80 years, it is important to understand how the materials installed in plant systems and components will age during that time, and to develop aging management programs (AMPs) to assure continued safe operation under normal conditions and design basis events (DBEs). Normal component and system tests typically confirm that the cables can perform their normal operational function. The focus of the cable test program is directed toward the more demanding challenge of assuring cable function under accident or DBE conditions. Most utilities already have a program associated with their first life extension from 40 to 60 years. Regrettably, there is neither a clear guideline nor a single NDE method that can assure cable function and integrity for all cables. However, practical implementation of a broad range of tests allows utilities to develop a program that assures cable function to a high degree. The industry has adopted 50% elongation at break (EAB) relative to the un-aged cable condition as the acceptability standard, and all tests are benchmarked against the cable EAB test. Because EAB is a destructive test, test programs must apply an array of other NDE tests to assure or infer the integrity of the overall cable system. These cable NDE programs vary in rigor and methodology. As the industry gains experience with the efficacy of these programs, it is expected that implementation practice will converge to a more common approach. This report addresses the range of local NDE cable tests that are

  6. Predictive Distribution of the Dirichlet Mixture Model by the Local Variational Inference Method

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Leijon, Arne; Tan, Zheng-Hua

    2014-01-01

    In Bayesian analysis of a statistical model, the predictive distribution is obtained by marginalizing over the parameters with their posterior distributions. Compared to the frequently used point estimate plug-in method, the predictive distribution leads to a more reliable result in calculating the predictive likelihood of the new upcoming data, especially when the amount of training data is small. The Bayesian estimation of a Dirichlet mixture model (DMM) is, in general, not analytically tractable. In our previous work, we have proposed a global variational inference-based method for approximately…

  7. Methods of computing vocabulary size for the two-parameter rank distribution

    Science.gov (United States)

    Edmundson, H. P.; Fostel, G.; Tung, I.; Underwood, W.

    1972-01-01

    A summation method is described for computing the vocabulary size for given parameter values in the 1- and 2-parameter rank distributions. Two methods of determining the asymptotes for the family of 2-parameter rank-distribution curves are also described. Tables are computed and graphs are drawn relating pairs of parameter values to the vocabulary size. The partial product formula for the Riemann zeta function is investigated as an approximation to the partial sum formula for the Riemann zeta function. An error bound is established which indicates that the partial product should not be used to approximate the partial sum in calculating the vocabulary size for the 2-parameter rank distribution.
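
    The contrast between the two zeta approximations mentioned above can be reproduced numerically. The sketch below compares the partial sum with the Euler partial product over primes; the choices of s and N are arbitrary.

    ```python
    # Partial-sum vs. Euler partial-product approximations of zeta(s).
    def partial_sum(s, N):
        return sum(n ** -s for n in range(1, N + 1))

    def primes_up_to(N):
        sieve = [True] * (N + 1)
        sieve[0:2] = [False, False]
        for i in range(2, int(N ** 0.5) + 1):
            if sieve[i]:
                sieve[i*i::i] = [False] * len(sieve[i*i::i])
        return [i for i, ok in enumerate(sieve) if ok]

    def partial_product(s, N):
        # truncated Euler product: prod over primes p <= N of 1/(1 - p^-s)
        prod = 1.0
        for p in primes_up_to(N):
            prod *= 1.0 / (1.0 - p ** -s)
        return prod

    s, N = 1.5, 1000
    # the partial product counts every integer whose prime factors are <= N,
    # so it exceeds the partial sum noticeably -- the error-bound point above
    print(partial_sum(s, N), partial_product(s, N))
    ```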

  8. A Recourse-Based Type-2 Fuzzy Programming Method for Water Pollution Control under Uncertainty

    Directory of Open Access Journals (Sweden)

    Jing Liu

    2017-11-01

    In this study, a recourse-based type-2 fuzzy programming (RTFP) method is developed for supporting water pollution control of basin systems under uncertainty. The RTFP method incorporates type-2 fuzzy programming (TFP) within a two-stage stochastic programming with recourse (TSP) framework to handle uncertainties expressed as type-2 fuzzy sets (i.e., fuzzy sets in which the membership function is itself fuzzy) and probability distributions, as well as to reflect the trade-offs between conflicting economic benefits and penalties due to violated policies. The RTFP method is then applied to a real case of water pollution control in the Heshui River Basin (a rural area of China), where chemical oxygen demand (COD), total nitrogen (TN), total phosphorus (TP), and soil loss are selected as major indicators to identify the water pollution control strategies. Solutions of optimal production plans of economic activities under each probabilistic pollutant discharge allowance level and membership grade are obtained. The results are helpful for the authorities in exploring the trade-off between economic objectives and pollutant discharge decision making in river water pollution control.

  9. Demonstration of a collimated in situ method for determining depth distributions using gamma-ray spectrometry

    CERN Document Server

    Benke, R R

    2002-01-01

    In situ gamma-ray spectrometry uses a portable detector to quantify radionuclides in materials. The main shortcoming of in situ gamma-ray spectrometry has been its inability to determine radionuclide depth distributions. Novel collimator designs were paired with a commercial in situ gamma-ray spectrometry system to overcome this limitation for large-area sources. Positioned with their axes normal to the material surface, the cylindrically symmetric collimators limited the detection of unattenuated gamma-rays to a selected range of polar angles (measured off the detector axis). Although this approach does not alleviate the need for some knowledge of the gamma-ray attenuation characteristics of the materials being measured, the collimation method presented in this paper represents an absolute method that determines the depth distribution as a histogram, while other in situ methods require a priori knowledge of the depth distribution shape. Other advantages over previous in situ methods are that this method d...

  10. Description of PDP-9 Software Used for Advanced Modem Experiments -- Time-of-Arrival Words Distribution Display Program (TIMWD),

    Science.gov (United States)

    Descriptors: (*COMPUTER PROGRAMS, INSTRUCTION MANUALS), (*MODULATORS, TEST METHODS), (*DEMODULATORS, TEST METHODS), CODING, BINARY ARITHMETIC, DATA TRANSMISSION SYSTEMS, MEMORY DEVICES, STATISTICAL ANALYSIS, INTERFACES

  11. On-line core axial power distribution synthesis method from in-core and ex-core neutron detectors

    Energy Technology Data Exchange (ETDEWEB)

    In, Wang Kee; Cho, Byung Oh [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-10-01

    This document describes the methodology in detail, together with the synthesis coefficients of the Fourier series expansion and cubic spline synthesis techniques. A computer program was developed to generate the synthesis coefficients and the core power distribution. For illustration, various axial power shapes for YGN 3 Cycle 1 and SMART were synthesized using simulated in-core and/or ex-core detector signals. The results of this study will be useful for selecting the best synthesis method for the SMART core monitoring and protection systems and for evaluating the accuracy of the synthesized power shape. 4 refs., 13 figs., 5 tabs. (Author)
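
    A minimal sketch of the Fourier-series synthesis idea follows: given a few detector signals at known axial positions, least squares recovers the mode coefficients of P(z) = Σ_k a_k sin(kπz/H). The detector positions, readings and three-mode truncation are hypothetical, not values from the report.

    ```python
    # Hedged sketch of Fourier-series axial power synthesis from detector signals.
    import numpy as np

    H = 1.0                                   # normalized core height
    z_det = np.array([0.2, 0.5, 0.8])         # hypothetical detector axial positions
    signal = np.array([0.95, 1.30, 0.90])     # hypothetical detector readings

    K = 3                                     # number of Fourier modes retained
    A = np.array([[np.sin(k * np.pi * z / H) for k in range(1, K + 1)]
                  for z in z_det])
    coef, *_ = np.linalg.lstsq(A, signal, rcond=None)   # synthesis coefficients

    # reconstruct the axial power shape on a grid from the fitted modes
    z = np.linspace(0, H, 11)
    power = sum(c * np.sin((k + 1) * np.pi * z / H) for k, c in enumerate(coef))
    print(np.round(power, 3))
    ```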

  12. The Investigation Of A 3-Dimensional Human Hip Joint Subjected To Distributed Load By Using Finite Element Method

    Directory of Open Access Journals (Sweden)

    Mehmet Emin Çetin

    2012-06-01

    In this study, a three-dimensional model of the human hip joint was investigated using the finite element method. Finite element models were prepared for three different prosthesis types, namely Charnley, Muller and Hipokrat, and for two different activities, walking and stair climbing. The Ansys Workbench commercial program was used for the finite element analysis, applying a distributed load condition. The von Mises stresses and strains occurring in the cortical and trabecular layers of the bone, the prosthesis and the bone cement used to fix the prosthesis in the bone's intramedullary canal were determined at the end of the finite element analysis and compared to each other.

  13. A Data-Driven Distributionally Robust Bound on the Expected Optimal Value of Uncertain Mixed 0-1 Linear Programming

    OpenAIRE

    Xu, Guanglin; Burer, Samuel

    2017-01-01

    This paper studies the expected optimal value of a mixed 0-1 programming problem with uncertain objective coefficients following a joint distribution. We assume that the true distribution is not known exactly, but a set of independent samples can be observed. Using the Wasserstein metric, we construct an ambiguity set centered at the empirical distribution from the observed samples and containing the true distribution with a high statistical guarantee. The problem of interest is to investigat...

  14. A Skeleton Based Programming Paradigm for Mobile Multi-Agents on Distributed Systems and Its Realization within the MAGDA Mobile Agents Platform

    Directory of Open Access Journals (Sweden)

    R. Aversa

    2008-01-01

    Parallel programming effort can be reduced by using high-level constructs such as algorithmic skeletons. Within the MAGDA toolset, which supports programming and execution of mobile agent based distributed applications, we provide a skeleton-based parallel programming environment based on specialization of Algorithmic Skeleton Java interfaces and classes. Their implementation includes mobile agent features for execution on heterogeneous systems, such as clusters of workstations and PCs, and supports reliability and dynamic workload balancing. The user can thus develop a parallel, mobile agent based application by simply specializing a given set of classes and methods and using a set of added functionalities.

  15. Distributed Generation Islanding Effect on Distribution Networks and End User Loads Using the Load Sharing Islanding Method

    Directory of Open Access Journals (Sweden)

    Maen Z. Kreishan

    2016-11-01

    In this paper a realistic medium voltage (MV) network with four different distributed generation technologies (diesel, gas, hydro and wind), along with their excitation and governor control systems, is modelled and simulated. Moreover, an exponential model was used to represent the loads in the network. The dynamic and steady-state behavior of the four distributed generation technologies was investigated during grid-connected operation and during two transition modes to the islanding situation, planned and unplanned. This study aims to address the feasibility of planned islanding operation and to investigate the effect of unplanned islanding. The load sharing islanding method was used for controlling the distributed generation units during grid-connected and islanding operation. The simulation results were validated through various case studies and show that a properly planned islanding transition could provide support to critical loads in the event of utility outages. However, a reliable protection scheme would be required to mitigate the adverse effects of unplanned islanding, as all unplanned sub-cases returned severely negative results.

  16. Uniform distribution and quasi-Monte Carlo methods discrepancy, integration and applications

    CERN Document Server

    Kritzer, Peter; Pillichshammer, Friedrich; Winterhof, Arne

    2014-01-01

    The survey articles in this book focus on number theoretic point constructions, uniform distribution theory, and quasi-Monte Carlo methods. As deterministic versions of the Monte Carlo method, quasi-Monte Carlo rules enjoy increasing popularity, with many fruitful applications in mathematical practice, as for example in finance, computer graphics, and biology.
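
    A toy comparison below illustrates why quasi-Monte Carlo rules are attractive: replacing pseudorandom samples with a low-discrepancy (base-2 van der Corput) sequence sharply reduces the integration error. The integrand is an arbitrary example.

    ```python
    # Plain Monte Carlo vs. a quasi-Monte Carlo rule on the van der Corput sequence.
    import random

    def van_der_corput(n, base=2):
        # radical-inverse of n in the given base, a classic low-discrepancy point
        q, bk = 0.0, 1.0 / base
        while n > 0:
            n, r = divmod(n, base)
            q += r * bk
            bk /= base
        return q

    f = lambda x: x * x                       # integral over [0,1] is 1/3
    N = 4096
    mc  = sum(f(random.random()) for _ in range(N)) / N
    qmc = sum(f(van_der_corput(i)) for i in range(1, N + 1)) / N
    print(abs(mc - 1/3), abs(qmc - 1/3))      # QMC error is typically far smaller
    ```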

  17. A Time Delay Estimation Method Based on Wavelet Transform and Speech Envelope for Distributed Microphone Arrays

    Directory of Open Access Journals (Sweden)

    YIN, F.

    2013-08-01

    A time delay estimation method based on the wavelet transform and the speech envelope is proposed for distributed microphone arrays. The method first extracts the speech envelopes of the signals processed with a multi-level discrete wavelet transform, and then uses the speech envelopes to estimate a coarse time delay. Finally, it searches for the accurate time delay near the coarse estimate using the cross-correlation function calculated in the time domain. Simulation results illustrate that the proposed method can accurately estimate the time delay between two distributed microphone array signals.
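
    A simplified sketch of the two-stage idea is given below, with a smoothed-magnitude envelope standing in for the wavelet-derived one; the sampling rate, toy signal and true delay are assumptions.

    ```python
    # Coarse envelope alignment followed by a fine cross-correlation search.
    import numpy as np

    fs = 2000
    rng = np.random.default_rng(0)
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 3 * t) * rng.standard_normal(fs)   # toy "speech"
    d_true = 60                                               # delay in samples
    y = np.concatenate([np.zeros(d_true), x])[:x.size]        # delayed copy

    def envelope(sig, win=100):
        # crude smoothed-magnitude envelope standing in for the wavelet-based one
        return np.convolve(np.abs(sig), np.ones(win) / win, mode="same")

    # stage 1: coarse lag from the envelopes
    env_cc = np.correlate(envelope(y), envelope(x), mode="full")
    coarse = int(np.argmax(env_cc)) - (x.size - 1)

    # stage 2: fine time-domain cross-correlation search near the coarse lag
    cc = np.correlate(y, x, mode="full")
    idx = np.arange(coarse - 20, coarse + 21) + (x.size - 1)
    fine = int(idx[np.argmax(cc[idx])]) - (x.size - 1)
    print(coarse, fine)   # both should be close to 60
    ```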

  18. Research on distributed optical fiber sensing data processing method based on LabVIEW

    Science.gov (United States)

    Li, Zhonghu; Yang, Meifang; Wang, Luling; Wang, Jinming; Yan, Junhong; Zuo, Jing

    2018-01-01

    Pipeline leak detection and leak location have received extensive attention in industry. In this paper, a distributed optical fiber sensing system is designed for a heat supply pipeline, and the data processing method for distributed optical fiber sensing based on LabVIEW is studied in detail. The hardware system includes a laser, sensing optical fiber, wavelength division multiplexer, photoelectric detector, data acquisition card, computer, etc. The software system, developed using LabVIEW, adopts a wavelet denoising method to process the temperature information, which improves the SNR. By extracting characteristic values from the fiber temperature information, the system realizes temperature measurement, leak location, and measurement signal storage and query. Compared with the traditional negative pressure wave or acoustic signal methods, the distributed optical fiber temperature measurement system can measure several temperatures in one measurement and locate a leak point accurately. It has broad application prospects.

  19. ALPprolog --- A New Logic Programming Method for Dynamic Domains

    OpenAIRE

    Drescher, Conrad; Thielscher, Michael

    2011-01-01

    Logic programming is a powerful paradigm for programming autonomous agents in dynamic domains, as witnessed by languages such as Golog and Flux. In this work we present ALPprolog, an expressive, yet efficient, logic programming language for the online control of agents that have to reason about incomplete information and sensing actions.

  20. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    Science.gov (United States)

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We
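
    The best-performing approach, fitting to the cumulative distribution of observed values, can be sketched as follows; the sampling times, counts and the lognormal choice are invented for illustration.

    ```python
    # Fit a parametric CDF to the cumulative proportion of propagules recovered
    # by each sampling time (non-linear least squares variant of the method).
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import lognorm

    t_upper = np.array([1, 2, 4, 8, 16, 32], dtype=float)   # interval upper bounds (h)
    counts  = np.array([5, 14, 30, 28, 15, 8], dtype=float) # propagules per interval
    cum = np.cumsum(counts) / counts.sum()                  # empirical CDF values

    def ln_cdf(t, sigma, scale):
        return lognorm.cdf(t, s=sigma, scale=scale)

    params, _ = curve_fit(ln_cdf, t_upper, cum, p0=[1.0, 5.0])
    print("sigma=%.3f, scale=%.3f" % tuple(params))
    ```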

  1. Nanoparticles and metrology: a comparison of methods for the determination of particle size distributions

    Science.gov (United States)

    Coleman, Victoria A.; Jämting, Åsa K.; Catchpoole, Heather J.; Roy, Maitreyee; Herrmann, Jan

    2011-10-01

    Nanoparticles and products incorporating nanoparticles are a growing branch of nanotechnology industry. They have found a broad market, including the cosmetic, health care and energy sectors. Accurate and representative determination of particle size distributions in such products is critical at all stages of the product lifecycle, extending from quality control at point of manufacture to environmental fate at the point of disposal. Determination of particle size distributions is non-trivial, and is complicated by the fact that different techniques measure different quantities, leading to differences in the measured size distributions. In this study we use both mono- and multi-modal dispersions of nanoparticle reference materials to compare and contrast traditional and novel methods for particle size distribution determination. The methods investigated include ensemble techniques such as dynamic light scattering (DLS) and differential centrifugal sedimentation (DCS), as well as single particle techniques such as transmission electron microscopy (TEM) and microchannel resonator (ultra high-resolution mass sensor).

  2. An approximation method for solving the steady-state probability distribution of probabilistic Boolean networks.

    Science.gov (United States)

    Ching, Wai-Ki; Zhang, Shuqin; Ng, Michael K; Akutsu, Tatsuya

    2007-06-15

    Probabilistic Boolean networks (PBNs) have been proposed to model genetic regulatory interactions. The steady-state probability distribution of a PBN gives important information about the captured genetic network. The computation of the steady-state probability distribution usually includes construction of the transition probability matrix and computation of the steady-state probability distribution. The size of the transition probability matrix is 2^n-by-2^n, where n is the number of genes in the genetic network. Therefore, the computational costs of these two steps are very expensive and it is essential to develop a fast approximation method. In this article, we propose an approximation method for computing the steady-state probability distribution of a PBN based on neglecting some Boolean networks (BNs) with very small probabilities during the construction of the transition probability matrix. An error analysis of this approximation method is given, and a theoretical result on the distribution of BNs in a PBN with at most two Boolean functions for one gene is also presented. These give a foundation and support for the approximation method. Numerical experiments based on a genetic network are given to demonstrate the efficiency of the proposed method.
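
    The approximation idea can be illustrated on a toy two-gene PBN: constituent Boolean networks with selection probability below a threshold are neglected when assembling the transition matrix, and the steady state is then found by power iteration. The rules, probabilities and threshold below are assumptions.

    ```python
    # Toy PBN: drop low-probability constituent BNs, then power-iterate.
    import itertools
    import numpy as np

    n = 2
    states = list(itertools.product([0, 1], repeat=n))

    # two constituent BNs, each a tuple of per-gene update rules (assumptions)
    bn_a = (lambda s: s[0] | s[1], lambda s: s[0] & s[1])
    bn_b = (lambda s: 1 - s[1], lambda s: s[0])
    bns = [(0.92, bn_a), (0.08, bn_b)]        # selection probabilities

    def transition_matrix(networks, threshold=0.0):
        A = np.zeros((2 ** n, 2 ** n))
        kept = [(c, f) for c, f in networks if c > threshold]
        total = sum(c for c, _ in kept)       # renormalize remaining BNs
        for c, rules in kept:
            for i, s in enumerate(states):
                nxt = tuple(r(s) for r in rules)
                A[states.index(nxt), i] += c / total
        return A

    A = transition_matrix(bns, threshold=0.1)  # neglects bn_b (prob 0.08 < 0.1)
    pi = np.full(2 ** n, 1 / 2 ** n)
    for _ in range(200):                       # power iteration to the steady state
        pi = A @ pi
    print(dict(zip(states, np.round(pi, 4))))
    ```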

  3. AspectKE*:Security Aspects with Program Analysis for Distributed Systems

    DEFF Research Database (Denmark)

    2010-01-01

    AspectKE* is the first distributed AOP language based on a tuple space system. It is designed to enforce security policies in applications containing untrusted processes. One of its key features is high-level predicates that extract results of static program analysis. These predicates provide users an easy way to define aspects by providing information about the future behavior of processes, which is shown to be useful for implementing security policies such as secrecy and integrity. The users of AspectKE* do not need to write low-level analysis by themselves. In the demonstration, we show basic features of AspectKE* and a case study of building a secure distributed chat application that contains a malicious process.

  4. Learning Based Approach for Optimal Clustering of Distributed Program's Call Flow Graph

    Science.gov (United States)

    Abofathi, Yousef; Zarei, Bager; Parsa, Saeed

    Optimal clustering of a call flow graph to reach maximum concurrency in the execution of distributable components is an NP-complete problem. Learning automata (LAs) are search tools used for solving many NP-complete problems. In this paper, a learning-based algorithm is proposed for optimal clustering of the call flow graph and appropriate distribution of programs at the network level. The algorithm uses the learning feature of LAs to search the state space. It is shown that the speed of reaching a solution increases remarkably when LAs are used in the search process, and that this also prevents the algorithm from being trapped in local minima. Experimental results show the superiority of the proposed algorithm over others.

  5. Opioid overdose education and naloxone distribution: Development of the Veterans Health Administration's national program.

    Science.gov (United States)

    Oliva, Elizabeth M; Christopher, Melissa L D; Wells, Daina; Bounthavong, Mark; Harvey, Michael; Himstreet, Julianne; Emmendorfer, Thomas; Valentino, Michael; Franchi, Mariano; Goodman, Francine; Trafton, Jodie A

    To prevent opioid-related mortality, the Veterans Health Administration (VHA) developed a national Opioid Overdose Education and Naloxone Distribution (OEND) program. VHA's OEND program sought national implementation of OEND across all medical facilities (n = 142). This paper describes VHA's efforts to facilitate nationwide health care system-based OEND implementation, including the critical roles of VHA's national pharmacy services and academic detailing services. VHA is the first large health care system in the United States to implement OEND nationwide. Launching the national program required VHA to translate a primarily community-based public health approach to OEND into a health care system-based approach that distributed naloxone to patients with opioid use disorders as well as to patients prescribed opioid analgesics. Key innovations included developing steps to implement OEND, pharmacy developing standard naloxone rescue kits, adding those kits to the VHA National Formulary, centralizing kit distribution, developing clinical guidance for issuing naloxone kits, and supporting OEND as a focal campaign of academic detailing. Other innovations included the development of patient and provider education resources (e.g., brochures, videos, accredited training) and implementation and evaluation resources (e.g., technical assistance, clinical decision support tools). Clinical decision support tools that leverage VHA national data are available to clinical staff with appropriate permissions. These tools allow staff and leaders to evaluate OEND implementation and provide actionable next steps to help them identify patients who could benefit from OEND. Through fiscal year 2016, VHA dispensed 45,178 naloxone prescriptions written by 5693 prescribers to 39,328 patients who were primarily prescribed opioids or had opioid use disorder. As of February 2, 2016, there were 172 spontaneously reported opioid overdose reversals with the use of VHA naloxone prescriptions. VHA

  6. Analysis of distribution rule of surface stress on cross wedge rolling contact zone by finite element method

    Science.gov (United States)

    Shu, Xuedao; Li, Lianpeng; Hu, Zhenghuan

    2005-12-01

    The contact surface in cross-wedge rolling is a complicated spatial surface, and the distribution of contact surface stress is likewise complicated. Until now, analysis results were based on the slip-line method, and mould design and actual production depended mainly on experience, which seriously hindered the application and development of cross-wedge rolling. Based on the forming characteristics of cross-wedge rolling with a flat wedge shape, secondary development of the ANSYS/DYNA software was carried out and the corresponding command program was compiled. The rolling process of cross-wedge rolling with a flat wedge shape was successfully simulated. Through simulation, the spatial shape of the contact surface was obtained, and the distribution of contact surface stress was analyzed in detail. The results provide an important theoretical foundation for avoiding surface defects on rolled parts, guiding the design of cross-wedge moulds, and confirming force and energy parameters.

  7. Higher moments method for generalized Pareto distribution in flood frequency analysis

    Science.gov (United States)

    Zhou, C. R.; Chen, Y. F.; Huang, Q.; Gu, S. H.

    2017-08-01

    The generalized Pareto distribution (GPD) has proven to be an ideal distribution for fitting peaks-over-threshold series in flood frequency analysis. Several moment-based estimators can be applied to estimate the parameters of the GPD. Higher-order linear moments (LH moments) and higher-order probability weighted moments (HPWM) are linear combinations of probability weighted moments (PWM). In this study, the relationship between them is explored. A series of statistical experiments and a case study are used to compare their performances. The results show that if the same PWM are used in the LH moments and HPWM methods, the parameters estimated by these two methods are unbiased. In particular, when the same PWM are used, the PWM method (or the HPWM method when the order equals 0) gives parameter estimates identical to those of the linear moments (L-moments) method. This phenomenon is also significant when r ≥ 1, where the same-order PWM are used in the HPWM and LH moments methods.
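
    For context, a sketch of a classic PWM fit of the GPD follows, using the Hosking-Wallis estimators built from a_r = E[X(1−F)^r] under the parameterization F(x) = 1 − (1 − kx/α)^(1/k); this is standard background rather than the LH/HPWM variants studied in the paper.

    ```python
    # Probability-weighted-moment (PWM) fit of the GPD (Hosking-Wallis style).
    import numpy as np

    def gpd_pwm(sample):
        x = np.sort(sample)
        n = x.size
        a0 = x.mean()                                        # a_0 = E[X]
        a1 = np.sum((n - np.arange(1, n + 1)) / (n - 1) * x) / n  # a_1 = E[X(1-F)]
        k = a0 / (a0 - 2 * a1) - 2          # shape (k = 0 recovers the exponential)
        alpha = 2 * a0 * a1 / (a0 - 2 * a1) # scale
        return k, alpha

    rng = np.random.default_rng(1)
    data = rng.exponential(scale=2.0, size=5000)  # GPD with k=0, alpha=2
    print(gpd_pwm(data))                          # roughly (0.0, 2.0)
    ```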

  8. Load Modeling and State Estimation Methods for Power Distribution Systems: Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Tom McDermott

    2010-05-07

    The project objective was to provide robust state estimation for distribution systems, comparable to what has been available on transmission systems for decades. This project used an algorithm called Branch Current State Estimation (BCSE), which is more effective than classical methods because it decouples the three phases of a distribution system and uses branch current instead of node voltage as the state variable, which is a better match to current measurements.

  9. A systematic review of community opioid overdose prevention and naloxone distribution programs.

    Science.gov (United States)

    Clark, Angela K; Wilder, Christine M; Winstanley, Erin L

    2014-01-01

    Community-based opioid overdose prevention programs (OOPPs) that include the distribution of naloxone have increased in response to alarmingly high overdose rates in recent years. This systematic review describes the current state of the literature on OOPPs, with particular focus on the effectiveness of these programs. We used systematic search criteria to identify relevant articles, which we abstracted and assigned a quality assessment score. Nineteen articles evaluating OOPPs met the search criteria for this systematic review. Principal findings included participant demographics, the number of naloxone administrations, percentage of survival in overdose victims receiving naloxone, post-naloxone administration outcome measures, OOPP characteristics, changes in knowledge pertaining to overdose responses, and barriers to naloxone administration during overdose responses. The current evidence from nonrandomized studies suggests that bystanders (mostly opioid users) can and will use naloxone to reverse opioid overdoses when properly trained, and that this training can be done successfully through OOPPs.

  10. Mixed-Methods Assessment of Trauma and Acute Care Surgical Quality Improvement Programs in Peru.

    Science.gov (United States)

    LaGrone, Lacey N; Fuhs, Amy K; Egoavil, Eduardo Huaman; Rodriguez Castro, Manuel J A; Valderrama, Roberto; Isquith-Dicker, Leah N; Herrera-Matta, Jaime; Mock, Charles N

    2017-04-01

    Evidence for the positive impact of quality improvement (QI) programs on morbidity, mortality, patient satisfaction, and cost is strong. Data regarding the status of QI programs in low- and middle-income countries, as well as in-depth examination of barriers and facilitators to their implementation, are limited. This cross-sectional, descriptive study employed a mixed-methods design, including distribution of an anonymous quantitative survey and individual interviews with healthcare providers who participate in the care of the injured at ten large hospitals in Lima, Peru. Key areas identified for improvement in morbidity and mortality (M&M) conferences were the standardization of case selection, incorporation of evidence from the medical literature into case presentation and discussion, case documentation, and the development of a clear plan for case follow-up. The key barriers to QI program implementation were a lack of prioritization of QI, lack of sufficient human and administrative resources, lack of political support, and lack of education on QI practices. A national program that makes QI a required part of all health providers' professional training and responsibilities would effectively address a majority of identified barriers to QI programs in Peru. Specifically, the presence of basic QI elements, such as M&M conferences, should be required at hospitals that train pre-graduate physicians. Alternatively, short of this national-level organization, efforts that capitalize on local examples through apprenticeships between institutions or integration of QI into continuing medical education would be expected to build on the facilitators for QI programs that exist in Peru.

  11. PREDICTION OF MEAT PRODUCT QUALITY BY THE MATHEMATICAL PROGRAMMING METHODS

    Directory of Open Access Journals (Sweden)

    A. B. Lisitsyn

    2016-01-01

    Use of prediction technologies is one of the directions of the research work carried out both in Russia and abroad. Meat processing is accompanied by complex physico-chemical, biochemical and mechanical processes. To predict the behavior of meat raw material during technological processing, a complex of physico-technological and structural-mechanical indicators, which objectively reflects its quality, is used. Among these indicators are pH value, water binding and fat holding capacities, water activity, adhesiveness, viscosity, plasticity and so on. The paper demonstrates the influence of animal proteins (beef and pork) on the physico-chemical and functional properties, before and after thermal treatment, of minced meat made from meat raw material with different contents of connective and fat tissues. On the basis of the experimental data, the model (stochastic) dependence parameters linking the quantitative resultant and factor variables were obtained using regression analysis, and the degree of correlation with the experimental data was assessed. The maximum allowable levels of replacement of meat raw material with animal proteins (beef and pork) were established by methods of mathematical programming. Use of information technologies will significantly reduce the costs of the experimental search for, and substantiation of, the optimal level of replacement of meat raw material with animal proteins (beef, pork), and will also allow establishing a relationship between product quality indicators and the quantity and quality of minced meat ingredients.

  12. Predictors of participant engagement and naloxone utilization in a community-based naloxone distribution program

    Science.gov (United States)

    Rowe, Christopher; Santos, Glenn-Milo; Vittinghoff, Eric; Wheeler, Eliza; Davidson, Peter; Coffin, Philip O.

    2015-01-01

    Aims: To describe characteristics of participants and overdose reversals associated with a community-based naloxone distribution program and identify predictors of obtaining naloxone refills and using naloxone for overdose reversal. Design: Bivariate statistical tests were used to compare characteristics of participants who obtained refills and reported overdose reversals versus those who did not. We fitted multiple logistic regression models to identify predictors of refills and reversals; zero-inflated multiple Poisson regression models were used to identify predictors of the number of refills and reversals. Setting: San Francisco, California, USA. Participants: Naloxone program participants registered and reversals reported from 2010 to 2013. Measurements: Baseline characteristics of participants and reported characteristics of reversals. Findings: 2500 participants were registered and 702 reversals were reported from 2010 to 2013. Participants who had witnessed an overdose [AOR = 2.02 (1.53-2.66); AOR = 2.73 (1.73-4.30)] or used heroin [AOR = 1.85 (1.44-2.37); AOR = 2.19 (1.54-3.13)] or methamphetamine [AOR = 1.71 (1.37-2.15); AOR = 1.61 (1.18-2.19)] had higher odds of obtaining a refill and reporting a reversal, respectively. African American [Adjusted Odds Ratio = 0.63 (95% CI = 0.45-0.88)] and Latino [AOR = 0.65 (0.43-1.00)] participants had lower odds of obtaining a naloxone refill, whereas Latino participants who obtained at least one refill reported a higher number of refills [Incidence Rate Ratio = 1.33 (1.05-1.69)]. Conclusions: Community naloxone distribution programs are capable of reaching sizeable populations of high-risk individuals and facilitating large numbers of overdose reversals. Community members most likely to engage with a naloxone program and use naloxone to reverse an overdose are active drug users. PMID:25917125

  13. Providing International Research Experiences in Water Resources Through a Distributed REU Program

    Science.gov (United States)

    Judge, J.; Sahrawat, K.; Mylavarapu, R.

    2012-12-01

    Research experiences for undergraduates offer training in problem solving and critical thinking via hands-on projects. The goal of the distributed Research Experience for Undergraduates (REU) Program in the Agricultural and Biological Engineering Department (ABE) at the University of Florida (UF) is to provide undergraduate students a unique opportunity to conduct research in water resources using interdisciplinary approaches that integrate research and extension, while the cohort is not co-located. The eight-week REU Program utilizes the extensive infrastructure of the UF Institute of Food and Agricultural Sciences (IFAS) through its Research and Education Centers (RECs). To provide international research and extension experience, two students were located at the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) in India. Prior to the beginning of the Program, the students worked closely with their research mentors at the University of Florida and ICRISAT to develop a project plan for understanding the water quality issues in two watersheds. The students were co-located during the orientation week at the University of Florida. During the Program, they achieved an enriching cohort experience through social networking, daily blogs, and weekly video conferences to share their research and other REU experiences. Group meetings and guest lectures were conducted synchronously via video conferencing. The students who were distributed across Florida benefited from the research experiences of the students located in India as their projects progressed; they described their challenges and achievements during the group meetings and in the blogs. This model of providing integrated research and extension opportunities in hydrology, where not all the REU participants are physically co-located, is unique and can be extended to other disciplines.

  14. Comparison of Two Methods Used to Model Shape Parameters of Pareto Distributions

    Science.gov (United States)

    Liu, C.; Charpentier, R.R.; Su, J.

    2011-01-01

    Two methods are compared for estimating the shape parameters of Pareto field-size (or pool-size) distributions for petroleum resource assessment. Both methods assume mature exploration in which most of the larger fields have been discovered. Both methods use the sizes of larger discovered fields to estimate the numbers and sizes of smaller fields: (1) the tail-truncated method uses a plot of field size versus size rank, and (2) the log-geometric method uses data binned in field-size classes and the ratios of adjacent bin counts. Simulation experiments were conducted using discovered oil and gas pool-size distributions from four petroleum systems in Alberta, Canada and using Pareto distributions generated by Monte Carlo simulation. The estimates of the shape parameters of the Pareto distributions, calculated by both the tail-truncated and log-geometric methods, generally stabilize where discovered pool numbers are greater than 100. However, with fewer than 100 discoveries, these estimates can vary greatly with each new discovery. The estimated shape parameters of the tail-truncated method are more stable and larger than those of the log-geometric method where the number of discovered pools is more than 100. Both methods, however, tend to underestimate the shape parameter. Monte Carlo simulation was also used to create sequences of discovered pool sizes by sampling from a Pareto distribution with a discovery process model using a defined exploration efficiency (in order to show how biased the sampling was in favor of larger fields being discovered first). A higher (more biased) exploration efficiency gives better estimates of the Pareto shape parameters. © 2011 International Association for Mathematical Geosciences.
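
    The rank-plot intuition behind the tail-truncated method can be sketched as follows: for Pareto-distributed sizes, log(size) against log(rank) is roughly linear with slope −1/shape. The simple regression below omits the truncation corrections of the published estimator.

    ```python
    # Toy Pareto shape estimate from the size-versus-rank plot.
    import numpy as np

    rng = np.random.default_rng(7)
    shape = 1.2
    sizes = np.sort(rng.pareto(shape, 300) + 1.0)[::-1]   # descending field sizes
    rank = np.arange(1, sizes.size + 1)

    # survival P(X > x) = x^-a implies log(size) ~ -(1/a) * log(rank) + const
    slope, _ = np.polyfit(np.log(rank), np.log(sizes), 1)
    print("estimated shape:", -1.0 / slope)               # close to 1.2
    ```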

  15. The impact of Japan's 2004 postgraduate training program on intra-prefectural distribution of pediatricians in Japan.

    Science.gov (United States)

    Sakai, Rie; Wang, Wei; Yamaguchi, Norihiro; Tamura, Hiroshi; Goto, Rei; Kawachi, Ichiro

    2013-01-01

    Inequity in physician distribution poses a challenge to many health systems. In Japan, a new postgraduate training program for all new medical graduates was introduced in 2004, and researchers have argued that this program has increased inequalities in physician distribution. We examined trends in the geographic distribution of pediatricians as well as of all physicians from 1996 to 2010 to identify the impact of the launch of the new training program. The Gini coefficient was calculated using municipalities as the study unit within each prefecture to assess whether there were significant changes in the intra-prefectural distribution of all physicians and pediatricians before and after the launch of the new training program. The effect of the new program was quantified by estimating the difference in the slope of the time trend of the Gini coefficients before and after 2004 using a linear change-point regression design. We categorized the 47 prefectures of Japan into two groups, 1) predominantly urban and 2) others, by the OECD definition, to conduct analyses stratified by urban-rural status. The trends in physician distribution worsened significantly after 2004 for all physicians following the launch of the new training program, which may reflect the impact of the new postgraduate program. In pediatrics, changes in the Gini trend differed significantly before and after the launch of the new training program in the "others" group, but not in predominantly urban prefectures. Further observation is needed to explore how this difference in trends affects the health status of the child population.

  16. Generalizations of the Alternating Direction Method of Multipliers for Large-Scale and Distributed Optimization

    Science.gov (United States)

    2014-05-01

    ...closely related to the dual ascent method (or dual subgradient method) [53] as well as the quadratic penalty method. Compared to these methods, the... f(tx + (1−t)y) ≤ tf(x) + (1−t)f(y) − ½νt(1−t)‖x−y‖² (2.28). For a convex function f, we let the subdifferential (i.e., the set of all subgradients) of f... A simple distributed algorithm for solving (5.1) is dual decomposition [19], which is essentially a dual ascent method or dual subgradient method.
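
    A minimal ADMM sketch in lasso form is given below, showing the x-, z- and dual-update pattern that the report generalizes to distributed settings; the problem data are random placeholders and the penalty and iteration count are arbitrary.

    ```python
    # ADMM for lasso: minimize (1/2)||Ax - b||^2 + lam*||z||_1  s.t.  x = z.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 10))
    b = rng.standard_normal(40)
    lam, rho = 0.5, 1.0

    z = np.zeros(10)
    u = np.zeros(10)                                  # scaled dual variable
    P = np.linalg.inv(A.T @ A + rho * np.eye(10))     # cached for the x-updates
    for _ in range(200):
        x = P @ (A.T @ b + rho * (z - u))             # x-minimization step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft threshold
        u = u + x - z                                 # dual (multiplier) update
    print(np.round(z, 3))
    ```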

  17. Method for Estimating the Charge Density Distribution on a Dielectric Surface.

    Science.gov (United States)

    Nakashima, Takuya; Suhara, Hiroyuki; Murata, Hidekazu; Shimoyama, Hiroshi

    2017-06-01

    High-quality color output from digital photocopiers and laser printers is in strong demand, motivating attempts to achieve fine dot reproducibility and stability. The resolution of a digital photocopier depends on the charge density distribution on the organic photoconductor surface; however, directly measuring the charge density distribution is impossible. In this study, we propose a new electron optical instrument that can rapidly measure the electrostatic latent image on an organic photoconductor surface, which is a dielectric surface, as well as a novel method to quantitatively estimate the charge density distribution on a dielectric surface by combining experimental data obtained from the apparatus with a computer simulation. In the computer simulation, an improved three-dimensional boundary charge density method (BCM) is used for electric field analysis in the vicinity of the dielectric material with a charge density distribution. This method enables us to estimate the profile and quantity of the charge density distribution on a dielectric surface with a resolution of the order of microns. Furthermore, the surface potential on the dielectric surface can be immediately calculated using the obtained charge density. This method enables the relation between the charge pattern on the organic photoconductor surface and toner particle behavior to be studied, an understanding of which may lead to the development of a new generation of higher-resolution photocopiers.

  18. The QUELCE Method: Using Change Drivers to Estimate Program Costs

    Science.gov (United States)

    2016-08-01

    ...Policies, and Standards: these change drivers are unanticipated conditions or events that could change the practices, policies, standards, laws, and... Contracting: these change drivers are unanticipated conditions or events related to the program's process of setting up a mutually binding legal...

  19. Integrating Software-Architecture-Centric Methods into Extreme Programming (XP)

    National Research Council Canada - National Science Library

    Nord, Robert L; Tomayko, James E; Wojcik, Rob

    2004-01-01

    ...These methods include the Architecture Tradeoff Analysis Method (a registered trademark), the SEI Quality Attribute Workshop, the SEI Attribute-Driven Design method, the SEI Cost Benefit Analysis Method, and SEI Active Reviews for Intermediate Designs...

  20. An iterative method for tri-level quadratic fractional programming problems using fuzzy goal programming approach

    Science.gov (United States)

    Kassa, Semu Mitiku; Tsegay, Teklay Hailay

    2017-08-01

    Tri-level optimization problems are optimization problems with three nested hierarchical structures, where in most cases conflicting objectives are set at each level of hierarchy. Such problems are common in management, engineering designs and in decision making situations in general, and are known to be strongly NP-hard. Existing solution methods lack universality in solving these types of problems. In this paper, we investigate a tri-level programming problem with quadratic fractional objective functions at each of the three levels. A solution algorithm has been proposed by applying fuzzy goal programming approach and by reformulating the fractional constraints to equivalent but non-fractional non-linear constraints. Based on the transformed formulation, an iterative procedure is developed that can yield a satisfactory solution to the tri-level problem. The numerical results on various illustrative examples demonstrated that the proposed algorithm is very much promising and it can also be used to solve larger-sized as well as n-level problems of similar structure.

  1. Networked and Distributed Control Method with Optimal Power Dispatch for Islanded Microgrids

    DEFF Research Database (Denmark)

    Li, Qiang; Peng, Congbo; Chen, Minyou

    2017-01-01

    …of controllable agents. The distributed control laws derived from the first subgraph guarantee the supply-demand balance, while further control laws from the second subgraph reassign the outputs of controllable distributed generators, ensuring that active and reactive power are dispatched optimally. However, … according to our proposition. Finally, the method is evaluated over seven cases via simulation. The results show that the system performs as desired, even if environmental conditions and load demand fluctuate significantly. In summary, the method can rapidly respond to fluctuations, resulting in optimal…

  2. Review and possible development direction of the methods for modeling of soil pollutants spatial distribution

    Science.gov (United States)

    Tarasov, D. A.; Medvedev, A. N.; Sergeev, A. P.; Buevich, A. G.

    2017-07-01

    Forecasting the spatial distribution of environmental pollutants is a significant field of research in view of current concerns regarding the environment all over the world. Due to the danger to health and the environment associated with increasing pollution of the air, soil, water and biosphere, it is very important to have models that are capable of describing the present distribution of contaminants and of forecasting the dynamics of their spreading in the future at different territories. This article reviews the methods applied most often in this field, with an accent on soil pollution. A possible direction for the further development of such methods is suggested.

  3. A uniform measurement expression for cross method comparison of nanoparticle aggregate size distributions

    DEFF Research Database (Denmark)

    Dudkiewicz, Agnieszka; Wagner, Stephan; Lehner, Angela

    2015-01-01

    …plasma mass spectrometry detection (AF4-ICP-MS). Transformed size distributions are then compared between the methods and conclusions drawn on the methods' measurement accuracy and limits of detection and quantification related to the synthetic amorphous silica's size. Two out of the six tested methods (GEMMA… and AF4-ICP-MS) cross-validate the MED distributions between each other, providing a true measurement. The measurement accuracy of the other four techniques is shown to be compromised either by the high limit of detection and quantification (CLS, NTA, Wet-SEM) or by sample preparation that is biased…

  4. An efficient Markov chain Monte Carlo method for distributions with intractable normalising constants

    DEFF Research Database (Denmark)

    Møller, Jesper; Pettitt, A. N.; Reeves, R.

    2006-01-01

    Maximum likelihood parameter estimation and sampling from Bayesian posterior distributions are problematic when the probability density for the parameter of interest involves an intractable normalising constant which is also a function of that parameter. In this paper, an auxiliary variable method is presented which requires only that independent samples can be drawn from the unnormalised density at any particular parameter value. The proposal distribution is constructed so that the normalising constant cancels from the Metropolis–Hastings ratio. The method is illustrated by producing posterior samples…

  5. An improved method for calculating force distributions in moment-stiff timber connections

    DEFF Research Database (Denmark)

    Ormarsson, Sigurdur; Blond, Mette

    2012-01-01

    An improved method for calculating force distributions in moment-stiff metal dowel-type timber connections is presented, a method based on the use of three-dimensional finite element simulations of timber connections subjected to moment action. The study that was carried out aimed at determining how the slip modulus varies with the angle between the direction of the dowel forces and the fibres in question, as well as how the orthotropic stiffness behaviour of the wood material affects the direction and the size of the forces. It was assumed that the force distribution generated by the moment action…

  6. An improved method for calculating force distributions in moment-stiff timber connections

    DEFF Research Database (Denmark)

    Ormarsson, Sigurdur; Blond, Mette

    2012-01-01

    An improved method for calculating force distributions in moment-stiff multi-dowel timber connections is presented, a method based on the use of three-dimensional finite element simulations of timber connections subjected to moment action. The study that was carried out aimed at determining how the slip modulus varies with the angle between the direction of the dowel forces and the fibres in question, as well as how the orthotropic stiffness behaviour of the wood material affects the direction and the size of the forces. It was assumed that the force distribution generated by the moment action…

  7. Methods for fitting a parametric probability distribution to most probable number data.

    Science.gov (United States)

    Williams, Michael S; Ebel, Eric D

    2012-07-02

    Every year hundreds of thousands, if not millions, of samples are collected and analyzed to assess microbial contamination in food and water. The concentration of pathogenic organisms at the end of the production process is low for most commodities, so a highly sensitive screening test is used to determine whether the organism of interest is present in a sample. In some applications, samples that test positive are subjected to quantitation. The most probable number (MPN) technique is a common method to quantify the level of contamination in a sample because it is able to provide estimates at low concentrations. This technique uses a series of dilution count experiments to derive estimates of the concentration of the microorganism of interest. An application for these data is food-safety risk assessment, where the MPN concentration estimates can be fitted to a parametric distribution to summarize the range of potential exposures to the contaminant. Many different methods (e.g., substitution methods, maximum likelihood and regression on order statistics) have been proposed to fit microbial contamination data to a distribution, but the development of these methods rarely considers how the MPN technique influences the choice of distribution function and fitting method. An often overlooked aspect when applying these methods is whether the data represent actual measurements of the average concentration of microorganism per milliliter or the data are real-valued estimates of the average concentration, as is the case with MPN data. In this study, we propose two methods for fitting MPN data to a probability distribution. The first method uses a maximum likelihood estimator that takes average concentration values as the data inputs. The second is a Bayesian latent variable method that uses the counts of the number of positive tubes at each dilution to estimate the parameters of the contamination distribution. The performance of the two fitting methods is compared for two
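
    The first of the two proposed methods, maximum likelihood on the per-sample MPN concentration estimates, can be sketched briefly; the MPN values and the lognormal family below are invented for illustration.

    ```python
    # Fit a lognormal to MPN concentration estimates by maximum likelihood.
    import numpy as np
    from scipy.stats import lognorm

    # hypothetical per-sample MPN estimates (organisms per mL)
    mpn = np.array([0.3, 0.4, 0.9, 1.1, 2.3, 3.6, 7.5, 11.0, 24.0])

    sigma, loc, scale = lognorm.fit(mpn, floc=0)   # ML fit, location pinned at 0
    print("mu=%.3f sigma=%.3f" % (np.log(scale), sigma))
    ```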

  8. A heuristic nonlinear constructive method for electric power distribution system reconfiguration

    Science.gov (United States)

    McDermott, Thomas E.

    1998-12-01

    The electric power distribution system usually operates in a radial configuration, with tie switches between circuits to provide alternate feeds. The losses would be minimized if all switches were closed, but this is not done because it complicates the system's protection against overcurrents. Whenever a component fails, some of the switches must be operated to restore power to as many customers as possible. As loads vary with time, switch operations may reduce losses in the system. Both of these are applications for reconfiguration. The problem is combinatorial, which precludes algorithms that guarantee a global optimum. Most existing reconfiguration algorithms fall into two categories. In the first, branch exchange, the system operates in a feasible radial configuration and the algorithm opens and closes candidate switches in pairs. In the second, loop cutting, the system is completely meshed and the algorithm opens candidate switches to reach a feasible radial configuration. Reconfiguration algorithms based on linearized transshipment, neural networks, heuristics, genetic algorithms, and simulated annealing have also been reported, but not widely used. These existing reconfiguration algorithms work with a simplified model of the power system, and they handle voltage and current constraints approximately, if at all. The algorithm described here is a constructive method, using a full nonlinear power system model that accurately handles constraints. The system starts with all switches open and all failed components isolated. An optional network power flow provides a lower bound on the losses. Then the algorithm closes one switch at a time to minimize the increase in a merit figure, which is the real loss divided by the apparent load served. The merit figure increases with each switch closing. This principle, called discrete ascent optimal programming (DAOP), has been applied to other power system problems, including economic dispatch and phase balancing. For
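
    The DAOP loop lends itself to a schematic sketch: starting from all switches open, repeatedly close the switch that yields the smallest merit figure (real loss divided by apparent load served). The three-switch toy model below replaces the full nonlinear power flow and is entirely hypothetical.

    ```python
    # Schematic, runnable toy of the DAOP switch-closing loop.
    def daop_reconfigure(switches, power_flow):
        """Close one switch at a time, each time picking the closing that
        gives the smallest merit figure (real loss / apparent load served)."""
        closed = set()
        while len(closed) < len(switches):
            def merit(s):
                loss, load = power_flow(closed | {s})
                return loss / load if load else float("inf")
            best = min(set(switches) - closed, key=merit)
            closed.add(best)
            print("close", best, "merit = %.4f" % merit(best))
        return closed

    # toy stand-in for the nonlinear power flow: each closed switch serves some
    # load and adds some loss, with a small interaction term between switches
    LOAD = {"s1": 100.0, "s2": 80.0, "s3": 60.0}
    LOSS = {"s1": 2.0, "s2": 1.2, "s3": 1.5}

    def toy_power_flow(closed):
        load = sum(LOAD[s] for s in closed)
        loss = sum(LOSS[s] for s in closed) * (1 + 0.05 * (len(closed) - 1))
        return loss, load

    daop_reconfigure(["s1", "s2", "s3"], toy_power_flow)
    ```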

  9. A Hybrid Water Distribution Networks Design Optimization Method Based on a Search Space Reduction Approach and a Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Juan Reca

    2017-11-01

    Full Text Available This work presents a new approach to increase the efficiency of the heuristic methods applied to the optimal design of water distribution systems. The approach is based on reducing the search space by bounding the diameters that can be used for every network pipe. To reduce the search space, two opposite extreme flow distribution scenarios are analyzed and velocity restrictions to the pipe flow are then applied. The first scenario produces the most uniform flow distribution in the network. The opposite scenario is represented by the network with the maximum flow accumulation. Both extreme flow distributions are calculated by solving a quadratic programming problem, which is a very robust and efficient procedure. This approach has been coupled to a Genetic Algorithm (GA). The GA has an integer coding scheme and a variable number of alleles depending on the number of diameters comprised within the velocity restrictions. The methodology has been applied to several benchmark networks and its performance has been compared to a classic GA formulation with a non-bounded search space. It considerably reduced the search space and provided a much faster and more accurate convergence than the classic GA formulation. This approach can also be coupled to other metaheuristics.
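
    The bounding step itself is little more than the continuity equation. The sketch below shows how the velocity limits and the two extreme flows could translate into a per-pipe shortlist of catalogue diameters; the flows, limits, and catalogue are hypothetical, not the paper's benchmark data.

```python
# Search-space reduction sketch: keep only catalogue diameters whose
# velocity stays within limits for both extreme flow scenarios.
import math

catalogue_mm = [80, 100, 150, 200, 250, 300, 400]   # available diameters
v_min, v_max = 0.5, 2.0                             # m/s velocity limits

def feasible_diameters(q_uniform, q_accumulated):
    """Bound the pipe diameter using d = sqrt(4*Q/(pi*v)); flows in m^3/s."""
    d_lo = math.sqrt(4 * q_uniform / (math.pi * v_max)) * 1000      # mm
    d_hi = math.sqrt(4 * q_accumulated / (math.pi * v_min)) * 1000  # mm
    return [d for d in catalogue_mm if d_lo <= d <= d_hi]

# Uniform-flow scenario vs. maximum flow accumulation for one pipe:
print(feasible_diameters(q_uniform=0.010, q_accumulated=0.060))
```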

  10. Incorporation of poison center services in a state-wide overdose education and naloxone distribution program.

    Science.gov (United States)

    Doyon, Suzanne; Benton, Carleigh; Anderson, Bruce A; Baier, Michael; Haas, Erin; Hadley, Lisa; Maehr, Jennifer; Rebbert-Franklin, Kathleen; Olsen, Yngvild; Welsh, Christopher

    2016-06-01

    To help curb the opioid overdose epidemic, many states are implementing overdose education and naloxone distribution (OEND) programs. Few evaluations of these programs exist. Maryland's OEND program incorporated the services of the poison center. It asked bystanders to call the poison center within 2 hours of administration of naloxone. Bystanders included law enforcement (LE). Description of the initial experience with this unique OEND program component. Retrospective case series of all cases of bystander-administered naloxone reported to the Maryland Poison Center over 16 months. Cases were followed to final outcome, for example, hospital discharge or death. Indications for naloxone included suspected opioid exposure and unresponsiveness, respiratory depression, or cyanosis. Naloxone response was defined as person's ability to breathe, talk, or walk within minutes of administration. Seventy-eight cases of bystander-administered naloxone were reported. Positive response to naloxone was observed in 75.6% of overall cases. Response rates were 86.1% and 70.9% for suspected exposures to heroin and prescription opioids, respectively. Two individuals failed to respond to naloxone and died. Naloxone response rates were higher and admission to the intensive care unit rates were lower in heroin overdoses than prescription opioid overdoses. This retrospective case series of 78 cases of bystander-administered naloxone reports a 75.6% overall rate of reversal. The findings of this study may be more generalizable. Incorporation of poison center services facilitated the capture of more timely data not usually available to OEND programs. (Am J Addict 2016;25:301-306). © 2016 American Academy of Addiction Psychiatry.

  11. Application of DVC-FISH method in tracking Escherichia coli in drinking water distribution networks

    Directory of Open Access Journals (Sweden)

    L. Mezule

    2013-04-01

    Full Text Available Sporadic detection of live (viable) Escherichia coli in drinking water and biofilm with molecular methods, but not with standard plate counts, has raised concerns about the reliability of this indicator in the surveillance of drinking water safety. The aim of this study was to determine the spatial distribution of different viability forms of E. coli in a drinking water distribution system which complies with the European Drinking Water Directive (98/83/EC). For two years, coupons (two weeks old) and pre-concentrated (100 times, with ultrafilters) water samples were collected after the treatment plants and from four sites in the distribution network at several distances. The samples were analyzed for total, viable (able to divide, as DVC-FISH positive) and cultivable E. coli. The results showed that low numbers of E. coli enter the distribution system from the treatment plants and tend to accumulate in the biofilm of the water distribution system. Almost all of the samples contained metabolically active E. coli in the range of 1 to 50 cells per litre or cm2, which represented approximately 53% of all E. coli detected. The amount of viable E. coli increased significantly along the network irrespective of the season. The study has shown that the DVC-FISH method, in combination with water pre-concentration and biofilm sampling, allows a better understanding of the behaviour of E. coli in water distribution networks and thus provides new evidence for water safety control.

  12. Design method of freeform light distribution lens for LED automotive headlamp based on DMD

    Science.gov (United States)

    Ma, Jianshe; Huang, Jianwei; Su, Ping; Cui, Yao

    2018-01-01

    We propose a new method to design a freeform light distribution lens for a light-emitting diode (LED) automotive headlamp based on a digital micromirror device (DMD). With the parallel optical path architecture, the exit pupil of the illuminating system is set at infinity, so the principal incident rays on the DMD micromirrors are parallel. The DMD is a high-speed digital optical reflection array; the function of the distribution lens is to distribute the emergent parallel rays from the DMD and produce a lighting pattern that fully complies with the national regulation GB 25991-2010. We use the DLP4500 to design the light distribution lens, mesh the target plane regulated by the national regulation GB 25991-2010, and correlate the mesh grids with the active mirror array of the DLP4500. With the mapping relations and the refraction law, we can build the mathematical model and obtain the parameters of the freeform light distribution lens. We then import its parameters into the three-dimensional (3D) software CATIA to construct its 3D model. Ray tracing results using TracePro demonstrate that the illumination values on the target plane are easily adjustable, and fully comply with the requirements of the national regulation GB 25991-2010, by adjusting the exit brightness values of the DMD. The theoretical optical efficiency of a light distribution lens designed using this method can reach 92% without any auxiliary lens.

  13. A method to describe inelastic gamma field distribution in neutron gamma density logging.

    Science.gov (United States)

    Zhang, Feng; Zhang, Quanying; Liu, Juntao; Wang, Xinguang; Wu, He; Jia, Wenbao; Ti, Yongzhou; Qiu, Fei; Zhang, Xiaoyang

    2017-11-01

    Pulsed neutron gamma density logging (NGD) is of great significance for radioprotection and density measurement in logging while drilling (LWD); however, current methods have difficulty with quantitative calculation and single-factor analysis of the inelastic gamma field distribution. In order to clarify the NGD mechanism, a new method is developed to describe the inelastic gamma field distribution. Based on fast-neutron scattering and gamma attenuation, the inelastic gamma field distribution is characterized by the inelastic scattering cross section, the fast-neutron scattering free path, the formation density and other parameters, and the contribution of the formation parameters to the field distribution is quantitatively analyzed. The results show that the contribution of density attenuation is opposite to that of the inelastic scattering cross section and the fast-neutron scattering free path, and that, as the detector spacing increases, density attenuation gradually plays the dominant role in the gamma field distribution, which means a large detector spacing is more favorable for density measurement. In addition, the relationship between density sensitivity and detector spacing was studied using this gamma field distribution, and the spacing of the near and far gamma-ray detectors was determined accordingly. The research provides theoretical guidance for tool parameter design and density determination in the pulsed neutron gamma density logging technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. A Generalized Fluid System Simulation Program to Model Flow Distribution in Fluid Networks

    Science.gov (United States)

    Majumdar, Alok; Bailey, John W.; Schallhorn, Paul; Steadman, Todd

    1998-01-01

    This paper describes a general purpose computer program for analyzing steady state and transient flow in a complex network. The program is capable of modeling phase changes, compressibility, mixture thermodynamics and external body forces such as gravity and centrifugal force. The program's preprocessor allows the user to interactively develop a fluid network simulation consisting of nodes and branches. Mass, energy and species conservation equations are solved at the nodes; the momentum conservation equations are solved in the branches. The program contains subroutines for computing "real fluid" thermodynamic and thermophysical properties for 33 fluids. The fluids are: helium, methane, neon, nitrogen, carbon monoxide, oxygen, argon, carbon dioxide, fluorine, hydrogen, parahydrogen, water, kerosene (RP-1), isobutane, butane, deuterium, ethane, ethylene, hydrogen sulfide, krypton, propane, xenon, R-11, R-12, R-22, R-32, R-123, R-124, R-125, R-134A, R-152A, nitrogen trifluoride and ammonia. The program also provides the options of using any incompressible fluid with constant density and viscosity or an ideal gas. Seventeen different resistance/source options are provided for modeling momentum sources or sinks in the branches. These options include: pipe flow, flow through a restriction, non-circular duct, pipe flow with entrance and/or exit losses, thin sharp orifice, thick orifice, square edge reduction, square edge expansion, rotating annular duct, rotating radial duct, labyrinth seal, parallel plates, common fittings and valves, pump characteristics, pump power, valve with a given loss coefficient, and a Joule-Thomson device. The system of equations describing the fluid network is solved by a hybrid numerical method that is a combination of the Newton-Raphson and successive substitution methods. This paper also illustrates the application and verification of the code by comparison with the Hardy Cross method for steady state flow and analytical solutions for unsteady flow.
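
    To make the solution strategy concrete, here is a deliberately tiny sketch of the same idea, assuming a single junction fed by two reservoirs through square-root pipe characteristics, with SciPy's fsolve standing in for the program's hybrid Newton-Raphson/successive-substitution scheme. Every number is invented.

```python
# Mass conservation at one junction J fed by reservoirs A and B, with a
# fixed demand; solve for the junction head by a Newton-type iteration.
import numpy as np
from scipy.optimize import fsolve

H_A, H_B, demand = 50.0, 45.0, 0.08   # heads (m) and demand (m^3/s)
C_A, C_B = 0.02, 0.015                # pipe conductances, Q = C*sqrt(|dH|)

def mass_balance(h):
    hj = h[0]
    q_a = np.sign(H_A - hj) * C_A * np.sqrt(abs(H_A - hj))
    q_b = np.sign(H_B - hj) * C_B * np.sqrt(abs(H_B - hj))
    return [q_a + q_b - demand]       # net inflow at the junction

hj = fsolve(mass_balance, x0=[40.0])[0]
print(f"junction head = {hj:.2f} m")
```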

  15. A multiple distributed representation method based on neural network for biomedical event extraction.

    Science.gov (United States)

    Wang, Anran; Wang, Jian; Lin, Hongfei; Zhang, Jianhai; Yang, Zhihao; Xu, Kan

    2017-12-20

    Biomedical event extraction is one of the most active frontiers in biomedical research. The two main subtasks of biomedical event extraction are trigger identification and argument detection, both of which can be considered classification problems. However, traditional state-of-the-art methods are based on support vector machines (SVM) with massive manually designed one-hot features, which require enormous work but fail to capture semantic relations among words. In this paper, we propose a multiple distributed representation method for biomedical event extraction. The method combines contextual features, built from dependency-based word embeddings, with task-based features represented in a distributed way, and uses them as input to train deep learning models. Finally, we used a softmax classifier to label the example candidates. The experimental results on the Multi-Level Event Extraction (MLEE) corpus show higher F-scores of 77.97% in trigger identification and 58.31% overall compared to the state-of-the-art SVM method. Our distributed representation method for biomedical event extraction avoids the semantic gap and the curse of dimensionality that affect traditional one-hot representation methods. The promising results demonstrate that our proposed method is effective for biomedical event extraction.

  16. Thermodynamic method for generating random stress distributions on an earthquake fault

    Science.gov (United States)

    Barall, Michael; Harris, Ruth A.

    2012-01-01

    This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
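
    The core trick, shaping white noise in the Fourier domain so that the field has a prescribed power spectral density, is compact enough to sketch. The exponent, grid size, and normalization below are illustrative assumptions; the report derives its own spectral formula from earthquake scaling relations and adds the nucleation machinery on top.

```python
# Spectral synthesis sketch: a random 2-D field whose PSD ~ k^(-decay),
# produced by filtering white noise in the Fourier domain.
import numpy as np

rng = np.random.default_rng(0)
n, decay = 256, 2.0                   # grid size, spectral decay exponent

kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k = np.hypot(kx, ky)
k[0, 0] = 1.0                         # avoid division by zero at the mean

amplitude = k ** (-decay / 2)         # PSD ~ k^-decay => |F| ~ k^(-decay/2)
spectrum = amplitude * np.fft.fft2(rng.standard_normal((n, n)))
stress = np.real(np.fft.ifft2(spectrum))
stress -= stress.mean()               # zero-mean stress perturbation
print(f"std of synthetic stress field: {stress.std():.4f}")
```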

  17. Regression modeling of particle size distributions in urban storm water: advancements through improved sample collection methods

    Science.gov (United States)

    Fienen, Michael N.; Selbig, William R.

    2012-01-01

    A new sample collection system was developed to improve the representation of sediment entrained in urban storm water by integrating water quality samples from the entire water column. The depth-integrated sampler arm (DISA) was able to mitigate sediment stratification bias in storm water, thereby improving the characterization of suspended-sediment concentration and particle size distribution at three independent study locations. Use of the DISA decreased variability, which improved statistical regression to predict particle size distribution using surrogate environmental parameters, such as precipitation depth and intensity. The performance of this statistical modeling technique was compared to results using traditional fixed-point sampling methods and was found to perform better. When environmental parameters can be used to predict particle size distributions, environmental managers have more options when characterizing concentrations, loads, and particle size distributions in urban runoff.

  18. ON ESTIMATION AND HYPOTHESIS TESTING OF THE GRAIN SIZE DISTRIBUTION BY THE SALTYKOV METHOD

    Directory of Open Access Journals (Sweden)

    Yuri Gulbin

    2011-05-01

    Full Text Available The paper considers the problem of the validity of unfolding the grain size distribution with the back-substitution method. Due to the ill-conditioned nature of unfolding matrices, it is necessary to evaluate the accuracy and precision of parameter estimation and to verify the possibility of testing an expected grain size distribution on the basis of intersection size histogram data. To examine these questions, computer modeling was used to compare size distributions obtained stereologically with those of three-dimensional model aggregates of grains with a specified shape and random sizes. Results of the simulations are reported and ways of improving the conventional stereological techniques are suggested. It is shown that new improvements in the estimating and testing procedures enable grain size distributions to be unfolded more efficiently.
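
    For readers unfamiliar with the back-substitution method the paper scrutinizes, the sketch below unfolds a synthetic intersection-size histogram for spherical grains: it builds the standard lower-triangular Saltykov kernel and solves from the largest class downward. The class count, class width, and N_A values are made up, and with real data the ill-conditioning discussed in the paper can drive some unfolded classes negative.

```python
# Saltykov-style unfolding for spheres (synthetic example).
import numpy as np

m, delta = 5, 1.0                     # number of classes, class width
d = delta * np.arange(m + 1)          # class boundaries 0 .. m*delta
D = d[1:]                             # sphere diameters (class upper bounds)

# P[i, j]: probability that a random planar section of a sphere of
# diameter D[j] has an intersection diameter in (d[i], d[i+1]].
P = np.zeros((m, m))
for j in range(m):
    for i in range(j + 1):
        P[i, j] = (np.sqrt(D[j]**2 - d[i]**2)
                   - np.sqrt(D[j]**2 - d[i + 1]**2)) / D[j]

N_A = np.array([8.0, 11.0, 9.0, 5.0, 2.0])   # sections per unit area

# A plane hits a sphere of diameter D at a rate proportional to D*N_V,
# so N_A = P @ (D * N_V); back-substitute from the largest class down.
N_V = np.zeros(m)
for j in reversed(range(m)):
    tail = sum(P[j, k] * D[k] * N_V[k] for k in range(j + 1, m))
    N_V[j] = (N_A[j] - tail) / (P[j, j] * D[j])
print(N_V)
```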

  19. A distributed computing approach to improve the performance of the Parallel Ocean Program (v2.1)

    Directory of Open Access Journals (Sweden)

    B. van Werkhoven

    2014-02-01

    Full Text Available The Parallel Ocean Program (POP) is used in many strongly eddying ocean circulation simulations. Ideally it would be desirable to be able to perform thousand-year-long simulations, but the current performance of POP prohibits simulations of this kind. In this work, using a new distributed computing approach, two methods to improve the performance of POP are presented. The first is a block-partitioning scheme for optimizing the load balancing of POP such that it can be run efficiently in a multi-platform setting. The second is the implementation of part of the POP model code on graphics processing units (GPUs). We show that the combination of both innovations also leads to a substantial performance increase when running POP simultaneously over multiple computational platforms.

  20. A modified weighted function method for parameter estimation of Pearson type three distribution

    Science.gov (United States)

    Liang, Zhongmin; Hu, Yiming; Li, Binquan; Yu, Zhongbo

    2014-04-01

    In this paper, an unconventional method called the Modified Weighted Function (MWF) is presented for the conventional moment estimation of a probability distribution function. The aim of MWF is to shift the estimation of the coefficient of variation (CV) and the coefficient of skewness (CS) from higher-moment computations to first-order moment calculations. The estimators for the CV and CS of the Pearson type three distribution function (PE3) were derived by weighting the moments of the distribution with two weight functions, constructed by combining two negative exponential-type functions. The selection of these weight functions was based on two considerations: (1) to relate the weight functions to sample size, in order to reflect the relationship between the quantity of sample information and the role of the weight function, and (2) to allocate more weight to data close to medium-tail positions in a sample series ranked in ascending order. A Monte Carlo experiment was conducted to simulate a large number of samples upon which the statistical properties of MWF were investigated. For the PE3 parent distribution, results of MWF were compared to those of the original Weighted Function (WF) and Linear Moments (L-M). The results indicate that MWF was superior to WF and slightly better than L-M in terms of statistical unbiasedness and effectiveness. In addition, the robustness of MWF, WF, and L-M was compared by designing a Monte Carlo experiment in which samples were drawn from the Log-Pearson type three distribution (LPE3), the three-parameter Log-Normal distribution (LN3), and the Generalized Extreme Value distribution (GEV), respectively, but all were treated as samples from the PE3 distribution. The results show that, in terms of statistical unbiasedness, no method possesses an absolutely overwhelming advantage among MWF, WF, and L-M, while in terms of statistical effectiveness, MWF is superior to WF and L-M.
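
    For orientation, the conventional moment relations that any such estimator ultimately inverts are, for PE3 with location tau, scale beta, and shape alpha: mean = tau + alpha*beta, standard deviation = beta*sqrt(alpha), and CS = 2/sqrt(alpha). The sketch below applies the plain (unweighted) sample-moment version of this inversion to synthetic data; it is not the MWF weighting scheme, and all parameter values are invented.

```python
# Plain method-of-moments fit of a Pearson type III (shifted gamma).
import numpy as np
from scipy import stats

alpha, beta, tau = 4.0, 2.0, 10.0     # true shape, scale, location
sample = stats.gamma.rvs(alpha, loc=tau, scale=beta, size=500,
                         random_state=np.random.default_rng(42))

m, s = sample.mean(), sample.std(ddof=1)
cs = stats.skew(sample, bias=False)

alpha_hat = (2.0 / cs) ** 2           # invert CS = 2/sqrt(alpha)
beta_hat = s / np.sqrt(alpha_hat)     # invert sd = beta*sqrt(alpha)
tau_hat = m - alpha_hat * beta_hat    # invert mean = tau + alpha*beta
print(f"alpha = {alpha_hat:.2f}, beta = {beta_hat:.2f}, tau = {tau_hat:.2f}")
```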

  1. Tide forecasting method based on dynamic weight distribution for operational evaluation

    Directory of Open Access Journals (Sweden)

    Shao-wei Qiu

    2009-03-01

    Full Text Available Through analysis of the operational evaluation factors for tide forecasting, the relationship between the evaluation factors and the weights of forecasters was examined. A tide forecasting method based on dynamic weight distribution for operational evaluation was developed, and multiple-forecaster synchronous forecasting was realized while avoiding the instability caused by relying on only one forecaster. Weights were distributed to the forecasters according to each one's forecast precision. An evaluation criterion for the professional level of the forecasters was also established. The eligibility rates of the forecast results demonstrate the skill of the forecasters and the stability of their forecasts. With the developed tide forecasting method, the precision and reasonableness of tide forecasting are improved. The application of the method to tide forecasting at the Huangpu Park tidal station demonstrates its validity.

  2. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types, an attempt is made to cover all relevant aspects of the application software's behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantic errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software.

  3. Prestressing force monitoring method for a box girder through distributed long-gauge FBG sensors

    Science.gov (United States)

    Chen, Shi-Zhi; Wu, Gang; Xing, Tuo; Feng, De-Cheng

    2018-01-01

    Monitoring prestressing forces is essential for prestressed concrete box girder bridges. However, current monitoring methods for prestressing force are not applicable to box girders, either because the sensor setup is constrained or because the shear lag effect is not properly considered. Building on a previous analysis model of the shear lag effect in box girders, this paper proposes an indirect monitoring method for on-site determination of the prestressing force in a concrete box girder utilizing distributed long-gauge fiber Bragg grating sensors. The performance of this method was initially verified using numerical simulation for three different distribution forms of prestressing tendons. An experiment involving two concrete box girders was then conducted as a preliminary study of the method's feasibility under different prestressing levels. The results of both the numerical simulation and the lab experiment validated the method's practicability for box girders.

  4. IFNA approved Chinese Anaesthesia Nurse Education Program: A Delphi method.

    Science.gov (United States)

    Hu, Jiale; Fallacaro, Michael D; Jiang, Lili; Wu, Junyan; Jiang, Hong; Shi, Zhen; Ruan, Hong

    2017-09-01

    Numerous nurses in China work in operating rooms and recovery rooms or participate in the performance of anaesthesia. However, the scope of practice and the education of Chinese Anaesthesia Nurses are not standardized, varying from one geographic location to another, and most nurses are not trained sufficiently to provide anaesthesia care. This study aimed to develop the first Anaesthesia Nurse Education Program in Mainland China based on the Educational Standards of the International Federation of Nurse Anaesthetists (IFNA). The Delphi technique was applied to develop the scope of practice and competencies for Chinese Anaesthesia Nurses and the education program. In 2014, the Anaesthesia Nurse Education Program established by the hospital applied for recognition by the International Federation of Nurse Anaesthetists; the program's curriculum was evaluated against the IFNA Standards and recognition was awarded in 2015. A four-category, 50-item practice scope and a three-domain, 45-item competency list were identified for Chinese Anaesthesia Nurses. The education program, established on the basis of the IFNA educational standards and the Chinese context, included nine curriculum modules. In March 2015, 13 candidates completed and passed the 21-month education program. The Anaesthesia Nurse Education Program became the first program approved by the International Federation of Nurse Anaesthetists in China. Policy makers and hospital leaders can be confident that anaesthesia nurses graduating from this Chinese program will be prepared to deliver a high level of patient care, as reflected in IFNA's recognition of the program's adoption of international nurse anaesthesia education standards. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Cellular Neural Network-Based Methods for Distributed Network Intrusion Detection

    Directory of Open Access Journals (Sweden)

    Kang Xie

    2015-01-01

    Full Text Available Addressing the problems of current distributed-architecture intrusion detection systems (DIDS), a new online distributed intrusion detection model based on cellular neural networks (CNN) was proposed, in which a discrete-time CNN (DTCNN) is used as the weak classifier in each local node and a state-controlled CNN (SCCNN) is used as the global detection method. We further propose a new method for designing the template parameters of the SCCNN by solving a linear matrix inequality. Experimental results based on the KDD CUP 99 dataset show the model's feasibility and effectiveness. Emerging evidence indicates that this new approach lends itself to parallelism and analog very-large-scale integration (VLSI) implementation, which allows the distributed intrusion detection to be performed better.

  6. Voltage Based Detection Method for High Impedance Fault in a Distribution System

    Science.gov (United States)

    Thomas, Mini Shaji; Bhaskar, Namrata; Prakash, Anupama

    2016-09-01

    High-impedance faults (HIFs) on distribution feeders cannot be detected by conventional protection schemes, as HIFs are characterized by their low fault current level and waveform distortion due to the nonlinearity of the ground return path. This paper proposes a method to identify HIFs in a distribution system and isolate the faulty section, in order to reduce downtime. The method is based on voltage measurements along the distribution feeder and utilizes the sequence components of the voltages. Three models of high-impedance faults have been considered, and both source-side and load-side breaking of the conductor have been studied to capture a wide range of scenarios. The effect of the neutral grounding of the source-side transformer is also accounted for in this study. The results show that the algorithm detects HIFs accurately and rapidly, so the faulty section can be isolated and service restored to the rest of the consumers.
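
    The sequence components at the heart of such schemes come from the Fortescue transform. Below is a minimal sketch with hypothetical per-unit phasors standing in for measured feeder voltages; a detection rule would then track quantities such as the zero- and negative-sequence magnitudes along the feeder.

```python
# Fortescue decomposition of three phase voltages into sequence components.
import numpy as np

a = np.exp(2j * np.pi / 3)            # 120-degree rotation operator
S = np.array([[1, 1, 1],
              [1, a, a**2],
              [1, a**2, a]]) / 3      # analysis (decomposition) matrix

# Slightly unbalanced phase voltages (per unit), e.g. a downed conductor.
v_abc = np.array([1.00, 0.93 * a**2, 0.97 * a])
v0, v1, v2 = S @ v_abc                # zero, positive, negative sequence
print(f"|V0| = {abs(v0):.3f}  |V1| = {abs(v1):.3f}  |V2| = {abs(v2):.3f}")
```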

  7. New method to estimate the sample size for calculation of a proportion assuming binomial distribution.

    Science.gov (United States)

    Vallejo, Adriana; Muniesa, Ana; Ferreira, Chelo; de Blas, Ignacio

    2013-10-01

    Nowadays, the formula used to calculate the sample size for estimating a proportion (such as a prevalence) is based on the Normal distribution; however, it should be based on the Binomial distribution, whose confidence interval can be calculated using the Wilson score method. Comparing the two formulae (Normal and Binomial distributions), the variation in the width of the confidence intervals is relevant both in the tails and at the center of the curves. In order to calculate the required sample size, we simulated an iterative sampling procedure, which shows an underestimation of the sample size for prevalence values close to 0 or 1, and an overestimation for values close to 0.5. Based on these results, we propose an algorithm built on the Wilson score method that provides sample size values similar to those obtained empirically by simulation. Copyright © 2013 Elsevier Ltd. All rights reserved.
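
    A direct way to implement that idea is to search for the smallest n whose Wilson score interval is narrow enough. The sketch below does exactly that; the expected prevalence, target precision, and confidence level are example inputs, and the plain search shown is an illustration rather than the paper's exact algorithm.

```python
# Smallest n whose Wilson score interval half-width meets the target.
from math import sqrt
from scipy.stats import norm

def wilson_halfwidth(p, n, conf=0.95):
    z = norm.ppf(1 - (1 - conf) / 2)
    return (z / (1 + z**2 / n)) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))

def sample_size(p, precision, conf=0.95):
    n = 1
    while wilson_halfwidth(p, n, conf) > precision:
        n += 1
    return n

print(sample_size(p=0.05, precision=0.02))   # prevalence near the tail
print(sample_size(p=0.50, precision=0.02))   # prevalence at the center
```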

  8. High definition in minimally invasive surgery: a review of methods for recording, editing, and distributing video.

    Science.gov (United States)

    Kelly, Christopher R; Hogle, Nancy J; Landman, Jaime; Fowler, Dennis L

    2008-09-01

    The use of high-definition cameras and monitors during minimally invasive procedures can provide the surgeon and operating team with more than twice the resolution of standard definition systems. Although this dramatic improvement in visualization offers numerous advantages, the adoption of high definition cameras in the operating room can be challenging because new recording equipment must be purchased, and several new technologies are required to edit and distribute video. The purpose of this review article is to provide an overview of the popular methods for recording, editing, and distributing high-definition video. This article discusses the essential technical concepts of high-definition video, reviews the different kinds of equipment and methods most often used for recording, and describes several options for video distribution.

  9. Methods and Patterns for User-Friendly Quantum Programming.

    Science.gov (United States)

    Singh, Alexandros; Giannakis, Konstantinos; Kastampolidou, Kalliopi; Papalitsas, Christos

    2017-01-01

    The power and efficiency of particular quantum algorithms over classical ones have been proved. The rise of quantum computing and algorithms has highlighted the need for appropriate programming means and tools. Here, we present a brief overview of some techniques and a proposed methodology for writing quantum programs and designing languages. Our approach offers "user-friendly" features to ease the development of such programs. We also give indicative snippets in an untyped fragment of the Qumin language, describing well-known quantum algorithms.

  10. Numerical calculation of elastohydrodynamic lubrication methods and programs

    CERN Document Server

    Huang, Ping

    2015-01-01

    The book offers scientists and engineers a clear, interdisciplinary introduction and orientation to all major elastohydrodynamic lubrication (EHL) problems and their solutions and, most importantly, provides numerical programs for specific applications in engineering: a one-stop reference with the equations and solutions to the most important EHL problems, together with concise programs for practical engineering applications.

  11. Calculating thermal radiation of a vibrational nonequilibrium gas flow using the method of k-distribution

    Science.gov (United States)

    Molchanov, A. M.; Bykov, L. V.; Yanyshev, D. S.

    2017-05-01

    The method has been developed to calculate the infrared radiation of a vibrationally nonequilibrium gas flow based on the k-distribution. A comparison of the calculated nonequilibrium radiation with the results of other authors and with experimental data has shown satisfactory agreement. It is shown that the radiation intensities calculated with the nonequilibrium and equilibrium methods differ significantly from each other; the discrepancy increases with increasing height (decreasing pressure) and can exceed an order of magnitude.

  12. Conventional and Alternative Disinfection Methods of Legionella in Water Distribution Systems – Review

    Directory of Open Access Journals (Sweden)

    Pūle Daina

    2016-12-01

    Full Text Available The prevalence of Legionella in drinking water distribution systems is a widespread problem. Outbreaks of Legionella-caused diseases occur even though various disinfectants are used to control Legionella. Conventional methods like thermal disinfection, silver/copper ionization, ultraviolet irradiation or chlorine-based disinfection have not been effective in the long term for the control of biofilm bacteria. Therefore, research to develop more effective disinfection methods is still necessary.

  13. Practical method for radioactivity distribution analysis in small-animal PET cancer studies

    OpenAIRE

    Slavine, Nikolai V; Antich, Peter P.

    2008-01-01

    We present a practical method for radioactivity distribution analysis in small-animal tumors and organs using positron emission tomography imaging with a calibrated source of known activity and size in the field of view. We reconstruct the imaged mouse together with a source under the same conditions, using an iterative method, Maximum Likelihood Expectation-Maximization with System Modeling, capable of delivering high resolution images. Corrections for the ratios of geometrical efficiencies,...
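
    The iterative method named in the abstract follows the classic MLEM multiplicative update, x <- x * A^T(y / (A x)) / A^T 1. The sketch below runs it on a synthetic toy system; the system matrix, counts, and iteration budget are invented, and the paper's implementation additionally models the system response and the calibrated source.

```python
# MLEM on a toy system: multiplicative update preserving positivity.
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(30, 10))   # detector x voxel sensitivities
x_true = rng.uniform(1.0, 5.0, size=10)
y = rng.poisson(A @ x_true)                # measured counts

x = np.ones(10)                            # uniform initial activity
sens = A.T @ np.ones(len(y))               # sensitivity image, A^T 1
for _ in range(200):
    x *= (A.T @ (y / (A @ x))) / sens      # MLEM update

print(np.round(x, 2))
print(np.round(x_true, 2))
```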

  14. Method for Quantitative Determination of Spatial Polymer Distribution in Alginate Beads Using Raman Spectroscopy

    NARCIS (Netherlands)

    Heinemann, Matthias; Meinberg, Holger; Büchs, Jochen; Koß, Hans-Jürgen; Ansorge-Schumacher, Marion B.

    2005-01-01

    A new method based on Raman spectroscopy is presented for non-invasive, quantitative determination of the spatial polymer distribution in alginate beads of approximately 4 mm diameter. With the experimental setup, a two-dimensional image is created along a thin measuring line through the bead

  15. Air method measurements of apple vessel length distributions with improved apparatus and theory

    Science.gov (United States)

    Shabtal Cohen; John Bennink; Mel Tyree

    2003-01-01

    Studies showing that rootstock dwarfing potential is related to plant hydraulic conductance led to the hypothesis that xylem properties are also related. Vessel length distribution and other properties of apple wood from a series of varieties were measured using the 'air method' in order to test this hypothesis. Apparatus was built to measure and monitor...

  17. Multisite-multivariable sensitivity analysis of distributed watershed models: enhancing the perceptions from computationally frugal methods

    Science.gov (United States)

    This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...

  18. Comparison of "E-Rater"[R] Automated Essay Scoring Model Calibration Methods Based on Distributional Targets

    Science.gov (United States)

    Zhang, Mo; Williamson, David M.; Breyer, F. Jay; Trapani, Catherine

    2012-01-01

    This article describes two separate, related studies that provide insight into the effectiveness of "e-rater" score calibration methods based on different distributional targets. In the first study, we developed and evaluated a new type of "e-rater" scoring model that was cost-effective and applicable under conditions of absent human rating and…

  19. A method for generating permutation distribution of ranks in a k ...

    African Journals Online (AJOL)

    A method for generating the permutation distribution of ranks in a k-sample experiment is presented. This provides a methodology for constructing an exact test of significance for a rank statistic. The proposed method is linked to the partition of integers, and in a combinatorial sense the distribution of the ranks is ...
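
    For the two-sample special case the idea is easy to demonstrate: enumerate all ways the ranks can be split between groups to obtain the exact null distribution of a rank-sum statistic (the paper treats the general k-sample case via partitions of integers). Group sizes and the observed statistic below are hypothetical.

```python
# Exact permutation distribution of the two-sample rank-sum statistic.
from itertools import combinations
from collections import Counter

n1, n2 = 4, 5                          # group sizes (example)
ranks = range(1, n1 + n2 + 1)

null = Counter(sum(c) for c in combinations(ranks, n1))
total = sum(null.values())             # C(N, n1) equally likely splits

observed = 14                          # hypothetical observed rank sum
p_lower = sum(v for s, v in null.items() if s <= observed) / total
print(f"P(S <= {observed}) = {p_lower:.4f}")
```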

  20. The simulation of skin temperature distributions by means of a relaxation method (applied to IR thermography)

    NARCIS (Netherlands)

    Vermey, G.F.

    1975-01-01

    To solve the differential equation for the heat in a two-layer, rectangular piece of skin tissue, a relaxation method, based on a finite difference technique, is used. The temperature distributions on the skin surface are calculated. The results are used to derive a criterion for the resolution for
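
    As a flavor of the technique described, the sketch below runs a Jacobi-style finite-difference relaxation of the steady heat equation on a rectangular grid with fixed edge temperatures. It assumes a single homogeneous layer with invented temperatures, whereas the paper treats a two-layer tissue model aimed at IR thermography.

```python
# Finite-difference relaxation of the steady heat equation (Laplace).
import numpy as np

T = np.full((40, 60), 33.0)            # initial guess, degrees C
T[0, :], T[-1, :] = 37.0, 31.0         # warm core side, cooler surface
T[:, 0], T[:, -1] = 33.0, 33.0         # fixed lateral edges

for _ in range(5000):
    new = T.copy()
    new[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                              T[1:-1, :-2] + T[1:-1, 2:])
    converged = np.max(np.abs(new - T)) < 1e-5
    T = new
    if converged:
        break

print(f"temperature at an interior point: {T[20, 30]:.3f} C")
```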

  1. Methods to estimate distribution and range extent of grizzly bears in the Greater Yellowstone Ecosystem

    Science.gov (United States)

    Haroldson, Mark A.; Schwartz, Charles C.; , Daniel D. Bjornlie; , Daniel J. Thompson; , Kerry A. Gunther; , Steven L. Cain; , Daniel B. Tyers; Frey, Kevin L.; Aber, Bryan C.

    2014-01-01

    The distribution of the Greater Yellowstone Ecosystem grizzly bear (Ursus arctos) population has expanded into areas unoccupied since the early 20th century. Up-to-date information on the area and extent of this distribution is crucial for federal, state, and tribal wildlife and land managers to make informed decisions regarding grizzly bear management. The most recent estimate of grizzly bear distribution (2004) utilized fixed-kernel density estimators to describe distribution. This method was complex and computationally time consuming and excluded observations of unmarked bears. Our objective was to develop a technique for estimating grizzly bear distribution that would allow the use of all verified grizzly bear location data and would be simple enough to update more frequently. We placed all verified grizzly bear locations from all sources from 1990 to 2004 and 1990 to 2010 onto a 3-km × 3-km grid and used zonal analysis and ordinary kriging to develop a predicted surface of grizzly bear distribution. We compared the area and extent of the 2004 kriging surface with the previous 2004 effort and evaluated changes in grizzly bear distribution from 2004 to 2010. The 2004 kriging surface was 2.4% smaller than the previous fixed-kernel estimate, but more closely represented the data. Grizzly bear distribution increased 38.3% from 2004 to 2010, with most expansion in the northern and southern regions of the range. This technique can be used to provide a current estimate of grizzly bear distribution for management and conservation applications.

  2. Distance Determination Method for Normally Distributed Obstacle Avoidance of Mobile Robots in Stochastic Environments

    Directory of Open Access Journals (Sweden)

    Jinhong Noh

    2016-04-01

    Full Text Available Obstacle avoidance methods require knowledge of the distance between a mobile robot and obstacles in the environment. However, in stochastic environments, distance determination is difficult because objects have position uncertainty. The purpose of this paper is to determine the distance between a robot and obstacles represented by probability distributions. Distance determination for obstacle avoidance should consider position uncertainty, computational cost and collision probability. The proposed method considers all of these conditions, unlike conventional methods. It determines the obstacle region using the collision probability density threshold. Furthermore, it defines a minimum distance function to the boundary of the obstacle region with a Lagrange multiplier method. Finally, it computes the distance numerically. Simulations were executed in order to compare the performance of the distance determination methods. Our method demonstrated a faster and more accurate performance than conventional methods. It may help overcome position uncertainty issues pertaining to obstacle avoidance, such as low accuracy sensors, environments with poor visibility or unpredictable obstacle motion.
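
    In the spirit of the described approach, the sketch below defines the obstacle region as the set where a Gaussian position density exceeds a collision-probability-density threshold, then finds the minimum distance from the robot to that region's boundary with a constrained optimizer (SciPy's SLSQP standing in for the paper's Lagrange-multiplier construction). The robot position, covariance, and threshold are invented.

```python
# Minimum distance from a robot to a probabilistic obstacle region
# {p : pdf(p) >= threshold}, which is an ellipse for a Gaussian obstacle.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

robot = np.array([3.0, 0.5])
mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[0.5, 0.1], [0.1, 0.3]])
threshold = 0.05                       # density defining the region

res = minimize(
    lambda p: np.linalg.norm(p - robot),
    x0=np.array([1.0, 0.2]),           # start between robot and obstacle
    constraints={"type": "eq", "fun": lambda p: mvn.pdf(p) - threshold},
    method="SLSQP",
)
print(f"distance = {res.fun:.3f} at boundary point {np.round(res.x, 3)}")
```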

  3. (I) A Declarative Framework for ERP Systems (II) Reactors: A Data-Driven Programming Model for Distributed Applications

    DEFF Research Database (Denmark)

    Stefansen, Christian Oskar Erik

    • Using Soft Constraints to Guide Users in Flexible Business Process Management Systems. The paper shows how the inability of a process language to express soft constraints (constraints that can be violated occasionally, but are closely monitored) leads to a loss of intentional information in process models. • .../Asynchronous Programming Model for Distributed Applications. The paper motivates, explains, and defines a distributed data-driven programming model. In the model, a reactor is a stateful unit of distribution. A reactor specifies constructive, declarative constraints on its data and the data of other reactors in the style...

  4. The spatial distribution of leprosy cases during 15 years of a leprosy control program in Bangladesh: An observational study

    Directory of Open Access Journals (Sweden)

    Chowdhury SK

    2008-09-01

    Full Text Available Abstract Background: An uneven spatial distribution of leprosy can be caused by the influence of geography on the distribution of risk factors over the area, or by population characteristics that are heterogeneously distributed over the area. We studied the distribution of leprosy cases detected by a control program to identify spatial and spatio-temporal patterns of occurrence and to search for environmental risk factors for leprosy. Methods: The houses of 11,060 leprosy cases registered in the control area during a 15-year period (1989–2003) were traced back, added to a geographic information system (GIS), and plotted on digital maps. We looked for clusters of cases in space and time. Furthermore, relationships with the proximity to geographic features, such as town centers, roads, rivers, and clinics, were studied. Results: Several spatio-temporal clusters were observed for voluntarily reported cases. The cases within and outside clusters did not differ in age at detection, percentage with multibacillary leprosy, or sex ratio. There was no indication of spread from one point to other parts of the district, indicating a spatially stable endemic situation during the study period. The overall risk of leprosy in the district was not associated with roads, rivers, or leprosy clinics. The risk was highest within 1 kilometer of town centers and decreased with distance from town centers. Conclusion: The association of leprosy risk with proximity to towns indicates that rural towns may play an important role in the epidemiology of leprosy in this district. Further research on the role of towns, particularly in rural areas, is warranted.

  5. Evaluation of the Environmental DNA Method for Estimating Distribution and Biomass of Submerged Aquatic Plants.

    Science.gov (United States)

    Matsuhashi, Saeko; Doi, Hideyuki; Fujiwara, Ayaka; Watanabe, Sonoko; Minamoto, Toshifumi

    2016-01-01

    The environmental DNA (eDNA) method has increasingly been recognized as a powerful tool for monitoring aquatic animal species; however, its application for monitoring aquatic plants is limited. To evaluate eDNA analysis for estimating the distribution of aquatic plants, we compared the distributions of the submerged species Hydrilla verticillata estimated by eDNA analysis, visual observation, and past distribution records. Moreover, we conducted aquarium experiments using H. verticillata and Egeria densa and analyzed the relationships between eDNA concentrations and plant biomass to investigate the potential for biomass estimation. The occurrences estimated by eDNA analysis closely corresponded to past distribution records, and eDNA detections were more frequent than visual observations, indicating that the method is potentially more sensitive. The results of the aquarium experiments showed a positive relationship between plant biomass and eDNA concentration; however, the relationship was not always significant. The eDNA concentration peaked within three days of the start of the experiment in most cases, suggesting that plants do not release constant amounts of DNA. These results show that eDNA analysis can be used for distribution surveys and has the potential to estimate the biomass of aquatic plants.

  6. A novel and practical approach to distribution system performance enhancement using a fuzzy capacitor allocation method

    Science.gov (United States)

    Ng, Hok-Nin

    As the electrical utility business enters a deregulated environment, distribution companies will strive to operate at the utmost economic efficiency. Loss reduction through the use of capacitor placement is an effective means of decreasing the operating costs of a utility. This thesis details a practical and flexible approach for distribution system loss reduction and performance enhancement using a fuzzy capacitor allocation technique. The proposed method takes advantage of the concepts of fuzzy set theory to model uncertain parameters of the distribution system, and to represent knowledge and heuristics that can be used to optimize the operation of the distribution system. A fuzzy expert system is used to determine suitable locations for capacitor installations and a multi-objective fuzzy optimization approach is used to determine the proper sizes of the capacitors. Effective control of capacitors is performed by another fuzzy expert system. The computer simulations performed have clearly demonstrated the advantages and the significant contributions of the methods in this thesis for distribution system loss reduction and performance enhancement.

  7. An evaluation method of voltage sag using a risk assessment model in power distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Sang Yun Yun [LG Industrial Systems, Cheongju (Korea). Electrotechnology R and D Center; Jae Chul Kim [Soongsil Univ., Seoul (Korea). Dept. of Electrical Engineering

    2003-12-01

    In this paper we explore a new method for evaluating the impact of voltage sags in power distribution systems. The proposed method incorporates a generalization of the evaluation procedure as well as the effects of voltage sags on customers. To generalize the evaluation, historical reliability data are used. We take into account the impact of voltage sags on customers using a representative power acceptability curve and a fuzzy model. The final result of the evaluation model yields, on a yearly basis, the magnitude of the customers' risk caused by voltage sags. The evaluation methodology is divided into analytic and probabilistic methods; time-sequential Monte Carlo simulation is used as the probabilistic method. The proposed method is tested using a modified form of the Roy Billinton Test System (RBTS) and the reliability data of the Korea Electric Power Corporation (KEPCO) system. Through the case study, we verify that the proposed method evaluates the actual impact of voltage sags and can be effectively applied to a real system using historical reliability data, as with the conventional reliability indices in power distribution systems. (author)

  8. The distribution of cataract surgery services in a public health eye care program in Nepal.

    Science.gov (United States)

    Marseille, E; Brand, R

    1997-11-01

    The cost-effectiveness of public health cataract programs in low-income countries has been well documented. Equity, another important dimension of program quality that has received less attention, is analyzed here by comparing surgical coverage rates for major sub-groups within the intended beneficiary population of the Nepal blindness program (NBP). Substantial differences in surgical coverage were found between males and females and between different age groups of the same gender. Among the cataract blind, the surgical coverage of males was 70% higher than that of females. For both genders, the cataract blind over 55 received proportionately fewer services than younger people blind from cataract. Blind males aged 45-54 had a 500% higher rate of surgical coverage than blind males over 65. Blind females aged 35-44 had nearly a 600% higher rate of surgical coverage than blind females over 65. There was wide variation in overall surgical coverage between geographic zones, but little variation by terrain type, an indicator of the logistical difficulty of delivering services. Members of the two highest caste groupings had somewhat lower surgical coverage than members of lower castes. Program managers should consider developing methods to increase services to women and to those over 65. Reaching these populations will become increasingly important as those most readily served receive surgery and members of the under-served groups form a growing portion of the remaining cataract backlog.

  9. An Improved Distributed Secondary Control Method for DC Microgrids With Enhanced Dynamic Current Sharing Performance

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Panbao; Lu, Xiaonan; Yang, Xu; Wang, Wei; Xu, Dianguo

    2016-09-01

    This paper proposes an improved distributed secondary control scheme for dc microgrids (MGs), aiming at overcoming the drawbacks of the conventional droop control method. The proposed secondary control scheme removes the dc voltage deviation and improves the current sharing accuracy by using voltage-shifting and slope-adjusting approaches simultaneously. Meanwhile, the average value of the droop coefficients is calculated and then regulated by an additional controller in the distributed secondary control layer to ensure that each droop coefficient converges to a reasonable value. Hence, by adjusting the droop coefficients, each participating converter has equal output impedance, and accurate proportional load current sharing can be achieved with different line resistances. Furthermore, the current sharing performance in steady and transient states is enhanced by the proposed method. The effectiveness of the proposed method is verified by detailed experimental tests based on a 3 × 1 kW prototype with three interface converters.
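
    A steady-state toy model makes the two knobs easy to see. In the sketch below, two converters with droop characteristics feed one load bus through unequal line resistances: adjusting the slopes equalizes the converters' total output impedances, which restores proportional sharing, and a common voltage shift then compensates the droop deviation. All values are invented, and the single proportional shift shown is one corrective step, where the paper's distributed controller would drive the error to zero with integral action.

```python
# Droop control with slope adjustment and voltage shifting (toy model).
import numpy as np

V_REF, R_LOAD = 48.0, 4.0
r_line = np.array([0.30, 0.10])        # unequal feeder resistances (ohm)

def solve(v_ref, r_droop):
    """I_i = (v_ref - V_bus)/(r_droop_i + r_line_i), V_bus = R_LOAD*sum(I)."""
    g = 1.0 / (r_droop + r_line)       # branch conductances
    v_bus = R_LOAD * g.sum() * v_ref / (1 + R_LOAD * g.sum())
    return (v_ref - v_bus) * g, v_bus

i0, v0 = solve(V_REF, np.array([0.5, 0.5]))      # plain droop
print("droop only:     I =", i0.round(3), " Vbus =", round(v0, 2))

r_adj = 0.6 - r_line                   # equalize total output impedance
i1, v1 = solve(V_REF, r_adj)
print("slope adjusted: I =", i1.round(3), " Vbus =", round(v1, 2))

i2, v2 = solve(V_REF + (V_REF - v1), r_adj)      # shift the references
print("with shifting:  I =", i2.round(3), " Vbus =", round(v2, 2))
```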

  10. Extension of the Accurate Voltage-Sag Fault Location Method in Electrical Power Distribution Systems

    Directory of Open Access Journals (Sweden)

    Youssef Menchafou

    2016-03-01

    Full Text Available Accurate fault location in an electric power distribution system (EPDS) is important in maintaining system reliability. Several methods have been proposed in the past; however, these methods either prove to be inefficient or depend on the fault type (fault classification), because they require an appropriate algorithm for each fault type. In contrast to traditional approaches, an accurate impedance-based fault location (FL) method is presented in this paper. It is based on the voltage-sag calculation between two measurement points chosen carefully from the available strategic measurement points of the line, together with the network topology and current measurements at the substation. The effectiveness and accuracy of the proposed technique are demonstrated for different fault types using a radial power flow system. The test results are obtained from numerical simulations using the data of a distribution line recognized in the literature.

  11. Study of QTL Effects Distribution on Accuracy of Genomic Breeding values Estimated Using Bayesian Method

    Directory of Open Access Journals (Sweden)

    nazanin mahmoudi

    2016-04-01

    Full Text Available Introduction: Genetic evaluation and estimation of breeding values are among the most fundamental elements of breeding programmes for genetic improvement. Recently, genomic selection has become an efficient method to approach this aim, and the accuracy of estimated genomic breeding values is the most important factor in genomic selection. Different studies have addressed the factors affecting the accuracy of estimated genomic breeding values. The aim of this study was to evaluate the effect of beta and gamma distributions of QTL effects on the accuracy of genetic evaluation. Materials and Methods: A genome consisting of 10 chromosomes, each 200 cM long, was simulated. Markers were spaced at 0.2 cM intervals and different numbers of QTL with random distribution were simulated. Only additive gene effects were considered. The base population was simulated with an effective size of 100 animals, and this structure continued up to generation 50 to create linkage disequilibrium between the markers and QTL. The population size was increased to 1000 animals in generation 51 (the reference generation). Marker effects were calculated from the genomic and phenotypic information, and genomic breeding values were computed in generations 52 to 57 (the training generations). The effects of a gamma 1 distribution (shape = 0.4, scale = 1.66), a gamma 2 distribution (shape = 0.4, scale = 1) and a beta distribution (shape1 = 3.11, shape2 = 1.16) were studied in the reference and training groups. The heritability values were 0.2 and 0.05. Results and Discussion: The results showed that the accuracy of genomic breeding values decreased over the generations (from 51 to 57) for the two gamma distributions and the beta distribution; this decrease may be due to two factors: recombination has a negative impact on the accuracy of genomic breeding values, and selection reduces genetic variance as the number of generations increases. The accuracy of genomic estimated breeding values increased as the heritability increased so that the high

  12. Distributed Solutions for Loosely Coupled Feasibility Problems Using Proximal Splitting Methods

    DEFF Research Database (Denmark)

    Pakazad, Sina Khoshfetrat; Andersen, Martin Skovgaard; Hansson, Anders

    2014-01-01

    In this paper, we consider convex feasibility problems (CFPs) where the underlying sets are loosely coupled, and we propose several algorithms to solve such problems in a distributed manner. These algorithms are obtained by applying proximal splitting methods to convex minimization reformulations of CFPs. We also put forth distributed convergence tests which enable us to establish feasibility or infeasibility of the problem distributedly, and we provide convergence rate results. Under the assumption that the problem is feasible and boundedly linearly regular, these convergence results are given in terms of the distance of the iterates to the feasible set and are similar to those of classical projection methods. In case the feasibility problem is infeasible, we provide convergence rate results that concern the convergence of certain error bounds.
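
    Underneath such algorithms sits the projection primitive: each constraint set is handled only through its projection (or proximal) operator. The toy below runs plain alternating projections between two convex sets, a halfspace and a disk, until a point of the intersection is reached; the sets and starting point are invented, and the paper's methods coordinate many such local operators distributedly rather than sequentially.

```python
# Alternating projections onto two convex sets (halfspace and disk).
import numpy as np

def proj_halfspace(x, a, b):           # {x : a.x <= b}
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def proj_disk(x, center, r):           # {x : |x - center| <= r}
    d = np.linalg.norm(x - center)
    return x if d <= r else center + r * (x - center) / d

a, b = np.array([1.0, 1.0]), 1.0
center, r = np.array([2.0, 0.0]), 1.8  # chosen to intersect the halfspace

x = np.array([5.0, 5.0])
for _ in range(100):
    x = proj_disk(proj_halfspace(x, a, b), center, r)

print(x, "violations:", max(0.0, a @ x - b),
      max(0.0, np.linalg.norm(x - center) - r))
```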

  13. Design method for a laser line beam shaper of a general 1D angular power distribution

    Science.gov (United States)

    Oved, E.; Oved, A.

    2016-05-01

    A laser line is a laser beam spread in one direction by a beam shaper to form a fan of light. This illumination tool is important in laser-aided machine vision, 3D scanners, and remote sensing. For some applications the laser line should have a specific angular power distribution. If the distribution is nonsymmetrical, the beam shaper must be a nonsymmetrical freeform element, and its design using optical design software is time consuming due to the long optimization process, which usually converges to some local minimum. In this paper we introduce a new design method for a single-element refractive beam shaper with any predefined general 1D angular power distribution. The method makes use of the notion of a "prism space", a geometrical representation of all double-refraction prisms, in which any 1D beam shaper can be described by a continuous curve. It is shown that infinitely many different designs are possible for any given power distribution, and it is explained how an optimal design is selected among them, based on criteria such as high transmission, low surface slopes, robustness to manufacturing errors, etc. The method is non-parametric and hence does not require an initial guess of a functional form, and the resulting optical surfaces are described by a sequence of points rather than by an analytic function.

  14. A method for quantitative evaluation of uranium distribution in dispersion fuels

    Energy Technology Data Exchange (ETDEWEB)

    Ferrufino, Felipe Bonito Jaldin; Saliba-Silva, Adonis Marcelo; Carvalho, Elita Fontenele Urano de, E-mail: fbferr@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo (Brazil); Riella, Humberto Gracher; Durazzo, Michelangelo [Brazilian Institute of Science and Technology for Innovating Nuclear Reactors (INCT), Rio de Janeiro, RJ (Brazil)

    2011-07-01

    IPEN/CNEN-SP produces the fuel that supplies its nuclear research reactor IEA-R1. The fuel is assembled from fuel plates containing a U3Si2-Al composite meat. Good homogeneity of the uranium distribution inside the fuel plate meat is important from the standpoint of irradiation performance. Considering the low power of reactor IEA-R1, the uranium distribution in the fuel plate has so far been evaluated only by visual inspection of radiographs. However, with the possibility of IPEN manufacturing the fuel for the new Brazilian Multipurpose Reactor (RMB), with higher power, it is urgent to develop a methodology to determine the uranium distribution in the fuel quantitatively. This paper presents a methodology based on X-ray attenuation that quantifies the uranium concentration distribution in the meat of the fuel plate by using optical densities in radiographs and comparison with standards. The results demonstrated the inapplicability of the method under the current specification for the fuel plates, due to the method's high intrinsic error. However, studying the errors involved in the methodology, with the aim of increasing its accuracy and precision, may enable the application of the method to qualify the final product. (author)
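    The comparison-with-standards step can be illustrated with a short sketch: given calibration standards of known uranium areal density and their measured optical densities on the same radiograph, a reading from the fuel plate is converted by interpolation. The calibration numbers below are invented for illustration; the paper's actual standards and densitometry procedure are not reproduced here.

        import numpy as np

        # Hypothetical calibration: optical densities of standards with known
        # uranium areal density (g U / cm^2), measured on the same film.
        od_std = np.array([0.45, 0.80, 1.10, 1.35])
        u_std = np.array([0.10, 0.20, 0.30, 0.40])

        def uranium_density(od):
            # Interpolate uranium areal density from a radiograph optical density.
            return np.interp(od, od_std, u_std)

        od_map = np.array([[0.82, 0.95], [1.02, 1.12]])  # densitometer readings
        print(uranium_density(od_map))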

  15. Distributed cognition and process management enabling individualized translational research: The NIH Undiagnosed Diseases Program experience

    Directory of Open Access Journals (Sweden)

    Amanda E Links

    2016-10-01

    Full Text Available The National Institutes of Health Undiagnosed Diseases Program (NIH UDP applies translational research systematically to diagnose patients with undiagnosed diseases. The challenge is to implement an information system enabling scalable translational research. The authors hypothesized that similarly complex problems are resolvable through process management and the distributed cognition of communities. The team therefore built the NIH UDP Integrated Collaboration System (UDPICS to form virtual collaborative multidisciplinary research networks or communities. UDPICS supports these communities through integrated process management, ontology-based phenotyping, biospecimen management, cloud-based genomic analysis, and an electronic laboratory notebook. UDPICS provided a mechanism for efficient, transparent, and scalable translational research and thereby addressed many of the complex and diverse research and logistical problems of the NIH UDP. Full definition of the strengths and deficiencies of UDPICS will require formal qualitative and quantitative usability and process improvement measurement.

  16. Electron velocity distribution in a diffusion plasma system by resonance probe method

    Energy Technology Data Exchange (ETDEWEB)

    Baykal, A.; Sezer, Z. (Cekmece Nuclear Research and Training Center, Istanbul (Turkey))

    1983-06-01

    In this study, the electron velocity distribution in a low density plasma (n_e = 8.9 x 10^7 cm^-3, T_e = 2.8 x 10^4 K) has been measured by the resonance probe method. The curves of the rectified probe current were plotted with both direct and alternating potentials applied to the probe. The second derivative of the electron current was found experimentally for each applied potential. Using this result and the Druyvesteyn formula, the electron velocity distribution was measured.
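    For readers unfamiliar with the Druyvesteyn formula, the sketch below recovers an electron energy distribution (up to a constant factor) from a probe current-voltage characteristic by numerical double differentiation. The probe area and physical constants needed for absolute units are omitted, and the arrays are placeholders for measured data.

        import numpy as np

        def druyvesteyn_eedf(V, I, V_plasma):
            # EEDF up to a constant: g(E) ~ sqrt(E) * d2I/dV2, with E = e(V_plasma - V)
            d2I = np.gradient(np.gradient(I, V), V)   # numerical second derivative
            E = V_plasma - V                          # electron energy in eV
            mask = (E > 0) & (d2I > 0)
            return E[mask], np.sqrt(E[mask]) * d2I[mask]

        V = np.linspace(-20.0, 5.0, 500)              # probe bias sweep (placeholder)
        I = np.exp((V - 5.0) / 2.5)                   # idealized electron current
        energy, g = druyvesteyn_eedf(V, I, V_plasma=5.0)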

  17. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods for run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition ... and constraint evaluation is designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design ...

  18. A hierarchical voltage control method for multi-terminal AC/DC distribution system

    Science.gov (United States)

    Ma, Zhoujun; Zhu, Hong; Zhou, Dahong; Wang, Chunning; Tang, Renquan; Xu, Honghua

    2017-08-01

    A hierarchical control system is proposed in this paper to control the voltage of a multi-terminal AC/DC distribution system. The hierarchical control system consists of a PCC voltage control system, a DG voltage control system and a voltage regulator control system; their functions are to control the voltage of the DC distribution network, the AC bus voltage and the area voltage, respectively. A method is proposed to coordinate the whole control system. A case study indicates that when the voltage fluctuates, the three layers of the power flow control system operate in an orderly manner and can maintain voltage stability.

  19. Comparison of impedance based fault location methods for power distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Mora-Florez, J. [Electrical Engineering School, Technological University of Pereira (UTP), La Julita, Pereira (Colombia); Melendez, J. [Electronics, Computer Engineering and Automatics Department, University of Girona (UdG), EPS, Campus Montilivi, 17071 Girona (Spain); Carrillo-Caicedo, G. [Electrical Engineering School, Industrial University of Santander (UIS), Cra 27-Cll 9, Bucaramanga (Colombia)

    2008-04-15

    The performance of 10 fault location methods for power distribution systems has been compared. The analyzed methods use only measurements of voltage and current at the substation. The fundamental components during the pre-fault and fault periods are used to estimate the apparent impedance viewed from the measurement point. The deviation between the pre-fault and fault impedances, together with the system parameters, is used to estimate the distance to the fault point. Fundamental aspects of each method have been considered in the analysis. Power system topology, line and load models, and the necessity of additional information are the relevant aspects that differentiate one method from another. The 10 selected methods have been implemented, tested and compared in a simulated network. The paper reports results for several scenarios defined by significant values of the fault location and impedance. The estimation error has been used as the performance index in the comparison. (author)
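    A minimal member of this family of methods is the simple reactance method, sketched below: the apparent impedance seen from the substation is computed from the fault-loop phasors, and its imaginary part is divided by the line reactance per kilometre. The phasor values and the line constant are illustrative, not taken from the paper.

        import numpy as np

        def fault_distance_km(V_fault, I_fault, x_ohm_per_km):
            # Simple reactance method: distance from the substation to the fault.
            Z_apparent = V_fault / I_fault          # phasors as complex numbers
            return Z_apparent.imag / x_ohm_per_km

        V = 2300 * np.exp(1j * np.deg2rad(-5))      # fault-loop voltage phasor
        I = 410 * np.exp(1j * np.deg2rad(-48))      # fault-loop current phasor
        print(fault_distance_km(V, I, x_ohm_per_km=0.35))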

  20. Analysis of the distribution of pitch angles in model galactic disks - Numerical methods and algorithms

    Science.gov (United States)

    Russell, William S.; Roberts, William W., Jr.

    1993-01-01

    An automated mathematical method capable of successfully isolating the many different features in prototype and observed spiral galaxies and of accurately measuring the pitch angles and lengths of these individual features is developed. The method is applied to analyze the evolution of specific features in a prototype galaxy exhibiting flocculent spiral structure. The mathematical-computational method was separated into two components. Initially, the galaxy was partitioned into dense regions constituting features using two different methods. The results obtained using these two partitioning algorithms were very similar, from which it is inferred that no numerical biasing was evident and that capturing of the features was consistent. Standard least-squares methods underestimated the true slope of the cloud distribution and were incapable of approximating an orientation of 45 deg. The problems were overcome by introducing a superior fit least-squares method, developed with the intention of calculating true orientation rather than a regression line.
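    A standard way to build an orientation estimator that treats both axes symmetrically, and therefore has no trouble near 45 deg the way ordinary least squares does, is to take the principal axis of the point cloud (total least squares). The sketch below shows this idea; it is one plausible reading of the "superior fit" approach, not necessarily the authors' exact formulation.

        import numpy as np

        def cloud_orientation_deg(points):
            # Orientation of a 2D point cloud via PCA (total least squares).
            # Unlike ordinary least squares, which regresses y on x and biases
            # slopes toward zero, the principal axis treats both coordinates
            # symmetrically and recovers 45 deg orientations correctly.
            centered = points - points.mean(axis=0)
            eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
            major = eigvecs[:, np.argmax(eigvals)]
            return np.degrees(np.arctan2(major[1], major[0])) % 180.0

        rng = np.random.default_rng(0)
        t = rng.uniform(-1, 1, 500)
        pts = np.c_[t, t] + 0.05 * rng.normal(size=(500, 2))  # a 45 deg cloud
        print(cloud_orientation_deg(pts))                     # approximately 45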

  1. Application of graftech method for programming of discrete technological process

    Directory of Open Access Journals (Sweden)

    Pigiel, M.

    2007-01-01

    Full Text Available This article describes a new programming method for PLC controllers, presently a fundamental tool in discrete process automation. A PLC user applying the standard languages for programming sequential processes is forced to rely on intuitive methods and his own experience. For this reason, the authors attempted to work out a method that allows simple execution of the programming process with no limits on the number of steps or input and output signals. The result of these studies is a method named Graftech by its authors. The method consists in deriving a functional program in the LD language from a process algorithm written as an SFC network. The rules for deriving the functional program are also described. Application of the Graftech method is illustrated with the example of an automatic ejector of casting molds.

  2. A method for selecting the CIE standard general sky model with regard to calculating luminance distributions

    Science.gov (United States)

    Ferraro, Vittorio; Marinelli, Valerio; Mele, Marilena

    2013-04-01

    It is known that the best predictions of sky luminance are obtained with the CIE standard model of 15 sky types, but predictions with this model require knowledge of the measured luminance distributions themselves, since a criterion for selecting the sky type starting from irradiance values had not been found until now. The authors propose a new, simple method of applying the CIE model, based on the use of the sky index Si. A comparison between calculated luminance data and data measured in Arcavacata of Rende (Italy), Lyon (France) and Pamplona (Spain) shows a good performance of this method in comparison with other luminance calculation methods in the literature.

  3. Controllability analysis as a pre-selection method for sensor placement in water distribution systems.

    Science.gov (United States)

    Diao, Kegong; Rauch, Wolfgang

    2013-10-15

    Detection of contamination events in water distribution systems is a crucial task for maintaining water security. Online monitoring is considered the most cost-effective technology to protect against the impacts of contaminant intrusions. Optimization methods for sensor placement enable automated sensor layout design based on hydraulic and water quality simulation. However, this approach results in an excessive computational burden. In this paper we outline the application of controllability analysis as a preprocessing method for sensor placement. Based on case studies, we demonstrate that the method dramatically decreases the number of decision variables for subsequent optimization, to approximately 30 to 40 percent. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. On modified finite difference method to obtain the electron energy distribution functions in Langmuir probes

    Science.gov (United States)

    Kang, Hyun-Ju; Choi, Hyeok; Kim, Jae-Hyun; Lee, Se-Hun; Yoo, Tae-Ho; Chung, Chin-Wook

    2016-06-01

    A modified central difference method (MCDM) is proposed to obtain the electron energy distribution functions (EEDFs) in single Langmuir probes. Numerical calculation of the EEDF with MCDM is simple and less noisy. The method obtains the second derivative at a given point as the weighted average of second-order central difference derivatives calculated at different voltage intervals, weighting each by the square of its interval. In this paper, the EEDFs obtained from MCDM are compared to those calculated via the averaged central difference method. It is found that MCDM effectively suppresses the noise in the EEDF, while the same number of points is used to calculate the second derivative.

  5. On modified finite difference method to obtain the electron energy distribution functions in Langmuir probes

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Hyun-Ju; Chung, Chin-Wook, E-mail: joykang@hanyang.ac.kr [Department of Electrical Engineering, Hanyang University, 222, Wangsimni-ro, Seongdong-gu, Seoul 133-791 (Korea, Republic of); Choi, Hyeok; Kim, Jae-Hyun; Lee, Se-Hun; Yoo, Tae-Ho [Seoul Science High School, 63, Hyehwa-ro, Jongno-gu, Seoul 110-530 (Korea, Republic of)

    2016-06-15

    A modified central difference method (MCDM) is proposed to obtain the electron energy distribution functions (EEDFs) in single Langmuir probes. Numerical calculation of the EEDF with MCDM is simple and less noisy. The method obtains the second derivative at a given point as the weighted average of second-order central difference derivatives calculated at different voltage intervals, weighting each by the square of its interval. In this paper, the EEDFs obtained from MCDM are compared to those calculated via the averaged central difference method. It is found that MCDM effectively suppresses the noise in the EEDF, while the same number of points is used to calculate the second derivative.
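    The weighting rule described in the abstract is easy to state in code: for a current trace I sampled on a uniform voltage grid with step dV, form the central difference second derivative at intervals h = m*dV for several m, and average them with weights h^2. The sketch below follows that description; the choice of four intervals is arbitrary.

        import numpy as np

        def mcdm_second_derivative(I, dV, max_mult=4):
            # Second derivative of I(V) on a uniform grid, as the h^2-weighted
            # average of central differences taken at intervals h = m*dV.
            n = len(I)
            core = slice(max_mult, n - max_mult)      # points where all h fit
            num = np.zeros(n - 2 * max_mult)
            den = 0.0
            for m in range(1, max_mult + 1):
                h = m * dV
                cd = (I[max_mult + m : n - max_mult + m]
                      - 2 * I[core]
                      + I[max_mult - m : n - max_mult - m]) / h**2
                num += h**2 * cd
                den += h**2
            return num / den                          # interior points only

        V = np.linspace(-10, 10, 401)
        I = np.tanh(V)                                # placeholder probe trace
        d2 = mcdm_second_derivative(I, dV=V[1] - V[0])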

  6. Group-Based Alternating Direction Method of Multipliers for Distributed Linear Classification.

    Science.gov (United States)

    Wang, Huihui; Gao, Yang; Shi, Yinghuan; Wang, Ruili

    2017-11-01

    The alternating direction method of multipliers (ADMM) algorithm has been widely employed for distributed machine learning tasks. However, it suffers from several limitations, e.g., a relatively low convergence speed and an expensive time cost. To this end, in this paper a novel method, namely group-based ADMM (GADMM), is proposed for distributed linear classification. In particular, to accelerate convergence and improve global consensus, a group layer is first utilized in GADMM to divide all the slave nodes into several groups. Then, all the local variables (from the slave nodes) are gathered in the group layer to generate group variables. Finally, by using a weighted average method, the group variables are coordinated to update the global variable (from the master node) until the solution of the global problem is reached. According to the theoretical analysis: 1) GADMM converges mathematically at the rate O(1/k), where k is the number of outer iterations, and 2) by using the grouping methods, GADMM can improve the convergence speed compared with the distributed ADMM framework without grouping. Moreover, we systematically evaluate GADMM on four publicly available LIBSVM datasets. Compared with disADMM and stochastic dual coordinate ascent with ADMM, for distributed classification GADMM is able to reduce the number of outer iterations, which leads to faster convergence and better global consensus. In particular, a statistical significance test has been conducted experimentally, and the results validate that GADMM can save up to 30% of the total time cost (with less than 0.6% accuracy loss) compared with disADMM on large-scale datasets, e.g., webspam and epsilon.
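    The two-layer update pattern (slave nodes to group variables to a global variable) can be sketched on a simpler problem than linear classification. The toy below runs consensus ADMM for distributed ridge regression with a group-averaging layer; it mirrors the structure described above but is not the paper's GADMM, and the weighting of groups by their size is an assumption.

        import numpy as np

        def gadmm_ridge(As, bs, groups, lam=0.1, rho=1.0, iters=100):
            # Group-based consensus ADMM sketch: local x-updates, group-level
            # averaging, then a weighted average forming the global variable z.
            d = As[0].shape[1]
            x = [np.zeros(d) for _ in As]      # local variables (slave nodes)
            u = [np.zeros(d) for _ in As]      # scaled dual variables
            z = np.zeros(d)                    # global variable (master node)
            for _ in range(iters):
                for i, (A, b) in enumerate(zip(As, bs)):
                    H = A.T @ A + (lam + rho) * np.eye(d)
                    x[i] = np.linalg.solve(H, A.T @ b + rho * (z - u[i]))
                # group layer: average local (x_i + u_i) within each group
                g_vars = [np.mean([x[i] + u[i] for i in g], axis=0) for g in groups]
                g_sizes = [len(g) for g in groups]
                z = np.average(g_vars, axis=0, weights=g_sizes)
                for i in range(len(As)):
                    u[i] += x[i] - z
            return z

        rng = np.random.default_rng(0)
        As = [rng.normal(size=(30, 5)) for _ in range(6)]
        x_true = rng.normal(size=5)
        bs = [A @ x_true + 0.01 * rng.normal(size=30) for A in As]
        print(gadmm_ridge(As, bs, groups=[[0, 1, 2], [3, 4, 5]]))  # close to x_true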

  7. Graphical programming interface: A development environment for MRI methods.

    Science.gov (United States)

    Zwart, Nicholas R; Pipe, James G

    2015-11-01

    To introduce a multiplatform, Python-based development environment, called graphical programming interface, for prototyping MRI techniques. The interface allows developers to interact with their scientific algorithm prototypes visually in an event-driven environment, making tasks such as parameterization, algorithm testing, data manipulation, and visualization an integrated part of the workflow. Algorithm developers extend the built-in functionality through simple code interfaces designed to facilitate rapid implementation. This article shows several examples of algorithms developed in the graphical programming interface, including the non-Cartesian MR reconstruction algorithms for PROPELLER and spiral, as well as spin simulation and trajectory visualization for a FLORET example. The graphical programming interface framework is shown to be a versatile prototyping environment for developing numeric algorithms used in the latest MR techniques. © 2014 Wiley Periodicals, Inc.

  8. Assessing protein conformational sampling methods based on bivariate lag-distributions of backbone angles

    KAUST Repository

    Maadooliat, Mehdi

    2012-08-27

    Despite considerable progress in the past decades, protein structure prediction remains one of the major unsolved problems in computational biology. Angular-sampling-based methods have been extensively studied recently due to their ability to capture the continuous conformational space of protein structures. The literature has focused on using a variety of parametric models of the sequential dependencies between angle pairs along the protein chains. In this article, we present a thorough review of angular-sampling-based methods by assessing three main questions: What is the best distribution type to model the protein angles? What is a reasonable number of components in a mixture model that should be considered to accurately parameterize the joint distribution of the angles? and What is the order of the local sequence-structure dependency that should be considered by a prediction method? We assess the model fits for different methods using bivariate lag-distributions of the dihedral/planar angles. Moreover, the main information across the lags can be extracted using a technique called Lag singular value decomposition (LagSVD), which considers the joint distribution of the dihedral/planar angles over different lags using a nonparametric approach and monitors the behavior of the lag-distribution of the angles using singular value decomposition. As a result, we developed graphical tools and numerical measurements to compare and evaluate the performance of different model fits. Furthermore, we developed a web-tool (http://www.stat.tamu.edu/~madoliat/LagSVD) that can be used to produce informative animations. © The Author 2012. Published by Oxford University Press.

  9. Overview of Evaluation Methods for R&D Programs. A Directory of Evaluation Methods Relevant to Technology Development Programs

    Energy Technology Data Exchange (ETDEWEB)

    Ruegg, Rosalie [TIA Consulting, Inc., Emeral Isle, NC (United States); Jordan, Gretchen B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2007-03-01

    This document provides guidance for evaluators who conduct impact assessments to determine the “realized” economic benefits and costs, energy, environmental benefits, and other impacts of the Office of Energy Efficiency and Renewable Energy’s (EERE) R&D programs. The focus of this Guide is on realized outcomes or impacts of R&D programs actually experienced by American citizens, industry, and others.

  10. An introduction to meshfree methods and their programming

    CERN Document Server

    Liu, GR

    2005-01-01

    Friendly and straightforward presentation oriented to beginners. Provides the fundamentals of numerical analysis that are particularly important to meshfree methods. Wide coverage of meshfree methods: EFG, RPIM, MLPG, LRPIM, MWS and collocation methods. Detailed comparison case studies for many existing meshfree methods. Well-tested computer source codes are attached, with useful descriptions and ready-to-run test examples. Soft copies of these source codes are also available at http://www.nus.edu.sg/ACES.

  11. Demand Response Programs Design and Use Considering Intensive Penetration of Distributed Generation

    Directory of Open Access Journals (Sweden)

    Pedro Faria

    2015-06-01

    Full Text Available Further improvements in demand response program implementation are needed in order to take full advantage of this resource, namely for participation in energy and reserve market products, requiring adequate aggregation and remuneration of small-size resources. The present paper focuses on SPIDER, a demand response simulator that has been improved in order to simulate demand response together with a realistic power system simulation. To illustrate the simulator's capabilities, the paper proposes a methodology focusing on the aggregation of consumers and generators, providing adequate tools for demand response program adoption by the involved players. The proposed methodology centres on a Virtual Power Player (VPP) that manages and aggregates the available demand response and distributed generation resources in order to satisfy the required electrical energy demand and reserve. The aggregation of resources is addressed by the use of clustering algorithms, and operation costs for the VPP are minimized. The presented case study is based on a set of 32 consumers and 66 distributed generation units, running on 180 distinct operation scenarios.

  12. PROCESS CAPABILITY ESTIMATION FOR NON-NORMALLY DISTRIBUTED DATA USING ROBUST METHODS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru

    2016-06-01

    Full Text Available Process capability indices are very important process quality assessment tools in the automotive industry. The common process capability indices (PCIs) Cp, Cpk and Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled, and indices developed under the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretations of process capability. In the literature, various surrogate process capability indices have been proposed for non-normality, but few sources offer a comprehensive evaluation and comparison of their ability to capture the true capability in non-normal situations. In this paper, five methods are reviewed and a capability evaluation is carried out for data on the resistivity of silicon wafers. The final results revealed that the Burr-based percentile method is better than the Clements method. Modelling of the non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness <= 1.5).
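    Percentile-based indices of the Clements type replace the 3-sigma spread by the spread between the 0.135th and 99.865th percentiles. The sketch below computes such indices directly from empirical quantiles; note that Clements' original method derives the percentiles from fitted Pearson curves rather than raw data, so this is a simplified stand-in.

        import numpy as np

        def percentile_capability(x, lsl, usl):
            # Clements-style capability from empirical percentiles.
            p00135, median, p99865 = np.percentile(x, [0.135, 50, 99.865])
            cp = (usl - lsl) / (p99865 - p00135)
            cpu = (usl - median) / (p99865 - median)
            cpl = (median - lsl) / (median - p00135)
            return cp, min(cpu, cpl)

        rng = np.random.default_rng(1)
        data = rng.gamma(shape=2.0, scale=1.0, size=5000)   # skewed "process"
        print(percentile_capability(data, lsl=0.05, usl=9.0))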

  13. Simple Calculation Programs for Biology Methods in Molecular ...

    Indian Academy of Sciences (India)

    GMAP: a program for mapping potential restriction sites. RE sites in ambiguous and non-ambiguous DNA sequences; the minimum number of silent mutations required for introducing RE sites; set theory for searching RE sites; Raghava and Sahni (1994) Biotechniques 16:1116 ...

  14. Content and Method in a Thanatology Training Program for Paraprofessionals.

    Science.gov (United States)

    Harris, Audrey P.

    1980-01-01

    A training program of paraprofessionals was developed in a university teaching hospital. Trainees were exposed to seminars and a supervised practicum. The objectives of the experience included sensitization of persons in the natural helping network to psychosocial needs of seriously ill persons and their families. (Author)

  15. Modelling of LPG Ship Distribution in Western of Indonesia using Discrete Simulation Method

    Directory of Open Access Journals (Sweden)

    Trika Pitana

    2017-06-01

    Full Text Available Data from the Energy Outlook Indonesia issued by the National Energy Board indicate that LPG demand continues to rise every year, with the highest growth in the western part of Indonesia, specifically on the islands of Sumatra and Java. It is therefore necessary to reassess the distribution pattern of vessels together with the technical data of the loading and discharging ports. These data, which affect the vessel distribution pattern, are used to replicate the currently operated transport system with a discrete simulation method, to evaluate it, and to build improvement scenarios with variations in the number and capacity of vessels, in order to obtain an effective and efficient distribution pattern. The research yields a scenario capable of meeting the demands of each destination terminal port over the next 5 years, and identifies the vessels whose operating expenses are the most economical.

  16. Control Method of Single-phase Inverter Based Grounding System in Distribution Networks

    DEFF Research Database (Denmark)

    Wang, Wen; Yan, L.; Zeng, X.

    2016-01-01

    The asymmetry of the inherent distributed capacitances causes a rise of the neutral-to-ground voltage in ungrounded or high-resistance grounded systems. Overvoltage may occur in resonant grounded systems if the Petersen coil is resonant with the distributed capacitances. Thus, restraint of the neutral-to-ground voltage is critical for the safety of distribution networks. An active grounding system based on a single-phase inverter is proposed to achieve this objective. The relationship between the output current of the system and the neutral-to-ground voltage is derived to explain the principle of neutral-to-ground voltage compensation. A current control method consisting of proportional resonant (PR) and proportional integral (PI) control with capacitive current feedback is then proposed to guarantee sufficient output current accuracy and stability margin subject to a large range of load change. The performance ...

  17. Methods to Regulate Unbundled Transmission and Distribution Business on Electricity Markets

    Energy Technology Data Exchange (ETDEWEB)

    Forsberg, Kaj; Fritz, Peter

    2003-11-01

    The regulation of distribution utilities is evolving from the traditional approach based on cost-of-service or rate-of-return remuneration towards forms of regulation more specifically focused on providing incentives for improving efficiency, known as performance-based regulation or ratemaking. Modern regulation systems are also, to a higher degree than previously, intended to simulate competitive market conditions. The Market Design 2003 conference gathered people from 18 countries to discuss 'Methods to regulate unbundled transmission and distribution business on electricity markets'. Speakers from nine countries and different backgrounds (academic, industry and regulatory) presented their experiences and most recent work on how to make the regulation of unbundled distribution business as accurate as possible. This paper does not claim to be a fully representative summary of everything that was presented or discussed during the conference. Rather, it is a purposely restricted document in which we focus on a few central themes and experiences from different countries.

  18. Comparison of biofilm cell quantification methods for drinking water distribution systems.

    Science.gov (United States)

    Waller, Sharon A; Packman, Aaron I; Hausner, Martina

    2018-01-01

    Drinking water quality typically degrades after treatment, during conveyance through the distribution system. Potential causes include biofilm growth in distribution pipes, which may result in pathogen retention, inhibited disinfectant diffusion, and proliferation of bad tastes and odors. However, there is no standard method for direct measurement of biofilms or quantification of biofilm cells in drinking water distribution systems. Three methods are compared here for quantification of biofilm cells grown in pipe loop samplers: biofilm heterotrophic plate count (HPC), biofilm biovolume by confocal laser scanning microscopy (CLSM), and biofilm total cell count by flow cytometry (FCM) paired with Syto 9. Both biofilm biovolume by CLSM and biofilm total cell count by FCM were evaluated for quantification of whole biofilms (including non-viable cells and viable but non-culturable cells). Signal-to-background ratios and the overall performance of biofilm biovolume by CLSM and biofilm total cell count by FCM were found to vary with the pipe material. Biofilm total cell count by FCM had a low signal-to-background ratio on all materials, indicating that further development is recommended before application in drinking water environments. Biofilm biovolume by CLSM showed the highest signal-to-background ratio for cement and cast iron, which suggests promise for wider application in full-scale systems. Biofilm biovolume by CLSM with Syto 9 staining allowed in-situ biofilm cell quantification, thus eliminating the variability associated with cell detachment, but had limitations associated with non-specific staining of cement and, to a lesser degree, auto-fluorescence of both cement and polyvinyl chloride materials. Due to the variability in the results obtained from each method, multiple methods are recommended to assess biofilm growth in drinking water distribution systems. Of the methods investigated here, HPC and CLSM are recommended for further development towards

  19. Improving programming skills of Mechanical Engineering students by teaching in C# multi-objective optimizations methods

    National Research Council Canada - National Science Library

    Adrian Florea; Ileana Ioana Cofaru

    2017-01-01

    .... This paper represents a software development guide for designers of suspension systems with limited programming skills, enabling them to implement their own optimization methods that improve...

  20. An introduction to nonlinear programming. IV - Numerical methods for constrained minimization

    Science.gov (United States)

    Sorenson, H. W.; Koble, H. M.

    1976-01-01

    An overview is presented of the numerical solution of constrained minimization problems. Attention is given to both primal and indirect (linear programs and unconstrained minimizations) methods of solution.
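    As a concrete instance of an indirect method, the sketch below reduces a constrained minimization to a sequence of unconstrained subproblems via a quadratic penalty, each solved with BFGS from SciPy. It is a textbook illustration under assumed inequality constraints g_i(x) <= 0, not one of the specific algorithms surveyed in the article.

        import numpy as np
        from scipy.optimize import minimize

        def penalty_solve(f, constraints, x0, mu0=1.0, growth=10.0, outer=6):
            # Quadratic-penalty method: minimize f(x) + mu * sum max(0, g_i(x))^2
            # for an increasing sequence of penalty weights mu.
            x, mu = np.asarray(x0, dtype=float), mu0
            for _ in range(outer):
                obj = lambda v: f(v) + mu * sum(max(0.0, g(v))**2 for g in constraints)
                x = minimize(obj, x, method="BFGS").x
                mu *= growth
            return x

        # minimize (x-2)^2 + (y-1)^2  subject to  x + y <= 2
        f = lambda v: (v[0] - 2)**2 + (v[1] - 1)**2
        g = [lambda v: v[0] + v[1] - 2]
        print(penalty_solve(f, g, x0=[0.0, 0.0]))   # approaches (1.5, 0.5)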

  1. Cable Overheating Risk Warning Method Based on Impedance Parameter Estimation in Distribution Network

    Science.gov (United States)

    Yu, Zhang; Xiaohui, Song; Jianfang, Li; Fei, Gao

    2017-05-01

    Cable overheating reduces the cable insulation level, speeds up insulation aging, and can even cause short-circuit faults. Cable overheating risk identification and warning are therefore necessary for distribution network operators. A cable overheating risk warning method based on impedance parameter estimation is proposed in this paper to improve the safety and reliability of distribution network operation. Firstly, a cable impedance estimation model is established using the least squares method on data from the distribution SCADA system, to improve the impedance parameter estimation accuracy. Secondly, the threshold value of the cable impedance is calculated from historical data, and the forecast value of the cable impedance is calculated from forecast data obtained from the distribution SCADA system. Thirdly, a library of cable overheating risk warning rules is established; the cable impedance forecast value is calculated, the rate of change of the impedance is analysed, and the overheating risk of the cable line is then signalled according to the rules library, based on the relationship between impedance variation and line temperature rise. The method is simulated in the paper. The simulation results show that the method can accurately identify the impedance and forecast the temperature rise of cable lines in distribution networks. The overheating risk warnings can provide a decision basis for operation, maintenance and repair.
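    The first step, least-squares estimation of the cable impedance from SCADA measurements, has a compact form: treating the voltage drop and current phasors as complex numbers, the series impedance minimizing ||dV - Z*I||^2 is Z = (I^H dV)/(I^H I). The synthetic data below are for illustration only.

        import numpy as np

        def estimate_impedance(V_send, V_recv, I):
            # Complex least-squares estimate of Z = R + jX from phasor samples,
            # assuming (V_send - V_recv) is approximately Z * I.
            dV = np.asarray(V_send) - np.asarray(V_recv)
            I = np.asarray(I)
            Z = np.vdot(I, dV) / np.vdot(I, I)
            return Z.real, Z.imag

        rng = np.random.default_rng(2)
        I = 100 * np.exp(1j * rng.uniform(-0.5, 0.5, 200))
        Z_true = 0.12 + 0.31j
        noise = 0.5 * (rng.normal(size=200) + 1j * rng.normal(size=200))
        Vs = 10_000 + Z_true * I + noise
        Vr = np.full(200, 10_000.0)
        print(estimate_impedance(Vs, Vr, I))   # close to (0.12, 0.31)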

  2. Method for Determining the Activation Energy Distribution Function of Complex Reactions by Sieving and Thermogravimetric Measurements.

    Science.gov (United States)

    Bufalo, Gennaro; Ambrosone, Luigi

    2016-01-14

    A method for studying the kinetics of the thermal degradation of complex compounds is suggested. Although the method is applicable to any matrix whose grain size can be measured, herein we focus our investigation on thermogravimetric analysis, under a nitrogen atmosphere, of ground soft wheat and ground maize. The thermogravimetric curves reveal two distinct mass-loss steps: volatilization, in the temperature range 298-433 K, and decomposition, from 450 to 1073 K. Thermal degradation is schematized as a solid-state reaction whose kinetics is analyzed separately in each of the two regions. By means of a sieving analysis, different size fractions of the material are separated and studied. A quasi-Newton fitting algorithm is used to obtain the grain size distribution as a best fit to the experimental data. The individual fractions are analyzed thermogravimetrically to derive the functional relationship between the activation energy of the degradation reactions and the particle size. This functional relationship turns out to be crucial for evaluating the moments of the activation energy distribution, which is unknown, in terms of the distribution calculated by sieve analysis. From the knowledge of the moments one can reconstruct the reaction conversion. The method is applied first to the volatilization region, then to the decomposition region. Comparison with the experimental data reveals that the method reproduces the experimental conversion with an accuracy of 5-10% in the volatilization region and of 3-5% in the decomposition region.
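    The grain-size fitting step can be illustrated with SciPy: given sieve mass fractions between mesh sizes, fit a parametric size distribution by minimizing the squared error with the quasi-Newton BFGS routine, as the abstract describes. The sieve data and the lognormal model choice here are assumptions for illustration.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import lognorm

        # Hypothetical sieve analysis: mass fraction retained between mesh sizes (mm).
        edges = np.array([0.063, 0.125, 0.25, 0.5, 1.0])
        fractions = np.array([0.12, 0.35, 0.38, 0.15])

        def loss(params):
            # Squared error between measured fractions and a lognormal model;
            # exponentiating keeps both parameters positive.
            s, scale = np.exp(params)
            cdf = lognorm.cdf(edges, s, scale=scale)
            return np.sum((np.diff(cdf) - fractions)**2)

        res = minimize(loss, x0=np.log([0.5, 0.2]), method="BFGS")  # quasi-Newton
        print(np.exp(res.x))   # fitted shape and scale of the size distribution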

  3. An analytical method based on multipole moment expansion to calculate the flux distribution in Gammacell-220

    Science.gov (United States)

    Rezaeian, P.; Ataenia, V.; Shafiei, S.

    2017-12-01

    In this paper, the flux of photons inside the irradiation cell of the Gammacell-220 is calculated using an analytical method based on a multipole moment expansion. The photon flux inside the irradiation cell is expressed as a function of the monopole, dipole and quadrupole moments in the Cartesian coordinate system. For the source distribution of the Gammacell-220, the values of the multipole moments are obtained by direct integration. To validate the presented method, the flux distribution inside the irradiation cell was also determined by MCNP simulations as well as experimental measurements. To measure the flux inside the irradiation cell, Amber dosimeters were employed. The calculated values of the flux were in agreement with the values obtained by simulations and measurements, especially in the central zones of the irradiation cell. To show that the present method is a good approximation for determining the flux in the irradiation cell, the values of the multipole moments were also obtained by fitting the simulation and experimental data using the Levenberg-Marquardt algorithm. The present method leads to reasonable results for all source distributions, even without any symmetry, which makes it a powerful tool for source load planning.
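    A generic truncated Cartesian multipole expansion conveys the idea: compute the monopole, dipole and quadrupole moments of a discrete source distribution by direct summation (the discrete analogue of direct integration) and evaluate the series at a field point. The sketch uses the standard 1/r kernel and is valid only for field points outside the source region; the paper's photon-flux expansion inside the cell is analogous but not reproduced here, and the ring of sources is a crude stand-in for a source cage.

        import numpy as np

        def multipole_moments(positions, strengths):
            # Monopole, dipole and traceless quadrupole moments of point sources.
            q = strengths.sum()
            p = (strengths[:, None] * positions).sum(axis=0)
            r2 = (positions**2).sum(axis=1)
            Q = sum(s * (3 * np.outer(r, r) - rr * np.eye(3))
                    for s, r, rr in zip(strengths, positions, r2))
            return q, p, Q

        def field_1_over_r(x, q, p, Q):
            # Truncated multipole series for a 1/r kernel; valid for |x| larger
            # than the radius of the source region.
            r = np.linalg.norm(x)
            rhat = x / r
            return q / r + p @ rhat / r**2 + rhat @ Q @ rhat / (2 * r**3)

        ang = np.linspace(0, 2 * np.pi, 16, endpoint=False)
        pos = np.c_[10 * np.cos(ang), 10 * np.sin(ang), np.zeros(16)]
        q, p, Q = multipole_moments(pos, np.ones(16))
        print(field_1_over_r(np.array([30.0, 5.0, 2.0]), q, p, Q))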

  4. A New Method for the 2D DOA Estimation of Coherently Distributed Sources

    Directory of Open Access Journals (Sweden)

    Liang Zhou

    2014-03-01

    Full Text Available The purpose of this paper is to develop a new technique for estimating the two-dimensional (2D) directions of arrival (DOAs) of coherently distributed (CD) sources, which can effectively estimate the central azimuth and central elevation of CD sources at low computational cost. Using a special L-shaped array, a new approach for the parametric estimation of CD sources is proposed. The proposed method is based on two rotational invariance relations under a small angular approximation, and estimates the two rotational matrices which describe these relations using the propagator technique. The central DOA estimates are then obtained from the primary diagonal elements of the two rotational matrices. Simulation results indicate that the proposed method exhibits good performance under small angular spread and can be applied to multisource scenarios where different sources may have different angular distribution shapes. Without any peak-finding search or eigendecomposition of the high-dimensional sample covariance matrix, the proposed method has a significantly reduced computational cost compared with existing methods, and is thus beneficial for real-time processing and engineering realization. In addition, our approach is a robust estimator which does not depend on the angular distribution shape of the CD sources.

  5. Identification of a core TP53 transcriptional program with highly distributed tumor suppressive activity.

    Science.gov (United States)

    Andrysik, Zdenek; Galbraith, Matthew D; Guarnieri, Anna L; Zaccara, Sara; Sullivan, Kelly D; Pandey, Ahwan; MacBeth, Morgan; Inga, Alberto; Espinosa, Joaquín M

    2017-10-01

    The tumor suppressor TP53 is the most frequently mutated gene product in human cancer. Close to half of all solid tumors carry inactivating mutations in the TP53 gene, while in the remaining cases, TP53 activity is abrogated by other oncogenic events, such as hyperactivation of its endogenous repressors MDM2 or MDM4. Despite identification of hundreds of genes regulated by this transcription factor, it remains unclear which direct target genes and downstream pathways are essential for the tumor suppressive function of TP53. We set out to address this problem by generating multiple genomic data sets for three different cancer cell lines, allowing the identification of distinct sets of TP53-regulated genes, from early transcriptional targets through to late targets controlled at the translational level. We found that although TP53 elicits vastly divergent signaling cascades across cell lines, it directly activates a core transcriptional program of ∼100 genes with diverse biological functions, regardless of cell type or cellular response to TP53 activation. This core program is associated with high-occupancy TP53 enhancers, high levels of paused RNA polymerases, and accessible chromatin. Interestingly, two different shRNA screens failed to identify a single TP53 target gene required for the anti-proliferative effects of TP53 during pharmacological activation in vitro. Furthermore, bioinformatics analysis of thousands of cancer genomes revealed that none of these core target genes are frequently inactivated in tumors expressing wild-type TP53. These results support the hypothesis that TP53 activates a genetically robust transcriptional program with highly distributed tumor suppressive functions acting in diverse cellular contexts. © 2017 Andrysik et al.; Published by Cold Spring Harbor Laboratory Press.

  6. Study of Parameters And Methods of LL-Ⅳ Distributed Hydrological Model in DMIP2

    Science.gov (United States)

    Li, L.; Wu, J.; Wang, X.; Yang, C.; Zhao, Y.; Zhou, H.

    2008-05-01

    The physics-based distributed hydrological model is considered an important development from traditional empirical hydrology towards physical hydrology. The Hydrology Laboratory of the NOAA National Weather Service proposed the first and second phases of the Distributed Model Intercomparison Project (DMIP), a landmark undertaking. The LL distributed hydrological model has been developed into its fourth generation since it was established in 1997 for the Fengman-I reservoir district (11000 km2). The LL-I distributed hydrological model originated in flood control applications at Fengman-I in China. LL-II was developed with DMIP-I support and combines GIS, RS, GPS and radar rainfall measurement. LL-III was established within "Applications of the LL Distributed Model on Water Resources", supported by the 973 projects of the Ministry of Science and Technology of the People's Republic of China. LL-IV was developed to address China's water problems. For the Blue River and Baron Fork River basins of DMIP-II, the convection-diffusion equation of non-saturated and saturated seepage was derived from soil water dynamics and the continuity equation. The advantages of using the convection-diffusion equation to compute the overall confluence are a longer predictable period, smaller memory requirements, fast computation and clear physical concepts. The determination of the parameters of a hydrological model is key; they include empirical coefficients and physical parameters. Experience, inversion and optimization methods can be used to determine the model parameters, each with advantages and disadvantages. This paper briefly introduces the LL-IV distributed hydrological model equations, and particularly introduces the methods of parameter determination and the simulation results for the Blue River and Baron Fork River basins in DMIP-II. The soil moisture diffusion

  7. A multi-objective possibilistic programming approach for locating distribution centers and allocating customers demands in supply chains

    Directory of Open Access Journals (Sweden)

    Seyed Ahmad Yazdian

    2011-01-01

    Full Text Available In this paper, we present a multi-objective possibilistic programming model to locate distribution centers (DCs) and allocate customers' demands in a supply chain network design (SCND) problem. The SCND problem deals with determining the locations of facilities (DCs and/or plants) and the shipment quantities between each two consecutive tiers of the supply chain. The primary objective of this study is to consider, as an objective function, the different risk factors involved in both locating DCs and shipping products. The risk consists of various components: the risks related to each potential DC location, the risk associated with each arc connecting a plant to a DC, and the risk of shipment from a DC to a customer. The proposed method considers these risk factors in fuzzy form to handle the uncertainties inherent in them. A possibilistic programming approach is proposed to solve the resulting multi-objective problem, and a numerical example with three levels of possibility is presented to analyze the model.

  8. Reconstruction method for inversion problems in an acoustic tomography based temperature distribution measurement

    Science.gov (United States)

    Liu, Sha; Liu, Shi; Tong, Guowei

    2017-11-01

    In industrial settings, temperature distribution information provides powerful data support for improving system efficiency, reducing pollutant emission, ensuring safe operation, etc. As a noninvasive measurement technology, acoustic tomography (AT) has been widely used to measure temperature distributions, and the efficiency of the reconstruction algorithm is crucial for the reliability of the measurement results. Different from traditional reconstruction techniques, a two-phase reconstruction method is proposed in this paper to improve the reconstruction accuracy (RA). In the first phase, the measurement domain is discretized by a coarse square grid to reduce the number of unknown variables and thereby mitigate the ill-posed nature of the AT inverse problem. Taking into consideration the inaccuracy of the measured time-of-flight data, a new cost function is constructed to improve the robustness of the estimation, and a grey wolf optimizer is used to minimize this cost function and obtain the temperature distribution on the coarse grid. In the second phase, an Adaboost.RT-based BP neural network algorithm is developed to predict the temperature distribution on the refined grid from the temperature distribution estimated in the first phase. Numerical simulations and experimental measurement results validate the superiority of the proposed reconstruction algorithm in improving robustness and RA.

  9. Simulation of product distribution at PT Anugrah Citra Boga by using capacitated vehicle routing problem method

    Science.gov (United States)

    Lamdjaya, T.; Jobiliong, E.

    2017-01-01

    PT Anugrah Citra Boga is a food processing company that produces meatballs as its main product. The distribution system for the products must be considered, because it needs to be more efficient in order to reduce shipment cost. The purpose of this research is to optimize the distribution time by simulating the distribution channels with the capacitated vehicle routing problem method. Firstly, the distribution routes are observed in order to calculate the average speed, time capacity and shipping costs. Then the model is built using the AIMMS software. The inputs required to simulate the model are the customer locations, distances, and process times. Finally, the total distribution cost obtained by the simulation is compared with the historical data. It is concluded that the company can reduce the shipping cost by around 4.1%, or Rp 529,800 per month. By using this model, the utilization rate becomes more balanced: the value for the first vehicle goes from 104.6% to 88.6%, while the utilization rate of the second vehicle increases from 59.8% to 74.1%. The simulation model is able to produce the optimal shipping route under time restrictions, vehicle capacity, and the available number of vehicles.
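    The core combinatorial structure, routes that respect a vehicle capacity, can be sketched with a simple greedy construction heuristic. This is not the AIMMS simulation model used in the paper, just a self-contained illustration of the capacitated routing constraint; the coordinates and demands are random placeholders.

        import numpy as np

        def nearest_neighbor_cvrp(depot, customers, demands, capacity):
            # Greedy CVRP heuristic: extend the current route to the nearest
            # unserved customer that still fits within the vehicle capacity.
            assert max(demands) <= capacity, "every order must fit in one vehicle"
            unserved = set(range(len(customers)))
            routes = []
            while unserved:
                load, pos, route = 0.0, depot, []
                while True:
                    feasible = [i for i in unserved if load + demands[i] <= capacity]
                    if not feasible:
                        break
                    nxt = min(feasible, key=lambda i: np.linalg.norm(customers[i] - pos))
                    route.append(nxt)
                    load += demands[nxt]
                    pos = customers[nxt]
                    unserved.discard(nxt)
                routes.append(route)   # vehicle returns to the depot
            return routes

        rng = np.random.default_rng(3)
        customers = rng.uniform(0, 10, size=(12, 2))
        demands = rng.uniform(1, 4, size=12)
        print(nearest_neighbor_cvrp(np.zeros(2), customers, demands, capacity=10.0))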

  10. Influence of substrate magnetism of coated conductors on critical current distribution measurement using magnetic knife method

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Z. [Department of Electrical and Computer Engineering, Faculty of Engineering, Yokohama National University, 79-5 Tokiwadai, Hodogaya, Yokohama 240-8501 (Japan)], E-mail: kyo@rain.dnj.ynu.ac.jp; Amemiya, N.; Onuma, T. [Department of Electrical and Computer Engineering, Faculty of Engineering, Yokohama National University, 79-5 Tokiwadai, Hodogaya, Yokohama 240-8501 (Japan); Kato, T.; Ueyama, M. [Sumitomo Electric Ind., Ltd., Electric Power and Energy Research Laboratories, 1-1-3, Shimaya, Konohana, Osaka 554-0024 (Japan); Kashima, N.; Nagaya, S. [Chubu Electric Power Co., Inc., 20-1 Kita-Sekiyama, Ohdaka, Midori, Nagoya 459-8522 (Japan); Shiohara, Y. [Superconductivity Research Laboratory, ISTEC, 1-10-13 Shinonome, Koto, Tokyo 136-0062 (Japan)

    2008-09-15

    A YBCO coated conductor with a non-magnetic substrate and a magnetic Ni alloy tape were prepared to investigate the influence of substrate magnetism on the Jc distribution measurement. We measured the Jc distribution of the YBCO coated conductor and that of the same conductor with the magnetic tape laid over its face (the space between the superconducting layer and the magnetic tape is 20 µm, the thickness of the protective Ag layer), and compared the results with each other. The measured results agreed well, and there was little influence of the tape magnetism on the Jc distribution measurement. Based on this fact, the Jc distribution in a HoBCO coated conductor with a magnetic substrate was measured using the magnetic knife method. Twenty-two voltage taps were attached to the conductor with 5 mm separation along the conductor axis. The lateral Jc distributions in the sections were generally trapezoidal in shape.

  11. Decoupled Estimation of 2D DOA for Coherently Distributed Sources Using 3D Matrix Pencil Method

    Directory of Open Access Journals (Sweden)

    Tang Bin

    2008-08-01

    Full Text Available A new 2D DOA estimation method for coherently distributed (CD) sources is proposed. The CD source model is constructed by applying a Taylor approximation to the generalized steering vector (GSV), whereby the angle and angular spread are separated in the signal pattern. The angular information is in the phase part of the GSV and the angular spread information is in the module part, enabling the estimation of the 2D DOA to be decoupled from that of the angular spread. The array data are used to construct a three-dimensional (3D) enhanced data matrix, and the 2D DOAs of coherently distributed sources can be estimated from the enhanced matrix using the 3D matrix pencil method. Computer simulation validated the efficiency of the algorithm.

  12. Feature Extraction Method for High Impedance Ground Fault Localization in Radial Power Distribution Networks

    DEFF Research Database (Denmark)

    Jensen, Kåre Jean; Munk, Steen M.; Sørensen, John Aasted

    1998-01-01

    A new approach to the localization of high impedance ground faults in compensated radial power distribution networks is presented. The total size of such networks is often very large and a major part of their monitoring is carried out manually. The increasing complexity of industrial processes and communication systems leads to demands for improved monitoring of power distribution networks, so that the quality of power delivery can be kept at a controlled level. The ground fault localization method for each feeder in a network is based on centralized frequency broadband measurement of three-phase voltages and currents. The method consists of a feature extractor, based on a grid description of the feeder by impulse responses, and a neural network for ground fault localization. The emphasis of this paper is the feature extractor, and the detection of the time instance of a ground fault.

  13. An intergenerational program for persons with dementia using Montessori methods.

    Science.gov (United States)

    Camp, C J; Judge, K S; Bye, C A; Fox, K M; Bowden, J; Bell, M; Valencic, K; Mattern, J M

    1997-10-01

    An intergenerational program bringing together older adults with dementia and preschool children in one-on-one interactions is described. Montessori activities, which have strong ties to physical and occupational therapy, as well as to theories of developmental and cognitive psychology, are used as the context for these interactions. Our experience indicates that older adults with dementia can still serve as effective mentors and teachers to children in an appropriately structured setting.

  14. Evaluating Bayesian spatial methods for modelling species distributions with clumped and restricted occurrence data.

    Directory of Open Access Journals (Sweden)

    David W Redding

    Full Text Available Statistical approaches for inferring the spatial distribution of taxa (species distribution models, SDMs) commonly rely on available occurrence data, which are often clumped and geographically restricted. Although available SDM methods address some of these factors, they could be modelled more directly and accurately using a spatially explicit approach. Software to fit models with spatial autocorrelation parameters in SDMs is now widely available, but whether such approaches aid predictions compared to other methodologies is unknown. Here, within a simulated environment using 1000 generated species ranges, we compared the performance of two commonly used non-spatial SDM methods (maximum entropy modelling, MAXENT, and boosted regression trees, BRT) to a spatial Bayesian SDM method (fitted using R-INLA) when the underlying data exhibit varying combinations of clumping and geographic restriction. Finally, we tested how recommended methodological settings designed to account for spatially non-random patterns in the data impact inference. The spatial Bayesian SDM method was the most consistently accurate, being among the top two most accurate methods in 7 out of 8 data sampling scenarios. Within high-coverage sample datasets, all methods performed fairly similarly. When sampling points were randomly spread, BRT had a 1-3% greater accuracy than the other methods, and when samples were clumped, the spatial Bayesian SDM method had a 4-8% better AUC score. When sampling points were restricted to a small section of the true range, all methods were on average 10-12% less accurate, with greater variation among the methods. Model inference under the recommended settings to account for autocorrelation was not impacted by clumping or restriction of data, except for the complexity of the spatial regression term in the spatial Bayesian model. Methods such as those made available by R-INLA can be successfully used to account

  15. Integrating Program Assessment and a Career Focus into a Research Methods Course

    Science.gov (United States)

    Senter, Mary Scheuer

    2017-01-01

    Sociology research methods students in 2013 and 2016 implemented a series of "real world" data gathering activities that enhanced their learning while assisting the department with ongoing program assessment and program review. In addition to the explicit collection of program assessment data on both students' development of sociological…

  16. Controllability analysis as a pre-selection method for sensor placement in water distribution systems

    OpenAIRE

    Diao, K.; Rauch, W.

    2013-01-01

    Detection of contamination events in water distribution systems is a crucial task for maintaining water security. Online monitoring is considered as the most cost-effective technology to protect against the impacts of contaminant intrusions. Optimization methods for sensor placement enable automated sensor layout design based on hydraulic and water quality simulation. However, this approach results in an excessive computational burden. In this paper we outline the application of controllabili...

  17. Sensitivity Weaknesses in Application of some Statistical Distribution in First Order Reliability Methods

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Enevoldsen, I.

    1993-01-01

    ... a stochastic variable is modelled by an asymmetrical density function. For lognormally, Gumbel and Weibull distributed stochastic variables it is shown for which combinations of the β-point, the expected value and the standard deviation the weakness can occur. In relation to practical applications this behaviour ... is probably rather infrequent. A simple example is shown as an illustration, and to exemplify that for second-order reliability methods and for exact calculations of the probability of failure this behaviour is much more infrequent.

  18. S-curve networks and a new method for estimating degree distributions of complex networks

    CERN Document Server

    Guo, Jin-Li

    2010-01-01

    In the study of complex networks almost all theoretical models grow without bound, but the size of actual networks is finite. Based on statistics of China's Internet IPv4 addresses, we propose a forecasting model using the S curve (logistic curve), and the growth trend of IPv4 addresses in China is forecasted. This has reference value for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, we propose a finite network model with bulk growth, called the S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. We develop a new method to predict the growth dynamics of the individual nodes, and use this to calculate analytically the connectivity distribution and the scaling exponents. The analytical result agrees well with the simulation, obeying an approximately power-...
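    Fitting an S curve to cumulative growth data is a short exercise with SciPy; the sketch below fits a three-parameter logistic to made-up yearly address counts and extrapolates. The data are placeholders, not the paper's IPv4 statistics.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, K, r, t0):
            # S curve: carrying capacity K, growth rate r, midpoint t0.
            return K / (1.0 + np.exp(-r * (t - t0)))

        # Hypothetical yearly address counts (millions), saturating over time.
        years = np.arange(2000, 2011)
        counts = np.array([8, 13, 21, 33, 50, 72, 95, 118, 136, 149, 158.0])

        (K, r, t0), _ = curve_fit(logistic, years, counts, p0=[200, 0.5, 2006])
        print(K, r, t0)                      # fitted parameters
        print(logistic(2015, K, r, t0))      # extrapolated forecast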

  19. Quantification of the spatial strain distribution of scoliosis using a thin-plate spline method.

    Science.gov (United States)

    Kiriyama, Yoshimori; Watanabe, Kota; Matsumoto, Morio; Toyama, Yoshiaki; Nagura, Takeo

    2014-01-03

    The objective of this study was to quantify the three-dimensional spatial strain distribution of a scoliotic spine by nonhomogeneous transformation without using a statistically averaged reference spine. The shape of the scoliotic spine was determined from computed tomography images from a female patient with adolescent idiopathic scoliosis. The shape of the scoliotic spine was enclosed in a rectangular grid, and symmetrized using a thin-plate spline method according to the node positions of the grid. The node positions of the grid were determined by numerical optimization to satisfy symmetry. The obtained symmetric spinal shape was enclosed within a new rectangular grid and distorted back to the original scoliotic shape using a thin-plate spline method. The distorted grid was compared to the rectangular grid that surrounded the symmetrical spine. Cobb's angle was reduced from 35° in the scoliotic spine to 7° in the symmetrized spine, and the scoliotic shape was almost fully symmetrized. The scoliotic spine showed a complex Green-Lagrange strain distribution in three dimensions. The vertical and transverse compressive/tensile strains in the frontal plane were consistent with the major scoliotic deformation. The compressive, tensile and shear strains on the convex side of the apical vertebra were opposite to those on the concave side. These results indicate that the proposed method can be used to quantify the three-dimensional spatial strain distribution of a scoliotic spine, and may be useful in quantifying the deformity of scoliosis. © 2013 Elsevier Ltd. All rights reserved.
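    The grid-warping step rests on thin-plate spline interpolation between corresponding node positions, which SciPy exposes directly. The 2D toy below maps a slightly distorted set of control points onto a regular one and then transforms query points; the study itself works in three dimensions and derives Green-Lagrange strains from the resulting deformation, which is not reproduced here.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        # Control points: a distorted grid (source) mapped onto a regular grid
        # (target), standing in for the scoliotic-to-symmetrized node mapping.
        src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.6]], dtype=float)
        dst = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], dtype=float)

        warp = RBFInterpolator(src, dst, kernel="thin_plate_spline")

        query = np.array([[0.5, 0.55], [0.25, 0.3]])
        print(warp(query))   # positions after the thin-plate spline transformation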

  20. Distributed Cooperative Search Control Method of Multiple UAVs for Moving Target

    Directory of Open Access Journals (Sweden)

    Chang-jian Ru

    2015-01-01

    Full Text Available To reduce the impact of uncertainties caused by unknown motion parameters on the search plan for moving targets and to improve the search efficiency of UAVs, a novel distributed multi-UAV cooperative search control method for moving targets is proposed in this paper. Based on the detection results of onboard sensors, the target probability map is updated using Bayesian theory. A Gaussian distribution of the target transition probability density function is introduced to calculate the prediction probability of moving target existence, so that the target probability map can be further updated in real time. A performance index function combining target cost, environment cost, and cooperative cost is constructed, and the cooperative searching problem is transformed into a central optimization problem. To improve computational efficiency, a distributed model predictive control method is presented, from which the control command of each UAV can be obtained. The simulation results verify that the proposed method reduces blind searching and effectively improves the overall efficiency of the team.
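
    The core update loop can be sketched in a few lines: a Bayes step for one sensor look at a cell, followed by a Gaussian-blur prediction step for target motion. The grid size, detection and false-alarm probabilities, and blur width below are assumed values, not the paper's.

```python
# Target probability map: Bayes update after a sensor look, then a
# Gaussian transition (prediction) step. All parameters are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

P = np.full((50, 50), 1.0 / 2500)   # uniform prior target probability map
p_d, p_f = 0.9, 0.05                # detection / false-alarm probabilities

def bayes_update(P, cell, detected):
    """Posterior map after one sensor look at `cell`."""
    like = np.full(P.shape, p_f if detected else 1 - p_f)   # target elsewhere
    like[cell] = p_d if detected else 1 - p_d               # target in looked cell
    post = like * P
    return post / post.sum()

P = bayes_update(P, (10, 12), detected=False)   # a miss lowers that cell
P = gaussian_filter(P, sigma=1.0)               # Gaussian motion prediction
P /= P.sum()                                    # renormalize after the motion step
```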

  1. MoDOT pavement preservation research program volume IV, pavement evaluation tools-data collection methods.

    Science.gov (United States)

    2015-10-01

    The overarching goal of the MoDOT Pavement Preservation Research Program, Task 3: Pavement Evaluation Tools - Data Collection Methods, was to identify and evaluate methods to rapidly obtain network-level and project-level information relevant to:...

  2. KURTOSIS CORRECTION METHOD FOR VARIABLE CONTROL CHARTS - A COMPARISON IN LAPLACE DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Metlapalli Chaitanya Priya

    2010-12-01

    Full Text Available A variable quality characteristic is assumed to follow the well-known Laplace distribution. Control chart constants for the process mean and process dispersion, based on a number of subgroup statistics including the subgroup mean and range, are evaluated from first principles. Limits obtained through the kurtosis correction method are borrowed from Tadikamalla and Popescu (2003). The performance of these sets of control limits is compared through a simulation study and relative preferences are arrived at. The methods are illustrated by an example.

  3. Methods and Strategies for Overvoltage Prevention in Low Voltage Distribution Systems with PV

    DEFF Research Database (Denmark)

    Hashemi Toghroljerdi, Seyedmostafa; Østergaard, Jacob

    2016-01-01

    absorption by PV inverters, application of active medium voltage to low voltage (MV/LV) transformers, active power curtailment, and demand response (DR). Coordination between voltage control units by localized, distributed, and centralized voltage control methods is compared using the voltage sensitivity...... to handle a high share of PV power. This paper provides an in-depth review of methods and strategies proposed to prevent overvoltage in LV grids with PV, and discusses the effectiveness, advantages, and disadvantages of them in detail. Based on the mathematical framework presented in the paper...

  4. Comparison of Threshold Detection Methods for the Generalized Pareto Distribution (GPD): Application to the NOAA-NCDC Daily Rainfall Dataset

    Science.gov (United States)

    Deidda, Roberto; Mamalakis, Antonis; Langousis, Andreas

    2015-04-01

    One of the most crucial issues in statistical hydrology is the estimation of extreme rainfall from data. To that end, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a Generalized Pareto Distribution (GPD) model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches that can be grouped into three basic classes: a) non-parametric methods that locate the changing point between extreme and non-extreme regions of the data, b) graphical methods where one studies the dependence of the GPD parameters (or related metrics) on the threshold level u, and c) Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u at which a GPD model is applicable. In this work, we review representative methods for GPD threshold detection, discuss fundamental differences in their theoretical bases, and apply them to daily rainfall records from the NOAA-NCDC open-access database (http://www.ncdc.noaa.gov/oa/climate/ghcn-daily/). We find that non-parametric methods that locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while graphical methods and GoF metrics that rely on limiting arguments for the upper distribution tail lead to unrealistically high thresholds u. The latter is expected, since one checks the validity of the limiting arguments rather than the applicability of a GPD distribution model. Better performance is demonstrated by graphical methods and GoF metrics that rely on GPD properties. Finally, we discuss the effects of data quantization (common in hydrologic applications) on the estimated thresholds. Acknowledgments: The research project is implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General
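
    A GoF-style threshold scan of the kind discussed above can be sketched with scipy: fit a GPD to the excesses over each candidate threshold and keep the lowest u whose fit passes a goodness-of-fit test. Here a KS test is used, whose p-value is only approximate when the parameters are fitted from the same data; the rainfall series is synthetic.

```python
# GPD threshold scan: fit excesses above each candidate u, stop at the
# lowest u whose KS p-value clears 0.05. Synthetic "rainfall" record.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rain = rng.gamma(0.4, 8.0, 20000)              # synthetic wet-day totals (mm)

for u in np.percentile(rain, [80, 85, 90, 95, 98]):
    exc = rain[rain > u] - u
    c, loc, scale = stats.genpareto.fit(exc, floc=0.0)
    p = stats.kstest(exc, 'genpareto', args=(c, 0.0, scale)).pvalue
    print(f"u = {u:6.2f}  shape = {c:+.3f}  KS p = {p:.3f}")
    if p > 0.05:
        break                                   # lowest acceptable threshold found
```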

  5. Pressure distribution evaluation of different filling methods for deposition of powders in dies: Measurement and modeling

    Science.gov (United States)

    Sayyar Roudsari, Saed

    The aim of this research was to measure, analyze, and model the pressure distribution characteristics of powder deposition into rectangular and circular shallow dies using four filling methods. The feed shoe, the rotational rainy, the point feed, and the pneumatic filling methods were used to investigate the deposition characteristics into shallow dies. In order to evaluate the pressure distribution during filling of shallow dies, factors influencing powder deposition were studied. The factors included particle size and shape, particle size distribution, feed shoe speed, and tube cross-section (in the case of feed shoe filling) and deposition rates (in the case of rotational rainy, point feed, and pneumatic filling). A battery powder mixture (BPM) and microcrystalline cellulose (Avicel PH102) with median sizes of 84 and 600 μm, respectively, were used to fill a shallow rectangular die 32 × 30 mm and 6.5 mm deep and a shallow circular die 35 mm in diameter and 6.5 mm deep. The second generation of the pressure deposition tester (PDT-II) with circular and square feed shoe tube cross-sections was used to measure the two powders' pressure distribution characteristics. An innovative rotational rainy filling device was designed and fabricated. This versatile device can be used to measure filling characteristics at different rotational speeds (1-10 rpm) for various powders. The point feed (funnel fill) method with a funnel of 30 mm inlet diameter and 4.2 mm outlet diameter opening was used to fill the rectangular and circular shallow dies. The pneumatic filling method was designed and fabricated to fill the die using air as the conveying medium in a rectangular cross-section tube. The pneumatic filling device was limited to using only the BPM powder, since the Avicel powder generated a substantial quantity of airborne dust during the test. Symmetry analysis, variance metrics, and uniformity analysis were used to quantify the deposition characteristics. The results showed that: (1) filled

  6. The NASA/industry Design Analysis Methods for Vibrations (DAMVIBS) program: A government overview

    Science.gov (United States)

    Kvaternik, Raymond G.

    1993-01-01

    NASA-Langley, under the Design Analysis Methods for Vibrations (DAMVIBS) Program, set out in 1984 to establish the technology base needed by the rotorcraft industry for developing an advanced finite-element-based dynamics design analysis capability for vibrations. Considerable work has been done by the industry participants in the program since that time. Because the DAMVIBS Program is being phased out, a government/industry assessment of the program has been made to identify those accomplishments and contributions which may be ascribed to the program. The purpose of this paper is to provide an overview of the program and its accomplishments and contributions from the perspective of the government sponsoring organization.

  7. Distribution functions of magnetic nanoparticles determined by a numerical inversion method

    Science.gov (United States)

    Bender, P.; Balceris, C.; Ludwig, F.; Posth, O.; Bogart, L. K.; Szczerba, W.; Castro, A.; Nilsson, L.; Costo, R.; Gavilán, H.; González-Alonso, D.; de Pedro, I.; Fernández Barquín, L.; Johansson, C.

    2017-07-01

    In the present study, we applied a regularized inversion method to extract the particle size, magnetic moment and relaxation-time distribution of magnetic nanoparticles from small-angle x-ray scattering (SAXS), DC magnetization (DCM) and AC susceptibility (ACS) measurements. For the measurements the particles were colloidally dispersed in water. At first approximation the particles could be assumed to be spherically shaped and homogeneously magnetized single-domain particles. As model functions for the inversion, we used the particle form factor of a sphere (SAXS), the Langevin function (DCM) and the Debye model (ACS). The extracted distributions exhibited features/peaks that could be distinctly attributed to the individually dispersed and non-interacting nanoparticles. Further analysis of these peaks enabled, in combination with a prior characterization of the particle ensemble by electron microscopy and dynamic light scattering, a detailed structural and magnetic characterization of the particles. Additionally, all three extracted distributions featured peaks, which indicated deviations of the scattering (SAXS), magnetization (DCM) or relaxation (ACS) behavior from the one expected for individually dispersed, homogeneously magnetized nanoparticles. These deviations could be mainly attributed to partial agglomeration (SAXS, DCM, ACS), uncorrelated surface spins (DCM) and/or intra-well relaxation processes (ACS). The main advantage of the numerical inversion method is that no ad hoc assumptions regarding the line shape of the extracted distribution functions are required, which enabled the detection of these contributions. We highlighted this by comparing the results with the results obtained by standard model fits, where the functional form of the distributions was a priori assumed to be log-normal shaped.
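
    For the DCM case, the inversion described above amounts to discretizing the moment axis, building a Langevin kernel matrix, and solving a regularized non-negative least-squares problem. The sketch below uses a simple Tikhonov augmentation with scipy's nnls; the data, grids, and regularization strength are synthetic assumptions, not the study's actual setup.

```python
# Regularized inversion of a synthetic DCM curve: M(H) = K @ w, where K
# holds Langevin responses on a grid of magnetic moments.
import numpy as np
from scipy.optimize import nnls

kT = 1.380649e-23 * 300.0                     # thermal energy at 300 K
H = np.linspace(1e3, 1e6, 60)                 # applied field (A/m)
mu = np.logspace(-20, -17, 80)                # moment grid (A m^2)

def langevin(x):
    x = np.asarray(x, float)
    return np.where(np.abs(x) > 1e-6, 1.0 / np.tanh(x) - 1.0 / x, x / 3.0)

K = langevin(np.outer(H, mu) * (4e-7 * np.pi) / kT)   # kernel, B = mu0 * H

# Synthetic "true" log-normal-like moment distribution and noisy data
true_w = np.exp(-0.5 * ((np.log(mu) - np.log(3e-19)) / 0.4) ** 2)
m = K @ true_w + 0.01 * np.random.default_rng(0).normal(size=H.size)

lam = 1e-1                                    # regularization strength (hand-tuned)
A = np.vstack([K, lam * np.eye(mu.size)])     # Tikhonov-augmented system
b = np.concatenate([m, np.zeros(mu.size)])
w, _ = nnls(A, b)                             # non-negative moment distribution
```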

  8. Multi-camera digital image correlation method with distributed fields of view

    Science.gov (United States)

    Malowany, Krzysztof; Malesa, Marcin; Kowaluk, Tomasz; Kujawinska, Malgorzata

    2017-11-01

    A multi-camera digital image correlation (DIC) method and system for measurements of large engineering objects with distributed, non-overlapping areas of interest are described. The data obtained with individual 3D DIC systems are stitched by an algorithm which utilizes the positions of fiducial markers determined simultaneously by Stereo-DIC units and laser tracker. The proposed calibration method enables reliable determination of transformations between local (3D DIC) and global coordinate systems. The applicability of the method was proven during in-situ measurements of a hall made of arch-shaped (18 m span) self-supporting metal-plates. The proposed method is highly recommended for 3D measurements of shape and displacements of large and complex engineering objects made from multiple directions and it provides the suitable accuracy of data for further advanced structural integrity analysis of such objects.

  9. Limitations of Calculating Field Distributions and Magnetic Susceptibilities in MRI using a Fourier Based Method

    Science.gov (United States)

    Cheng, Yu-Chung N.; Neelavalli, Jaladhar; Haacke, E. Mark

    2010-01-01

    A discrete Fourier based method for calculating field distributions and local magnetic susceptibility in MRI is carefully studied. Simulations suggest that the method based on discrete Green's functions in both 2D and 3D spaces has less error than the method based on continuous Green's functions. The 2D field calculations require the correction of the "Lorentz disk", which is similar to the "Lorentz sphere" term in the 3D case. A standard least squares fit is proposed for the extraction of susceptibility for a single object from MR images. Simulations and a phantom study confirm both the discrete method and the feasibility of the least squares fit approach. Finding accurate susceptibility values of local structures in the brain from MR images may be possible with this approach in the future. PMID:19182322
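
    The forward step of such Fourier-based calculations is the k-space dipole kernel: the relative field shift is the inverse FFT of chi(k) * (1/3 - kz^2/k^2). A small 3D sketch with a uniform test sphere follows; grid size and susceptibility are arbitrary, and this continuous-kernel version is the simpler variant, not the paper's discrete Green's function.

```python
# FFT-based forward field calculation from a susceptibility map.
import numpy as np

n = 64
chi = np.zeros((n, n, n))
zz, yy, xx = np.mgrid[:n, :n, :n] - n // 2
chi[xx**2 + yy**2 + zz**2 < 8**2] = 1e-6      # uniform sphere, chi = 1 ppm

kz, ky, kx = np.meshgrid(*(np.fft.fftfreq(n),) * 3, indexing='ij')
k2 = kx**2 + ky**2 + kz**2
kernel = np.where(k2 > 0, 1.0 / 3.0 - kz**2 / np.where(k2 > 0, k2, 1.0), 0.0)

dB_over_B0 = np.fft.ifftn(kernel * np.fft.fftn(chi)).real  # relative field shift
print(dB_over_B0[n // 2, n // 2, n // 2])      # inside a uniform sphere: ~0
```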

  10. A comparative study on optimization methods for the constrained nonlinear programming problems

    Directory of Open Access Journals (Sweden)

    Yeniay Ozgur

    2005-01-01

    Full Text Available Constrained nonlinear programming problems often arise in many engineering applications. The most well-known optimization methods for solving these problems are sequential quadratic programming methods and generalized reduced gradient methods. This study compares the performance of these methods with genetic algorithms, which have gained popularity in recent years due to advantages in speed and robustness. We present a comparative study that is performed on fifteen test problems selected from the literature.
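
    For a concrete sense of the problem class, the sketch below solves a tiny constrained nonlinear program with scipy's SLSQP, an SQP-type method of the kind compared in the study; the objective and constraints are invented.

```python
# Small constrained nonlinear program solved with sequential quadratic
# programming (scipy's SLSQP implementation).
import numpy as np
from scipy.optimize import minimize

objective = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2
cons = ({'type': 'ineq', 'fun': lambda x: 1 - x[0]**2 - x[1]**2},   # unit disk
        {'type': 'eq',   'fun': lambda x: x[0] - 2 * x[1]})         # line constraint

res = minimize(objective, x0=np.array([0.5, 0.1]), method='SLSQP', constraints=cons)
print(res.x, res.fun)
```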

  11. Comparing the Maximum Likelihood Method and a Modified Moment Method to Fit a Weibull Distribution to Aircraft Engine Failure Time Data

    National Research Council Canada - National Science Library

    Gueimil, Fernando

    1997-01-01

    .... The other method is a Modified Method of Moments procedure and uses the fact that if time to failure T has a Weibull distribution with scale parameter lambda and shape parameter beta, then T(beta...
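
    The two estimation routes can be sketched with scipy on synthetic failure times: maximum likelihood via weibull_min.fit, and a moment-based fit that solves for the shape parameter from the sample coefficient of variation. The latter is generic moment matching, not necessarily the thesis's exact modified procedure.

```python
# Weibull fits to synthetic failure times: MLE vs. method of moments.
import numpy as np
from scipy import stats
from scipy.optimize import brentq
from scipy.special import gamma

t = stats.weibull_min.rvs(1.8, scale=1000, size=200, random_state=0)

# Maximum likelihood (location fixed at zero)
beta_mle, _, lam_mle = stats.weibull_min.fit(t, floc=0)

# Method of moments: match the coefficient of variation to solve for beta
cv = t.std(ddof=1) / t.mean()
f = lambda b: np.sqrt(gamma(1 + 2/b) - gamma(1 + 1/b)**2) / gamma(1 + 1/b) - cv
beta_mm = brentq(f, 0.1, 20.0)
lam_mm = t.mean() / gamma(1 + 1/beta_mm)

print(f"MLE:     beta = {beta_mle:.3f}, lambda = {lam_mle:.1f}")
print(f"Moments: beta = {beta_mm:.3f}, lambda = {lam_mm:.1f}")
```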

  12. Bias correction methods for regional climate model simulations considering the distributional parametric uncertainty underlying the observations

    Science.gov (United States)

    Kim, Kue Bum; Kwon, Hyun-Han; Han, Dawei

    2015-11-01

    In this paper, we present a comparative study of bias correction methods for regional climate model simulations considering the distributional parametric uncertainty underlying the observations/models. In traditional bias correction schemes, the statistics of the simulated model outputs are adjusted to those of the observation data. However, the model output and the observation data are only one case (i.e., realization) out of many possibilities, rather than being sampled from the entire population of a certain distribution, due to internal climate variability. This issue has not been considered in the bias correction schemes of existing climate change studies. Here, three approaches are employed to explore this issue, with the intention of providing a practical tool for bias correction of daily rainfall for use in hydrologic models: (1) a conventional method, (2) a non-informative Bayesian method, and (3) an informative Bayesian method using Weather Generator (WG) data. The results show some plausible uncertainty ranges of precipitation after correcting for the bias of RCM precipitation. The informative Bayesian approach yields an uncertainty range approximately 25-45% narrower than that of the non-informative Bayesian method after bias correction for the baseline period. This indicates that the prior distribution derived from the WG may assist in reducing the uncertainty associated with parameters. The implications of our results are of great importance in hydrological impact assessments of climate change because they are related to actions for mitigation and adaptation to climate change. Since this is a proof of concept study that mainly illustrates the logic of the analysis for uncertainty-based bias correction, future research exploring the impacts of uncertainty on climate impact assessments and how to utilize uncertainty while planning mitigation and adaptation strategies is still needed.
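
    The conventional scheme referred to above is essentially quantile mapping; a minimal empirical version is sketched below, with synthetic observed and modeled rainfall series standing in for real data.

```python
# Empirical quantile mapping: move a model value to its CDF position in
# the model reference series, then read off the observed quantile.
import numpy as np

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 5.0, 5000)          # observed daily rainfall (wet days)
mod = rng.gamma(2.0, 7.0, 5000)          # biased RCM rainfall, same period

def quantile_map(x, mod_ref, obs_ref):
    """Map model value(s) x onto the observed distribution via empirical CDFs."""
    q = np.searchsorted(np.sort(mod_ref), x) / len(mod_ref)   # model CDF value
    return np.quantile(obs_ref, np.clip(q, 0, 1))             # observed quantile

print(quantile_map(np.array([10.0, 30.0]), mod, obs))
```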

  13. A new method to model thickness distribution and estimate volume and extent of tephra fall deposits

    Science.gov (United States)

    Yang, Q.; Bursik, M. I.

    2016-12-01

    The most straightforward way to understand tephra fall deposits is through isopach maps. Hand-drawn mapping and interpolation are common tools in depicting the thickness distribution. Hand-drawn methods tend to increase the smoothness of the isopachs, while the local variations in the thickness measurements, which may be generated from important but subtle processes during and after eruptions, are neglected. Here we present a GIS-based method for modeling tephra thickness distribution with less subjectivity. This method assumes that under a log-scale transformation, the tephra thickness distribution is the sum of an exponential trend and local variations. The trend assumes a stable wind field during eruption, and is characterized by both distance and a measure of downwind distance, which is used to denote the influence of wind during tephra transport. The local variations are modeled through ordinary kriging, using the residuals from fitting the trend. This method has been applied to the published thickness datasets of Fogo Member A and Bed 1 of North Mono eruption (Fig. 1). The resultant contours and volume estimations are in general consistent with previous studies; differences between results from hand-drawn maps and model highlight inconsistencies in hand-drawing, and provide a quantitative basis for interpretation. Divergences from a stable wind field as reflected in isopach data are readily noticed. In this respect, wind direction was stable during North Mono Bed 1 deposition, and, although weak in the case of Fogo A, was not unidirectional. The multiple lobes of Fogo A are readily distinguished in the model isopachs, suggesting that separate lobes can in general be distinguished given sufficient data. A "plus-one" transformation based on this method is used to estimate fall deposit extent, which should prove useful in hypothesizing where one should find a particular tephra deposit. A limitation is that one must initialize the algorithm with an estimate of

  14. The Historical Method of Inquiry in a Teacher Training Program: Theory and Metatheory.

    Science.gov (United States)

    Kimmons, Ron

    A historical method of inquiry can be applied to an experimental teacher training program, specifically, the Ford Training and Preparation Program (FTPP). The historical method requires gathering a lot of loose ideas and events that have been part of the project and hanging them together in an integrated way. To achieve this, two organizing…

  15. Fundamental solution of the problem of linear programming and method of its determination

    Science.gov (United States)

    Petrunin, S. V.

    1978-01-01

    The idea of a fundamental solution to a problem in linear programming is introduced. A method of determining the fundamental solution and of applying this method to the solution of a problem in linear programming is proposed. Numerical examples are cited.

  16. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Science.gov (United States)

    Castaings, W.; Dartus, D.; Le Dimet, F.-X.; Saulnier, G.-M.

    2009-04-01

    Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs. In this contribution, it is shown that the potential of variational methods for distributed catchment scale hydrology should be considered. A distributed flash flood model, coupling kinematic wave overland flow and Green Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case. It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight on the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run) and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation. For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently from the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted. Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.

  17. Gravimetric water distribution assessment from geoelectrical methods (ERT and EMI) in municipal solid waste landfill.

    Science.gov (United States)

    Dumont, Gaël; Pilawski, Tamara; Dzaomuho-Lenieregue, Phidias; Hiligsmann, Serge; Delvigne, Frank; Thonart, Philippe; Robert, Tanguy; Nguyen, Frédéric; Hermans, Thomas

    2016-09-01

    The gravimetric water content of the waste material is a key parameter in waste biodegradation. Previous studies suggest a correlation between changes in water content and modification of electrical resistivity. This study, based on field work in Mont-Saint-Guibert landfill (Belgium), aimed, on one hand, at characterizing the relationship between gravimetric water content and electrical resistivity and on the other hand, at assessing geoelectrical methods as tools to characterize the gravimetric water distribution in a landfill. Using excavated waste samples obtained after drilling, we investigated the influences of the temperature, the liquid phase conductivity, the compaction and the water content on the electrical resistivity. Our results demonstrate that Archie's law and Campbell's law accurately describe these relationships in municipal solid waste (MSW). Next, we conducted a geophysical survey in situ using two techniques: borehole electromagnetics (EM) and electrical resistivity tomography (ERT). First, in order to validate the use of EM, EM values obtained in situ were compared to electrical resistivity of excavated waste samples from corresponding depths. The petrophysical laws were used to account for the change of environmental parameters (temperature and compaction). A rather good correlation was obtained between direct measurement on waste samples and borehole electromagnetic data. Second, ERT and EM were used to acquire a spatial distribution of the electrical resistivity. Then, using the petrophysical laws, this information was used to estimate the water content distribution. In summary, our results demonstrate that geoelectrical methods represent a pertinent approach to characterize spatial distribution of water content in municipal landfills when properly interpreted using ground truth data. These methods might therefore prove to be valuable tools in waste biodegradation optimization projects. Copyright © 2016 Elsevier Ltd. All rights reserved.
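
    The petrophysical step can be illustrated with Archie's law, rho_bulk = a * rho_w * phi^(-m) * Sw^(-n), inverted for water content; the parameter values below are illustrative assumptions, not the calibrated values from the landfill study (converting the volumetric result to gravimetric content would additionally require the waste dry density).

```python
# Archie's law inverted for volumetric water content from bulk resistivity.
import numpy as np

def archie_theta(rho_bulk, rho_w, a=1.0, m=1.5, n=2.0, phi=0.5):
    """Volumetric water content theta = phi * Sw, with Sw from Archie's law:
    rho_bulk = a * rho_w * phi**(-m) * Sw**(-n). Parameters are assumed."""
    Sw = (a * rho_w / (rho_bulk * phi**m)) ** (1.0 / n)
    return phi * Sw

rho = np.array([8.0, 15.0, 30.0])        # bulk resistivities from ERT (ohm m)
print(archie_theta(rho, rho_w=1.2))      # assumed leachate resistivity (ohm m)
```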

  18. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Directory of Open Access Journals (Sweden)

    W. Castaings

    2009-04-01

    Full Text Available Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs.

    In this contribution, it is shown that the potential of variational methods for distributed catchment scale hydrology should be considered. A distributed flash flood model, coupling kinematic wave overland flow and Green Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case.

    It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight on the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run) and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.

    For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently from the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted.

    Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.

  19. A Method for Calculating the Induced Pressure Distribution Associated with a Jet in a Crossflow. M.S. Thesis; [for a flat plate]

    Science.gov (United States)

    Dietz, W. E., Jr.

    1975-01-01

    A model is presented which can be used to study the loss of lift during hovering and horizontal flight of the VTOL aircraft. The model numerically predicts the pressure distribution induced by a round, turbulent, unheated, subsonic jet exhausting normally through a flat plate into a subsonic crossflow. The complete model assumes that the predominant features of the flow are jet entrainment and a pair of contrarotating vortices which form downstream of the jet. Experimentally determined vortex properties and a reasonable assumption concerning jet entrainment were used. Potential flow considerations were used except in the wake region, where a simple method for approximating the pressure distribution was suggested. The calculated pressure distribution, lift, and pitching moments on the flat plate are presented for a jet to crossflow velocity ratio of 8 and were compared with experimental results. A computer program is given which was used to calculate the pressure distribution across the flat plate.

  20. Using geostatistical methods to estimate snow water equivalence distribution in a mountain watershed

    Science.gov (United States)

    Balk, B.; Elder, K.; Baron, Jill S.

    1998-01-01

    Knowledge of the spatial distribution of snow water equivalence (SWE) is necessary to adequately forecast the volume and timing of snowmelt runoff.  In April 1997, peak accumulation snow depth and density measurements were independently taken in the Loch Vale watershed (6.6 km²), Rocky Mountain National Park, Colorado.  Geostatistics and classical statistics were used to estimate SWE distribution across the watershed.  Snow depths were spatially distributed across the watershed through kriging interpolation methods which provide unbiased estimates that have minimum variances.  Snow densities were spatially modeled through regression analysis.  Combining the modeled depth and density with snow-covered area (SCA) produced an estimate of the spatial distribution of SWE.  The kriged estimates of snow depth explained 37-68% of the observed variance in the measured depths.  Steep slopes, variably strong winds, and complex energy balance in the watershed contribute to a large degree of heterogeneity in snow depth.
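
    A bare-bones ordinary-kriging sketch of the depth-interpolation step is shown below, with an assumed exponential semivariogram; the coordinates, depths, and variogram parameters are made up rather than taken from the Loch Vale data.

```python
# Ordinary kriging at a single target point with an exponential variogram.
import numpy as np

def variogram(h, sill=1.0, vrange=300.0, nugget=0.05):
    """Exponential semivariogram (assumed model, not fit to real data)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / vrange))

obs_xy = np.array([[100, 200], [400, 250], [250, 480], [600, 520]], float)
depth = np.array([1.2, 2.1, 1.7, 2.8])              # snow depths (m), made up

def ordinary_krige(target):
    n = len(depth)
    d = np.linalg.norm(obs_xy[:, None] - obs_xy[None, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    np.fill_diagonal(A[:n, :n], 0.0)                # gamma(0) = 0 by definition
    A[-1, -1] = 0.0                                 # Lagrange-multiplier block
    b = np.append(variogram(np.linalg.norm(obs_xy - target, axis=1)), 1.0)
    w = np.linalg.solve(A, b)[:n]                   # unbiased weights, sum to 1
    return float(w @ depth)

print(ordinary_krige(np.array([300.0, 300.0])))
```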

  1. US Global Change Research Program Distributed Cost Budget Interagency Funds Transfer from DOE to NSF

    Energy Technology Data Exchange (ETDEWEB)

    Uhle, Maria [National Science Foundation (NSF), Washington, DC (United States)

    2016-09-22

    These funds were transferred from DOE to NSF as DOE's contribution to the U.S. Global Change Research Program in support of 4 international activities/programs as approved by the U.S. Global Change Research Program on 14 March 2014. The programs are the International Geosphere-Biosphere Programme, the DIVERSITAS programme, and the World Climate Research Program. All program awards ended as of 09-23-2015.

  2. Optimization of enrichment distributions in nuclear fuel assemblies loaded with Uranium and Plutonium via a modified linear programming technique

    Energy Technology Data Exchange (ETDEWEB)

    Cuevas Vivas, Gabriel Francisco

    1999-12-01

    A methodology to optimize enrichment distributions in Light Water Reactor (LWR) fuel assemblies is developed and tested. The optimization technique employed is the linear programming revised simplex method, and the fuel assembly's performance is evaluated with a neutron transport code that is also utilized in the calculation of sensitivity coefficients. The enrichment distribution optimization procedure begins from a single-value (flat) enrichment distribution until a target, maximum local power peaking factor, is achieved. The optimum rod enrichment distribution, with 1.00 for the maximum local power peaking factor and with each rod having its own enrichment, is calculated at an intermediate stage of the analysis. Later, the best locations and values for a reduced number of rod enrichments are obtained as a function of a target maximum local power peaking factor by applying sensitivity-to-change techniques. Finally, a shuffling process that assigns individual rod enrichments among the enrichment groups is performed. The relative rod power distribution is then slightly modified and the rod grouping redefined until the optimum configuration is attained. To verify the accuracy of the relative rod power distribution, a full computation with the neutron transport code using the optimum enrichment distribution is carried out. Assembly designs loaded with fresh Low Enriched Uranium (LEU) and plutonium Mixed Oxide (MOX) isotopics, for both reactor-grade and weapons-grade plutonium, were utilized to compare and test the results and to demonstrate the wide range of applicability of the optimization technique. The features of the assembly designs used for evaluation purposes included burnable absorbers and internal water regions, and the designs were prepared to resemble the configurations of modern assemblies utilized in commercial Boiling Water Reactors (BWRs) and Pressurized Water Reactors (PWRs). In some cases, a net improvement in the relative rod power distribution or in the
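
    The flavor of the linear-programming step can be shown with a toy problem: maximize the average enrichment subject to per-rod bounds and a single linearized power-peaking constraint. The coefficients are invented, and scipy's HiGHS solver stands in for the revised simplex implementation used in the work.

```python
# Toy LP in the spirit of the enrichment-distribution optimization.
import numpy as np
from scipy.optimize import linprog

n = 6                                              # rod groups (hypothetical)
sens = np.array([1.4, 1.2, 1.0, 0.9, 0.8, 0.7])    # assumed peaking sensitivities

c = -np.ones(n) / n                    # maximize mean enrichment (minimize -mean)
A_ub = [sens]                          # linearized peaking constraint: sens @ x <= b
b_ub = [5.5]                           # peaking budget (arbitrary units)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.7, 5.0)] * n, method='highs')
print(res.x)                           # enrichment per rod group (wt%)
```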

  3. A New Method for Defuzzification and Ranking of Fuzzy Numbers Based on the Statistical Beta Distribution

    Directory of Open Access Journals (Sweden)

    A. Rahmani

    2016-01-01

    Full Text Available Granular computing is an emerging computing theory and paradigm that deals with the processing of information granules, which are defined as a number of information entities grouped together due to their similarity, physical adjacency, or indistinguishability. In most aspects of human reasoning, these granules have an uncertain formation, so the concept of granularity of fuzzy information could be of special interest for applications where fuzzy sets must be converted to crisp sets to avoid uncertainty. This paper proposes a novel method of defuzzification based on the mean value of the statistical Beta distribution and an algorithm for ranking fuzzy numbers based on the crisp number ranking system on R. The proposed method is quite easy to use, but the main reason for following this approach is the equality of the left spread, right spread, and mode of the Beta distribution with their corresponding values in fuzzy numbers within the (0,1) interval, in addition to the fact that the resulting method satisfies all reasonable properties of fuzzy quantity ordering defined by Wang et al. The algorithm is illustrated through several numerical examples and it is then compared with some of the other methods provided in the literature.
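
    A minimal sketch of the idea, under the assumption of a triangular fuzzy number and a hand-picked concentration parameter: rescale the support to (0,1), pick a Beta density with the matching mode, and defuzzify with the Beta mean. This is an illustration of the general approach, not the paper's exact parameter-matching rule.

```python
# Defuzzify a triangular fuzzy number via the mean of a mode-matched Beta.
def defuzzify_beta(a, m, b):
    """Triangular fuzzy number (a, m, b) -> crisp value via a Beta mean."""
    mode = (m - a) / (b - a)                 # mode rescaled into (0, 1)
    k = 4.0                                  # concentration (assumed value)
    alpha = mode * k + 1.0                   # Beta(alpha, beta) with that mode
    beta = (1.0 - mode) * k + 1.0
    mean01 = alpha / (alpha + beta)          # Beta mean on (0, 1)
    return a + mean01 * (b - a)              # map back to the original scale

print(defuzzify_beta(2.0, 3.0, 6.0))
```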

  4. [Research on particle size and size distribution of nanocrystals in urines by laser light scattering method].

    Science.gov (United States)

    Wan, Mu-Hua; Zhao, Mei-Xia; Ouyang, Jian-Ming

    2009-01-01

    In the present paper, the laser light scattering method was used to investigate the particle size and size distribution of nanoparticles in the urines of lithogenic patients and healthy persons. This method is economical, rapid, accurate and easy to operate. The results showed that healthy urines are more stable than lithogenic urines. In the urines of healthy humans, the ultrafine crystals were well scattered and not aggregated, with a smaller size. However, the ultrafine crystals in lithogenic urine have a broad size distribution, which increases the aggregation tendency of nanocrystals. Based on the intensity-autocorrelation curve, the stability of urine samples of both healthy humans and lithogenic patients was comparatively investigated. The relationship between the measurement results and the methods of handling samples was also studied. The results show that a stable urine sample can be obtained by diluting the urine to 20%, then centrifuging it at 4,000 rounds per minute for 15 minutes or filtering it with a 1.2 μm cellulose acetate filter. The results of the laser light scattering method are consistent with those obtained by transmission electron microscopy (TEM). The reasons for the stability of urines are explained in terms of Van der Waals forces, urine viscosity, pH value, ionic strength, surface charge and zeta potential of the ultrafine crystals, and so on. The results in this paper provide a new approach for preventing the formation and recurrence of urinary stones.

  5. Extended Distributed State Estimation: A Detection Method against Tolerable False Data Injection Attacks in Smart Grids

    Directory of Open Access Journals (Sweden)

    Dai Wang

    2014-03-01

    Full Text Available False data injection (FDI) is considered to be one of the most dangerous cyber-attacks in smart grids, as it may lead to energy theft from end users, false dispatch in the distribution process, and device breakdown during power generation. In this paper, a novel kind of FDI attack, named tolerable false data injection (TFDI), is constructed. Such attacks exploit the traditional detector's tolerance of observation errors to bypass traditional bad data detection. Then, a method based on extended distributed state estimation (EDSE) is proposed to detect TFDI in smart grids. The smart grid is decomposed into several subsystems, exploiting graph partition algorithms. Each subsystem is extended outward to include the adjacent buses and tie lines, generating the extended subsystem. The chi-square test is applied to detect the false data in each extended subsystem. Through decomposition, the false data stand out distinctively from normal observation errors and the detection sensitivity is increased. Extensive TFDI attack cases are simulated in the Institute of Electrical and Electronics Engineers (IEEE) 14-, 39-, 118- and 300-bus systems. Simulation results show that the detection precision of the EDSE-based method is much higher than that of the traditional method, while the proposed method significantly reduces the associated computational costs.
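
    The per-subsystem detection step reduces to a chi-square test on the sum of squared normalized measurement residuals; the sketch below uses made-up residuals and degrees of freedom.

```python
# Chi-square bad-data test on state-estimation residuals (toy numbers).
import numpy as np
from scipy.stats import chi2

r = np.array([0.8, -1.1, 0.3, 2.9, -0.5])   # normalized residuals (z-scores)
dof = r.size - 2                            # measurements minus estimated states
J = float(np.sum(r**2))                     # test statistic

threshold = chi2.ppf(0.95, dof)             # 5% false-alarm level
print("J =", J, "threshold =", threshold,
      "-> bad data" if J > threshold else "-> clean")
```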

  6. A Network Reconfiguration Method Considering Data Uncertainties in Smart Distribution Networks

    Directory of Open Access Journals (Sweden)

    Ke-yan Liu

    2017-05-01

    Full Text Available This work presents a method for distribution network reconfiguration with simultaneous consideration of distributed generation (DG) allocation. The uncertainties of load fluctuation before the network reconfiguration are also considered. Three optimization objectives, including minimal line loss cost, minimum Expected Energy Not Supplied, and minimum switch operation cost, are investigated. The multi-objective optimization problem is further transformed into a single-objective optimization problem by utilizing weighting factors. The proposed network reconfiguration method includes two periods. The first period is to create a feasible topology network by using binary particle swarm optimization (BPSO). Then the DG allocation problem is solved by utilizing sensitivity analysis and a Harmony Search algorithm (HSA). Meanwhile, interval analysis is applied to deal with the uncertainties of load and device parameters. Test cases are studied using the standard IEEE 33-bus and PG&E 69-bus systems. Different scenarios and comparisons are analyzed in the experiments. The results show the applicability of the proposed method. The performance analysis of the proposed method is also investigated. The computational results indicate that the proposed network reconfiguration algorithm is feasible.

  7. A Fast Method to Predict Distributions of Binary Black Hole Masses Based on Gaussian Process Regression

    Science.gov (United States)

    Yun, Yuqi; Zevin, Michael; Sampson, Laura; Kalogera, Vassiliki

    2017-01-01

    With more observations from LIGO in the upcoming years, we will be able to construct an observed mass distribution of black holes to compare with binary evolution simulations. This will allow us to investigate the physics of binary evolution, such as the effects of common envelope efficiency and wind strength, or the properties of the population, such as the initial mass function. However, binary evolution codes become computationally expensive when running large populations of binaries over a multi-dimensional grid of input parameters, and may simulate accurately only for a limited combination of input parameter values. Therefore we developed a fast machine-learning method that utilizes a Gaussian Mixture Model (GMM) and Gaussian Process (GP) regression, which together can predict distributions over the entire parameter space based on a limited number of simulated models. Furthermore, Gaussian Process regression naturally provides interpolation errors in addition to interpolation means, which could provide a means of targeting the most uncertain regions of parameter space for running further simulations. We also present a case study applying this new method to predicting chirp mass distributions for binary black hole systems (BBHs) in Milky Way-like galaxies of different metallicities.
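
    The regression step can be sketched with scikit-learn: train a Gaussian process on per-model summaries over the input-parameter grid and predict, with uncertainty, at unsimulated points. The metallicity grid and chirp-mass summaries below are synthetic placeholders, and a single scalar summary stands in for the full GMM-described distribution.

```python
# GP regression over a metallicity grid with predictive uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

Z = np.array([[0.0001], [0.001], [0.005], [0.01], [0.02]])   # metallicity grid
mean_mchirp = np.array([28.0, 24.5, 18.0, 13.5, 9.0])        # simulated summaries

gp = GaussianProcessRegressor(RBF(length_scale=0.005) + WhiteKernel(1e-2),
                              normalize_y=True).fit(Z, mean_mchirp)
mu, sd = gp.predict(np.array([[0.003]]), return_std=True)    # unsimulated point
print(f"predicted mean chirp mass {mu[0]:.1f} +/- {sd[0]:.1f} Msun")
```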

  8. An Analytical Method for Determining the Load Distribution of Single-Column Multibolt Connection

    Directory of Open Access Journals (Sweden)

    Nirut Konkong

    2017-01-01

    Full Text Available The purpose of this research was to investigate the effect of geometric variables on the bolt load distributions of a cold-formed steel bolt connection. The study was conducted using an experimental test, finite element analysis, and an analytical method. The experimental study was performed using single-lap shear testing of a concentrically loaded bolt connection fabricated from G550 cold-formed steel. Finite element analysis with shell elements was used to model the cold-formed steel plate while solid elements were used to model the bolt fastener for the purpose of studying the structural behavior of the bolt connections. Material nonlinearities, contact problems, and a geometric nonlinearity procedure were used to predict the failure behavior of the bolt connections. The analytical method was generated using the spring model. A new bolt-plate interaction stiffness was proposed and verified against the experiment and the finite element model. It was applied to examine the effect of geometric variables on the single-column multibolt connection. The effects of varying the bolt diameter, plate thickness, and plate thickness ratio (t2/t1) on the bolt load distribution were studied. The results of the parametric study showed that the t2/t1 ratio controlled the efficiency of the bolt load distribution more than the other parameters studied.

  9. Optimization of axial enrichment distribution for BWR fuels using scoping libraries and block coordinate descent method

    Energy Technology Data Exchange (ETDEWEB)

    Tung, Wu-Hsiung, E-mail: wstong@iner.gov.tw; Lee, Tien-Tso; Kuo, Weng-Sheng; Yaur, Shung-Jung

    2017-03-15

    Highlights: • An optimization method for axial enrichment distribution in a BWR fuel was developed. • The block coordinate descent method is employed to search for the optimal solution. • Scoping libraries are used to reduce computational effort. • The optimization search space consists of enrichment difference parameters. • The capability of the method to find the optimal solution is demonstrated. - Abstract: An optimization method has been developed to search for the optimal axial enrichment distribution in a fuel assembly for a boiling water reactor core. The optimization method features: (1) employing the block coordinate descent method to find the optimal solution in the space of enrichment difference parameters, (2) using scoping libraries to reduce the amount of CASMO-4 calculation, and (3) integrating a core critical constraint into the objective function that is used to quantify the quality of an axial enrichment design. The objective function consists of the weighted sum of core parameters such as shutdown margin and critical power ratio. The core parameters are evaluated by using SIMULATE-3, and the cross section data required for the SIMULATE-3 calculation are generated by using CASMO-4 and scoping libraries. The application of the method to a 4-segment fuel design (with the highest allowable segment enrichment relaxed to 5%) demonstrated that the method can obtain an axial enrichment design with improved thermal limit ratios and objective function value while satisfying the core design constraints and core critical requirement through the use of an objective function. The use of scoping libraries effectively reduced the number of CASMO-4 calculations, from 85 to 24, in the 4-segment optimization case. An exhaustive search was performed to examine the capability of the method in finding the optimal solution for a 4-segment fuel design. The results show that the method found a solution very close to the optimum obtained by the exhaustive search. The number of
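
    A generic block-coordinate-descent skeleton is sketched below: optimize one variable (block) at a time while the others are frozen, sweeping until convergence. The toy quadratic objective stands in for the SIMULATE-3-based objective function used in the paper.

```python
# Block coordinate descent on a toy surrogate objective.
import numpy as np
from scipy.optimize import minimize_scalar

def objective(x):
    return (x[0] - 1)**2 + (x[1] + 2)**2 + 0.5 * x[0] * x[1]   # stand-in objective

x = np.zeros(2)
for sweep in range(20):
    for i in range(x.size):                  # optimize one coordinate block at a time
        res = minimize_scalar(lambda v: objective(np.r_[x[:i], v, x[i+1:]]))
        x[i] = res.x
print(x, objective(x))
```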

  10. Nevada Test Site Radionuclide Inventory and Distribution Program: Report No. 2. Areas 2 and 4

    Energy Technology Data Exchange (ETDEWEB)

    McArthur, R.D.; Kordas, J.F.

    1985-09-01

    Radionuclide activity was measured by in situ spectrometry at 349 locations in Areas 2 and 4 of the Nevada Test Site. The data were analyzed by kriging and other methods to estimate the total inventory and distribution of six man-made radionuclides that were present in measurable amounts. Isotope ratios in soil samples were then used to infer the inventories of three other radionuclides. The estimated inventories were: ²⁴¹Am, 8 curies; ²³⁸Pu, 18 curies; ²³⁹,²⁴⁰Pu, 51 curies; ⁶⁰Co, 7 curies; ¹³⁷Cs, 34 curies; ⁹⁰Sr, 71 curies; ¹⁵²Eu, 35 curies; ¹⁵⁴Eu, 6 curies; and ¹⁵⁵Eu, 3 curies.

  11. Contraceptive method-mix and family planning program in Vietnam.

    Science.gov (United States)

    Hardjanti, K

    1995-01-01

    In Vietnam between 1989 and 1993, the modern contraceptive prevalence rate stagnated at 38%. In 1984, the government implemented economic renovation (Doi Moi). This closed agricultural cooperatives which had supported commune health centers. Health workers received either low or no wages, resulting in low morale, absenteeism, and moving to the private sector or agriculture. Most women began using the IUD because it was low cost and easy to monitor, provided long-term protection against pregnancy, and there was a limited supply of oral contraceptives (OCs) and condoms. Condom use fell from 13% in 1984 to 1.4% in 1993. More than 80% of contraceptive users used the IUD. The IUD is not appropriate for many women because of health problems: 60-70% of pregnant women and 80% of parturient women have anemia, 40-60% of women have reproductive tract infections, and sexually transmitted diseases are rising. Vietnam's Prime Minister and the Communist Party are committed to expanding the range of the contraceptive method-mix and choice. Limited method choice is especially a problem in rural areas. It increases the abortion rate. About 38% of abortions supplant modern and traditional family planning methods. Improper counseling, insufficient knowledge, and low promotion of OCs account for the low use of OCs. Inferior quality, aversion by couples, and inaccessibility in most rural areas limit condom use. Women's fear and husbands' objections outweigh the government's promotion of sterilization. Providers have limited comprehensive, accurate, and current knowledge of contraceptives. Health service facilities are concentrated in urban and semiurban areas. The quality of care in rural areas, where there is no clean water supply, is inferior. An annual target used to forecast contraceptive needs risks contraceptive stocks expiring during storage and/or disruptions in supply to users. Consecutive actions to eliminate constraints to use of other methods, developing a community level service

  12. Quantifying Uncertainties from Presence Data Sampling Methods for Species Distribution Modeling: Focused on Vegetation.

    Science.gov (United States)

    Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.

    2016-12-01

    The impact of climate change has been observed throughout the globe. Ecosystems experience rapid changes such as vegetation shift and species extinction. In this context, the Species Distribution Model (SDM) is one of the popular methods to project the impact of climate change on ecosystems. SDM is based on the niche of a certain species, which means that presence point data are essential to find the biological niche of that species. To run an SDM for plants, certain characteristics of vegetation must be considered. Normally, remote sensing techniques are used to produce vegetation data over large areas. In other words, the exact points of presence data carry high uncertainties, as presence data sets are selected from polygon and raster datasets. Thus, sampling methods for modeling vegetation presence data should be carefully selected. In this study, we used three different sampling methods for selecting vegetation presence data: random sampling, stratified sampling, and site-index-based sampling. We used the R package BIOMOD2 to assess uncertainty from modeling. At the same time, we included BioCLIM variables and other environmental variables as input data. Despite differences among the 10 SDMs, the sampling methods showed differences in ROC values: the random sampling method showed the lowest ROC value while the site-index-based sampling method showed the highest. As a result of this study, the uncertainties from presence data sampling methods and SDM can be quantified.

  13. A Communication Based Islanding Detection Method for Photovoltaic Distributed Generation Systems

    Directory of Open Access Journals (Sweden)

    Gökay Bayrak

    2014-01-01

    Full Text Available PV-based distributed generation (DG) systems must meet certain electrical connection standards when connected to an electrical grid. The most important of these conditions is the unplanned islanding condition. Islanding is a very dangerous condition because it can damage the PV system and related electrical systems, and working personnel are put at risk during islanding. In this application study, a new communication-based islanding detection method is introduced for grid-tied PV systems. A real-time controller was developed with LabVIEW for detecting the islanding condition. The developed method is a hybrid method which combines the effective aspects of communication-based and passive methods. The results obtained from the proposed real-time islanding detection method show that it is reliable, robust, and independent of load and inverter. The nondetection zone (NDZ) is almost zero and the islanding detection time is approximately 1-2 cycles, as indicated in the experimental results; this response time is well within the IEEE 929-2000 standard. The proposed method is effective and presents a realistic solution to islanding, so it could easily be implemented in grid-tied PV systems and used in real system applications.

  14. Uranium distribution in Baikal sediments using SSNTD method for paleoclimate reconstruction

    CERN Document Server

    Zhmodik, S M; Nemirovskaya, N A; Zhatnuev, N S

    1999-01-01

    First data on the local distribution of uranium in a core of Lake Baikal floor sediments (Academician Ridge, VER-95-2, St 3 BC, 53 deg. 113'12'N/108 deg. 25'01'E) are presented in this paper. They have been obtained using (n,f)-radiography. Various forms of U-occurrence in the floor sediments are shown, i.e. evenly disseminated uranium associated with clayey and diatomaceous components, and micro- and macroinclusions of uranium-bearing minerals - microlocations with uranium content 10-50 times higher than the U-concentrations associated with the clayey and diatomaceous components. Relative and absolute U-concentrations can be determined for every mineral. Signs of periodicity of various orders in the U-distribution in the core of the Lake Baikal floor sediments have been found. Using the (n,f)-radiography method to study Baikal floor sediments permits the gathering of new information that can be used in paleoclimate reconstruction.

  15. A practical method for in-situ thickness determination using energy distribution of beta particles

    Energy Technology Data Exchange (ETDEWEB)

    Yalcin, S., E-mail: syalcin@kastamonu.edu.tr [Kastamonu University, Education Faculty, 37200 Kastamonu (Turkey); Gurler, O. [Physics Department, Faculty of Arts and Sciences, Uludag University, Gorukle Campus, 16059 Bursa (Turkey); Gundogdu, O. [Kocaeli University, Umuttepe Campus, 41380 Kocaeli (Turkey); Bradley, D.A. [CNRP, Department of Physics, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford GU2 7XH (United Kingdom)

    2012-01-15

    This paper discusses a method to determine the thickness of an absorber using the energy distribution of beta particles. An empirical relationship was obtained between the absorber thickness and the energy distribution of the beta particles transmitted through it. The thickness of a polyethylene radioactive source cover was determined by exploiting this relationship, which has largely been left unexploited, allowing us to determine the in-situ cover thickness of beta sources in a fast, cheap and non-destructive way. - Highlights: • A practical, in-situ determination of unknown cover thickness. • Cheap and readily available compared to other techniques. • Based on the beta energy spectrum.

  16. Enhanced Local Grid Voltage Support Method for High Penetration of Distributed Generators

    DEFF Research Database (Denmark)

    Demirok, Erhan; Sera, Dezso; Rodriguez, Pedro

    2011-01-01

    Grid voltage rise and thermal loading of network components are the most remarkable barriers to allowing a high number of distributed generator (DG) connections on the medium voltage (MV) and low voltage (LV) electricity networks. The other barriers such as grid power quality (harmonics, voltage...... unbalance, flicker etc.) and network protection mechanisms can be figured out once the maximum DG connection capacity of the network is reached. In this paper, the additional reactive power reserve of inverter-interfaced DGs is exploited to lower the grid voltage level by means of a location-adaptive Q(U) droop...... function. The proposed method aims to achieve fewer grid voltage violations, so that more DG connections on the electricity distribution networks can be allowed....

  17. Dataset for Testing Contamination Source Identification Methods for Water Distribution Networks

    Science.gov (United States)

    This dataset includes the results of a simulation study using the source inversion techniques available in the Water Security Toolkit. The data was created to test the different techniques for accuracy, specificity, false positive rate, and false negative rate. The tests examined different parameters including measurement error, modeling error, injection characteristics, time horizon, network size, and sensor placement. The water distribution system network models that were used in the study are also included in the dataset. This dataset is associated with the following publication: Seth, A., K. Klise, J. Siirola, T. Haxton, and C. Laird. Testing Contamination Source Identification Methods for Water Distribution Networks. Journal of Environmental Division, Proceedings of the American Society of Civil Engineers. ASCE, Reston, VA, USA (2016).

  18. Teaching methods and surgical training in North American graduate periodontics programs: exploring the landscape.

    Science.gov (United States)

    Ghiabi, Edmond; Taylor, K Lynn

    2010-06-01

    This project aimed at documenting the surgical training curricula offered by North American graduate periodontics programs. A survey consisting of questions on the teaching methods employed and the content of the surgical training program was mailed to the directors of all fifty-eight graduate periodontics programs in Canada and the United States. The chi-square test was used to assess whether the residents' clinical experience was significantly (P<0.05) related to the teaching methods employed in areas such as periodontal plastic procedures, hard tissue grafts, and implants. Furthermore, residents in programs offering a structured preclinical component performed significantly more procedures (P=0.012) using lasers than those in programs not offering a structured preclinical program. Devising new and innovative teaching methods is a clear avenue for future development in North American graduate periodontics programs.

  19. [Monitoring microbiological safety of small systems of water distribution. Comparison of two sampling programs in a town in central Italy].

    Science.gov (United States)

    Papini, Paolo; Faustini, Annunziata; Manganello, Rosa; Borzacchi, Giancarlo; Spera, Domenico; Perucci, Carlo A

    2005-01-01

    To determine the appropriate frequency of sampling in small water distribution systems, we compared two sampling programs based on different assumptions about the statistical distribution of coliform counts. We carried out the two sampling programs to monitor the water distribution system in a town in Central Italy between July and September 1992; the Poisson distribution assumption implied 4 water samples, while the negative binomial distribution assumption implied 21 samples. Coliform organisms were used as indicators of water safety. The network consisted of two pipe rings and two wells fed by the same water source. The number of summer customers varied considerably, from 3,000 to 20,000. The mean density was 2.33 coliforms/100 ml (sd = 5.29) for 21 samples and 3 coliforms/100 ml (sd = 6) for four samples. However, the hypothesis of homogeneity was rejected (p < 0.05). For this network, determining the sample size according to the heterogeneity hypothesis strengthens the statement that the water is drinkable compared with the homogeneity assumption.

  20. Liquid rocket combustion computer model with distributed energy release. DER computer program documentation and user's guide, volume 1

    Science.gov (United States)

    Combs, L. P.

    1974-01-01

    A computer program for analyzing rocket engine performance was developed. The program is concerned with the formation, distribution, flow, and combustion of liquid sprays and combustion product gases in conventional rocket combustion chambers. The capabilities of the program to determine the combustion characteristics of the rocket engine are described. Sample data code sheets show the correct sequence and formats for variable values and include notes concerning options to bypass the input of certain data. A separate list defines the variables and indicates their required dimensions.

  1. Minimally important change determined by a visual method integrating an anchor-based and a distribution-based approach.

    NARCIS (Netherlands)

    de Vet, H.C.W.; Ostelo, R.W.J.G.; Terwee, C.B.; van der Roer, N.; Knol, D.L.; Beckerman, H.; Boers, M.; Bouter, L.M.

    2007-01-01

    Background: Minimally important changes (MIC) in scores help interpret results from health status instruments. Various distribution-based and anchor-based approaches have been proposed to assess MIC. Objectives: To describe and apply a visual method, called the anchor-based MIC distribution method, which integrates the anchor-based and the distribution-based approaches.

  2. Temperature Programmed Reduction/Oxidation (TPR/TPO) Methods

    Science.gov (United States)

    Gervasini, Antonella

    The redox properties of metal oxides impart to them a peculiar catalytic activity, which is exploited in oxidation and reduction reactions of high applicative importance. The extent of oxidation/reduction of a given metal oxide can be measured by thermal methods that have become very popular: TPR and TPO analyses. By successive reduction and oxidation experiments (TPR-TPO cycles), it is possible to assess the reversible redox ability of a given oxide in view of its use as a catalyst. The two methods are presented here, with an explanation of how kinetic analysis can be exploited to derive quantitative information on the reduction/oxidation of the oxide. Examples of selected metal oxides with well-established redox properties that have been used in catalytic processes are shown.

  3. Acquisition Program Problem Detection Using Text Mining Methods

    Science.gov (United States)

    2012-03-01

    this method into their practices (Berry & Kogan, 2010). Latent Semantic Analysis (LSA), Probabilistic Latent Semantic Analysis (PLSA), Latent... also known as Latent Semantic Indexing, uses a series of three matrices (document eigenvector, eigenvalue, and term eigenvector) to approximate the... EAC: Estimate at Complete; EVM: Earned Value Management; HTML: Hyper Text Markup Language; LDA: Latent Dirichlet Allocation; LSA: Latent Semantic Analysis
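
    The three-matrix decomposition described in this snippet is, in standard terms, the truncated singular value decomposition of the term-document matrix. A minimal numpy sketch of LSA under that reading (the matrix is illustrative, not from the report):

      import numpy as np

      A = np.array([[2., 0., 1.],      # rows: terms
                    [0., 3., 1.],      # columns: documents
                    [1., 1., 0.],
                    [0., 2., 2.]])
      # U holds term vectors, S the singular values, Vt the document vectors
      U, S, Vt = np.linalg.svd(A, full_matrices=False)
      k = 2                            # retained latent semantic dimensions
      A_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]   # rank-k approximation of A
      print(np.round(A_k, 2))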

  4. ADVANCING THE STUDY OF VIOLENCE AGAINST WOMEN USING MIXED METHODS: INTEGRATING QUALITATIVE METHODS INTO A QUANTITATIVE RESEARCH PROGRAM

    Science.gov (United States)

    Testa, Maria; Livingston, Jennifer A.; VanZile-Tamsen, Carol

    2011-01-01

    A mixed methods approach, combining quantitative with qualitative data methods and analysis, offers a promising means of advancing the study of violence. Integrating semi-structured interviews and qualitative analysis into a quantitative program of research on women’s sexual victimization has resulted in valuable scientific insight and generation of novel hypotheses for testing. This mixed methods approach is described and recommendations for integrating qualitative data into quantitative research are provided. PMID:21307032

  5. Advancing the study of violence against women using mixed methods: integrating qualitative methods into a quantitative research program.

    Science.gov (United States)

    Testa, Maria; Livingston, Jennifer A; VanZile-Tamsen, Carol

    2011-02-01

    A mixed methods approach, combining quantitative with qualitative data methods and analysis, offers a promising means of advancing the study of violence. Integrating semi-structured interviews and qualitative analysis into a quantitative program of research on women's sexual victimization has resulted in valuable scientific insight and generation of novel hypotheses for testing. This mixed methods approach is described and recommendations for integrating qualitative data into quantitative research are provided.

  6. ADVANCING THE STUDY OF VIOLENCE AGAINST WOMEN USING MIXED METHODS: INTEGRATING QUALITATIVE METHODS INTO A QUANTITATIVE RESEARCH PROGRAM

    OpenAIRE

    Testa, Maria; Livingston, Jennifer A.; VanZile-Tamsen, Carol

    2011-01-01

    A mixed methods approach, combining quantitative with qualitative data methods and analysis, offers a promising means of advancing the study of violence. Integrating semi-structured interviews and qualitative analysis into a quantitative program of research on women’s sexual victimization has resulted in valuable scientific insight and generation of novel hypotheses for testing. This mixed methods approach is described and recommendations for integrating qualitative data into quantitative research are provided.

  7. Comparison of Statistical Methods for Detector Testing Programs

    Energy Technology Data Exchange (ETDEWEB)

    Rennie, John Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Abhold, Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-14

    A typical goal for any detector testing program is to ascertain not only the performance of the detector systems under test, but also the confidence that systems accepted using that testing program’s acceptance criteria will exceed a minimum acceptable performance (which is usually expressed as the minimum acceptable success probability, p). A similar problem often arises in statistics, where we would like to ascertain the fraction, p, of a population of items that possess a property that may take one of two possible values. Typically, the problem is approached by drawing a fixed sample of size n, with the number of items out of n that possess the desired property, x, being termed successes. The sample mean gives an estimate of the population mean p ≈ x/n, although usually it is desirable to accompany such an estimate with a statement concerning the range within which p may fall and the confidence associated with that range. Procedures for establishing such ranges and confidence limits are described in detail by Clopper, Brown, and Agresti for two-sided symmetric confidence intervals.
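
    A minimal sketch of the exact (Clopper-Pearson) two-sided interval for p mentioned above, using the standard beta-quantile formulation (the trial counts are illustrative):

      from scipy.stats import beta

      def clopper_pearson(x, n, alpha=0.05):
          # Two-sided (1 - alpha) confidence interval for p, given x successes in n trials
          lo = 0.0 if x == 0 else beta.ppf(alpha / 2, x, n - x + 1)
          hi = 1.0 if x == n else beta.ppf(1 - alpha / 2, x + 1, n - x)
          return lo, hi

      print(clopper_pearson(x=98, n=100))   # e.g. 98 detections in 100 trials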

  8. A Distributed Learning Method for ℓ 1 -Regularized Kernel Machine over Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xinrong Ji

    2016-07-01

    Full Text Available In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ 1 norm regularization ( ℓ 1 -regularized is investigated, and a novel distributed learning algorithm for the ℓ 1 -regularized kernel minimum mean squared error (KMSE machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparse rate of model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of the sparse rate of model and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN test platform further shows the advantages of the proposed algorithm with respect to communication cost.
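
    As a point of reference for the batch learning method the authors compare against, here is a centralized sketch of l1-regularized kernel least squares solved by iterative soft-thresholding (ISTA); the kernel, data, and step rule are standard choices, not the paper's distributed algorithm:

      import numpy as np

      def rbf_kernel(X, gamma=1.0):
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def ista_l1_kmse(K, y, lam=0.1, iters=500):
          # minimize 0.5 * ||K a - y||^2 + lam * ||a||_1 over coefficients a
          L = np.linalg.norm(K, 2) ** 2          # Lipschitz constant of the gradient
          a = np.zeros(K.shape[0])
          for _ in range(iters):
              g = K.T @ (K @ a - y)              # gradient of the quadratic term
              z = a - g / L
              a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
          return a                               # sparse coefficient vector

      rng = np.random.default_rng(0)
      X = rng.uniform(-3, 3, size=(60, 1))
      y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)
      a = ista_l1_kmse(rbf_kernel(X), y)
      print("nonzero coefficients:", np.count_nonzero(np.abs(a) > 1e-8), "of", a.size)

    The sparsity induced by the l1 penalty is what makes such a model cheap to transmit between neighboring nodes.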

  9. DLTAP: A Network-efficient Scheduling Method for Distributed Deep Learning Workload in Containerized Cluster Environment

    Directory of Open Access Journals (Sweden)

    Qiao Wei

    2017-01-01

    Full Text Available Deep neural networks (DNNs) have recently yielded strong results on a range of applications. Training these DNNs using a cluster of commodity machines is a promising approach, since training is time consuming and compute-intensive. Furthermore, putting DNN tasks into containers of clusters would enable broader and easier deployment of DNN-based algorithms. Toward this end, this paper addresses the problem of scheduling DNN tasks in a containerized cluster environment. Efficiently scheduling data-parallel computation jobs like DNNs over containerized clusters is critical for job performance, system throughput, and resource utilization, and it becomes even more challenging with complex workloads. We propose a scheduling method called Deep Learning Task Allocation Priority (DLTAP), which makes scheduling decisions in a distributed manner; each decision takes the aggregation degree of parameter server tasks and worker tasks into account, in particular to reduce cross-node network transmission traffic and, correspondingly, decrease the DNN training time. We evaluate the DLTAP scheduling method using a state-of-the-art distributed DNN training framework on 3 benchmarks. The results show that the proposed method reduces cross-node network traffic by 12% on average and decreases the DNN training time even on a cluster of low-end servers.

  10. Estimation of the Binomial Distribution Parameters Using the Method of Moments and Its Asymptotic Properties

    Directory of Open Access Journals (Sweden)

    A.N. Safiullina

    2016-06-01

    Full Text Available The problem of estimating the parameters m and p of the binomial distribution from a sample of fixed size n with the help of the method of moments is considered in this paper. Using the delta method, the joint asymptotic normality of the estimates is established and the parameters of the limit distribution are calculated. The moment estimates of the parameters m and p do not possess finite means and variances. An explanation of the asymptotic normality parameters in terms of the accuracy properties of the estimates is offered. On the basis of statistical modelling data, the accuracy properties of the delta-method estimates and of their modifications, which do not have the initial defects of the estimates (values of the estimate of p below zero and values of m smaller than the greatest value in the sample), are explored. An example of estimating the parameters m and p from observations of the number of responses in an experiment with a nervous synapse (m is the number of vesicles with acetylcholine in the vicinity of the synapse, p is the probability of acetylcholine release by each vesicle) is provided.
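
    A sketch of the moment estimators themselves: equating the sample mean and variance to mp and mp(1 - p) gives p = 1 - s2/xbar and m = xbar/p (illustrative data; the modified estimators discussed in the paper are not implemented):

      import numpy as np

      def binomial_moments(x):
          xbar, s2 = np.mean(x), np.var(x)
          p_hat = 1.0 - s2 / xbar     # can fall below 0, as the abstract notes
          m_hat = xbar / p_hat        # can fall below max(x), as the abstract notes
          return m_hat, p_hat

      rng = np.random.default_rng(1)
      sample = rng.binomial(n=20, p=0.3, size=500)
      print(binomial_moments(sample))   # close to (20, 0.3) for this sample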

  11. A Method for Automatic Runtime Verification of Automata-Based Programs

    OpenAIRE

    Oleg, Stepanov; Anatoly, Shalyto

    2008-01-01

    Currently, Model Checking is the only practically used method for verification of automata-based programs. However, current implementations of this method only allow verification of simple automata systems. We suggest using a different approach, runtime verification, for verification of automata systems. We discuss the advantages and disadvantages of this approach, propose a method for automatic verification of automata-based programs that uses this approach, and conduct experimental performance studies.

  12. Playable Serious Games for Studying and Programming Computational STEM and Informatics Applications of Distributed and Parallel Computer Architectures

    Science.gov (United States)

    Amenyo, John-Thones

    2012-01-01

    Carefully engineered playable games can serve as vehicles for students and practitioners to learn and explore the programming of advanced computer architectures to execute applications, such as high performance computing (HPC) and complex, inter-networked, distributed systems. The article presents families of playable games that are grounded in…

  13. Food Deserts in Leon County, FL: Disparate Distribution of Supplemental Nutrition Assistance Program-Accepting Stores by Neighborhood Characteristics

    Science.gov (United States)

    Rigby, Samantha; Leone, Angela F.; Kim, Hwahwan; Betterley, Connie; Johnson, Mary Ann; Kurtz, Hilda; Lee, Jung Sun

    2012-01-01

    Objective: Examine whether neighborhood characteristics of racial composition, income, and rurality were related to distribution of Supplemental Nutrition Assistance Program (SNAP)-accepting stores in Leon County, Florida. Design: Cross-sectional; neighborhood and food store data collected in 2008. Setting and Participants: Forty-eight census…

  14. Mapping post-disturbance stand age distribution in Siberian larch forest based on a novel method

    Science.gov (United States)

    Chen, D.; Loboda, T. V.; Krylov, A.; Potapov, P.

    2014-12-01

    The Siberian larch forest, which accounts for nearly 20% of the global boreal forest biome, is unique, important, yet significantly understudied. These deciduous needleleaf forests with a single species dominance over a large continuous area are not found anywhere except the extreme continental zones of Siberia and the Russian Far East. Most of these forests are located in remote and sparsely populated areas and, therefore, little is known about spatial variability of their structure and dynamics. Wall-to-wall repeated observations of this area are available only since the 2000s. Previously, we developed methods for reconstruction of stand-age distribution from a sample of 1980-2000 disturbances in Landsat TM and ETM+ imagery. However, availability of those images in Siberian larch forests is particularly limited. Built upon the hypothesis that the spectral characteristics of the disturbed forest in the region change with time consistently, this paper proposes a novel method utilizing the newly released Global Forest Change (GFC) 2000-2012 dataset. We exploit the data-rich era of annual forest disturbance samples identified between 2000 and 2012 in the Siberian larch forest by the GFC dataset to build a robust training set of spectral signatures from regrowing larch forests as they appear in Landsat imagery in 2012. The extracted statistics are ingested into a random forest, which predicts the approximate stand age for every forested pixel in the circa 2000 composite. After merging the estimated stand age distribution for 1989-2000 with the observed disturbance records for 2001-2012, a gap-free 30 m resolution 24-year long record of stand age distribution is obtained. A preliminary accuracy assessment against the Advanced Very High Resolution Radiometer (AVHRR) burned area product suggested satisfactory performance of the proposed method.
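
    A toy sketch of the supervised step described above, with synthetic spectra standing in for the Landsat composite and the GFC-derived training labels (neither the bands nor the age trend are real):

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(42)
      n = 2000
      age = rng.integers(1, 25, size=n)            # years since disturbance
      # fabricate three bands whose reflectance drifts with stand age
      bands = np.column_stack([
          0.5 - 0.010 * age + 0.02 * rng.standard_normal(n),
          0.3 + 0.008 * age + 0.02 * rng.standard_normal(n),
          0.2 + 0.005 * age + 0.02 * rng.standard_normal(n)])

      Xtr, Xte, ytr, yte = train_test_split(bands, age, random_state=0)
      model = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
      print("mean abs. error (yr):", np.mean(np.abs(model.predict(Xte) - yte)))

    A regressor is used here for compactness; a classifier over discrete age classes would work similarly for per-pixel prediction.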

  15. Tau lepton production and decays: perspective of multi-dimensional distributions and Monte Carlo methods

    Science.gov (United States)

    Was, Z.

    2017-06-01

    The status of the τ lepton decay Monte Carlo generator TAUOLA, its main applications, and recent developments are reviewed. It is underlined that, in recent efforts on the development of new hadronic currents, the multi-dimensional nature of the distributions of the experimental data must be treated with great care: the lesson from comparisons and fits to the BaBar and Belle data is recalled. It was found that, as in the past at the time of comparisons with CLEO and ALEPH data, proper fitting to as detailed as possible a representation of the experimental data is essential for the appropriate development of models of τ decay dynamics. This multi-dimensional nature of the distributions is also important for observables where τ leptons are used to constrain experimental data. In the later part of the presentation, the use of the TAUOLA program for the phenomenology of W, Z, and H decays at the LHC is addressed, in particular in the context of Higgs boson parity measurements. Some new results relevant for QED lepton pair emission are mentioned as well.

  16. Pore size distribution of bioresorbable films using a 3-D diffusion NMR method.

    Science.gov (United States)

    Benjamini, Dan; Elsner, Jonathan J; Zilberman, Meital; Nevo, Uri

    2014-06-01

    Pore size distribution (PSD) within porous biomaterials is an important microstructural feature for assessing their biocompatibility, longevity and drug release kinetics. Scanning electron microscopy (SEM) is the most common method used to obtain the PSD of soft biomaterials. The method is highly invasive and user dependent, since it requires fracturing of the sample and then considers only the small portion that the user has acquired in the image. In the current study we present a novel nuclear magnetic resonance (NMR) method as an alternative for estimating PSD in soft porous materials. This noninvasive 3-D diffusion NMR method considers the entire volume of the specimen and eliminates the user's need to choose a specific field of view. Moreover, NMR does not involve exposure to ionizing radiation and can potentially have preclinical and clinical uses. The method was applied to four porous 50/50 poly(dl-lactic-co-glycolic acid) bioresorbable films with different porosities, which were created using the freeze-drying of inverted emulsions technique. We show that the proposed NMR method is able to address the main limitations associated with SEM-based PSD estimations by being non-destructive, depicting the full volume of the specimens and not being dependent on the magnification factor. Upon comparison, both methods yielded a similar PSD in the smaller pore size range (1-25 μm), while the NMR-based method provided additional information on the larger pores (25-50 μm).

  17. A double-distribution-function lattice Boltzmann method for bed-load sediment transport

    OpenAIRE

    Cai, Li; Xu, Wenjing; Luo, Xiaoyu

    2017-01-01

    The governing equations of bed-load sediment transport are the shallow water equations and the Exner equation. To embody the advantages of the lattice Boltzmann method (e.g., simplicity, efficiency), the three-velocity (D1Q3) and five-velocity (D1Q5) double-distribution-function lattice Boltzmann models (DDF-LBMs), which can provide the numerical solution for one-dimensional bed-load sediment transport, are proposed here based on the quasi-steady approach. The so-called DDF-LBM means that two distribution functions are used, one for the water flow and one for the sediment transport.

  18. Improved Extreme-Scenario Extraction Method For The Economic Dispatch Of Active Distribution Networks

    DEFF Research Database (Denmark)

    Zhang, Yipu; Ai, Xiaomeng; Fang, Jiakun

    2017-01-01

    Optimization techniques with good characterization of the uncertainties in modern power systems enable system operators to trade off between security and sustainability. This paper proposes an extreme-scenario-extraction-based robust optimization method for the economic dispatch (ED) of an active distribution network with renewables. The extreme scenarios are selected from the historical data using the improved minimum volume enclosing ellipsoid (MVEE) algorithm to guarantee the security of system operation while avoiding frequent switching of the transformer tap. It is theoretically proved...
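
    A sketch of the MVEE building block (Khachiyan's algorithm, a common way to compute it; the paper's improved variant is not reproduced here). Historical scenarios whose weights u_i remain non-negligible lie on the ellipsoid boundary and are natural candidates for extreme scenarios:

      import numpy as np

      def mvee(P, tol=1e-5):
          # P: one scenario per row; returns A, c with (x-c)' A (x-c) <= 1
          n, d = P.shape
          Q = np.vstack([P.T, np.ones(n)])        # lift points by one dimension
          u = np.full(n, 1.0 / n)
          err = tol + 1.0
          while err > tol:
              X = Q @ np.diag(u) @ Q.T
              M = np.einsum('ij,ji->i', Q.T, np.linalg.solve(X, Q))
              j = int(np.argmax(M))
              step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
              u_new = (1.0 - step) * u
              u_new[j] += step
              err = np.linalg.norm(u_new - u)
              u = u_new
          c = P.T @ u                              # ellipsoid center
          A = np.linalg.inv(P.T @ np.diag(u) @ P - np.outer(c, c)) / d
          return A, c, u

      pts = np.random.default_rng(0).standard_normal((200, 2))
      A, c, u = mvee(pts)
      print("candidate extreme scenarios:", np.flatnonzero(u > 1e-3).size)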

  19. U.S. Geological Survey Gap Analysis Program Species Distribution Models

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — GAP distribution models represent the areas where species are predicted to occur based on habitat associations. GAP distribution models are the spatial arrangement...

  20. Integrating design science theory and methods to improve the development and evaluation of health communication programs.

    Science.gov (United States)

    Neuhauser, Linda; Kreps, Gary L

    2014-12-01

    Traditional communication theory and research methods provide valuable guidance about designing and evaluating health communication programs. However, efforts to use health communication programs to educate, motivate, and support people to adopt healthy behaviors often fail to meet the desired goals. One reason for this failure is that health promotion issues are complex, changeable, and highly related to the specific needs and contexts of the intended audiences. It is a daunting challenge to effectively influence health behaviors, particularly culturally learned and reinforced behaviors concerning lifestyle factors related to diet, exercise, and substance (such as alcohol and tobacco) use. Too often, program development and evaluation are not adequately linked to provide rapid feedback to health communication program developers so that important revisions can be made to design the most relevant and personally motivating health communication programs for specific audiences. Design science theory and methods commonly used in engineering, computer science, and other fields can address such program and evaluation weaknesses. Design science researchers study human-created programs using tightly connected build-and-evaluate loops in which they use intensive participatory methods to understand problems and develop solutions concurrently and throughout the duration of the program. Such thinking and strategies are especially relevant to address complex health communication issues. In this article, the authors explore the history, scientific foundation, methods, and applications of design science and its potential to enhance health communication programs and their evaluation.

  1. Optimal Operation of Distribution Electronic Power Transformer Using Linear Quadratic Regulator Method

    Directory of Open Access Journals (Sweden)

    Mohammad Hosein Rezaei

    2011-10-01

    Full Text Available Transformers perform many functions, such as voltage transformation, isolation, and noise decoupling. They are indispensable components in electric power distribution systems. However, at low frequencies (50 Hz), they are among the heaviest and most expensive equipment in an electrical distribution system. Nowadays, electronic power transformers are used instead of conventional power transformers; they perform voltage transformation and power delivery in the power system by means of power electronic converters. In this paper, the structure of the distribution electronic power transformer (DEPT) is analyzed, and attention is then paid to the design of a linear-quadratic regulator (LQR) with integral action to improve the dynamic performance of the DEPT under voltage unbalance, voltage sags, voltage harmonics, and voltage flicker. The presented control strategy is simulated with MATLAB/SIMULINK. In addition, the results, in terms of dc-link reference voltage and input and output voltages, clearly show that a better dynamic performance can be achieved by using the LQR method when compared to other techniques.
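
    A generic continuous-time LQR sketch of the kind of gain design named above (the A, B, Q, R matrices are illustrative, not the DEPT model, and the paper's integral action, which would augment the state, is omitted):

      import numpy as np
      from scipy.linalg import solve_continuous_are

      A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # toy plant dynamics
      B = np.array([[0.0], [1.0]])
      Q = np.diag([10.0, 1.0])                   # state weighting
      R = np.array([[0.1]])                      # control effort weighting

      P = solve_continuous_are(A, B, Q, R)       # algebraic Riccati solution
      K = np.linalg.solve(R, B.T @ P)            # optimal gain, u = -K x
      print("LQR gain K:", K)
      print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))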

  2. Investigating the effect of tablet thickness and punch curvature on density distribution using finite elements method.

    Science.gov (United States)

    Diarra, Harona; Mazel, Vincent; Busignies, Virginie; Tchoreloff, Pierre

    2015-09-30

    The finite elements method was used to study the influence of tablet thickness and punch curvature on the density distribution inside convex faced (CF) tablets. The modeling of the process was conducted on 2 pharmaceutical excipients (anhydrous calcium phosphate and microcrystalline cellulose) using the Drucker-Prager Cap model in Abaqus(®) software. The parameters of the model were obtained from experimental tests. Several punch shapes based on industrial standards were used: a flat-faced (FF) punch and 3 convex faced (CF) punches (8R11, 8R8 and 8R6) with a diameter of 8 mm. Different tablet thicknesses were studied at a constant compression force. The simulation of the compaction of CF tablets with increasing thicknesses showed an important change in the density distribution inside the tablet. For smaller thicknesses, low density zones are located toward the center. The density is not uniform inside CF tablets, and the center of the 2 faces appears with low density, whereas the distribution inside FF tablets is almost independent of the tablet thickness. These results showed that FF and CF tablets, even obtained at the same compression force, do not have the same density at the center of the compact. As a consequence, differences in tensile strength, as measured by diametral compression, are expected. This was confirmed by experimental tests.

  3. Development of Power Supply System with Distributed Generators using Parallel Processing Method

    Science.gov (United States)

    Hirose, Kenichi; Takeda, Takashi; Okui, Yoshiaki; Yukita, Kazuto; Goto, Yasuyuki; Ichiyanagi, Katsuhiro; Matsumura, Toshiro

    This paper describes a novel power system which consists of distributed energy resources (DER) with a static switch at the point of common coupling. Usage of the static switch with parallel processing control is a new application of the line-interactive type of uninterruptible power supply (UPS). In recent years, various design, operation, and control methods have been studied in order to find more effective ways to utilize renewable energy and to reduce environmental impact. One feature of the proposed power system is that it can interconnect to the existing utility grid without interruption. Electrical power distribution to the loads can be continued seamlessly between the interconnected and isolated operating states. The novel power system has other benefits, such as higher efficiency, demand side management, easy control of the power system inside, improved reliability of power distribution, and a minimum requirement of protection relays for grid interconnection. The proposed power system has been operated with actual loads of 20 kW on the campus of the Aichi Institute of Technology since 2007.

  4. Development of a Live-Line Robot and a New Construction Method at Distributed Lines

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Seung Ho; Kim, Chang Hoi; Seo, Yong Chil; Shin, Ho Cheol; Lee, Sung Uk; Kim, Seung Jo

    2008-10-15

    The developed live-line robot is installed on the commercial live-line working truck used in live-line works. The optimal installation position and method were analyzed by graphical simulation. The dual-arm robot was designed to have a maximized workspace through an optimal configuration simulation in which the accessibility and interference of the robot with respect to the distribution line environment were checked. The shape, material, and size of the links were designed through mechanical analysis, and the prototype was made based on the analysis. The developed robot is insulated by an upper arm made of FRP material. The insulation performance of the robot was verified by a leakage current test and a flashover voltage test. A tele-operation control system was developed so that an operator can easily manipulate the robot and actively manage the various conditions of the distribution line. The functions of the robot were tested in a mockup. The insulation performance was tested and verified at KERI (Korea Electrotechnology Research Institute). Finally, the performance of the developed robot was tested on a distribution line at the Gochang electric power test center.

  5. Testing methods of pressure distribution of bra cups on breasts soft tissue

    Science.gov (United States)

    Musilova, B.; Nemcokova, R.; Svoboda, M.

    2017-10-01

    The objective of this study is to evaluate testing methods for the pressure distribution of bra cups on breast soft tissue, using systems which do not affect the space between the wearer's body surface and the bra cups and thus do not influence the geometry of the measured body surface, in order to investigate the functional performance of brassieres. Two measuring systems were used for evaluating pressure comfort: 1) the pressure distribution of a bra worn for 20 minutes on women's breasts was directly measured using a pressure sensor with a dielectric of elastic polyurethane foam in the bra cups; twelve points were measured in the bra cups; 2) simultaneously, the change of temperature at the same points of the bra was tested with the help of a noncontact system, a thermal imager. The results indicate that both systems can identify different pressure distributions at different points. Bras of the same size and design, with cups made from the same material and defined with the help of the same standardised body dimensions (bust and underbust), can produce different compression values on differently shaped breast soft tissue.

  6. Extending the absorbing boundary method to fit dwell-time distributions of molecular motors with complex kinetic pathways.

    Science.gov (United States)

    Liao, Jung-Chi; Spudich, James A; Parker, David; Delp, Scott L

    2007-02-27

    Dwell-time distributions, waiting-time distributions, and distributions of pause durations are widely reported for molecular motors based on single-molecule biophysical experiments. These distributions provide important information concerning the functional mechanisms of enzymes and their underlying kinetic and mechanical processes. We have extended the absorbing boundary method to simulate dwell-time distributions of complex kinetic schemes, which include cyclic, branching, and reverse transitions typically observed in molecular motors. This extended absorbing boundary method allows global fitting of dwell-time distributions for enzymes subject to different experimental conditions. We applied the extended absorbing boundary method to experimental dwell-time distributions of single-headed myosin V, and were able to use a single kinetic scheme to fit dwell-time distributions observed under different ligand concentrations and different directions of optical trap forces. The ability to use a single kinetic scheme to fit dwell-time distributions arising from a variety of experimental conditions is important for identifying a mechanochemical model of a molecular motor. This efficient method can be used to study dwell-time distributions for a broad class of molecular motors, including kinesin, RNA polymerase, helicase, F(1) ATPase, and to examine conformational dynamics of other enzymes such as ion channels.
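
    For contrast with the authors' semi-analytical approach, a brute-force way to obtain a dwell-time distribution from a kinetic scheme is stochastic (Gillespie) simulation to an absorbing state; the toy three-state scheme and rates below are invented:

      import numpy as np

      rates = {(0, 1): 5.0, (1, 0): 2.0, (1, 2): 1.0}   # s^-1; state 2 absorbs

      def dwell_time(rng):
          state, t = 0, 0.0
          while state != 2:
              outs = [(s2, r) for (s1, s2), r in rates.items() if s1 == state]
              total = sum(r for _, r in outs)
              t += rng.exponential(1.0 / total)          # waiting time in state
              probs = [r / total for _, r in outs]
              state = outs[rng.choice(len(outs), p=probs)][0]
          return t

      rng = np.random.default_rng(0)
      times = np.array([dwell_time(rng) for _ in range(20000)])
      print("mean dwell time: %.3f s" % times.mean())
      # a histogram of `times` approximates the dwell-time distribution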

  7. An introduction to fuzzy linear programming problems theory, methods and applications

    CERN Document Server

    Kaur, Jagdeep

    2016-01-01

    The book presents a snapshot of the state of the art in the field of fully fuzzy linear programming. The main focus is on showing current methods for finding the fuzzy optimal solution of fully fuzzy linear programming problems in which all the parameters and decision variables are represented by non-negative fuzzy numbers. It presents new methods developed by the authors, as well as existing methods developed by others, and their application to real-world problems, including fuzzy transportation problems. Moreover, it compares the outcomes of the different methods and discusses their advantages/disadvantages. As the first work to collect in one place the most important methods for solving fuzzy linear programming problems, the book represents a useful reference guide for students and researchers, providing them with the necessary theoretical and practical knowledge to deal with linear programming problems under uncertainty.

  8. Mapping Power Law Distributions in Digital Health Social Networks: Methods, Interpretations, and Practical Implications.

    Science.gov (United States)

    van Mierlo, Trevor; Hyatt, Douglas; Ching, Andrew T

    2015-06-25

    Social networks are common in digital health. A new stream of research is beginning to investigate the mechanisms of digital health social networks (DHSNs), how they are structured, how they function, and how their growth can be nurtured and managed. DHSNs increase in value when additional content is added, and the structure of networks may resemble the characteristics of power laws. Power laws are contrary to traditional Gaussian averages in that they demonstrate correlated phenomena. The objective of this study is to investigate whether the distribution frequency in four DHSNs can be characterized as following a power law. A second objective is to describe the method used to determine the comparison. Data from four DHSNs—Alcohol Help Center (AHC), Depression Center (DC), Panic Center (PC), and Stop Smoking Center (SSC)—were compared to power law distributions. To assist future researchers and managers, the 5-step methodology used to analyze and compare datasets is described. All four DHSNs were found to have right-skewed distributions, indicating the data were not normally distributed. When power trend lines were added to each frequency distribution, R(2) values indicated that, to a very high degree, the variance in post frequencies can be explained by actor rank (AHC .962, DC .975, PC .969, SSC .95). Spearman correlations provided further indication of the strength and statistical significance of the relationship (AHC .987, DC .967, PC .983, SSC .993, P<.001). This is the first study to investigate power distributions across multiple DHSNs, each addressing a unique condition. Results indicate that despite vast differences in theme, content, and length of existence, DHSNs follow properties of power laws. The structure of DHSNs is important as it gives insight to researchers and managers into the nature and mechanisms of network functionality. The 5-step process undertaken to compare actor contribution patterns can be replicated in networks that are managed by
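
    A sketch of the two checks reported above, run on synthetic data (a Zipf sample standing in for post counts; not the DHSN data): the R^2 of a log-log fit of post frequency against actor rank, and the Spearman correlation:

      import numpy as np
      from scipy.stats import spearmanr

      posts = np.sort(np.random.default_rng(0).zipf(a=2.0, size=500))[::-1]
      rank = np.arange(1, posts.size + 1)

      # linear fit in log-log space; the slope is the power-law exponent
      slope, intercept = np.polyfit(np.log(rank), np.log(posts), 1)
      pred = slope * np.log(rank) + intercept
      ss_res = np.sum((np.log(posts) - pred) ** 2)
      ss_tot = np.sum((np.log(posts) - np.log(posts).mean()) ** 2)
      print("exponent: %.2f, R^2: %.3f" % (slope, 1.0 - ss_res / ss_tot))
      rho, p = spearmanr(rank, posts)
      print("Spearman rho: %.3f (P = %.2g)" % (rho, p))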

  9. Bubble size distribution in acoustic droplet vaporization via dissolution using an ultrasound wide-beam method.

    Science.gov (United States)

    Xu, Shanshan; Zong, Yujin; Li, Wusong; Zhang, Siyuan; Wan, Mingxi

    2014-05-01

    Performance and efficiency of numerous cavitation enhanced applications in a wide range of areas depend on the cavitation bubble size distribution. Therefore, cavitation bubble size estimation would be beneficial for biological and industrial applications that rely on cavitation. In this study, an acoustic method using a wide beam with low pressure is proposed to acquire the time intensity curve of the dissolution process for the cavitation bubble population and then determine the bubble size distribution. Dissolution of the cavitation bubbles in saline and in phase-shift nanodroplet emulsion diluted with undegassed or degassed saline was obtained to quantify the effects of pulse duration (PD) and acoustic power (AP) or peak negative pressure (PNP) of focused ultrasound on the size distribution of induced cavitation bubbles. It was found that an increase of PD will induce large bubbles while AP had only a little effect on the mean bubble size in saline. It was also recognized that longer PD and higher PNP increase the proportions of large and small bubbles, respectively, in suspensions of phase-shift nanodroplet emulsions. Moreover, degassing of the suspension tended to bring about a smaller mean bubble size than in the undegassed suspension. In addition, condensation of the cavitation bubbles produced in the diluted suspension of phase-shift nanodroplet emulsion was included in the calculation to discuss the effect of bubble condensation on the bubble size estimation in acoustic droplet vaporization. It was shown that calculation without considering condensation might underestimate the mean bubble size, and calculation considering condensation might have more influence over the size distribution of small bubbles, but less effect on that of large bubbles. Without or with considering bubble condensation, the accessible minimum bubble radius was 0.4 or 1.7 μm and the step size was 0.3 μm. This acoustic technique provides an approach to estimate the size distribution of cavitation bubbles.

  10. Dietary Changes by Expanded Food and Nutrition Education Program (EFNEP) Graduates Are Independent of Program Delivery Method.

    Science.gov (United States)

    Luccia, Barbara H. D.; Kunkel, Mary E.; Cason, Katherine L.

    2003-01-01

    Expanded Food and Nutrition Education Program graduates (n=1,141) who received either individual (21.3%), group (76.2%), or combined (2.5%) instruction were assessed. Independent of method, participants significantly improved the number of servings consumed from grains, vegetables, dairy, and meat and meat alternatives; total calories consumed;…

  11. Using sequential self-calibration method to identify conductivity distribution: Conditioning on tracer test data

    Science.gov (United States)

    Hu, B.X.; He, C.

    2008-01-01

    An iterative inverse method, the sequential self-calibration method, is developed for mapping spatial distribution of a hydraulic conductivity field by conditioning on nonreactive tracer breakthrough curves. A streamline-based, semi-analytical simulator is adopted to simulate solute transport in a heterogeneous aquifer. The simulation is used as the forward modeling step. In this study, the hydraulic conductivity is assumed to be a deterministic or random variable. Within the framework of the streamline-based simulator, the efficient semi-analytical method is used to calculate sensitivity coefficients of the solute concentration with respect to the hydraulic conductivity variation. The calculated sensitivities account for spatial correlations between the solute concentration and parameters. The performance of the inverse method is assessed by two synthetic tracer tests conducted in an aquifer with a distinct spatial pattern of heterogeneity. The study results indicate that the developed iterative inverse method is able to identify and reproduce the large-scale heterogeneity pattern of the aquifer given appropriate observation wells in these synthetic cases. © International Association for Mathematical Geology 2008.

  12. Wide binaries in Tycho-Gaia: search method and the distribution of orbital separations

    Science.gov (United States)

    Andrews, Jeff J.; Chanamé, Julio; Agüeros, Marcel A.

    2017-11-01

    We mine the Tycho-Gaia astrometric solution (TGAS) catalogue for wide stellar binaries by matching positions, proper motions and astrometric parallaxes. We separate genuine binaries from unassociated stellar pairs through a Bayesian formulation that includes correlated uncertainties in the proper motions and parallaxes. Rather than relying on assumptions about the structure of the Galaxy, we calculate Bayesian priors and likelihoods based on the nature of Keplerian orbits and the TGAS catalogue itself. We calibrate our method using radial velocity measurements and obtain 7108 high-confidence candidate wide binaries with projected separations s ≲ 1 pc. The normalization of this distribution suggests that at least 0.7 per cent of TGAS stars have an associated, distant TGAS companion in a wide binary. We demonstrate that Gaia's astrometry is precise enough that it can detect projected orbital velocities in wide binaries with orbital periods as large as 10^6 yr. For pairs with s ≲ 4 × 10^4 au, characterization of random alignments indicates our contamination to be 5-10 per cent. For s ≲ 5 × 10^3 au, our distribution is consistent with Öpik's law. At larger separations, the distribution is steeper and consistent with a power-law P(s) ∝ s^-1.6; there is no evidence in our data of any bimodality in this distribution for s ≲ 1 pc. Using radial velocities, we demonstrate that at large separations, i.e. of order s ˜ 1 pc and beyond, any potential sample of genuine wide binaries in TGAS cannot be easily distinguished from ionized former wide binaries, moving groups or contamination from randomly aligned stars.

  13. Monitoring system and methods for a distributed and recoverable digital control system

    Science.gov (United States)

    Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

    2010-01-01

    A monitoring system and methods are provided for a distributed and recoverable digital control system. The monitoring system generally comprises two independent monitoring planes within the control system. The first monitoring plane is internal to the computing units in the control system, and the second monitoring plane is external to the computing units. The internal first monitoring plane includes two in-line monitors. The first internal monitor is a self-checking, lock-step-processing monitor with integrated rapid recovery capability. The second internal monitor includes one or more reasonableness monitors, which compare actual effector position with commanded effector position. The external second monitoring plane includes two monitors. The first external monitor includes a pre-recovery computing monitor, and the second external monitor includes a post-recovery computing monitor. Various methods for implementing the monitoring functions are also disclosed.

  14. Method and system for redundancy management of distributed and recoverable digital control system

    Science.gov (United States)

    Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

    2012-01-01

    A method and system for redundancy management is provided for a distributed and recoverable digital control system. The method uses unique redundancy management techniques to achieve recovery and restoration of redundant elements to full operation in an asynchronous environment. The system includes a first computing unit comprising a pair of redundant computational lanes for generating redundant control commands. One or more internal monitors detect data errors in the control commands, and provide a recovery trigger to the first computing unit. A second redundant computing unit provides the same features as the first computing unit. A first actuator control unit is configured to provide blending and monitoring of the control commands from the first and second computing units, and to provide a recovery trigger to each of the first and second computing units. A second actuator control unit provides the same features as the first actuator control unit.

  15. Frequent major errors in antimicrobial susceptibility testing of bacterial strains distributed under the Deutsches Krebsforschungszentrum Quality Assurance Program.

    Science.gov (United States)

    Boot, R

    2012-07-01

    The Quality Assurance Program (QAP) of the Deutsches Krebsforschungszentrum (DKFZ) was a proficiency testing system developed to service the laboratory animal discipline. The QAP comprised the distribution of bacterial strains from various species of animals for identification to species level and antibiotic susceptibility testing (AST). Identification capabilities were below acceptable standards. This study evaluated AST results using the DKFZ compilations of test results for all bacterial strains, showing the number of participants reporting the strain as resistant (R), sensitive (S) or intermediate susceptible (I) to each antibiotic substance used. Due to the lack of information about the methods used, it was assumed that what the majority of the participants reported (R or S) was the correct test result and that an opposite result was a major error (ME). MEs occurred in 1375 of 14,258 (9.7%) test results, and the ME% ranged from 0% to 23.2% per bacterial group-agent group combination. Considerable variation in MEs was found within groups of bacteria and within groups of agents. In addition to poor performance in proper species classification, the quality of AST in laboratory animal diagnostic laboratories seems far below the standards considered acceptable in human diagnostic microbiology.

  16. Testing methods for using high-resolution satellite imagery to monitor polar bear abundance and distribution

    Science.gov (United States)

    LaRue, Michelle A.; Stapleton, Seth P.; Porter, Claire; Atkinson, Stephen N.; Atwood, Todd C.; Dyck, Markus; Lecomte, Nicolas

    2015-01-01

    High-resolution satellite imagery is a promising tool for providing coarse information about polar species abundance and distribution, but current applications are limited. With polar bears (Ursus maritimus), the technique has only proven effective on landscapes with little topographic relief that are devoid of snow and ice, and time-consuming manual review of imagery is required to identify bears. Here, we evaluated mechanisms to further develop methods for satellite imagery by examining data from Rowley Island, Canada. We attempted to automate and expedite detection via a supervised spectral classification and image differencing to expedite image review. We also assessed what proportion of a region should be sampled to obtain reliable estimates of density and abundance. Although the spectral signature of polar bears differed from nontarget objects, these differences were insufficient to yield useful results via a supervised classification process. Conversely, automated image differencing—or subtracting one image from another—correctly identified nearly 90% of polar bear locations. This technique, however, also yielded false positives, suggesting that manual review will still be required to confirm polar bear locations. On Rowley Island, bear distribution approximated a Poisson distribution across a range of plot sizes, and resampling suggests that sampling >50% of the site facilitates reliable estimation of density (CV <15%). Satellite imagery may be an effective monitoring tool in certain areas, but large-scale applications remain limited because of the challenges in automation and the limited environments in which the method can be effectively applied. Improvements in resolution may expand opportunities for its future uses.
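
    A toy sketch of the image-differencing step that flagged most bear locations (synthetic arrays, not satellite scenes; the threshold is arbitrary):

      import numpy as np

      rng = np.random.default_rng(3)
      reference = rng.normal(100.0, 5.0, size=(500, 500))  # earlier, bear-free scene
      target = reference + rng.normal(0.0, 1.0, size=reference.shape)
      target[240:243, 310:313] += 40.0                     # bright bear-sized anomaly

      diff = np.abs(target - reference)
      candidates = np.argwhere(diff > 20.0)                # pixels for manual review
      print("candidate pixels:", len(candidates))

    As the abstract notes, such differencing still yields false positives in practice, so the flagged pixels only narrow the manual review rather than replace it.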

  17. A Review of Discrete Element Method (DEM) Particle Shapes and Size Distributions for Lunar Soil

    Science.gov (United States)

    Lane, John E.; Metzger, Philip T.; Wilkinson, R. Allen

    2010-01-01

    As part of ongoing efforts to develop models of lunar soil mechanics, this report reviews two topics that are important to discrete element method (DEM) modeling the behavior of soils (such as lunar soils): (1) methods of modeling particle shapes and (2) analytical representations of particle size distribution. The choice of particle shape complexity is driven primarily by opposing tradeoffs with total number of particles, computer memory, and total simulation computer processing time. The choice is also dependent on available DEM software capabilities. For example, PFC2D/PFC3D and EDEM support clustering of spheres; MIMES incorporates superquadric particle shapes; and BLOKS3D provides polyhedra shapes. Most commercial and custom DEM software supports some type of complex particle shape beyond the standard sphere. Convex polyhedra, clusters of spheres and single parametric particle shapes such as the ellipsoid, polyellipsoid, and superquadric, are all motivated by the desire to introduce asymmetry into the particle shape, as well as edges and corners, in order to better simulate actual granular particle shapes and behavior. An empirical particle size distribution (PSD) formula is shown to fit desert sand data from Bagnold. Particle size data of JSC-1a obtained from a fine particle analyzer at the NASA Kennedy Space Center is also fitted to a similar empirical PSD function.

  18. [Method of correcting sensitivity nonuniformity using gaussian distribution on 3.0 Tesla abdominal MRI].

    Science.gov (United States)

    Hayashi, Norio; Miyati, Tosiaki; Takanaga, Masako; Ohno, Naoki; Hamaguchi, Takashi; Kozaka, Kazuto; Sanada, Shigeru; Yamamoto, Tomoyuki; Matsui, Osamu

    2011-01-01

    In the direction perpendicular to the arrangement of the phased array coil used in parallel magnetic resonance imaging (MRI), sensitivity falls significantly. Moreover, in 3.0 tesla (3T) abdominal MRI, image quality is reduced by changes in relaxation time, reinforcement of the magnetic susceptibility effect, etc. In a 3T MRI, which has a high resonant frequency, the signal from the depths (central part) of the trunk is reduced. SCIC, a sensitivity correction process, provides inadequate correction, in that edges are emphasized and the central part is insufficiently corrected. Therefore, we considered nonuniform sensitivity correction processing for 3T abdominal MR images using a Gaussian distribution. The correction processing consisted of the following steps: 1) the center of gravity of the human body region in the abdominal MR image was calculated; 2) a correction coefficient map was created around the center of gravity using a Gaussian distribution; 3) the sensitivity-corrected image was created from the correction coefficient map and the original image. When the image was processed using the Gaussian correction, the uniformity calculated by the NEMA method was improved significantly compared with the original phantom image. In a visual evaluation by radiologists, the uniformity was also improved significantly by the Gaussian correction processing. Because it homogeneously improves abdominal images taken with 3T MRI, the Gaussian correction processing is considered to be a very useful technique.
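
    A rough numpy sketch of the three steps (synthetic image, invented sigma and correction depth; not the authors' calibrated parameters):

      import numpy as np

      def gaussian_correction(img, sigma=80.0, depth=0.5):
          ys, xs = np.indices(img.shape)
          body = img > img.mean()                       # crude body segmentation
          cy, cx = ys[body].mean(), xs[body].mean()     # step 1: center of gravity
          gauss = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
          coeff = 1.0 - depth * gauss                   # step 2: coefficient map
          return img / coeff                            # step 3: boost central signal

      img = np.random.default_rng(0).uniform(50.0, 100.0, size=(256, 256))
      print(gaussian_correction(img).shape)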

  19. Regional distribution of methionine adenosyltransferase in rat brain as measured by a rapid radiochemical method

    Energy Technology Data Exchange (ETDEWEB)

    Hiemke, C.; Ghraf, R.

    1981-09-01

    The distribution of methionine adenosyltransferase (MAT) in the CNS of the rat was studied by use of a rapid, sensitive and specific radiochemical method. The S-adenosyl-(methyl-¹⁴C)-L-methionine ((¹⁴C)SAM) generated by adenosyl transfer from ATP to (methyl-¹⁴C)-L-methionine is quantitated by use of a SAM-consuming transmethylation reaction. Catechol O-methyltransferase (COMT), prepared from rat liver, transfers the methyl-¹⁴C group of SAM to 3,4-dihydroxybenzoic acid. The ¹⁴C-labelled methylation products, vanillic acid and isovanillic acid, are separated from unreacted methionine by solvent extraction and quantitated by liquid scintillation counting. Compared to other methods of MAT determination, which include separation of the generated SAM from methionine by ion-exchange chromatography, the assay described exhibited the same high degree of specificity and sensitivity but proved to be less time consuming. MAT activity was found to be uniformly distributed between the various brain regions and the pituitary gland of adult male rats. In the pineal gland the enzyme activity is about tenfold higher.

  20. A practical simulation method to calculate sample size of group sequential trials for time-to-event data under exponential and Weibull distribution.

    Directory of Open Access Journals (Sweden)

    Zhiwei Jiang

    Full Text Available Group sequential design has been widely applied in clinical trials in the past few decades. Sample size estimation is a vital concern of sponsors and investigators. Especially in survival group sequential trials, it is a thorny question because of the ambiguous distributional form, censored data, and differing definitions of information time. A practical and easy-to-use simulation-based method is proposed for multi-stage two-arm survival group sequential design in this article, and its SAS program is available. Besides the exponential distribution, which is usually assumed for survival data, the Weibull distribution is considered here. The incorporation of the probability of discontinuation in the simulation leads to a more accurate estimate. The assessment indexes calculated in the simulation are helpful in determining the number and timing of the interim analyses. The use of the method in survival group sequential trials is illustrated, and the effects of a varied shape parameter on the sample size under the Weibull distribution are explored by means of an example. According to the simulation results, a method to estimate the shape parameter of the Weibull distribution is proposed based on the median survival time of the test drug and the hazard ratio, which are prespecified by the investigators and other participants. 10+ simulations are recommended to achieve a robust estimate of the sample size. Furthermore, the method is still applicable in adaptive design if the strategy of sample size scheme determination is adopted at the design stage or minor modifications to the program are made.
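
    A stripped-down sketch of the simulation idea for a single (final) analysis, with an explicit log-rank statistic; the parameters are illustrative, and the interim looks, discontinuation modeling, and SAS implementation of the paper are omitted:

      import numpy as np
      from scipy.stats import chi2

      def logrank_chi2(time, event, group):
          num, var = 0.0, 0.0
          for ti in np.unique(time[event == 1]):
              at_risk = time >= ti
              n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
              d = ((time == ti) & (event == 1)).sum()
              d1 = ((time == ti) & (event == 1) & (group == 1)).sum()
              if n > 1:
                  num += d1 - d * n1 / n
                  var += d * (n1 / n) * (1.0 - n1 / n) * (n - d) / (n - 1)
          return num ** 2 / var if var > 0 else 0.0

      def power(n_per_arm, shape=1.5, scale_c=12.0, hr=0.6, follow_up=24.0,
                sims=500, seed=0):
          rng = np.random.default_rng(seed)
          scale_t = scale_c * hr ** (-1.0 / shape)   # Weibull proportional hazards
          crit = chi2.ppf(0.95, df=1)
          group = np.repeat([0, 1], n_per_arm)
          hits = 0
          for _ in range(sims):
              t = np.concatenate([scale_c * rng.weibull(shape, n_per_arm),
                                  scale_t * rng.weibull(shape, n_per_arm)])
              event = (t <= follow_up).astype(int)   # administrative censoring
              if logrank_chi2(np.minimum(t, follow_up), event, group) > crit:
                  hits += 1
          return hits / sims

      print("estimated power:", power(n_per_arm=80))

    Raising n_per_arm until the estimated power reaches the target (e.g. 0.9) gives the simulation-based sample size.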

  1. A Practical Simulation Method to Calculate Sample Size of Group Sequential Trials for Time-to-Event Data under Exponential and Weibull Distribution

    Science.gov (United States)

    Jiang, Zhiwei; Wang, Ling; Li, Chanjuan; Xia, Jielai; Jia, Hongxia

    2012-01-01

    Group sequential design has been widely applied in clinical trials in the past few decades. Sample size estimation is a vital concern of sponsors and investigators. Especially in survival group sequential trials, it is a thorny question because of the ambiguous distributional form, censored data, and differing definitions of information time. A practical and easy-to-use simulation-based method is proposed for multi-stage two-arm survival group sequential design in the article, and its SAS program is available. Besides the exponential distribution, which is usually assumed for survival data, the Weibull distribution is considered here. The incorporation of the probability of discontinuation in the simulation leads to a more accurate estimate. The assessment indexes calculated in the simulation are helpful in determining the number and timing of the interim analyses. The use of the method in survival group sequential trials is illustrated, and the effects of a varied shape parameter on the sample size under the Weibull distribution are explored by employing an example. According to the simulation results, a method to estimate the shape parameter of the Weibull distribution is proposed based on the median survival time of the test drug and the hazard ratio, which are prespecified by the investigators and other participants. 10+ simulations are recommended to achieve a robust estimate of the sample size. Furthermore, the method is still applicable in adaptive design if the strategy of sample size scheme determination is adopted at the design stage or minor modifications to the program are made. PMID:22957040

  2. Comparing performances of clements, box-cox, Johnson methods with weibull distributions for assessing process capability

    Directory of Open Access Journals (Sweden)

    Ozlem Senvar

    2016-08-01

    Full Text Available Purpose: This study examines the Clements' Approach (CA), Box-Cox transformation (BCT), and Johnson transformation (JT) methods for process capability assessments of Weibull-distributed data with different parameters, to figure out the effects of tail behaviour on process capability and to compare their estimation performances in terms of accuracy and precision. Design/methodology/approach: The process performance index (PPI) Ppu is used for process capability analysis (PCA) because the comparisons are performed on generated Weibull data without subgroups. Box plots, descriptive statistics, the root-mean-square deviation (RMSD), used as a measure of error, and a radar chart are utilized together for evaluating the performances of the methods. In addition, the bias of the estimated values is as important as the efficiency measured by the mean square error; in this regard, Relative Bias (RB) and the Relative Root Mean Square Error (RRMSE) are also considered. Findings: The results reveal that the performance of a method depends on its capability to fit the tail behavior of the Weibull distribution and on the targeted values of the PPIs. It is observed that the effect of tail behavior is more significant when the process is more capable. Research limitations/implications: Some other methods, such as the Weighted Variance method, which also give good results, were also conducted. However, we later realized that including them would be confusing in terms of comparisons between the methods for consistent interpretations. Practical implications: The Weibull distribution covers a wide class of non-normal processes due to its capability to yield a variety of distinct curves based on its parameters. Weibull distributions are known to have significantly different tail behaviors, which greatly affect process capability. In quality and reliability applications, they are widely used for the analyses of failure data in order to understand how
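
    A sketch of the Box-Cox route to Ppu on generated Weibull data (illustrative shape, scale, and specification limit; assumes the fitted lambda is nonzero):

      import numpy as np
      from scipy.stats import boxcox

      rng = np.random.default_rng(0)
      data = rng.weibull(1.8, size=1000) * 10.0       # skewed process data
      usl = 25.0                                       # upper specification limit

      transformed, lam = boxcox(data)                  # lambda chosen by MLE
      usl_t = (usl ** lam - 1.0) / lam                 # transform USL the same way
      ppu = (usl_t - transformed.mean()) / (3.0 * transformed.std(ddof=1))
      print("lambda: %.3f  Ppu: %.3f" % (lam, ppu))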

  3. Creation of a Course in Computer Methods and Modeling for Undergraduate Earth Science Programs

    Science.gov (United States)

    Menking, K. M.; Dashnaw, J. M.

    2003-12-01

    In recent years computer modeling has gained importance in geological research as a means to generate and test hypotheses and to allow simulation of processes in places inaccessible to humans (e.g., outer core fluid dynamics), too slow to permit observation (e.g., erosionally-induced uplift of topography), or too large to facilitate construction of physical models (e.g., faulting on the San Andreas). Entire fields within the Earth sciences now exist in which computer modeling has become the core work of the discipline. Undergraduate geology/Earth science programs have been slow to adapt to this change, and computer science curricular offerings often do not meet geology students' needs. To address these problems, a course in Computer Methods and Modeling in the Earth Sciences is being developed at Vassar College. The course uses the STELLA iconographical box modeling software developed by High Performance Systems, Inc. to teach students the fundamentals of dynamical systems modeling and then builds on the knowledge students have constructed with STELLA to teach introductory computer programming in Fortran. Fully documented and debugged STELLA and Fortran models along with reading lists, answer keys, and course notes are being developed for distribution to anyone interested in teaching a course such as this. Modeling topics include U-Pb concordia/discordia dating techniques, the global phosphorus cycle, Earth's energy balance and temperature, the impact of climate change on a chain of lakes in eastern California, heat flow in permafrost, and flow of ice in glaciers by plastic deformation. The course has been taught twice at Vassar and has been enthusiastically received by students who reported not only that they enjoyed learning the process of modeling, but also that they had a newfound appreciation for the role of mathematics in geology and intended to enroll in more math courses in the future.

  4. Blinded Anonymization: a method for evaluating cancer prevention programs under restrictive data protection regulations.

    Science.gov (United States)

    Bartholomäus, Sebastian; Hense, Hans Werner; Heidinger, Oliver

    2015-01-01

    Evaluating cancer prevention programs requires collecting and linking data on a case-specific level from multiple sources of the healthcare system. One therefore has to comply with data protection regulations, which are restrictive in Germany and will likely become stricter in Europe in general. To facilitate the mortality evaluation of the German mammography screening program, with more than 10 million eligible women, we developed a method that does not require written individual consent and is compliant with existing privacy regulations. Our setup is composed of different data owners, a data collection center (DCC) and an evaluation center (EC). Each data owner uses dedicated software that preprocesses plaintext personal identifiers (IDAT) and plaintext evaluation data (EDAT) in such a way that only irreversibly encrypted record assignment numbers (RAN) and pre-aggregated, reversibly encrypted EDAT are transmitted to the DCC. The DCC uses the RANs to perform a probabilistic record linkage based on an established and evaluated algorithm. For potentially identifying attributes within the EDAT ('quasi-identifiers'), we developed a novel process, named 'blinded anonymization'. It allows selecting a specific generalization from the pre-processed and encrypted attribute aggregations, to create a new data set with assured k-anonymity, without using any plaintext information. The anonymized data are transferred to the EC, where the EDAT is decrypted and used for evaluation. Our concept was approved by German data protection authorities. We implemented a prototype and tested it with more than 1.5 million simulated records containing realistically distributed IDAT. The core processes worked well with regard to performance parameters. We created different generalizations and calculated the respective suppression rates. We discuss modalities, implications and limitations for large data sets in the cancer registry domain, as well as approaches for further
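
    The cryptographic details of the published protocol are not reproduced here, but the core selection step, choosing the least coarse generalization that achieves k-anonymity, can be sketched as follows; the keyed-hash RAN stand-in, the birth-year quasi-identifier, and the generalization ladder are illustrative assumptions.

      import hashlib
      from collections import Counter

      def record_assignment_number(idat: str, secret: str) -> str:
          # Keyed one-way hash standing in for the irreversible RAN encryption
          return hashlib.sha256((secret + idat.strip().lower()).encode()).hexdigest()

      def is_k_anonymous(records, quasi_ids, k):
          # Every combination of quasi-identifier values must occur at least k times
          combos = Counter(tuple(r[q] for q in quasi_ids) for r in records)
          return all(n >= k for n in combos.values())

      def band(width):
          # Generalize a birth-year quasi-identifier into bands of the given width
          def coarsen(r):
              r = dict(r)
              r["birth_year"] = r["birth_year"] // width * width
              return r
          return coarsen

      def select_generalization(records, ladder, quasi_ids, k):
          # Pick the least coarse generalization that achieves k-anonymity
          for label, coarsen in ladder:
              candidate = [coarsen(r) for r in records]
              if is_k_anonymous(candidate, quasi_ids, k):
                  return label, candidate
          return None, None

      ladder = [("exact year", band(1)), ("5-year band", band(5)), ("decade", band(10))]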

  5. A Comparison of Traditional Worksheet and Linear Programming Methods for Teaching Manure Application Planning.

    Science.gov (United States)

    Schmitt, M. A.; And Others

    1994-01-01

    Compares traditional manure application planning techniques calculated to meet agronomic nutrient needs on a field-by-field basis with plans developed using computer-assisted linear programming optimization methods. Linear programming provided the most economical and environmentally sound manure application strategy. (Contains 15 references.) (MDH)
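
    A toy version of such a linear program, shipping manure from two sources to three fields at minimum cost subject to supply limits and agronomic nitrogen requirements, could be set up with scipy as below; all costs, supplies and nutrient figures are invented, and phosphorus/potassium balances are ignored.

      import numpy as np
      from scipy.optimize import linprog

      # Decision variables: tons shipped from 2 sources to 3 fields (row-major)
      cost = np.array([4.0, 6.0, 9.0,      # $/ton, source 1 -> fields 1..3
                       5.0, 3.0, 7.0])     # $/ton, source 2 -> fields 1..3

      # Supply limits: total shipped from each source <= tons available
      A_ub = np.array([[1, 1, 1, 0, 0, 0],
                       [0, 0, 0, 1, 1, 1]])
      b_ub = np.array([120.0, 80.0])

      # Agronomic N requirement per field (lb N), at 10 lb N per ton of manure
      A_eq = np.array([[10, 0, 0, 10, 0, 0],
                       [0, 10, 0, 0, 10, 0],
                       [0, 0, 10, 0, 0, 10]])
      b_eq = np.array([600.0, 500.0, 400.0])

      res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
      print(res.x.reshape(2, 3))   # optimal shipping plan (tons)
      print(res.fun)               # minimum total cost ($)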

  6. Comparison of linear, mixed integer and non-linear programming methods in energy system dispatch modelling

    DEFF Research Database (Denmark)

    Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian

    2014-01-01

    differences and differences between the solution found by each optimisation method. One of the investigated approaches utilises LP (linear programming) for optimisation, one uses LP with binary operation constraints, while the third approach uses NLP (non-linear programming). The LP model is used...
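
    None of the paper's dispatch models are reproduced here, but the "LP with binary operation constraints" idea can be illustrated as a tiny unit-commitment MILP; the unit data are invented, and scipy >= 1.9 is assumed for milp.

      import numpy as np
      from scipy.optimize import milp, LinearConstraint, Bounds

      # Variables [p1, p2, u1, u2]: unit outputs (MW) and binary on/off states.
      # Objective: marginal cost ($/MWh) times output plus fixed running cost ($).
      c = np.array([30.0, 45.0, 200.0, 100.0])
      demand = 140.0

      cons = [
          LinearConstraint([[1, 1, 0, 0]], demand, demand),    # p1 + p2 = demand
          LinearConstraint([[1, 0, -100, 0]], -np.inf, 0),     # p1 <= 100 * u1
          LinearConstraint([[0, 1, 0, -120]], -np.inf, 0),     # p2 <= 120 * u2
          LinearConstraint([[1, 0, -20, 0]], 0, np.inf),       # p1 >= 20 * u1 (min load)
          LinearConstraint([[0, 1, 0, -30]], 0, np.inf),       # p2 >= 30 * u2 (min load)
      ]
      res = milp(c, constraints=cons, integrality=[0, 0, 1, 1],
                 bounds=Bounds([0, 0, 0, 0], [100, 120, 1, 1]))
      print(res.x, res.fun)   # here both units run: p1 = 100, p2 = 40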

  7. Methods for obtaining true particle size distributions from cross section measurements

    Energy Technology Data Exchange (ETDEWEB)

    Lord, Kristina Alyse [Iowa State Univ., Ames, IA (United States)

    2013-01-01

    Sectioning methods are frequently used to measure grain sizes in materials. These methods do not provide accurate grain sizes for two reasons. First, the sizes of features observed on random sections are always smaller than the true sizes of solid spherical objects, as noted by Wicksell [1]. This is because a section very rarely passes through the center of the solid spherical objects randomly dispersed throughout a material; the sizes of features observed on random sections are inversely related to the distance of the object's center from the section [1]. Second, on a plane section through the solid material, larger features are observed more frequently than smaller ones, because a section is more likely to come into contact with a large sphere than with a small one. As a result, it is necessary to find a method that accounts for these sources of inaccuracy and provides a correction factor for accurately determining true particle sizes. I present a method for deducing true grain size distributions from those determined from specimen cross sections, either by measurement of equivalent grain diameters or linear intercepts.
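
    Both effects are easy to demonstrate with a short Monte Carlo sketch (not the thesis's correction method itself): Wicksell's relation r = sqrt(R^2 - d^2) shrinks observed radii, and size-proportional hit probability over-represents large spheres.

      import numpy as np

      rng = np.random.default_rng(0)

      # Monodisperse spheres of radius R: a random plane at distance d (uniform
      # on [0, R]) from the center cuts a circle of radius r = sqrt(R^2 - d^2).
      R = 1.0
      d = rng.uniform(0, R, 100_000)
      r = np.sqrt(R**2 - d**2)
      print(f"mean observed / true radius = {r.mean():.3f}")   # ~0.785 = pi/4

      # Size bias: a plane hits a sphere with probability proportional to its
      # radius, so large spheres are over-represented on the section.
      true_radii = rng.uniform(0.2, 1.0, 100_000)
      hit = rng.uniform(0, 1, true_radii.size) < true_radii / true_radii.max()
      print(f"mean true radius {true_radii.mean():.3f} "
            f"vs mean sectioned-sphere radius {true_radii[hit].mean():.3f}")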

  8. Round-robin differential-phase-shift quantum key distribution with a passive decoy state method.

    Science.gov (United States)

    Liu, Li; Guo, Fen-Zhuo; Qin, Su-Juan; Wen, Qiao-Yan

    2017-02-13

    Recently, a new type of protocol named Round-robin differential-phase-shift quantum key distribution (RRDPS QKD) was proposed, in which security can be guaranteed without monitoring conventional signal disturbances. The active decoy state method can be used in this protocol to overcome the imperfections of the source, but it may open side channel attacks and break the security of QKD systems. In this paper, we apply the passive decoy state method to the RRDPS QKD protocol. Not only can more environmental disturbance be tolerated, but side channel attacks on the sources can also be overcome. Importantly, we derive a new key generation rate formula for our RRDPS protocol using passive decoy states and enhance the key generation rate. We also compare the performance of our RRDPS QKD with that using the active decoy state method and with the original RRDPS QKD without any decoy states. Numerical simulations show the performance improvement achieved by our new method.

  9. Practical method for radioactivity distribution analysis in small-animal PET cancer studies.

    Science.gov (United States)

    Slavine, Nikolai V; Antich, Peter P

    2008-12-01

    We present a practical method for radioactivity distribution analysis in small-animal tumors and organs using positron emission tomography imaging with a calibrated source of known activity and size in the field of view. We reconstruct the imaged mouse together with the source under the same conditions, using an iterative method, maximum-likelihood expectation-maximization with system modeling, capable of delivering high-resolution images. Corrections for the ratios of geometrical efficiencies, radioisotope decay in time, and photon attenuation are included in the algorithm. We demonstrate reconstruction results for the amount of radioactivity within the scanned mouse in a sample study of osteolytic and osteoblastic bone metastasis from prostate cancer xenografts. Data acquisition was performed on the small-animal PET system, which was tested with different radioactive sources, phantoms, and animals to achieve high sensitivity and spatial resolution. Because it uses high-resolution images to determine organ or tumor volume and radioactivity, our method offers the possibility of saving time and effort and avoiding the necessity of sacrificing animals. The method has utility for prognosis and quantitative analysis in small-animal cancer studies and will enhance the assessment of tumor growth characteristics, the identification of metastases, and potentially the determination of the effectiveness of cancer treatment. This technique could also be useful for organ radioactivity dosimetry studies.
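
    The reconstruction named here, maximum-likelihood expectation-maximization with system modeling, rests on a multiplicative update that can be sketched generically as below; the system matrix and the paper's corrections for decay, attenuation and geometric efficiency are not modeled.

      import numpy as np

      def mlem(A, y, n_iter=50):
          # Maximum-likelihood expectation-maximization for emission tomography.
          # A: (m, n) nonnegative system matrix, A[i, j] ~ probability that a
          #    decay in voxel j is detected in bin i; y: (m,) measured counts.
          x = np.ones(A.shape[1])              # flat nonnegative starting image
          sens = A.sum(axis=0)                 # per-voxel sensitivity, A^T 1
          for _ in range(n_iter):
              proj = A @ x                     # forward projection
              ratio = np.divide(y, proj, out=np.zeros_like(proj), where=proj > 0)
              x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # multiplicative update
          return x

      # With a calibrated source of known activity reconstructed in the same
      # field of view, image values can be scaled to absolute activity via the
      # ratio of the source's reconstructed value to its known activity.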

  10. Evaluating simplistic methods to understand current distributions and forecast distribution changes under climate change scenarios: An example with coypu (Myocastor coypus)

    Science.gov (United States)

    Jarnevich, Catherine S.; Young, Nicholas E; Sheffels, Trevor R.; Carter, Jacoby; Systma, Mark D.; Talbert, Colin

    2017-01-01

    Invasive species provide a unique opportunity to evaluate factors controlling biogeographic distributions; we can consider introduction success as an experiment testing suitability of environmental conditions. Predicting potential distributions of spreading species is not easy, and forecasting potential distributions with changing climate is even more difficult. Using the globally invasive coypu (Myocastor coypus [Molina, 1782]), we evaluate and compare the utility of a simplistic ecophysiologically based model and a correlative model to predict current and future distribution. The ecophysiological model was based on winter temperature relationships with nutria survival. We developed correlative statistical models using the Software for Assisted Habitat Modeling and biologically relevant climate data with a global extent. We applied the ecophysiologically based model to several global circulation model (GCM) predictions for mid-century. We used global coypu introduction data to evaluate these models and to explore a hypothesized physiological limitation, finding general agreement with known coypu distribution locally and globally and support for an upper thermal tolerance threshold. GCM-based results showed variability in predicted coypu distribution among GCMs, but general agreement of increasing suitable area in the USA. Our methods highlighted the dynamic nature of the edges of the coypu distribution due to climate non-equilibrium, and uncertainty associated with forecasting future distributions. Areas deemed suitable habitat, especially those on the edge of the current known range, could be used for early detection of the spread of coypu populations for management purposes. Combining approaches can be beneficial to predicting potential distributions of invasive species now and in the future and in exploring hypotheses of factors controlling distributions.
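
    A minimal sketch of an ecophysiological threshold model of this kind is shown below; the temperature grids, the assumed uniform warming, and the -5 deg C survival threshold are invented placeholders, not the study's fitted relationship.

      import numpy as np

      # Hypothetical mean winter temperatures (deg C) on a current climate grid
      current_t = np.array([[-12.0, -4.0, 1.0],
                            [ -6.0,  0.5, 4.0]])
      future_t = current_t + 2.5        # assumed uniform mid-century warming

      T_CRIT = -5.0                     # assumed lower winter-temperature survival limit
      print("suitable cells now:   ", int((current_t > T_CRIT).sum()))
      print("suitable cells future:", int((future_t > T_CRIT).sum()))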

  11. Evaluating simplistic methods to understand current distributions and forecast distribution changes under climate change scenarios: an example with coypu (Myocastor coypus)

    Directory of Open Access Journals (Sweden)

    Catherine S. Jarnevich

    2017-01-01

    Full Text Available Invasive species provide a unique opportunity to evaluate factors controlling biogeographic distributions; we can consider introduction success as an experiment testing suitability of environmental conditions. Predicting potential distributions of spreading species is not easy, and forecasting potential distributions with changing climate is even more difficult. Using the globally invasive coypu (Myocastor coypus [Molina, 1782]), we evaluate and compare the utility of a simplistic ecophysiologically based model and a correlative model to predict current and future distribution. The ecophysiological model was based on winter temperature relationships with nutria survival. We developed correlative statistical models using the Software for Assisted Habitat Modeling and biologically relevant climate data with a global extent. We applied the ecophysiologically based model to several global circulation model (GCM) predictions for mid-century. We used global coypu introduction data to evaluate these models and to explore a hypothesized physiological limitation, finding general agreement with known coypu distribution locally and globally and support for an upper thermal tolerance threshold. GCM-based results showed variability in predicted coypu distribution among GCMs, but general agreement of increasing suitable area in the USA. Our methods highlighted the dynamic nature of the edges of the coypu distribution due to climate non-equilibrium, and uncertainty associated with forecasting future distributions. Areas deemed suitable habitat, especially those on the edge of the current known range, could be used for early detection of the spread of coypu populations for management purposes. Combining approaches can be beneficial to predicting potential distributions of invasive species now and in the future and in exploring hypotheses of factors controlling distributions.

  12. Methods and challenges for the health impact assessment of vaccination programs in Latin America

    OpenAIRE

    Sartori, Ana Marli Christovam; Nascimento, Andréia de Fátima; Yuba, Tânia Yuka; de Soárez, Patrícia Coelho; Novaes, Hillegonda Maria Dutilh

    2015-01-01

    ABSTRACT OBJECTIVE To describe methods and challenges faced in the health impact assessment of vaccination programs, focusing on the pneumococcal conjugate and rotavirus vaccines in Latin America and the Caribbean. METHODS For this narrative review, we searched for the terms "rotavirus", "pneumococcal", "conjugate vaccine", "vaccination", "program", and "impact" in the databases Medline and LILACS. The search was extended to the grey literature in Google Scholar. No limits were defined for pu...

  13. The Value of Developing a Mixed-Methods Program of Research.

    Science.gov (United States)

    Simonovich, Shannon

    2017-07-01

    This article contributes to the discussion of the value of utilizing mixed methodological approaches to conduct nursing research. To this end, the author of this article proposes creating a mixed-methods program of research over time, where both quantitative and qualitative data are collected and analyzed simultaneously, rather than focusing efforts on designing singular mixed-methods studies. A mixed-methods program of research would allow for the best of both worlds: precision through focus on one method at a time, and the benefits of creating a robust understanding of a phenomenon over the trajectory of one's career through examination from various methodological approaches.

  14. Simulation of Temperature Distribution in TIG Spot Welds of (Al-Mg) Alloy Using Finite Element Method

    OpenAIRE

    Ahlam Abid Ameer Alkhafajy; Abdul Hussain G. Al-Maliky; Muna K Abbas

    2008-01-01

    This research analyses and simulates the temperature distribution in spot welding joints made by tungsten arc welding shielded with inert gas (TIG spot) for the aluminum-magnesium alloy type (5052-O). The effects of welding current, welding time and arc length on the quantity of heat input entering the weld zone and on the temperature distribution have been investigated. The finite element method (using the ANSYS 5.4 program) is employed to present the temperature distribution in a circula...
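
    The paper's ANSYS finite element model is not available here, but a crude explicit finite-difference analogue of spot heating in an Al-Mg plate conveys the same temperature-distribution idea; all material and arc parameters below are approximate assumptions.

      import numpy as np

      k, rho, cp = 138.0, 2680.0, 900.0      # W/m/K, kg/m^3, J/kg/K (approx. Al-Mg 5052)
      alpha = k / (rho * cp)                  # thermal diffusivity, m^2/s
      dx, thick = 1e-3, 3e-3                  # 1 mm grid, 3 mm plate
      dt = 0.2 * dx**2 / alpha                # stable explicit step (< dx^2 / (4 alpha))
      T = np.full((61, 61), 25.0)             # initial plate temperature, deg C
      q = 800.0                               # assumed effective arc power, W
      spot = (slice(29, 32), slice(29, 32))   # 3 mm x 3 mm heated spot at the center

      for _ in range(int(1.0 / dt)):          # ~1 s of arc-on time
          # In-plane conduction; np.roll gives periodic edges, acceptable here
          # because heat does not reach the boundary within 1 s.
          lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
                 np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
          T += dt * alpha * lap
          T[spot] += dt * q / (rho * cp * dx**2 * thick * 9)   # arc heat into 9 cells

      print(f"peak temperature after 1 s ~ {T.max():.0f} deg C")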

  15. Studies on remote sensing method of particle size and water density distribution in mists and clouds using laser radar techniques

    Science.gov (United States)

    Shimizu, H.; Kobayasi, T.; Inaba, H.

    1979-01-01

    A method of remote measurement of the particle size and density distribution of water droplets was developed. In this method, the size of droplets is measured from the Mie scattering parameter which is defined as the total-to-backscattering ratio of the laser beam. The water density distribution is obtained by a combination of the Mie scattering parameter and the extinction coefficient of the laser beam. This method was examined experimentally for the mist generated by an ultrasonic mist generator and applied to clouds containing rain and snow. Compared with the conventional sampling method, the present method has advantages of remote measurement capability and improvement in accuracy.

  16. Second Order Cone Programming (SOCP) Relaxation Based Optimal Power Flow with Hybrid VSC-HVDC Transmission and Active Distribution Networks

    DEFF Research Database (Denmark)

    Ding, Tao; Li, Cheng; Yang, Yongheng

    2017-01-01

    The detailed topology of renewable resource bases may have an impact on the optimal power flow of the VSC-HVDC transmission network. To address this issue, this paper develops an optimal power flow with the hybrid VSC-HVDC transmission and active distribution networks to optimally schedule... the generation output and voltage regulation of both networks, which leads to a non-convex programming model. Furthermore, the non-convex power flow equations are handled via the Second Order Cone Programming (SOCP) relaxation approach. Thus, the proposed model can be relaxed to a SOCP that can be tractably solved...
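
    For reference, one common branch-flow form of this relaxation (the paper's exact formulation may differ) replaces the nonconvex branch equality by a second-order cone constraint:

      \ell_{ij} v_i = P_{ij}^2 + Q_{ij}^2
      \quad\longrightarrow\quad
      \ell_{ij} v_i \ge P_{ij}^2 + Q_{ij}^2
      \quad\Longleftrightarrow\quad
      \left\lVert \left( 2P_{ij},\; 2Q_{ij},\; \ell_{ij} - v_i \right) \right\rVert_2 \le \ell_{ij} + v_i ,

    where v_i is the squared voltage magnitude at bus i, \ell_{ij} the squared current magnitude on branch ij, and P_{ij}, Q_{ij} the branch power flows; for radial distribution networks this relaxation is known to be exact under certain sufficient conditions.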

  17. A Novel Method of Statistical Line Loss Estimation for Distribution Feeders Based on Feeder Cluster and Modified XGBoost

    Directory of Open Access Journals (Sweden)

    Shouxiang Wang

    2017-12-01

    Full Text Available The estimation of losses of distribution feeders plays a crucial guiding role in the planning, design, and operation of a distribution system. This paper proposes a novel estimation method for the statistical line loss of distribution feeders using the feeder cluster technique and a modified eXtreme Gradient Boosting (XGBoost) algorithm, based on the characteristic data of feeders collected in the smart power distribution and utilization system. In order to enhance the applicability and accuracy of the estimation model, a k-medoids algorithm with weighted distance for clustering distribution feeders is proposed. Meanwhile, a variable selection method for clustering distribution feeders is discussed, considering the correlation and validity of variables. This paper then modifies the XGBoost algorithm by adding a penalty function to the loss function, in consideration of the effect of the theoretical value, for the estimation of statistical line loss of distribution feeders. The validity of the proposed methodology is verified on 762 distribution feeders in the Shanghai distribution system. The results show that the XGBoost method has higher accuracy than decision trees, neural networks, and random forests by comparison of the Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), and Absolute Percentage Error (APE) indexes. In particular, the theoretical value can significantly improve the reasonability of estimated results.
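
    The paper's exact penalty is not reproduced here, but attaching a theoretical-value term to a squared-error objective uses XGBoost's standard custom-objective hook, as sketched below; the weight LAMBDA and the commented training call are illustrative.

      import numpy as np
      import xgboost as xgb

      LAMBDA = 0.3   # assumed penalty weight; the paper's value and form may differ

      def penalized_squared_error(theoretical):
          # Squared error plus a term pulling predictions toward the theoretical
          # line-loss value -- a sketch of the paper's modified loss function.
          def objective(preds, dtrain):
              y = dtrain.get_label()
              grad = 2.0 * (preds - y) + 2.0 * LAMBDA * (preds - theoretical)
              hess = np.full_like(preds, 2.0 + 2.0 * LAMBDA)
              return grad, hess
          return objective

      # Usage sketch (X: feeder features, y: measured statistical line loss,
      # theo: per-feeder theoretical line loss; all hypothetical):
      # dtrain = xgb.DMatrix(X, label=y)
      # booster = xgb.train({"max_depth": 4, "eta": 0.1}, dtrain,
      #                     num_boost_round=200, obj=penalized_squared_error(theo))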

  18. Isotope production and distribution Programs Fiscal Year (FY) 1995 Financial Statement Audit (ER-FC-96-01)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-02-12

    The charter of the Department of Energy (DOE) Isotope Production and Distribution Program (Isotope Program) covers the production and sale of radioactive and stable isotopes, associated byproducts, surplus materials such as lithium and deuterium, and related isotope services. Services provided include, but are not limited to, irradiation services, target preparation and processing, source encapsulation and other special preparations, analyses, chemical separations, and leasing of stable isotopes for research purposes. Isotope Program products and services are sold worldwide for use in a wide variety of research, development, biomedical, and industrial applications. The Isotope Program reports to the Director of the Office of Nuclear Energy, Science and Technology. The Isotope Program operates under a revolving fund, as established by the Fiscal Year 1990 Energy and Water Appropriations Act (Public Law 101-101). The Fiscal Year 1995 Appropriations Act (Public Law 103-316) modified predecessor acts to allow prices charged for Isotope Program products and services to be based on production costs, market value, the needs of the research community, and other factors. Prices set for small-volume, high-cost isotopes that are needed for research may not achieve full-cost recovery. Isotope Program costs are financed by revenues from the sale of isotopes and associated services and through payments from the isotope support decision unit, which was established in the DOE fiscal year 1995 Energy, Supply, Research, and Development appropriation. The isotope decision unit finances the production and processing of unprofitable isotopes that are vital to the national interest.

  19. Prosopis juliflora L.: DISTRIBUTION, IMPACTS AND AVAILABLE CONTROL METHODS IN ETHIOPIA

    Directory of Open Access Journals (Sweden)

    Mohammed Mussa Abdulahi

    2017-05-01

    Full Text Available Prosopis juliflora, an evergreen shrub, is one of the most invasive alien species, causing economic and environmental harm in arid and semi-arid areas. It is spreading rapidly in rangelands, croplands and forests, and in particular is threatening pastoral and agro-pastoral livelihoods. Prosopis has invaded parts of wildlife reserves and National Parks, threatening biodiversity. Several factors favor its rapid spread in the environment: its ability to adapt to a wide range of climatic conditions, its effective dispersal mechanism, its allelopathic effect, its prolific nature, a large seed bank in the soil, fast growth and vigorous coppicing ability are among the principal ones. Prosopis has the capacity to decrease the composition and diversity of plant species, and it has adverse effects on crop yield as well as on animal and human health. Despite its negative effects, the tree has potential uses such as fuel, charcoal, fodder, food, biochar, biocontrol, windbreaks, shade, construction and furniture materials, and soil stabilization. It can also be used against different diseases, and it can ameliorate environmental conditions through carbon sequestration. On the other hand, manual, mechanical, chemical and biological control methods, as well as control by utilization, have been pointed out as effective ways to control and manage this weed. There is an urgent need to develop management strategies that are environmentally friendly and economically viable to bring it under control. Therefore, the objective of this review was to explore the distribution, impacts and benefits of Prosopis, as well as possible management approaches against it.

  20. Developing methods for assessing abundance and distribution of European oysters (Ostrea edulis) using towed video.

    Directory of Open Access Journals (Sweden)

    Linnea Thorngren

    Full Text Available Due to large-scale habitat losses and increasing pressures, benthic habitats in general, and perhaps oyster beds in particular, are commonly in decline and severely threatened on regional and global scales. Appropriate and cost-efficient methods for mapping and monitoring the distribution, abundance and quality of remaining oyster populations are fundamental for sustainable management and conservation of these habitats and their associated values. Towed video has emerged as a promising method for surveying benthic communities in a non-destructive and cost-efficient way. Here we examine its use as a tool for quantification and monitoring of oyster populations by (i) analysing how well abundances can be estimated and how living Ostrea edulis individuals can be distinguished from dead ones, (ii) estimating the variability within and among observers as well as the spatial variability at a number of scales, and finally (iii) evaluating the precision of estimated abundances under different scenarios for monitoring. Overall, the results show that the method can be used to quantify abundance and occurrence of Ostrea edulis in heterogeneous environments. There was a strong correlation between abundances determined in the field and abundances estimated by video analyses (r2 = 0.93), even though video analyses underestimated the total abundance of living oysters by 20%. Additionally, the method was largely repeatable within and among observers and revealed no evident bias in the identification of living and dead oysters. We also concluded that the spatial variability was an order of magnitude larger than that due to observer errors. Subsequent modelling of precision showed that the total area sampled was the main determinant of precision, and provided a general method for determining precision. This study provides a thorough validation of the application of towed video to quantitative estimation of live oysters. The results suggest that the method can indeed be
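
    The calibration logic described, regressing field counts on video counts and quantifying the undercount, can be sketched with toy numbers; the transect counts below are invented and deliberately constructed to echo the reported 20% figure.

      import numpy as np
      from scipy import stats

      # Hypothetical paired counts per transect: field (ground truth) vs towed video
      field = np.array([12, 30, 45, 8, 22, 51, 17, 40])
      video = np.array([10, 24, 35, 6, 18, 41, 13, 33])

      fit = stats.linregress(video, field)     # calibration: field ~ video
      print(f"r^2 = {fit.rvalue**2:.2f}, correction factor ~ {fit.slope:.2f}")
      print(f"video undercount ~ {100 * (1 - video.sum() / field.sum()):.0f}%")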