Solow, Daniel
2014-01-01
This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computational methods. Includes an appendix on using Excel. 1984 edition.
Karloff, Howard
1991-01-01
To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...
Huitzing, Hiddo A.
2004-01-01
This article shows how set covering with item sampling (SCIS) methods can be used in the analysis and preanalysis of linear programming models for test assembly (LPTA). LPTA models can construct tests satisfying a set of constraints specified by the test assembler. Sometimes, no solution to the LPTA model exists. The model is then said to be…
An Interactive Method to Solve Infeasibility in Linear Programming Test Assembling Models
Huitzing, Hiddo A.
2004-01-01
In optimal assembly of tests from item banks, linear programming (LP) models have proved to be very useful. Assembly by hand has become nearly impossible, but these LP techniques are able to find the best solutions, given the demands and needs of the test to be assembled and the specifics of the item bank from which it is assembled. However,…
Reduction of Linear Programming to Linear Approximation
Vaserstein, Leonid N.
2006-01-01
It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
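The well-known direction of this equivalence, reducing a Chebyshev (minimax) approximation problem to a linear program, is easy to sketch. The construction below is our own illustration (the function name and the tiny data set are ours, not the paper's): minimize t subject to -t ≤ A[i]·x - b[i] ≤ t for all i.

```python
def chebyshev_to_lp(A, b):
    """Reduce min_x max_i |A[i].x - b[i]| to an LP in variables (x, t):
    minimize t subject to A[i].x - t <= b[i] and -A[i].x - t <= -b[i]."""
    m, n = len(A), len(A[0])
    # Objective over (x_1..x_n, t): only t carries a cost.
    c = [0.0] * n + [1.0]
    G, h = [], []
    for i in range(m):
        G.append(list(A[i]) + [-1.0]); h.append(b[i])            #  A[i].x - t <= b[i]
        G.append([-a for a in A[i]] + [-1.0]); h.append(-b[i])   # -A[i].x - t <= -b[i]
    return c, G, h

# Fit one constant x to the data b = [0, 4]; the best Chebyshev fit is
# x = 2 with error t = 2, and that point satisfies every built constraint.
c, G, h = chebyshev_to_lp([[1.0], [1.0]], [0.0, 4.0])
opt = [2.0, 2.0]
assert all(sum(g * v for g, v in zip(row, opt)) <= hi + 1e-9
           for row, hi in zip(G, h))
```

Any off-the-shelf LP solver applied to (c, G, h) then returns the Chebyshev solution in its first n coordinates.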
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and in Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.
International Nuclear Information System (INIS)
Rogner, H.H.
1989-01-01
The submitted sections on linear programming are extracted from 'Theorie und Technik der Planung' (1978) by W. Blaas and P. Henseler and were reformulated for presentation at the Workshop. They give a brief introduction to the theory of linear programming and to some essential aspects of the SIMPLEX solution algorithm for the purposes of economic planning processes.
A linear programming algorithm to test for jamming in hard-sphere packings
International Nuclear Information System (INIS)
Donev, Aleksandar; Torquato, Salvatore; Stillinger, Frank H.; Connelly, Robert
2004-01-01
Jamming in hard-particle packings has been the subject of considerable interest in recent years. In a paper by Torquato and Stillinger [J. Phys. Chem. B 105 (2001)], a classification scheme of jammed packings into hierarchical categories of locally, collectively and strictly jammed configurations was proposed. They suggest that these jamming categories can be tested using numerical algorithms that analyze an equivalent contact network of the packing under applied displacements, but leave the design of such algorithms as a future task. In this work, we present a rigorous and practical algorithm to assess whether an ideal hard-sphere packing in two or three dimensions is jammed according to the aforementioned categories. The algorithm is based on linear programming and is applicable to regular as well as random packings of finite size with hard-wall and periodic boundary conditions. If the packing is not jammed, the algorithm yields representative multi-particle unjamming motions. Furthermore, we extend the jamming categories and the testing algorithm to packings with significant interparticle gaps. We describe in detail two variants of the proposed randomized linear programming approach to test for jamming in hard-sphere packings. The first algorithm treats ideal packings in which particles form perfect contacts. The second treats jamming in packings with significant interparticle gaps; this extended algorithm allows one to explore more fully the nature of the feasible particle displacements. We have implemented the algorithms and applied them to ordered as well as random packings of circular disks and spheres with periodic boundary conditions. Some representative results for large disordered disk and sphere packings are given, but more robust and efficient implementations, as well as further applications (e.g., non-spherical particles), are anticipated for the future.
Brameier, Markus
2007-01-01
Presents a variant of Genetic Programming that evolves imperative computer programs as linear sequences of instructions, in contrast to the more traditional functional expressions or syntax trees. This book serves as a reference for researchers, but also contains sufficient introduction for students and those who are new to the field
Linear programming using Matlab
Ploskas, Nikolaos
2017-01-01
This book offers a theoretical and computational presentation of a variety of linear programming algorithms and methods with an emphasis on the revised simplex method and its components. A theoretical background and mathematical formulation is included for each algorithm as well as comprehensive numerical examples and corresponding MATLAB® code. The MATLAB® implementations presented in this book are sophisticated and allow users to find solutions to large-scale benchmark linear programs. Each algorithm is followed by a computational study on benchmark problems that analyze the computational behavior of the presented algorithms. As a solid companion to existing algorithmic-specific literature, this book will be useful to researchers, scientists, mathematical programmers, and students with a basic knowledge of linear algebra and calculus. The clear presentation enables the reader to understand and utilize all components of simplex-type methods, such as presolve techniques, scaling techniques, pivoting ru...
Mills, James W.; And Others
1973-01-01
The study reported here tested an application of the Linear Programming Model at the Reading Clinic of Drew University. Results, while not conclusive, indicate that this approach yields greater gains in speed scores than a traditional approach for this population. (Author)
175 Years of Linear Programming
Indian Academy of Sciences (India)
polynomial-time solvability of linear programming, that is, testing if a polyhedron Q ⊆ ℝⁿ … Q is rational, i.e. all extreme points and rays of Q are rational vectors or … algorithm terminates with an interior solution, a post-processing step is usually …
Klumpp, A. R.; Lawson, C. L.
1988-01-01
Routines are provided for common scalar, vector, matrix, and quaternion operations. The computer program extends the Ada programming language to include linear-algebra capabilities similar to those of the HAL/S programming language. Designed for such avionics applications as software for the Space Station.
Linear Programming and Network Flows
Bazaraa, Mokhtar S; Sherali, Hanif D
2011-01-01
The authoritative guide to modeling and solving complex problems with linear programming-extensively revised, expanded, and updated The only book to treat both linear programming techniques and network flows under one cover, Linear Programming and Network Flows, Fourth Edition has been completely updated with the latest developments on the topic. This new edition continues to successfully emphasize modeling concepts, the design and analysis of algorithms, and implementation strategies for problems in a variety of fields, including industrial engineering, management science, operations research
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear programming (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed-integer, and binary problems. Pure linear programs are solved with the revised simplex method. Integer or mixed-integer programs are solved initially with the revised simplex method and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC-compatible computer are included in the appendices, along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.
Elementary linear programming with applications
Kolman, Bernard
1995-01-01
Linear programming finds the least expensive way to meet given needs with available resources. Its results are used in every area of engineering and commerce: agriculture, oil refining, banking, and air transport. Authors Kolman and Beck present the basic notions of linear programming and illustrate how they are used to solve important common problems. The software on the included disk leads students step-by-step through the calculations. The Second Edition is completely revised and provides additional review material on linear algebra as well as complete coverage of elementary linear program
Linear Programming across the Curriculum
Yoder, S. Elizabeth; Kurz, M. Elizabeth
2015-01-01
Linear programming (LP) is taught in different departments across college campuses with engineering and management curricula. Modeling an LP problem is taught in every linear programming class. As faculty teaching in Engineering and Management departments, the depth to which teachers should expect students to master this particular type of…
The Use of Linear Programming for Prediction.
Schnittjer, Carl J.
The purpose of the study was to develop a linear programming model to be used for prediction, test the accuracy of the predictions, and compare the accuracy with that produced by curvilinear multiple regression analysis. (Author)
A Direct Heuristic Algorithm for Linear Programming
Indian Academy of Sciences (India)
Abstract. An O(n³), mathematically non-iterative heuristic procedure that needs no artificial variables is presented for solving linear programming problems. An optimality test is included. Numerical experiments depict the utility and scope of such a procedure.
Linear programming foundations and extensions
Vanderbei, Robert J
2001-01-01
Linear Programming: Foundations and Extensions is an introduction to the field of optimization. The book emphasizes constrained optimization, beginning with a substantial treatment of linear programming, and proceeding to convex analysis, network flows, integer programming, quadratic programming, and convex optimization. The book is carefully written. Specific examples and concrete algorithms precede more abstract topics. Topics are clearly developed with a large number of numerical examples worked out in detail. Moreover, Linear Programming: Foundations and Extensions underscores the purpose of optimization: to solve practical problems on a computer. Accordingly, the book is coordinated with free efficient C programs that implement the major algorithms studied: -The two-phase simplex method; -The primal-dual simplex method; -The path-following interior-point method; -The homogeneous self-dual methods. In addition, there are online JAVA applets that illustrate various pivot rules and variants of the simplex m...
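As a complement to such textbook treatments, the core of the dense-tableau primal simplex method fits in a few dozen lines. The sketch below is our own illustration, not the book's C implementations, and handles only the easy case b ≥ 0 where the slack basis is feasible and no phase-one step is needed:

```python
def simplex(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0, assuming b >= 0 so the
    slack basis is feasible (no phase one). Bland's rule picks the
    entering variable, which rules out cycling."""
    m, n = len(A), len(c)
    # Tableau rows: [A | I | b]; the last row holds the negated costs.
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-ci for ci in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))          # start from the slack basis
    while True:
        col = next((j for j in range(n + m) if T[-1][j] < -1e-9), None)
        if col is None:
            break                          # no negative reduced cost: optimal
        ratios = [(T[i][-1] / T[i][col], i) for i in range(m) if T[i][col] > 1e-9]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, row = min(ratios)               # ratio test picks the leaving row
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row:
                f = T[i][col]
                T[i] = [v - f * w for v, w in zip(T[i], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return x, T[-1][-1]

# max 3x + 2y  s.t.  x + y <= 4,  x + 3y <= 6,  x, y >= 0  ->  x = 4, y = 0.
x, val = simplex([3.0, 2.0], [[1.0, 1.0], [1.0, 3.0]], [4.0, 6.0])
```

The production codes the book ships add exactly the machinery this sketch omits: a phase-one (or primal-dual) start, sparse factorizations, and careful pivot tolerances.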
Directory of Open Access Journals (Sweden)
Leif E. Peterson
1997-11-01
A computer program for multifactor relative risks, confidence limits, and tests of hypotheses using regression coefficients and a variance-covariance matrix obtained from a previous additive or multiplicative regression analysis is described in detail. Data used by the program can be stored and input from an external disk-file or entered via the keyboard. The output contains a list of the input data, point estimates of single or joint effects, confidence intervals and tests of hypotheses based on a minimum modified chi-square statistic. Availability of the program is also discussed.
175 Years of Linear Programming
Indian Academy of Sciences (India)
175 Years of Linear Programming – Max Flow = Min Cut. Vijay Chandru and M R Rao. Series Article, Resonance – Journal of Science Education, Volume 4, Issue 10, October 1999, pp. 22–39.
175 Years of Linear Programming
Indian Academy of Sciences (India)
175 Years of Linear Programming – Pune's Gift. Vijay Chandru and M R Rao. Series Article, Resonance – Journal of Science Education, Volume 4, Issue 5, May … Author affiliations: Computer Science and Automation, IISc Bangalore 560012, India; Director, Indian Institute of Management, Bannerghatta Road, …
ALPS - A LINEAR PROGRAM SOLVER
Viterna, L. A.
1994-01-01
Linear programming is a widely-used engineering and management tool. Scheduling, resource allocation, and production planning are all well-known applications of linear programs (LP's). Most LP's are too large to be solved by hand, so over the decades many computer codes for solving LP's have been developed. ALPS, A Linear Program Solver, is a full-featured LP analysis program. ALPS can solve plain linear programs as well as more complicated mixed integer and pure integer programs. ALPS also contains an efficient solution technique for pure binary (0-1 integer) programs. One of the many weaknesses of LP solvers is the lack of interaction with the user. ALPS is a menu-driven program with no special commands or keywords to learn. In addition, ALPS contains a full-screen editor to enter and maintain the LP formulation. These formulations can be written to and read from plain ASCII files for portability. For those less experienced in LP formulation, ALPS contains a problem "parser" which checks the formulation for errors. ALPS creates fully formatted, readable reports that can be sent to a printer or output file. ALPS is written entirely in IBM's APL2/PC product, Version 1.01. The APL2 workspace containing all the ALPS code can be run on any APL2/PC system (AT or 386). On a 32-bit system, this configuration can take advantage of all extended memory. The user can also examine and modify the ALPS code. The APL2 workspace has also been "packed" to be run on any DOS system (without APL2) as a stand-alone "EXE" file, but has limited memory capacity on a 640K system. A numeric coprocessor (80X87) is optional but recommended. The standard distribution medium for ALPS is a 5.25 inch 360K MS-DOS format diskette. IBM, IBM PC and IBM APL2 are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
A program package for solving linear optimization problems
International Nuclear Information System (INIS)
Horikami, Kunihiko; Fujimura, Toichiro; Nakahara, Yasuaki
1980-09-01
Seven computer programs for the solution of linear, integer and quadratic programming (four programs for linear programming, one for integer programming and two for quadratic programming) have been prepared and tested on FACOM M200 computer, and auxiliary programs have been written to make it easy to use the optimization program package. The characteristics of each program are explained and the detailed input/output descriptions are given in order to let users know how to use them. (author)
Linear Synchronous Motor Repeatability Tests
International Nuclear Information System (INIS)
Ward, C.R.
2002-01-01
A cart system using linear synchronous motors was being considered for the Plutonium Immobilization Plant (PIP). One of the applications in the PIP was the movement of a stack of furnace trays, filled with the waste form (pucks) from a stacking/unstacking station to several bottom loaded furnaces. A system was ordered to perform this function in the PIP Ceramic Prototype Test Facility (CPTF). This system was installed and started up in SRTC prior to being installed in the CPTF. The PIP was suspended and then canceled after the linear synchronous motor system was started up. This system was used to determine repeatability of a linear synchronous motor cart system for the Modern Pit Facility
On the linear programming bound for linear Lee codes.
Astola, Helena; Tabus, Ioan
2016-01-01
Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced to the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to fast execution, which allows one to efficiently compute the bounds for large parameter values of the linear codes.
Ranking Forestry Investments With Parametric Linear Programming
Paul A. Murphy
1976-01-01
Parametric linear programming is introduced as a technique for ranking forestry investments under multiple constraints; it combines the advantages of simple ranking and linear programming as capital budgeting tools.
Test accelerator for linear collider
International Nuclear Information System (INIS)
Takeda, S.; Akai, K.; Akemoto, M.; Araki, S.; Hayano, H.; Hugo, T.; Ishihara, N.; Kawamoto, T.; Kimura, Y.; Kobayashi, H.; Kubo, T.; Kurokawa, S.; Matsumoto, H.; Mizuno, H.; Odagiri, J.; Otake, Y.; Sakai, H.; Shidara, T.; Shintake, T.; Suetake, M.; Takashima, T.; Takata, K.; Takeuchi, Y.; Urakawa, J.; Yamamoto, N.; Yokoya, K.; Yoshida, M.; Yoshioka, M.; Yamaoka, Y.
1989-01-01
KEK has proposed to build the Test Accelerator Facility (TAF), capable of producing a 2.5 GeV electron beam, for the purpose of stimulating R&D for a linear collider in the TeV region. The TAF consists of a 1.5 GeV S-band linear accelerator, a 1.5 GeV damping ring and a 1.0 GeV X-band linear accelerator. The TAF project will be carried forward in three phases. Through Phase-I and Phase-II, the S-band and X-band linacs will be constructed, and in Phase-III the damping ring will be completed. The construction of TAF Phase-I has started, and the 0.2 GeV S-band injector linac has been almost completed. The Phase-I linac is composed of a 240 keV electron gun, subharmonic bunchers, prebunchers and a traveling buncher followed by high-gradient accelerating structures. The SLAC 5045 klystrons are driven at 450 kV in order to obtain an rf power of 100 MW in a 1 μs pulse duration. The rf power from a pair of klystrons is combined into an accelerating structure. An accelerating gradient of up to 100 MeV/m will be obtained in a 0.6 m long structure.
Computer Program For Linear Algebra
Krogh, F. T.; Hanson, R. J.
1987-01-01
A collection of routines is provided for basic vector operations. The Basic Linear Algebra Subprograms (BLAS) library is a collection of FORTRAN-callable routines employing standard techniques to perform basic operations of numerical linear algebra.
The linear programming bound for binary linear codes
Brouwer, A.E.
1993-01-01
Combining Delsarte's (1973) linear programming bound with the information that certain weights cannot occur, new upper bounds for dmin (n,k), the maximum possible minimum distance of a binary linear code with given word length n and dimension k, are derived.
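To make the ingredients of such bounds concrete: for binary codes, the coefficients of Delsarte's linear program are values of the Krawtchouk polynomials, the eigenvalues of the Hamming association scheme. The sketch below is our own illustration of those ingredients, not the paper's computation:

```python
from math import comb

def krawtchouk(n, k, x):
    """Binary Krawtchouk polynomial K_k(x) = sum_j (-1)^j C(x,j) C(n-x,k-j),
    the eigenvalues entering Delsarte's linear programming bound.
    (math.comb returns 0 when the lower index exceeds the upper one.)"""
    return sum((-1) ** j * comb(x, j) * comb(n - x, k - j) for j in range(k + 1))

n = 7
# Sanity checks: K_0(x) = 1 and K_k(0) = C(n, k).
assert all(krawtchouk(n, 0, x) == 1 for x in range(n + 1))
assert all(krawtchouk(n, k, 0) == comb(n, k) for k in range(n + 1))

# Delsarte's feasibility constraints: a code's distance distribution
# (A_0..A_n) must satisfy sum_i A_i K_k(i) >= 0 for every k.  The [7,4]
# Hamming code, with weight distribution below, satisfies them all.
A = [1, 0, 0, 7, 7, 0, 0, 1]
assert all(sum(A[i] * krawtchouk(n, k, i) for i in range(n + 1)) >= 0
           for k in range(n + 1))
```

The bound itself then maximizes sum_i A_i over all nonnegative (A_i) satisfying these constraints, plus any extra constraints (such as forbidden weights) one can justify.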
Linear programming algorithms and applications
Vajda, S
1981-01-01
This text is based on a course of about 16 hours of lectures to students of mathematics, statistics, and/or operational research. It is intended to introduce readers to the very wide range of applicability of linear programming, covering problems of management, administration, transportation and a number of other uses which are mentioned in their context. The emphasis is on numerical algorithms, which are illustrated by examples of such modest size that the solutions can be obtained using pen and paper. It is clear that these methods, if applied to larger problems, can also be carried out on automatic (electronic) computers. Commercially available computer packages are, in fact, mainly based on algorithms explained in this book. The author is convinced that the user of these algorithms ought to be knowledgeable about the underlying theory. Therefore this volume is not merely addressed to the practitioner, but also to the mathematician who is interested in relatively new developments in algebraic theory and in…
Investigating Integer Restrictions in Linear Programming
Edwards, Thomas G.; Chelst, Kenneth R.; Principato, Angela M.; Wilhelm, Thad L.
2015-01-01
Linear programming (LP) is an application of graphing linear systems that appears in many Algebra 2 textbooks. Although not explicitly mentioned in the Common Core State Standards for Mathematics, linear programming blends seamlessly into modeling with mathematics, the fourth Standard for Mathematical Practice (CCSSI 2010, p. 7). In solving a…
Joint shape segmentation with linear programming
Huang, Qixing; Koltun, Vladlen; Guibas, Leonidas
2011-01-01
program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape
Timetabling an Academic Department with Linear Programming.
Bezeau, Lawrence M.
This paper describes an approach to faculty timetabling and course scheduling that uses computerized linear programming. After reviewing the literature on linear programming, the paper discusses the process whereby a timetable was created for a department at the University of New Brunswick. Faculty were surveyed with respect to course offerings…
Comparison of open-source linear programming solvers.
Energy Technology Data Exchange (ETDEWEB)
Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David; Jones, Katherine A.; Martin, Nathaniel; Detry, Richard Joseph
2013-10-01
When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
M. ZANGIABADI; H. R. MALEKI
2007-01-01
In real-world optimization problems, coefficients of the objective function are not known precisely and can be interpreted as fuzzy numbers. In this paper we define the concepts of optimality for linear programming problems with fuzzy parameters based on those for multiobjective linear programming problems. Then, by using the concept of comparison of fuzzy numbers, we transform a linear programming problem with fuzzy parameters into a multiobjective linear programming problem. To this end, w...
Enhancement of Linear Circuit Program
DEFF Research Database (Denmark)
Gaunholt, Hans; Dabu, Mihaela; Beldiman, Octavian
1996-01-01
In this report a preliminary user-friendly interface has been added to the LCP2 program, making it possible to describe an electronic circuit by drawing the circuit on the screen. Component values and other options and parameters can easily be set with the aid of the interface. The interface...
Linear and integer programming made easy
Hu, T C
2016-01-01
Linear and integer programming are fundamental toolkits for data and information science and technology, particularly in the context of today’s megatrends toward statistical optimization, machine learning, and big data analytics. Drawn from over 30 years of classroom teaching and applied research experience, this textbook provides a crisp and practical introduction to the basics of linear and integer programming. The authors’ approach is accessible to students from all fields of engineering, including operations research, statistics, machine learning, control system design, scheduling, formal verification, and computer vision. Readers will learn to cast hard combinatorial problems as mathematical programming optimizations, understand how to achieve formulations where the objective and constraints are linear, choose appropriate solution methods, and interpret results appropriately. •Provides a concise introduction to linear and integer programming, appropriate for undergraduates, graduates, a short cours...
Sparsity Prevention Pivoting Method for Linear Programming
DEFF Research Database (Denmark)
Li, Peiqiang; Li, Qiyuan; Li, Canbing
2018-01-01
When the simplex algorithm is used to solve a linear programming problem, a sparse matrix can lead to many zero-length calculation steps, and even to iterative cycling. To deal with this problem, a new pivoting method is proposed in this paper. The principle of the method is to avoid choosing a row whose entry in the b vector is zero as the row of the pivot element, keeping the matrix in the linear program dense and ensuring that most subsequent steps improve the value of the objective function. A step following this principle is inserted into the existing linear programming algorithm to reselect the pivot element. Both the conditions for inserting this step and the maximum number of allowed insertion steps are determined. In the case study, taking several linear programming problems as examples, the results...
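The rule described can be sketched roughly as follows. This is our own interpretation of the idea, not the authors' exact insertion step: among columns with negative reduced cost, prefer one whose ratio-test row has a nonzero right-hand side, so the pivot is non-degenerate and the objective strictly improves.

```python
def choose_pivot(T, m, n_total, eps=1e-9):
    """Sketch of a sparsity-prevention pivot choice (an illustration, not
    the paper's exact rule).  T is a simplex tableau whose last row holds
    reduced costs and whose last column holds the right-hand side b; m is
    the number of constraint rows, n_total the number of columns to scan.
    Returns a (row, col) pivot, favoring pivots with a nonzero b entry."""
    fallback = None
    for col in [j for j in range(n_total) if T[-1][j] < -eps]:
        ratios = [(T[i][-1] / T[i][col], i) for i in range(m) if T[i][col] > eps]
        if not ratios:
            continue                      # unbounded in this column
        _, row = min(ratios)              # standard ratio test
        if fallback is None:
            fallback = (row, col)
        if abs(T[row][-1]) > eps:
            return row, col               # non-degenerate pivot found
    return fallback                       # every candidate pivot is degenerate

# Column 0 would pivot on a row with b = 0 (a zero-length step); the rule
# skips it and picks column 1, whose ratio-test row has b = 5.
row, col = choose_pivot([[1.0, 0.0, 0.0], [0.0, 1.0, 5.0], [-1.0, -2.0, 0.0]],
                        m=2, n_total=2)
```

The paper additionally bounds how often such a reselection step may fire; a practical implementation would need that cap to guarantee termination.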
Evaluation of film dosemeters by linear programming
International Nuclear Information System (INIS)
Kragh, P.; Nitschke, J.
1992-01-01
An evaluation method for multi-component dosemeters is described which uses linear programming in order to decrease the dependence on energy and direction. The results of this method are more accurate than those obtained with the evaluation methods so far applied in film dosimetry. In addition, systematic errors can be given when evaluating individual measurements. Combined linear programming, as a special case of the presented method, is described taking a film dosemeter of particular type as an example. (orig.) [de
Linear programming mathematics, theory and algorithms
1996-01-01
Linear Programming provides an in-depth look at simplex based as well as the more recent interior point techniques for solving linear programming problems. Starting with a review of the mathematical underpinnings of these approaches, the text provides details of the primal and dual simplex methods with the primal-dual, composite, and steepest edge simplex algorithms. This then is followed by a discussion of interior point techniques, including projective and affine potential reduction, primal and dual affine scaling, and path following algorithms. Also covered is the theory and solution of the linear complementarity problem using both the complementary pivot algorithm and interior point routines. A feature of the book is its early and extensive development and use of duality theory. Audience: The book is written for students in the areas of mathematics, economics, engineering and management science, and professionals who need a sound foundation in the important and dynamic discipline of linear programming.
Performance test of 100 W linear compressor
Energy Technology Data Exchange (ETDEWEB)
Ko, J; Ko, D. Y.; Park, S. J.; Kim, H. B.; Hong, Y. J.; Yeom, H. K. [Korea Institute of Machinery and Materials, Daejeon(Korea, Republic of)
2013-09-15
In this paper, we present test results for a developed 100 W class linear compressor for a Stirling-type pulse tube refrigerator. The fabricated linear compressor has a dual-opposed configuration, a free piston and a moving-magnet-type linear motor. Power transfer, efficiency and the required pressure waveform are predicted from the designed and measured specifications. In the experiments, a room-temperature test with flow impedance is conducted to evaluate the performance of the developed linear compressor. Flow impedance is loaded onto the compressor with a metering valve for flow resistance, an inertance tube for flow inertance and buffer volumes for flow compliance. Several operating parameters, such as input voltage, current, piston displacement and pressure wave, are measured for various operating frequencies at a fixed input current level. The dynamic behavior and performance of the linear compressor as the flow impedance varies are discussed with the measured experimental results. The developed linear compressor shows 124 W of input power, 86% motor efficiency and 60% compressor efficiency at its resonant operating condition.
Fuzzy Multi-objective Linear Programming Approach
Directory of Open Access Journals (Sweden)
Amna Rehmat
2007-07-01
The traveling salesman problem (TSP) is one of the challenging real-life problems, attracting researchers from many fields, including artificial intelligence, operations research, and algorithm design and analysis. The problem has been well studied under different headings and has been solved with different approaches, including genetic algorithms and linear programming. Conventional linear programming is designed to deal with crisp parameters, but information about real-life systems is often available only in the form of vague descriptions. Fuzzy methods are designed to handle vague terms and are best suited to finding optimal solutions to problems with vague parameters. Fuzzy multi-objective linear programming, an amalgamation of fuzzy logic and multi-objective linear programming, deals with flexible aspiration levels or goals and fuzzy constraints with acceptable deviations. In this paper, a methodology for solving a TSP with imprecise parameters is developed using fuzzy multi-objective linear programming. An example of a TSP with multiple objectives and vague parameters is discussed.
Linear Logistic Test Modeling with R
Baghaei, Purya; Kubinger, Klaus D.
2015-01-01
The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…
Portfolio optimization using fuzzy linear programming
Pandit, Purnima K.
2013-09-01
Portfolio optimization (PO) is a problem in finance in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
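To make the linearized-PO idea concrete, here is a minimal crisp sketch (not the paper's fuzzy formulation; the two-asset data are invented): maximize expected return subject to a linear risk cap. With weights x and 1 - x summing to one, the LP collapses to a single variable and is solved exactly at a constraint boundary.

```python
# Hypothetical two-asset example: asset 1 has higher return and higher risk.
def optimal_two_asset_portfolio(r1, r2, s1, s2, risk_cap):
    """Maximize r1*x + r2*(1-x) s.t. s1*x + s2*(1-x) <= risk_cap, 0 <= x <= 1.

    Assumes r1 > r2 and s1 > s2, so the optimum puts as much weight on
    asset 1 as the risk cap allows.
    """
    if s2 > risk_cap:
        raise ValueError("even the low-risk asset violates the cap")
    # Risk constraint rearranged: (s1 - s2) * x <= risk_cap - s2
    x = min(1.0, (risk_cap - s2) / (s1 - s2))
    return x, r1 * x + r2 * (1.0 - x)

x, ret = optimal_two_asset_portfolio(r1=0.10, r2=0.04, s1=0.20, s2=0.05,
                                     risk_cap=0.12)
print(round(x, 4), round(ret, 4))  # -> 0.4667 0.068
```

The same substitution trick is what turns the quadratic PO model into an equivalent linear one when the risk measure is itself linear in the weights.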
Some Properties of Multiple Parameters Linear Programming
Directory of Open Access Journals (Sweden)
Maoqin Li
2010-01-01
We consider a linear programming problem in which the right-hand side vector depends on multiple parameters. We study the properties of the optimal value function and of the critical regions based on the concept of the optimal partition. We show that the domain of the optimal value function f can be decomposed into finitely many subsets with disjoint relative interiors, which differs from the result based on the concept of the optimal basis. Moreover, any directional derivative of f at any point can be computed by solving a linear programming problem when only an optimal solution is available at that point.
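The piecewise-linearity of the optimal value function can be checked numerically on a tiny assumed instance (not taken from the paper): for the LP maximize x1 + x2 subject to x1 <= b1, x2 <= b2, x1 + x2 <= 3, x >= 0, the value function is f(b1, b2) = min(b1 + b2, 3), i.e. two linear pieces whose domains correspond to different optimal partitions.

```python
from itertools import combinations

def solve_lp(b1, b2):
    """Solve the 2-variable LP exactly by enumerating constraint intersections."""
    # Each constraint written as a*x1 + b*x2 <= c.
    cons = [(1, 0, b1), (0, 1, b2), (1, 1, 3), (-1, 0, 0), (0, -1, 0)]
    best = None
    for (a1, a2, c1), (d1, d2, c2) in combinations(cons, 2):
        det = a1 * d2 - a2 * d1
        if abs(det) < 1e-12:
            continue  # parallel constraints: no vertex
        x1 = (c1 * d2 - a2 * c2) / det
        x2 = (a1 * c2 - c1 * d1) / det
        # Keep only feasible vertices; the LP optimum is attained at one.
        if all(a * x1 + b * x2 <= c + 1e-9 for a, b, c in cons):
            val = x1 + x2
            if best is None or val > best:
                best = val
    return best

for b1, b2 in [(1.0, 1.0), (2.0, 2.0), (0.5, 3.0)]:
    assert abs(solve_lp(b1, b2) - min(b1 + b2, 3.0)) < 1e-6
```

The two pieces meet where b1 + b2 = 3; crossing that boundary changes which constraints are active, which is exactly the change of optimal partition the abstract refers to.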
PCX, Interior-Point Linear Programming Solver
International Nuclear Information System (INIS)
Czyzyk, J.
2004-01-01
1 - Description of program or function: PCX solves linear programming problems using the Mehrotra predictor-corrector interior-point algorithm. PCX can be called as a subroutine or used in stand-alone mode, with data supplied from an MPS file. The software incorporates modules that can be used separately from the linear programming solver, including a pre-solve routine and data structure definitions. 2 - Methods: The Mehrotra predictor-corrector method is a primal-dual interior-point method for linear programming. The starting point is determined from a modified least-squares heuristic. Linear systems of equations are solved at each interior-point iteration via a sparse Cholesky algorithm native to the code. A pre-solver is incorporated in the code to eliminate inefficiencies in the user's formulation of the problem. 3 - Restrictions on the complexity of the problem: There are no size limitations built into the program. The size of problem solved is limited by the RAM and swap space on the user's computer.
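The central-path idea behind interior-point solvers like PCX can be illustrated with a toy log-barrier sketch (far simpler than Mehrotra's predictor-corrector, and not PCX's actual code): minimize c*x on [0, 1]. For each barrier weight mu, the unconstrained problem c*x - mu*(log x + log(1 - x)) has an interior minimizer x(mu); as mu shrinks, x(mu) follows the central path toward the LP optimum x = 0.

```python
def central_path_point(c, mu, x=0.5, iters=100):
    """Newton's method on the barrier objective's stationarity condition."""
    for _ in range(iters):
        grad = c - mu / x + mu / (1.0 - x)
        hess = mu / x**2 + mu / (1.0 - x) ** 2
        # Clamp to the open interval so the log barrier stays defined.
        x = min(max(x - grad / hess, 1e-12), 1 - 1e-12)
    return x

c = 2.0
path = [central_path_point(c, mu) for mu in (1.0, 0.1, 0.01, 0.001)]
assert all(a > b for a, b in zip(path, path[1:]))  # marches toward x = 0
assert path[-1] < 0.01
```

Production codes follow this path only loosely, taking predictor and corrector steps on the primal-dual system instead of minimizing the barrier exactly at each mu.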
Spline smoothing of histograms by linear programming
Bennett, J. O.
1972-01-01
An algorithm is presented for obtaining an approximating function to the frequency distribution from a sample of size n. To obtain the approximating function, a histogram is made from the data. Next, Euclidean-space approximations to the graph of the histogram, using central B-splines as basis elements, are obtained by linear programming. The approximating function has area one and is nonnegative.
Generalised Assignment Matrix Methodology in Linear Programming
Jerome, Lawrence
2012-01-01
Discrete Mathematics instructors and students have long been struggling with various labelling and scanning algorithms for solving many important problems. This paper shows how to solve a wide variety of Discrete Mathematics and OR problems using assignment matrices and linear programming, specifically using Excel Solvers although the same…
Fuzzy linear programming approach for solving transportation
Indian Academy of Sciences (India)
Transportation problem (TP) is an important network structured linear programming problem that arises in several contexts and has deservedly received a great deal of attention in the literature. The central concept in this problem is to find the least total transportation cost of a commodity in order to satisfy demands at ...
Statistical Tests for Mixed Linear Models
Khuri, André I; Sinha, Bimal K
2011-01-01
An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a
Menu-Driven Solver Of Linear-Programming Problems
Viterna, L. A.; Ferencz, D.
1992-01-01
This program assists the inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) is a full-featured LP analysis program. It solves plain linear-programming problems as well as more complicated mixed-integer and pure-integer programs, and also contains an efficient technique for the solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. The packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
Optimized remedial groundwater extraction using linear programming
International Nuclear Information System (INIS)
Quinn, J.J.
1995-01-01
Groundwater extraction systems are typically installed to remediate contaminant plumes or prevent further spread of contamination. These systems are expensive to install and maintain. A traditional approach to designing such a wellfield uses a series of trial-and-error simulations to test the effects of various well locations and pump rates. However, the optimal locations and pump rates of extraction wells are difficult to determine when objectives related to the site hydrogeology and potential pumping scheme are considered. This paper describes a case study of an application of linear programming theory to determine optimal well placement and pump rates. The objectives of the pumping scheme were to contain contaminant migration and reduce contaminant concentrations while minimizing the total amount of water pumped and treated. Past site activities at the area under study included disposal of contaminants in pits. Several groundwater plumes have been identified, and others may be present. The area of concern is bordered on three sides by a wetland, which receives a portion of its input budget as groundwater discharge from the pits. Optimization of the containment pumping scheme was intended to meet three goals: (1) prevent discharge of contaminated groundwater to the wetland, (2) minimize the total water pumped and treated (cost benefit), and (3) avoid dewatering of the wetland (cost and ecological benefits). Possible well locations were placed at known source areas. To constrain the problem, the optimization program was instructed to prevent any flow toward the wetland along a user-specified border. In this manner, the optimization routine selects well locations and pump rates so that a groundwater divide is produced along this boundary
Test facilities for future linear colliders
International Nuclear Information System (INIS)
Ruth, R.D.
1995-12-01
During the past several years there has been a tremendous amount of progress on Linear Collider technology worldwide. This research has led to the construction of the test facilities described in this report. Some of the facilities will be complete as early as the end of 1996, while others will be finishing up around the end of 1997. Even now there are extensive tests ongoing for the enabling technologies of all of the test facilities. At the same time, the Linear Collider designs are now quite mature, and the SLC is providing the key experience base that can only come from a working collider. All this taken together indicates that the technology and accelerator physics will be ready for a future Linear Collider project to begin in the last half of the 1990s.
The RANDOM computer program: A linear congruential random number generator
Miles, R. F., Jr.
1986-01-01
The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) the RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) the RANCYCLE and ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
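A minimal LCG of the kind such a program tests follows the recurrence x_{k+1} = (a * x_k + c) mod m. The sketch below (parameter names follow the usual LCG convention; the small modulus is chosen only so the full period is easy to verify, and is not taken from the report) checks the period directly:

```python
def lcg(seed, a, c, m):
    """Yield the LCG state sequence starting after `seed`."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def period(seed, a, c, m):
    """Length of the cycle the generator falls into from `seed`."""
    seen = {}
    for i, x in enumerate(lcg(seed, a, c, m)):
        if x in seen:
            return i - seen[x]
        seen[x] = i

# a=5, c=3, m=16 satisfy the Hull-Dobell conditions (c coprime to m;
# a - 1 divisible by every prime factor of m; a - 1 divisible by 4 when
# m is), so the period equals the full modulus.
assert period(seed=0, a=5, c=3, m=16) == 16

# A poor choice (c shares a factor with m) cycles much sooner.
assert period(seed=0, a=5, c=4, m=16) < 16
```

Parameter selection for a real generator works the same way at scale: verify the Hull-Dobell conditions, then apply statistical tests to the output.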
The simplex method of linear programming
Ficken, Frederick A
1961-01-01
This concise but detailed and thorough treatment discusses the rudiments of the well-known simplex method for solving optimization problems in linear programming. Geared toward undergraduate students, the approach offers sufficient material for readers without a strong background in linear algebra. Many different kinds of problems further enrich the presentation. The text begins with examinations of the allocation problem, matrix notation for dual problems, feasibility, and theorems on duality and existence. Subsequent chapters address convex sets and boundedness, the prepared problem and boun
Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae
Rosu, Grigore; Havelund, Klaus
2001-01-01
The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
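The generated algorithms share one shape, which can be sketched directly (this is an illustrative reimplementation of the backwards dynamic programming, with an assumed mini-syntax rather than the paper's notation): walk the finite trace from its last event to its first, keeping for every subformula only its truth value at the next position, so memory is constant in the trace length.

```python
def subformulas(f):
    """Post-order list of subformulas, so children precede parents."""
    if isinstance(f, str):
        return [f]
    out = []
    for child in f[1:]:
        out.extend(subformulas(child))
    out.append(f)
    return out

def check(formula, trace):
    """Does `trace` (a list of sets of atomic propositions) satisfy `formula`?"""
    subs = subformulas(formula)
    nxt, now = {}, {}
    for i in range(len(trace) - 1, -1, -1):
        state, last = trace[i], i == len(trace) - 1
        for f in subs:  # children are evaluated before parents
            if isinstance(f, str):
                now[f] = f in state
            elif f[0] == "not":
                now[f] = not now[f[1]]
            elif f[0] == "and":
                now[f] = now[f[1]] and now[f[2]]
            elif f[0] == "eventually":
                now[f] = now[f[1]] or (not last and nxt[f])
            elif f[0] == "always":
                now[f] = now[f[1]] and (last or nxt[f])
            elif f[0] == "until":
                now[f] = now[f[2]] or (now[f[1]] and not last and nxt[f])
        nxt, now = dict(now), nxt  # current values become "next" for i - 1
    return nxt[formula]

trace = [{"p"}, {"p"}, {"q"}]
assert check(("until", "p", "q"), trace)
assert check(("eventually", "q"), trace)
assert not check(("always", "p"), trace)
```

Each trace position costs time linear in the formula size, matching the linear-time, constant-memory bounds stated in the abstract.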
Updating Linear Schedules with Lowest Cost: a Linear Programming Model
Biruk, Sławomir; Jaśkowski, Piotr; Czarnigowska, Agata
2017-10-01
Many civil engineering projects involve sets of tasks repeated in a predefined sequence in a number of work areas along a particular route. A useful graphical representation of schedules of such projects is the time-distance diagram, which clearly shows what process is conducted at a particular point in time and in a particular location. With repetitive tasks, the quality of project performance is conditioned by the planner's ability to optimize workflow by synchronizing the works and resources, which usually means that resources are planned to be continuously utilized. However, construction processes are prone to risks, and a fully synchronized schedule may become invalid if a disturbance (bad weather, machine failure, etc.) affects even one task. In such cases, works need to be rescheduled, and another optimal schedule should be built for the changed circumstances. This typically means that, to meet the fixed completion date, durations of operations have to be reduced. A number of measures are possible to achieve such a reduction: working overtime, employing more resources, or relocating resources from less to more critical tasks, but they all come at a considerable cost and affect the whole project. The paper investigates the problem of selecting the measures that reduce durations of tasks of a linear project so that the cost of these measures is kept to a minimum, and proposes an algorithm that could be applied to find optimal solutions as the need to reschedule arises. Considering that civil engineering projects, such as road building, usually involve fewer process types than building construction projects, the complexity of the scheduling problems is lower, and precise optimization algorithms can be applied. Therefore, the authors put forward a linear programming model of the problem and illustrate its principle of operation with an example.
Joint shape segmentation with linear programming
Huang, Qixing
2011-01-01
We present an approach to segmenting shapes in a heterogeneous shape database. Our approach segments the shapes jointly, utilizing features from multiple shapes to improve the segmentation of each. The approach is entirely unsupervised and is based on an integer quadratic programming formulation of the joint segmentation problem. The program optimizes over possible segmentations of individual shapes as well as over possible correspondences between segments from multiple shapes. The integer quadratic program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape segmentation significantly outperforms single-shape segmentation techniques. © 2011 ACM.
A goal programming procedure for solving fuzzy multiobjective fractional linear programming problems
Directory of Open Access Journals (Sweden)
Tunjo Perić
2014-12-01
This paper presents a modification of Pal, Moitra and Maulik's goal programming procedure for solving fuzzy multiobjective linear fractional programming problems. The proposed modification allows simpler solving of economic multiple-objective fractional linear programming (MOFLP) problems, enabling the obtained solutions to express the preferences of the decision maker defined by the objective function weights. The proposed method is tested on a production planning example.
International program on linear electric motors
Energy Technology Data Exchange (ETDEWEB)
Dawson, G.E.; Eastham, A.R.; Parker, J.H.
1992-05-01
The International Program on Linear Electric Motors (LEM) was initiated for the purposes of communication and coordination between various centers of expertise in LEM technology in Germany, Japan and Canada. Furthermore, it was intended to provide assessment and support for the planning of technological developments, to disseminate information to researchers, service operators and policy makers, and to ensure that full advantage can be taken if opportunities for technology transfer occur. In the process, the program was able to provide closer contacts between researchers, to enhance and encourage collaborative research and development, and to facilitate joint ventures in advanced transportation technologies. Work done under the program is documented, and seminar materials presented by Canadian researchers in Italy, and by Italian researchers at Queen's University in Canada, are included. Five separate abstracts have been prepared for the main body of the report and the seminar materials.
Controller design approach based on linear programming.
Tanaka, Ryo; Shibasaki, Hiroki; Ogawa, Hiromitsu; Murakami, Takahiro; Ishida, Yoshihisa
2013-11-01
This study explains and demonstrates the design method for a control system with a load disturbance observer. Observer gains are determined by linear programming (LP) in terms of the Routh-Hurwitz stability criterion and the final-value theorem. In addition, the control model has a feedback structure, and feedback gains are determined to be the linear quadratic regulator. The simulation results confirmed that compared with the conventional method, the output estimated by our proposed method converges to a reference input faster when a load disturbance is added to a control system. In addition, we also confirmed the effectiveness of the proposed method by performing an experiment with a DC motor. © 2013 ISA. Published by ISA. All rights reserved.
International Nuclear Information System (INIS)
Bandyopadhyay, K.K.; Kunkel, C.; Shteyngart, S.
1994-02-01
This report presents the results of a relay test program conducted by Brookhaven National Laboratory (BNL) under the sponsorship of the US Nuclear Regulatory Commission (NRC). The program is a continuation of an earlier test program the results of which were published in NUREG/CR-4867. The current program was carried out in two phases: electrical testing and vibration testing. The objective was primarily to focus on the electrical discontinuity or continuity of relays and circuit breaker tripping mechanisms subjected to electrical pulses and vibration loads. The electrical testing was conducted by KEMA-Powertest Company and the vibration testing was performed at Wyle Laboratories, Huntsville, Alabama. This report discusses the test procedures, presents the test data, includes an analysis of the data and provides recommendations regarding reliable relay testing
Game Theory and its Relationship with Linear Programming Models ...
African Journals Online (AJOL)
Game Theory and its Relationship with Linear Programming Models. ... This paper shows that game theory and linear programming are closely related subjects, since any computing method devised for ...
Introduction to linear programming: Coalitional game experiments
Energy Technology Data Exchange (ETDEWEB)
Lucas, W.
1994-12-31
Many solution notions in multiperson cooperative games (in characteristic function form) make use of linear programming (LP). The popular concept of the "core" of a coalitional game is a special type of LP. It can be introduced in a very simple and quite exciting manner by means of a group experiment. A total of fifty dollars will be given to three randomly selected attendees who will take part in an experiment during this talk, presuming they behave in a Pareto-optimal manner. Furthermore, the dual of the particular LP for the core gives rise to the idea of "balanced sets", which is an interesting combinatorial structure in its own right.
Robust Control Design via Linear Programming
Keel, L. H.; Bhattacharyya, S. P.
1998-01-01
This paper deals with the problem of synthesizing or designing a feedback controller of fixed dynamic order. The closed-loop specifications considered here are given in terms of a target performance vector representing a desired set of closed-loop transfer functions connecting various signals. In general these point targets are unattainable with a fixed-order controller. By enlarging the target from a fixed point set to an interval set, the solvability conditions with a fixed-order controller are relaxed and a solution is more easily obtained. Results from the parametric robust control literature can be used to design the interval target family so that the performance deterioration is acceptable, even when plant uncertainty is present. It is shown that it is possible to devise a computationally simple linear programming approach that attempts to meet the desired closed-loop specifications.
Stochastic linear programming models, theory, and computation
Kall, Peter
2011-01-01
This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...
The CERN linear collider test facility (CTF)
International Nuclear Information System (INIS)
Baconnier, Y.; Battisti, S.; Bossart, R.; Delahaye, J.P.; Geissler, K.K.; Godot, J.C.; Huebner, K.; Madsen, J.H.B.; Potier, J.P.; Riche, A.J.; Sladen, J.; Suberlucq, G.; Wilson, I.; Wuensch, W.
1992-01-01
The CTF (Collider Test Facility) was brought into service last year. The 3 GHz gun produced a beam of 3 MeV/c which was accelerated to 40 MeV/c. This beam, passing through a prototype CLIC (linear collider) structure, generated a sizeable amount of 30 GHz power. This paper describes the results and experience with the gun driven by an 8 ns long laser pulse and its CsI photocathode, the beam behaviour, the beam diagnostics, in particular the bunch measurements by Cerenkov or transition radiation light and streak camera, the photocathode research, and the beam dynamics studies on space charge effects. (Author) 4 figs., tab., 6 refs.
The Next Linear Collider Test Accelerator
International Nuclear Information System (INIS)
Ruth, R.D.; Adolphsen, C.; Bane, K.
1993-04-01
During the past several years, there has been tremendous progress in the development of the RF system and accelerating structures for a Next Linear Collider (NLC). Developments include high-power klystrons, RF pulse compression systems, and damped/detuned accelerator structures to reduce wakefields. In order to integrate these separate development efforts into an actual X-band accelerator capable of accelerating the electron beams necessary for an NLC, we are building an NLC Test Accelerator (NLCTA). The goal of the NLCTA is to bring together all elements of the entire accelerating system by constructing and reliably operating an engineered model of a high-gradient linac suitable for the NLC. The NLCTA will serve as a testbed as the design of the NLC evolves. In addition to testing the RF acceleration system, the NLCTA is designed to address many questions related to the dynamics of the beam during acceleration. In this paper, we report on the status of the design, component development, and construction of the NLC Test Accelerator.
An alternative test for verifying electronic balance linearity
International Nuclear Information System (INIS)
Thomas, I.R.
1998-02-01
This paper presents an alternative method for verifying electronic balance linearity and accuracy. This method is being developed for safeguards weighings (weighings for the control and accountability of nuclear material) at the Idaho National Engineering and Environmental Laboratory (INEEL). With regard to balance linearity and accuracy, DOE Order 5633.3B, Control and Accountability of Nuclear Materials, Paragraph 2, 4, e, (1), (a), Scales and Balances Program, states: "All scales and balances used for accountability purposes shall be maintained in good working condition, recalibrated according to an established schedule, and checked for accuracy and linearity on each day that the scale or balance is used for accountability purposes." Various tests have been proposed for testing accuracy and linearity. At the 1991 Measurement Science Conference, Dr. Walter E. Kupper presented a paper entitled "Validation of High Accuracy Weighing Equipment." Dr. Kupper emphasized that tolerance checks for calibrated, state-of-the-art electronic equipment need not be complicated, and he presented four easy steps for verifying that a calibrated balance is operating correctly. These tests evaluate the standard deviation of successive weighings (of the same load), the off-center error, the calibration error, and the error due to nonlinearity. This method of balance validation is undoubtedly an authoritative means of ensuring balance operability, yet it could have two drawbacks: one, the test for linearity is not intuitively obvious, especially from a statistical viewpoint; and two, there is an absence of definitively defined testing limits. Hence, this paper describes an alternative means of verifying electronic balance linearity and accuracy that is being developed for safeguards measurements at the INEEL.
A LINEAR PROGRAMMING ALGORITHM FOR LEAST-COST SCHEDULING
Directory of Open Access Journals (Sweden)
AYMAN H AL-MOMANI
1999-12-01
In this research, some concepts of linear programming and the critical path method are reviewed to describe recent modeling structures that have been of great value in analyzing extended-planning-horizon project time-cost trade-off problems. A simplified representation of a small project is given, and a linear programming model is formulated to represent this system. Procedures to solve various problem formulations are cited, and the final solution is obtained using the LINDO program. The model developed represents many restrictions and management considerations of the project. It could be used by construction managers in the planning stage to explore numerous opportunities available to the contractor and to predict the effect of a decision on the construction, facilitating a preferred operating policy given different management objectives. An implementation of this method is shown to outperform several other techniques on a large class of test problems. Results show that the algorithm is very promising in practice on a wide variety of time-cost trade-off problems. The method is simple, applicable to large networks, and generates a short computational time at low cost, along with an increase in robustness.
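The time-cost trade-off being modeled can be sketched in miniature (a hedged illustration, not the paper's LP; activity data are invented). For a purely serial project, where every activity lies on the critical path, the least-cost-scheduling LP reduces to a greedy rule: to meet a deadline, buy duration reductions from the activities with the cheapest cost slope first.

```python
def min_crash_cost(activities, deadline):
    """activities: list of (normal_duration, max_crash, cost_per_unit_crash)."""
    total = sum(d for d, _, _ in activities)
    need = total - deadline
    if need <= 0:
        return 0.0  # deadline already met at normal durations
    cost = 0.0
    # Cheapest slopes first -- optimal for a serial (single-path) network.
    for dur, max_crash, slope in sorted(activities, key=lambda a: a[2]):
        cut = min(max_crash, need)
        cost += cut * slope
        need -= cut
        if need == 0:
            return cost
    raise ValueError("deadline unreachable even with maximum crashing")

acts = [(5, 2, 100.0), (5, 1, 300.0), (5, 2, 200.0)]
print(min_crash_cost(acts, deadline=12))  # crash 2 units at 100 + 1 at 200 -> 400.0
```

With parallel paths the greedy rule breaks down, because crashing one activity may not shorten every critical path; that is where the full LP formulation of the paper earns its keep.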
Test Stand for Linear Induction Accelerator Optimization
International Nuclear Information System (INIS)
Ong, M; DeHope, B; Griffin, K; Goerz, D; Kihara, R; Vogtlin, G; Zentler, J M; Scarpetti, R
2003-01-01
Lawrence Livermore National Laboratory has designed and constructed a test stand to improve the voltage regulation in our Flash X-Ray (FXR) accelerator cell. The goal is to create a more mono-energetic electron beam that will create an x-ray source with a smaller spot size. Studying the interaction of the beam and pulse-power system with the accelerator cell will improve the design of high-current accelerators at Livermore and elsewhere. On the test stand, a standard FXR cell is driven by a flexible pulse-power system and the beam current is simulated with a switched center conductor. The test stand is fully instrumented with high-speed digitizers to document the effect of impedance mismatches when the cell is operated under various full-voltage conditions. A time-domain reflectometry technique was also developed to characterize the beam and cell interactions by measuring the impedance of the accelerator and pulse-power component. Computer models are being developed in parallel with the testing program to validate the measurements and evaluate different design changes. Both 3D transient electromagnetic and circuit models are being used
An Approach for Solving Linear Fractional Programming Problems
Andrew Oyakhobo Odior
2012-01-01
Linear fractional programming problems are useful tools in production planning, financial and corporate planning, health care and hospital planning and as such have attracted considerable research interest. The paper presents a new approach for solving a fractional linear programming problem in which the objective function is a linear fractional function, while the constraint functions are in the form of linear inequalities. The approach adopted is based mainly upon solving the problem algebr...
A Fuzzy Linear Programming Approach for Aggregate Production Planning
DEFF Research Database (Denmark)
Iris, Cagatay; Cevikcan, Emre
2014-01-01
a mathematical programming framework for aggregate production planning problem under imprecise data environment. After providing background information about APP problem, together with fuzzy linear programming, the fuzzy linear programming model of APP is solved on an illustrative example for different a...
A New Finite Continuation Algorithm for Linear Programming
DEFF Research Database (Denmark)
Madsen, Kaj; Nielsen, Hans Bruun; Pinar, Mustafa
1996-01-01
We describe a new finite continuation algorithm for linear programming. The dual of the linear programming problem with unit lower and upper bounds is formulated as an $\ell_1$ minimization problem augmented with the addition of a linear term. This nondifferentiable problem is approximated by a smooth problem. It is shown that the minimizers of the smooth problem define a family of piecewise-linear paths as a function of a smoothing parameter. Based on this property, a finite algorithm that traces these paths to arrive at an optimal solution of the linear program is developed. The smooth
Optimization Research of Generation Investment Based on Linear Programming Model
Wu, Juan; Ge, Xueqian
Linear programming is an important branch of operational research and is a mathematical method to assist people in carrying out scientific management. GAMS is an advanced simulation and optimization modeling language that combines a large number of complex mathematical programming types, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, the optimized investment decision-making for generation is simulated and analyzed. Finally, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational decision-making basis for optimized investments.
Manipulator comparative testing program
International Nuclear Information System (INIS)
Draper, J.V.; Handel, S.J.; Sundstrom, E.; Herndon, J.N.; Fujita, Y.; Maeda, M.
1986-01-01
The Manipulator Comparative Testing Program examined differences among manipulator systems from the United States and Japan. The manipulator systems included the Meidensha BILARM 83A, the Model M-2 of Central Research Laboratories Division of Sargent Industries (CRL), and the GCA Corporation PaR Systems Model 6000. The site of testing was the Remote Operations Maintenance Demonstration (ROMD) facility, operated by the Fuel Recycle Division in the Consolidated Fuel Reprocessing Program at the Oak Ridge National Laboratory (ORNL). In all stages of testing, operators using the CRL Model M-2 manipulator had consistently lower completion times and error rates than they did using the other machines. Performance was second best with the Meidensha BILARM 83A in master-slave mode. Performance with the BILARM in switchbox mode and the PaR 6000 manipulator was approximately equivalent in terms of the criteria recorded in testing. These data show no impact of force reflection on task performance.
An approach for solving linear fractional programming problems ...
African Journals Online (AJOL)
The paper presents a new approach for solving a fractional linear programming problem in which the objective function is a linear fractional function, while the constraint functions are in the form of linear inequalities. The approach adopted is based mainly upon solving the problem algebraically using the concept of duality ...
Directory of Open Access Journals (Sweden)
Tunjo Perić
2017-01-01
This paper presents and analyzes the applicability of three linearization techniques used for solving multi-objective linear fractional programming problems with the goal programming method. The three linearization techniques are: (1) Taylor's polynomial linearization approximation, (2) the method of variable change, and (3) a modification of the method of variable change proposed in [20]. All three linearization techniques are presented and analyzed in two variants: (a) using the optimal values of the objective functions as the decision makers' aspirations, and (b) with the aspirations given by the decision makers. As criteria for the analysis we use the efficiency of the obtained solutions and the difficulties the analyst encounters in preparing the linearization models. To analyze the applicability of the linearization techniques incorporated in the linear goal programming method, we use an example of a financial structure optimization problem.
A Sawmill Manager Adapts To Change With Linear Programming
George F. Dutrow; James E. Granskog
1973-01-01
Linear programming provides guidelines for increasing sawmill capacity and flexibility and for determining stumpage-purchasing strategy. The operator of a medium-sized sawmill implemented improvements suggested by linear programming analysis; results indicate a 45 percent increase in revenue and a 36 percent increase in volume processed.
Analytic central path, sensitivity analysis and parametric linear programming
A.G. Holder; J.F. Sturm; S. Zhang (Shuzhong)
1998-01-01
In this paper we consider properties of the central path and the analytic center of the optimal face in the context of parametric linear programming. We first show that if the right-hand side vector of a standard linear program is perturbed, then the analytic center of the optimal face...
Application of the simplex method of linear programming model to ...
African Journals Online (AJOL)
This work discussed how the simplex method of linear programming can be used to maximize the profit of a business firm, using Saclux Paint Company as a case study. It also elucidated the effect that variation in the optimal result obtained from the linear programming model will have on a given firm. It was demonstrated ...
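A profit-maximization model of the kind described in this record can be sketched as a small linear program. The numbers below are invented for illustration (they are not the Saclux Paint data), and the solver is SciPy's `linprog` rather than a hand-worked simplex tableau:

```python
from scipy.optimize import linprog

# Hypothetical data: two paint products, profit per unit, and two
# shared resources with limited capacities.
profit = [5.0, 4.0]            # profit per unit of product 1 and 2
A_ub = [[6.0, 4.0],            # resource 1 usage per unit of each product
        [1.0, 2.0]]            # resource 2 usage per unit of each product
b_ub = [24.0, 6.0]             # resource capacities

# linprog minimizes, so negate the profits to maximize.
res = linprog(c=[-p for p in profit], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)         # optimal production plan and maximum profit
```

At the optimum both resources are fully used, which is exactly the situation where the sensitivity (dual price) information mentioned in the abstract matters.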
Integrating Linear Programming and Analytical Hierarchical ...
African Journals Online (AJOL)
The study area is about 28000 ha of the Keleibar-Chai Watershed, located in eastern Azerbaijan, Iran. Socio-economic information was collected through a two-stage survey of 19 villages, comprising 300 samples. Thematic maps summarize the ecological factors, including physical and economic data. A comprehensive Linear ...
Introductory Linear Regression Programs in Undergraduate Chemistry.
Gale, Robert J.
1982-01-01
Presented are simple programs in BASIC and FORTRAN to apply the method of least squares. They calculate gradients and intercepts and express errors as standard deviations. An introduction of undergraduate students to such programs in a chemistry class is reviewed, and issues instructors should be aware of are noted. (MP)
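The least-squares calculation that these teaching programs perform can be sketched in a few lines. The routine below computes the gradient, intercept, and their standard deviations from first principles; the sample data are invented:

```python
import math

def linear_fit(xs, ys):
    """Ordinary least squares for y = a*x + b, returning gradient a,
    intercept b, and their standard deviations (a sketch of what the
    BASIC/FORTRAN teaching programs compute)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    d = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / d
    b = (sy * sxx - sx * sxy) / d
    # residual variance with n - 2 degrees of freedom
    s2 = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys)) / (n - 2)
    sd_a = math.sqrt(n * s2 / d)
    sd_b = math.sqrt(s2 * sxx / d)
    return a, b, sd_a, sd_b

a, b, sa, sb = linear_fit([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(a, b)
```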
A test for the parameters of multiple linear regression models ...
African Journals Online (AJOL)
A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...
High gradient tests of SLAC Linear Collider Accelerator Structures
International Nuclear Information System (INIS)
Wang, J.W.; Deruyter, H.; Eichner, J.; Fant, K.H.; Hoag, H.A.; Koontz, R.F.; Lavine, T.; Loew, G.A.; Loewen, R.; Menegat, L.
1994-08-01
This paper describes the current SLAC R&D program to develop room-temperature accelerator structures for the Next Linear Collider (NLC). The structures are designed to operate at 11.4 GHz at an accelerating gradient in the range of 50 to 100 MV/m. In the past year a 26 cm constant-impedance traveling-wave section, a 75 cm constant-impedance traveling-wave section, and a 1.8 m traveling-wave section with detuned deflecting modes have been high-power tested. The paper presents a brief description of the RF test setup, the design and manufacturing details of the structures, and a discussion of test results including field emission, RF processing, dark current spectrum, and RF breakdown.
Antares alignment gimbal positioner linear bearing tests
International Nuclear Information System (INIS)
Day, R.D.; McKay, M.D.; Pierce, D.D.; Lujan, R.E.
1981-01-01
The data indicate that of the six configurations tested, the solid circular rails with either the wet or dry lubricant are superior to the other configurations. Therefore, these two will undergo additional tests. These tests will consist of (1) modifying the testing procedure to obtain a better estimation of the limits of precision; and (2) subjecting the bearings to moments more closely approximating the actual conditions they will undergo on the AGP
Non-linear programming method in optimization of fast reactors
International Nuclear Information System (INIS)
Pavelesku, M.; Dumitresku, Kh.; Adam, S.
1975-01-01
The application of non-linear programming methods to the optimization of nuclear material distribution in a fast reactor is discussed. The programming problem is formulated on the basis of reactor calculations that depend on the fuel distribution strategy. As an illustration of the method's application, the solution of a simple example is given. The non-linear program is solved using the numerical method SUMT. (I.T.)
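SUMT-style sequential unconstrained minimization can be illustrated on a toy one-dimensional problem (this is not the reactor model of the paper): the constrained optimum is approached by minimizing a penalized objective for an increasing sequence of penalty weights.

```python
def sumt_min(r_values):
    """Exterior-penalty SUMT sketch: minimize (x-2)^2 subject to x <= 1
    by minimizing (x-2)^2 + r*max(0, x-1)^2 for increasing penalty
    weights r. Toy problem chosen so each subproblem is solvable
    analytically."""
    x = 2.0  # unconstrained minimizer as the starting point
    for r in r_values:
        # minimizer of the penalized quadratic for x > 1, found by
        # setting d/dx [(x-2)^2 + r*(x-1)^2] = 0
        x = (2.0 + r) / (1.0 + r)
    return x

x = sumt_min([1, 10, 100, 1000, 10000])
print(x)  # approaches the constrained optimum x = 1 from outside
```

In a realistic SUMT run each subproblem is solved numerically rather than analytically, but the outer loop over increasing penalty weights is the same.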
Pop, P.C.; Still, Georg J.
1999-01-01
In linear programming it is known that an appropriate non-homogeneous Farkas lemma leads to a short proof of the strong duality results for a pair of primal and dual programs. By using a corresponding generalized Farkas lemma we give a similar proof of the strong duality results for semidefinite...
Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel
El-Gebeily, M.; Yushau, B.
2008-01-01
In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve Linear Systems of Equation, Linear Programming Problems, and Matrix Inversion Problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…
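Two of the matrix tasks the note carries out in Excel (MINVERSE/MMULT-style system solving and inversion) can be sketched with NumPy; the 2x2 system below is an invented example:

```python
import numpy as np

# Solve A x = b and invert A, as the note does in a spreadsheet.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)    # solution of the linear system
A_inv = np.linalg.inv(A)     # explicit inverse, as MINVERSE would return
print(x)
```

As in the spreadsheet approach, forming the explicit inverse is mainly pedagogical; `solve` is the numerically preferable route for a single system.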
Duality in non-linear programming
Jeyalakshmi, K.
2018-04-01
In this paper we consider duality and converse duality for a programming problem involving convex objective and constraint functions with finite dimensional range. We do not assume any constraint qualification. The dual is presented by reducing the problem to a standard Lagrange multiplier problem.
Directory of Open Access Journals (Sweden)
Yi-hua Zhong
2013-01-01
Recently, various methods have been developed for solving linear programming problems with fuzzy numbers, such as the simplex method and the dual simplex method. However, their computational complexity is exponential, which is not satisfactory for solving large-scale fuzzy linear programming problems, especially in the engineering field. A new method that can solve large-scale fuzzy number linear programming problems, named a revised interior point method, is presented in this paper. Its idea is similar to that of the interior point method used for solving crisp linear programming problems, but its feasible direction and step size are chosen using trapezoidal fuzzy numbers, a linear ranking function, fuzzy vectors, and their operations, and its stopping condition involves the linear ranking function. The correctness and rationality of the method are proved. Moreover, the choice of the initial interior point and some factors influencing the results of this method are discussed and analyzed. Algorithm analysis and an example study show that a proper safety factor parameter, accuracy parameter, and initial interior point may reduce the number of iterations, and that they can be selected easily according to actual needs. The method proposed in this paper is thus an alternative for solving fuzzy number linear programming problems.
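A linear ranking function reduces comparisons of trapezoidal fuzzy numbers to comparisons of crisp values. The particular ranking below, the average of the four defining points, is a common textbook choice and an assumption here; the abstract does not say which linear ranking function the paper uses.

```python
def rank(tfn):
    """One common linear ranking function for a trapezoidal fuzzy number
    (a, b, c, d): the average of its four defining points. Chosen for
    illustration; other linear rankings weight the points differently."""
    a, b, c, d = tfn
    return (a + b + c + d) / 4.0

# Comparing two fuzzy costs reduces to comparing their crisp ranks.
u = (1.0, 2.0, 3.0, 4.0)
v = (2.0, 3.0, 4.0, 5.0)
print(rank(u), rank(v), rank(u) < rank(v))
```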
Linear Parametric Sensitivity Analysis of the Constraint Coefficient Matrix in Linear Programs
Zuidwijk, Rob
2005-01-01
Sensitivity analysis is used to quantify the impact of changes in the initial data of linear programs on the optimal value. In particular, parametric sensitivity analysis involves a perturbation analysis in which the effects of small changes of some or all of the initial data on an optimal solution are investigated, and the optimal solution is studied on a so-called critical range of the initial data, in which certain properties such as the optimal basis in linear programming are ...
General guidelines solution for linear programming with fuzzy coefficients
Directory of Open Access Journals (Sweden)
Sergio Gerardo de los Cobos Silva
2013-08-01
This work introduces possibilistic programming and fuzzy programming as paradigms for solving linear programming problems in which the coefficients of the model, or of its constraints, are given as fuzzy numbers rather than exact (crisp) numbers. Some examples based on [1] are presented.
A Study of Joint Cost Inclusion in Linear Programming Optimization
Directory of Open Access Journals (Sweden)
P. Armaos
2013-08-01
The concept of structural optimization has been a topic of research over the past century. Linear programming optimization has proved to be the most reliable method of structural optimization. Recent advances in linear programming optimization, powered by University of Sheffield researchers, include joint cost, self-weight, and buckling considerations. Including joint cost aims to reduce the number of joints in an optimized structural solution, transforming it into a practically viable solution. The topic of the current paper is to investigate the effects of joint cost inclusion as currently implemented in the optimization code. An extended literature review on this subject was conducted prior to familiarization with small-scale optimization software. Using IntelliFORM software, a structured series of problems was set up and analyzed. The joint cost tests examined benchmark problems and the consequent changes in member topology as the design domain expanded. The findings of the analyses are commented on further, and the distinct solution topologies created by the optimization processes are identified. Finally, an alternative strategy of penalizing joints is presented.
Arc-Search Infeasible Interior-Point Algorithm for Linear Programming
Yang, Yaguang
2014-01-01
Mehrotra's algorithm has been the most successful infeasible interior-point algorithm for linear programming since 1990. Most popular interior-point software packages for linear programming are based on Mehrotra's algorithm. This paper proposes an alternative algorithm, arc-search infeasible interior-point algorithm. We will demonstrate, by testing Netlib problems and comparing the test results obtained by arc-search infeasible interior-point algorithm and Mehrotra's algorithm, that the propo...
Material testing in a linear theta pinch
International Nuclear Information System (INIS)
Alani, R.; Azodi, H.; Naraghi, M.; Safaii, B.; Torabi-Fard, A.
1983-01-01
The interaction of stainless steel 316 and Inconel 625 alloys with a thermonuclear-like plasma (n = 10^16 cm^-3 and T_i = 1 keV), generated in the Alvand I linear theta pinch, has been investigated. The average power flux is 10^7 W/cm^2 and the interaction time nearly one μs. A theoretical analysis, based on the formation of an observed impurity layer near the material, has been used to determine the properties of the impurity layer and the extent of the damage to the material. Although arcing has been observed, the dominant damage mechanism has been assessed to be evaporation. Exposure to single shots produced very heavily defective areas and even surface cracks on the SS 316 sample, but no cracks were observed on Inconel 625 even after exposure to 18 shots. On the basis of temperature rise and evaporation, a comparison is made among materials exposed to plasmas of a theta pinch, a shock tube, a present-generation tokamak, and an anticipated tokamak reactor. (orig.)
Large-scale linear programs in planning and prediction.
2017-06-01
Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...
Evaluating forest management policies by parametric linear programing
Daniel I. Navon; Richard J. McConnen
1967-01-01
An analytical and simulation technique, parametric linear programming explores alternative conditions and devises an optimal management plan for each condition. Its application to solving policy-decision problems in the management of forest lands is illustrated with an example.
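Parametric linear programming in the sense used here re-solves the program as a coefficient varies and traces the piecewise-linear optimal value. A toy sketch with invented numbers, not the forest-management model:

```python
from scipy.optimize import linprog

# Toy parametric study: maximize 3x + 2y subject to x + y <= t,
# x <= 4, y <= 3, x, y >= 0, re-solved as the shared resource t varies.
def optimal_value(t):
    res = linprog(c=[-3.0, -2.0],
                  A_ub=[[1.0, 1.0]], b_ub=[t],
                  bounds=[(0, 4), (0, 3)], method="highs")
    return -res.fun

values = [optimal_value(t) for t in (2, 4, 6, 8)]
print(values)
```

The marginal value of the resource drops as t grows (from 3 per unit, to 2, to 1), which is the kind of policy information a parametric study of forest-land conditions would deliver.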
Formulated linear programming problems from game theory and its ...
African Journals Online (AJOL)
Formulated linear programming problems from game theory and its computer implementation using Tora package. ... Game theory, a branch of operations research examines the various concepts of decision ... AJOL African Journals Online.
Near-Regular Structure Discovery Using Linear Programming
Huang, Qixing; Guibas, Leonidas J.; Mitra, Niloy J.
2014-01-01
The paper formulates near-regular structure discovery as an optimization and efficiently solves it using linear programming techniques. The optimization has a discrete aspect, that is, the connectivity relationships among the elements, as well as a continuous aspect, namely the locations of the elements of interest. Both...
A property of assignment type mixed integer linear programming problems
Benders, J.F.; van Nunen, J.A.E.E.
1982-01-01
In this paper we prove that rather tight upper bounds can be given for the number of non-unique assignments obtained after solving the linear programming relaxation of some types of mixed integer linear assignment problems. Since in these cases the number of split assignments is...
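For the pure assignment problem the LP relaxation is already integral (by the Birkhoff-von Neumann theorem), so split assignments arise only from the extra mixed-integer structure the paper studies. The sketch below solves a small relaxation and counts fractional (split) assignments; the cost matrix is invented:

```python
from scipy.optimize import linprog

# LP relaxation of a 3x3 assignment problem: variables x[i,j] in [0,1],
# each row and each column summing to 1.
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
n = 3
c = [cost[i][j] for i in range(n) for j in range(n)]
A_eq, b_eq = [], []
for i in range(n):  # each worker does exactly one job
    A_eq.append([1.0 if k // n == i else 0.0 for k in range(n * n)])
    b_eq.append(1.0)
for j in range(n):  # each job gets exactly one worker
    A_eq.append([1.0 if k % n == j else 0.0 for k in range(n * n)])
    b_eq.append(1.0)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * (n * n),
              method="highs")
# Count "split" (fractional) assignments; zero here, since without
# side constraints the relaxation's vertices are permutation matrices.
splits = sum(1 for v in res.x if 1e-6 < v < 1 - 1e-6)
print(res.fun, splits)
```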
Linear Programming and Its Application to Pattern Recognition Problems
Omalley, M. J.
1973-01-01
Linear programming and linear-programming-like techniques as applied to pattern recognition problems are discussed. Three relatively recent research articles on such applications are summarized. The main results of each paper are described, indicating the theoretical tools needed to obtain them. A synopsis of each author's comments is presented with regard to the applicability or non-applicability of his methods to particular problems, including computational results wherever given.
Linear programming phase unwrapping for dual-wavelength digital holography.
Wang, Zhaomin; Jiao, Jiannan; Qu, Weijuan; Yang, Fang; Li, Hongru; Tian, Ailing; Asundi, Anand
2017-01-20
A linear programming phase unwrapping method in dual-wavelength digital holography is proposed and verified experimentally. The proposed method uses the square of height difference as a convergence standard and theoretically gives the boundary condition in a searching process. A simulation was performed by unwrapping step structures at different levels of Gaussian noise. As a result, our method is capable of recovering the discontinuities accurately. It is robust and straightforward. In the experiment, a microelectromechanical systems sample and a cylindrical lens were measured separately. The testing results were in good agreement with true values. Moreover, the proposed method is applicable not only in digital holography but also in other dual-wavelength interferometric techniques.
EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.
Jarvis, John J.; And Others
Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…
Planning Student Flow with Linear Programming: A Tunisian Case Study.
Bezeau, Lawrence
A student flow model in linear programming format, designed to plan the movement of students into secondary and university programs in Tunisia, is described. The purpose of the plan is to determine a sufficient number of graduating students that would flow back into the system as teachers or move into the labor market to meet fixed manpower…
Linear Programming for Vocational Education Planning. Interim Report.
Young, Robert C.; And Others
The purpose of the paper is to define for potential users of vocational education management information systems a quantitative analysis technique and its utilization to facilitate more effective planning of vocational education programs. Defining linear programming (LP) as a management technique used to solve complex resource allocation problems…
Using linear programming to analyze and optimize stochastic flow lines
DEFF Research Database (Denmark)
Helber, Stefan; Schimmelpfeng, Katja; Stolletz, Raik
2011-01-01
This paper presents a linear programming approach to analyze and optimize flow lines with limited buffer capacities and stochastic processing times. The basic idea is to solve a huge but simple linear program that models an entire simulation run of a multi-stage production process in discrete time… programming and hence allows us to solve buffer allocation problems. We show under which conditions our method works well by comparing its results to exact values for two-machine models and approximate simulation results for longer lines.
Underground Nuclear Testing Program, Nevada Test Site
International Nuclear Information System (INIS)
1975-09-01
The Energy Research and Development Administration (ERDA) continues to conduct an underground nuclear testing program which includes tests for nuclear weapons development and other tests for development of nuclear explosives and methods for their application for peaceful uses. ERDA also continues to provide nuclear explosive and test site support for nuclear effects tests sponsored by the Department of Defense. This Supplement extends the Environmental Statement (WASH-1526) to cover all underground nuclear tests and preparations for tests of one megaton (1 MT) or less at the Nevada Test Site (NTS) during Fiscal Year 1976. The test activities covered include numerous continuing programs, both nuclear and non-nuclear, which can best be conducted in a remote area. However, if nuclear excavation tests or tests of yields above 1 MT or tests away from NTS should be planned, these will be covered by separate environmental statements
Linear combination of forecasts with numerical adjustment via MINIMAX non-linear programming
Directory of Open Access Journals (Sweden)
Jairo Marlon Corrêa
2016-03-01
This paper proposes a linear combination of forecasts obtained from three forecasting methods (namely, ARIMA, exponential smoothing, and artificial neural networks) whose adaptive weights are determined via a multi-objective non-linear programming problem that seeks to minimize, simultaneously, the statistics MAE, MAPE, and MSE. The results achieved by the proposed combination are compared with the traditional approach of linear combinations of forecasts, where the optimal adaptive weights are determined by minimizing the MSE alone; with combination by the arithmetic mean; and with the individual methods.
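The traditional MSE-only benchmark that the paper compares against reduces to an ordinary least squares problem in the individual forecasts. A sketch with invented data (two forecasting methods rather than the paper's three):

```python
import numpy as np

# Realized values and, column by column, two methods' forecasts per period.
y = np.array([10.0, 12.0, 11.0, 13.0])
F = np.array([[ 9.5, 10.8],
              [11.8, 12.5],
              [11.2, 10.6],
              [12.7, 13.4]])

# MSE-optimal (unconstrained) combination weights: an OLS fit of y on F.
w, *_ = np.linalg.lstsq(F, y, rcond=None)
combined = F @ w
mse = float(np.mean((y - combined) ** 2))
print(w, mse)
```

By construction the combined MSE can be no worse than either individual forecast's MSE, since each individual forecast corresponds to a particular weight vector.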
Testing Parametric versus Semiparametric Modelling in Generalized Linear Models
Härdle, W.K.; Mammen, E.; Müller, M.D.
1996-01-01
We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)}, where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e....
Linear program differentiation for single-channel speech separation
DEFF Research Database (Denmark)
Pearlmutter, Barak A.; Olsson, Rasmus Kongsgaard
2006-01-01
Many apparently difficult problems can be solved by reduction to linear programming. Such problems are often subproblems within larger systems. When gradient optimisation of the entire larger system is desired, it is necessary to propagate gradients through the internally-invoked LP solver. For instance, when an intermediate quantity z is the solution to a linear program involving constraint matrix A, a vector of sensitivities dE/dz will induce sensitivities dE/dA. Here we show how these can be efficiently calculated, when they exist. This allows algorithmic differentiation to be applied to algorithms that invoke linear programming solvers as subroutines, as is common when using sparse representations in signal processing. Here we apply it to gradient optimisation of overcomplete dictionaries for maximally sparse representations of a speech corpus. The dictionaries are employed in a single...
Linear Parametric Sensitivity Analysis of the Constraint Coefficient Matrix in Linear Programs
R.A. Zuidwijk (Rob)
2005-01-01
Sensitivity analysis is used to quantify the impact of changes in the initial data of linear programs on the optimal value. In particular, parametric sensitivity analysis involves a perturbation analysis in which the effects of small changes of some or all of the initial data on an...
An Instructional Note on Linear Programming--A Pedagogically Sound Approach.
Mitchell, Richard
1998-01-01
Discusses the place of linear programming in college curricula and the advantages of using linear-programming software. Lists important characteristics of computer software used in linear programming for more effective teaching and learning. (ASK)
Applied Research of Enterprise Cost Control Based on Linear Programming
Directory of Open Access Journals (Sweden)
Yu Shuo
2015-01-01
This paper studies enterprise cost control through a linear programming model, analyzing the factors that constrain production, such as labor, raw materials, and processing equipment, and the factors that affect income, such as sales price, so as to obtain an enterprise cost control model based on linear programming. The model can calculate a rational production mode under limited resources and the optimal enterprise income. The production guidance program and scheduling arrangement of the enterprise can be obtained from the calculation results, providing scientific and effective guidance for production. A sensitivity analysis is added to the linear programming model to examine the stability of the cost control model, verify its rationality, and indicate directions for enterprise cost control. The calculation results of the model can provide a reference for enterprise planning in a market economy environment, with strong practical significance for enterprise cost control.
Non-linear nuclear engineering models as genetic programming application
International Nuclear Information System (INIS)
Domingos, Roberto P.; Schirru, Roberto; Martinez, Aquilino S.
1997-01-01
This work presents the Genetic Programming paradigm and a nuclear application. Genetic Programming, a field of Artificial Intelligence based on the concepts of species evolution and natural selection, can be understood as a self-programming process in which the computer is the main agent responsible for discovering a program able to solve a given problem. In the present case, the problem was to find a mathematical expression in symbolic form expressing the relation between the equivalent ratio of a fuel cell, the enrichment of the fuel elements, and the multiplication factor. Such an expression avoids repeated execution of reactor physics codes during core optimization. The results were compared with those obtained by other techniques such as Neural Networks and Multiple Linear Regression. Genetic Programming performed as well as, and in some respects better than, Neural Networks and Multiple Linear Regression. (author). 10 refs., 8 figs., 1 tab.
lmerTest Package: Tests in Linear Mixed Effects Models
DEFF Research Database (Denmark)
Kuznetsova, Alexandra; Brockhoff, Per B.; Christensen, Rune Haubo Bojesen
2017-01-01
One of the frequent questions from users of the mixed model function lmer of the lme4 package has been: How can I get p values for the F and t tests for objects returned by lmer? The lmerTest package extends the 'lmerMod' class of the lme4 package by overloading the anova and summary functions, providing p values for tests of fixed effects. We have implemented Satterthwaite's method for approximating degrees of freedom for the t and F tests. We have also implemented the construction of Type I - III ANOVA tables. Furthermore, one may also obtain the summary as well as the anova table using...
A MICROCOMPUTER LINEAR PROGRAMMING PACKAGE: AN ALTERNATIVE TO MAINFRAMES
Laughlin, David H.
1984-01-01
This paper presents the capabilities and limitations of a microcomputer linear programming package. The solution algorithm is a version of the revised simplex. Rapid problem entry, ease of operation, and sensitivity analyses on the objective function and right-hand sides are advantages. A problem size of 150 activities and 64 constraints can be solved in the present form. Due to problem size limitations and the lack of parametric and integer programming routines, this package is thought to have the mos...
Optimal traffic control in highway transportation networks using linear programming
Li, Yanning; Canepa, Edward S.; Claudel, Christian G.
2014-01-01
…of the Hamilton-Jacobi PDE, the problem of controlling the state of the system on a network link in a finite horizon can be posed as a linear program. Assuming all intersections in the network are controllable, we show that the optimization approach can...
LCPT: a program for finding linear canonical transformations
International Nuclear Information System (INIS)
Char, B.W.; McNamara, B.
1979-01-01
This article describes a MACSYMA program to compute symbolically a canonical linear transformation between coordinate systems. The difficulties in implementation of this canonical small physics problem are also discussed, along with the implications that may be drawn from such difficulties about widespread MACSYMA usage by the community of computational/theoretical physicists
Fitting program for linear regressions according to Mahon (1996)
Energy Technology Data Exchange (ETDEWEB)
2018-01-09
This program takes the user's input data and fits a linear regression to it using the prescription presented by Mahon (1996). Compared to the commonly used York fit, this method has the correct prescription for measurement error propagation. This software should facilitate the proper fitting of measurements with a simple interface.
Linear Programming, the Simplex Algorithm and Simple Polytopes
Directory of Open Access Journals (Sweden)
Das Bhusan
2010-09-01
In the first part of the paper we survey some far-reaching applications of the basic facts of linear programming to the combinatorial theory of simple polytopes. In the second part we discuss some recent developments concerning the simplex algorithm. We describe sub-exponential randomized pivot rules and upper bounds on the diameter of graphs of polytopes.
A mixed integer linear program for an integrated fishery | Hasan ...
African Journals Online (AJOL)
... and labour allocation of quota based integrated fisheries. We demonstrate the workability of our model with a numerical example and sensitivity analysis based on data obtained from one of the major fisheries in New Zealand. Keywords: mixed integer linear program, fishing, trawler scheduling, processing, quotas ORiON: ...
Interior-Point Methods for Linear Programming: A Review
Singh, J. N.; Singh, D.
2002-01-01
The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most of the interior-point methods belong to any of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…
A Partitioning and Bounded Variable Algorithm for Linear Programming
Sheskin, Theodore J.
2006-01-01
An interesting new partitioning and bounded variable algorithm (PBVA) is proposed for solving linear programming problems. The PBVA is a variant of the simplex algorithm which uses a modified form of the simplex method followed by the dual simplex method for bounded variables. In contrast to the two-phase method and the big M method, the PBVA does…
A Spreadsheet-Based, Matrix Formulation Linear Programming Lesson
DEFF Research Database (Denmark)
Harrod, Steven
2009-01-01
The article focuses on a spreadsheet-based, matrix formulation linear programming lesson. According to the article, it makes a higher level of theoretical mathematics approachable by a wide spectrum of students, many of whom may not be decision sciences or quantitative methods majors. Moreover...
175 Years of Linear Programming - Minimax and Cake Topography
Indian Academy of Sciences (India)
Resonance – Journal of Science Education, Volume 4, Issue 7, July 1999, pp. 4-13. Series article by Vijay Chandru and M. R. Rao.
Analysis of Students' Errors on Linear Programming at Secondary ...
African Journals Online (AJOL)
The purpose of this study was to identify secondary school students' errors on linear programming at 'O' level. It is based on the fact that students' errors inform teaching hence an essential tool for any serious mathematics teacher who intends to improve mathematics teaching. The study was guided by a descriptive survey ...
Frost, Susan A.; Bodson, Marc; Acosta, Diana M.
2009-01-01
The Next Generation (NextGen) transport aircraft configurations being investigated as part of the NASA Aeronautics Subsonic Fixed Wing Project have more control surfaces, or control effectors, than existing transport aircraft configurations. Conventional flight control is achieved through two symmetric elevators, two antisymmetric ailerons, and a rudder. The five effectors, reduced to three command variables, produce moments along the three main axes of the aircraft and enable the pilot to control the attitude and flight path of the aircraft. The NextGen aircraft will have additional redundant control effectors to control the three moments, creating a situation where the aircraft is over-actuated and where a simple relationship no longer exists between the required effector deflections and the desired moments. NextGen flight controllers will incorporate control allocation algorithms to determine the optimal effector commands and attain the desired moments, taking into account the effector limits. Approaches to solving the problem using linear programming and quadratic programming algorithms have been proposed and tested. It is of great interest to understand their relative advantages and disadvantages and how design parameters may affect their properties. In this paper, we investigate the sensitivity of the effector commands with respect to the desired moments and show on some examples that the solutions provided using the l2 norm of quadratic programming are less sensitive than those using the l1 norm of linear programming.
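The two allocation norms discussed above can be contrasted in a few lines. The sketch below is illustrative only: the effectiveness matrix `B`, desired moments `m`, and effector limits are invented, not NASA data. The l1 allocation is posed as a linear program via slack variables, while the l2 allocation uses the minimum-norm pseudoinverse solution.

```python
# Hedged sketch: l1 (linear programming) vs l2 (least-norm) control
# allocation for an over-actuated vehicle. B and m are invented examples.
import numpy as np
from scipy.optimize import linprog

B = np.array([[1.0, 0.5, 0.0, -0.5, 0.0],   # roll effectiveness (assumed)
              [0.0, 1.0, 0.2,  1.0, 0.0],   # pitch effectiveness (assumed)
              [0.1, 0.0, 1.0,  0.0, 1.0]])  # yaw effectiveness (assumed)
m = np.array([0.2, -0.1, 0.3])              # desired moments
n = B.shape[1]

# l1 allocation: minimize sum(t) with -t <= u <= t, B u = m, |u_i| <= 1.
# Decision vector is x = [u, t].
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[np.eye(n), -np.eye(n)],
                 [-np.eye(n), -np.eye(n)]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([B, np.zeros((B.shape[0], n))])
bounds = [(-1, 1)] * n + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=m, bounds=bounds)
u_l1 = res.x[:n]

# l2 allocation: minimum-norm solution via the pseudoinverse.
u_l2 = np.linalg.pinv(B) @ m
```

Both commands satisfy B u = m; the l1 solution sits at a vertex of the feasible set, which is why small changes in m can switch which effectors it uses, matching the sensitivity contrast the paper studies.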
Testing hypotheses for differences between linear regression lines
Stanley J. Zarnoch
2009-01-01
Five hypotheses are identified for testing differences between simple linear regression lines. The distinctions between these hypotheses are based on a priori assumptions and illustrated with full and reduced models. The contrast approach is presented as an easy and complete method for testing for overall differences between the regressions and for making pairwise...
Ford, F. E.; Harkness, J. M.
1977-01-01
A brief discussion on the accelerated testing of batteries is given. The statistical analysis and the various aspects of the modeling that was done and the results attained from the model are also briefly discussed.
Linear decomposition approach for a class of nonconvex programming problems.
Shen, Peiping; Wang, Chunfeng
2017-01-01
This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. By solving a series of linear programming subproblems corresponding to those grid points, a near-optimal solution of the original problem can be obtained. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it offers an interesting approach to solving the problem with a reduced running time.
Learning oncogenetic networks by reducing to mixed integer linear programming.
Shahrabi Farahani, Hossein; Lagergren, Jens
2013-01-01
Cancer can be a result of accumulation of different types of genetic mutations such as copy number aberrations. The data from tumors are cross-sectional and do not contain the temporal order of the genetic events. Finding the order in which the genetic events have occurred, and the progression pathways, is of vital importance in understanding the disease. In order to model cancer progression, we propose Progression Networks, a special case of Bayesian networks that are tailored to model disease progression. Progression networks have similarities with Conjunctive Bayesian Networks (CBNs) [1], a variation of Bayesian networks also proposed for modeling disease progression. We also describe a learning algorithm for learning Bayesian networks in general and progression networks in particular. We reduce the hard problem of learning the Bayesian and progression networks to Mixed Integer Linear Programming (MILP). MILP is an NP-complete problem for which very good heuristics exist. We tested our algorithm on synthetic and real cytogenetic data from renal cell carcinoma. We also compared our learned progression networks with the networks proposed in earlier publications. The software is available on the website https://bitbucket.org/farahani/diprog.
DEFF Research Database (Denmark)
Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian
2014-01-01
In the paper, three frequently used operation optimisation methods are examined with respect to their impact on operation management of the combined utility technologies for electric power and DH (district heating) of eastern Denmark. The investigation focusses on individual plant operation differences and differences between the solution found by each optimisation method. One of the investigated approaches utilises LP (linear programming) for optimisation, one uses LP with binary operation constraints, while the third approach uses NLP (non-linear programming). The LP model is used as a benchmark, as this type is frequently used, and has the lowest amount of constraints of the three. A comparison of the optimised operation of a number of units shows significant differences between the three methods. Compared to the reference, the use of binary integer variables increases operation…
Train Repathing in Emergencies Based on Fuzzy Linear Programming
Directory of Open Access Journals (Sweden)
Xuelei Meng
2014-01-01
Train pathing is a typical problem which is to assign the train trips on the sets of rail segments, such as rail tracks and links. This paper focuses on the train pathing problem, determining the paths of the train trips in emergencies. We analyze the influencing factors of train pathing, such as transferring cost, running cost, and social adverse effect cost. With the overall consideration of the segment and station capability constraints, we build the fuzzy linear programming model to solve the train pathing problem. We design the fuzzy membership function to describe the fuzzy coefficients. Furthermore, the contraction-expansion factors are introduced to contract or expand the value ranges of the fuzzy coefficients, coping with the uncertainty of the value range of the fuzzy coefficients. We propose a method based on triangular fuzzy coefficients and transfer the train pathing model (a fuzzy linear programming model) to a determinate linear model to solve the fuzzy linear programming problem. An emergency scenario is constructed based on the real data of the Beijing-Shanghai Railway. The model in this paper was solved and the computation results prove the availability of the model and the efficiency of the algorithm.
Train repathing in emergencies based on fuzzy linear programming.
Meng, Xuelei; Cui, Bingmou
2014-01-01
Train pathing is a typical problem which is to assign the train trips on the sets of rail segments, such as rail tracks and links. This paper focuses on the train pathing problem, determining the paths of the train trips in emergencies. We analyze the influencing factors of train pathing, such as transferring cost, running cost, and social adverse effect cost. With the overall consideration of the segment and station capability constraints, we build the fuzzy linear programming model to solve the train pathing problem. We design the fuzzy membership function to describe the fuzzy coefficients. Furthermore, the contraction-expansion factors are introduced to contract or expand the value ranges of the fuzzy coefficients, coping with the uncertainty of the value range of the fuzzy coefficients. We propose a method based on triangular fuzzy coefficients and transfer the train pathing model (a fuzzy linear programming model) to a determinate linear model to solve the fuzzy linear programming problem. An emergency scenario is constructed based on the real data of the Beijing-Shanghai Railway. The model in this paper was solved and the computation results prove the availability of the model and the efficiency of the algorithm.
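The key step in both train-pathing papers is converting a fuzzy LP into a determinate one. A minimal sketch of that idea, under assumed numbers that are not the papers' railway data: triangular fuzzy cost coefficients (a, b, c) are collapsed to crisp values (here by the centroid (a + b + c)/3, one common defuzzification choice) and the resulting ordinary LP is solved.

```python
# Hedged sketch: defuzzify triangular fuzzy costs, then solve a crisp LP.
# The tiny two-route problem is invented for illustration only.
from scipy.optimize import linprog

fuzzy_costs = [(2.0, 3.0, 4.0), (4.0, 5.0, 6.0)]   # triangular (a, b, c)
crisp = [sum(t) / 3.0 for t in fuzzy_costs]         # centroid -> [3.0, 5.0]

# minimize crisp cost subject to serving 10 units of demand,
# with per-route capacities of 6 and 8:  x + y >= 10, 0<=x<=6, 0<=y<=8
res = linprog(c=crisp,
              A_ub=[[-1.0, -1.0]], b_ub=[-10.0],
              bounds=[(0, 6), (0, 8)])
# The cheaper route is used to capacity: x = 6, y = 4, total cost 38.
```

The papers go further by introducing contraction-expansion factors on the value ranges, but the fuzzy-to-crisp transformation above is the structural core.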
Relaxation Methods for Strictly Convex Regularizations of Piecewise Linear Programs
International Nuclear Information System (INIS)
Kiwiel, K. C.
1998-01-01
We give an algorithm for minimizing the sum of a strictly convex function and a convex piecewise linear function. It extends several dual coordinate ascent methods for large-scale linearly constrained problems that occur in entropy maximization, quadratic programming, and network flows. In particular, it may solve exact penalty versions of such (possibly inconsistent) problems, and subproblems of bundle methods for nondifferentiable optimization. It is simple, can exploit sparsity, and in certain cases is highly parallelizable. Its global convergence is established in the recent framework of B-functions (generalized Bregman functions).
International program on linear electric motors. CIGGT report No. 92-1
Energy Technology Data Exchange (ETDEWEB)
Dawson, G.E.; Eastham, A.R.; Parker, J.H.
1992-12-31
The International Program for Linear Electric Motors (LEM) was begun in April 1989 to communicate and coordinate activities with centers of expertise in Germany, Canada, and Japan; to provide for the assessment and support of the planning of technological developments and for dissemination of information to researchers, service operators, and policy makers; and to ensure that full advantage can be taken if opportunities for technology transfer occur. This report documents the work done under the program, including standardizing linear induction motor (LIM) design characteristics; test procedures and measurement methods; rating; database for design data; criteria for evaluation of designs; computer programs for modelling performance; and a design study for an agreed application.
International Nuclear Information System (INIS)
Berdaky, Mafalda Feliciano
2000-01-01
This work presents the operational part of the final process of establishing a radiotherapy service with a linear accelerator (6 MeV photon beams), including the acceptance tests, commissioning tests and the implementation of a quality control program through routine mechanical and radiation tests. All acceptance tests were satisfactory, showing results below the limits allowed by the manufacturer, and the commissioning tests presented results within international recommendations. The quality control program was performed during 34 months and showed an excellent stability of this accelerator. (author)
NLC. A test accelerator for the next linear collider
International Nuclear Information System (INIS)
Ruth, R.D.; Adolphsen, C.; Bane, K.; Boyce, R.F.; Burke, D.L.; Callin, R.; Caryotakis, G.; Cassel, R.; Clark, S.L.; Deruyter, H.; Fant, K.; Fuller, R.; Heifets, S.; Hoag, H.; Humphrey, R.; Kheifets, S.; Koontz, R.; Kroll, N.M.; Lavine, T.; Loew, G.A.; Menegat, A.; Miller, R.H.; Nantista, C.; Paterson, J.M.; Pearson, C.; Phillips, R.; Rifkin, J.; Spencer, J.; Tantawi, S.; Thompson, K.A.; Vlieks, A.; Vylet, V.; Wang, J.W.; Wilson, P.B.; Yeremian, A.; Youngman, B.
1993-01-01
At SLAC, we are pursuing the design of a Next Linear Collider (NLC) which would begin with a center-of-mass energy of 0.5 TeV, and be upgradable to at least 1.0 TeV. To achieve this high energy, we have been working on the development of a high-gradient 11.4-GHz (X-band) linear accelerator for the main linac of the collider. In this paper, we present the design of a 'Next Linear Collider Test Accelerator' (NLCTA). The goal of the NLCTA is to incorporate the new technologies of X-band accelerator structures, RF pulse compression systems and klystrons into a short linac which will then be a test bed for beam dynamics issues related to high-gradient acceleration. (orig.)
A test accelerator for the next linear collider
International Nuclear Information System (INIS)
Ruth, R.D.; Adolphsen, C.; Bane, K.; Boyce, R.F.; Burke, D.L.; Callin, R.; Caryotakis, G.; Cassel, R.; Clark, S.L.; Deruyter, H.; Fant, K.; Fuller, R.; Heifets, S.; Hoag, H.; Humphrey, R.; Kheifets, S.; Koontz, R.; Lavine, T.; Loew, G.A.; Menegat, A.; Miller, R.H.; Paterson, J.M.; Pearson, C.; Phillips, R.; Rifkin, J.; Spencer, J.; Tantawi, S.; Thompson, K.A.; Vlieks, A.; Vylet, V.; Wang, J.W.; Wilson, P.B.; Yeremian, A.; Youngman, B.; Kroll, N.M.; Nantista, C.
1993-07-01
At SLAC, the authors are pursuing the design of a Next Linear Collider (NLC) which would begin with a center-of-mass energy of 0.5 TeV, and be upgradable to at least 1.0 TeV. To achieve this high energy, they have been working on the development of a high-gradient 11.4-GHz (X-band) linear accelerator for the main linac of the collider. In this paper, they present the design of a 'Next Linear Collider Test Accelerator' (NLCTA). The goal of the NLCTA is to incorporate the new technologies of X-band accelerator structures, RF pulse compression systems and klystrons into a short linac which will then be a test bed for beam dynamics issues related to high-gradient acceleration
Development and adjustment of programs for solving systems of linear equations
International Nuclear Information System (INIS)
Fujimura, Toichiro
1978-03-01
Programs for solving systems of linear equations have been adjusted and developed in expanding the scientific subroutine library SSL. The principal programs adjusted are based on the congruent method, the method of the product form of the inverse, the orthogonal method, Crout's method for sparse systems, and acceleration of iterative methods. The programs developed are based on the escalator method, the direct parallel residue method and the block tridiagonal method for band systems. The usage of the developed programs and their future improvement are described. FORTRAN listings with simple examples from tests of the programs are also given. (auth.)
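For the band-system family mentioned above, the scalar tridiagonal case gives the flavor of the method. This is a sketch of the standard Thomas algorithm, not the report's FORTRAN routines; the test system is invented.

```python
# Hedged sketch: direct solution of a tridiagonal system by forward
# elimination and back substitution (the Thomas algorithm).
import numpy as np

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system; sub/sup have length n-1, diag/rhs length n."""
    n = len(diag)
    cp = np.empty(n - 1)   # modified super-diagonal
    dp = np.empty(n)       # modified right-hand side
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):                      # forward sweep
        denom = diag[i] - sub[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = sup[i] / denom
        dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Diagonally dominant test system with known solution x = (1, 1, 1, 1).
sub = np.array([1.0, 1.0, 1.0])
diag = np.array([4.0, 4.0, 4.0, 4.0])
sup = np.array([1.0, 1.0, 1.0])
rhs = np.array([5.0, 6.0, 6.0, 5.0])
A = np.diag(diag) + np.diag(sub, -1) + np.diag(sup, 1)
x = thomas(sub, diag, sup, rhs)
```

The block tridiagonal variant replaces the scalar divisions with small matrix solves but follows the same sweep structure.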
No-signaling quantum key distribution: solution by linear programming
Hwang, Won-Young; Bae, Joonwoo; Killoran, Nathan
2015-02-01
We outline a straightforward approach for obtaining a secret key rate using only no-signaling constraints and linear programming. Assuming an individual attack, we consider all possible joint probabilities. Initially, we study only the case where Eve has binary outcomes, and we impose constraints due to the no-signaling principle and given measurement outcomes. Within the remaining space of joint probabilities, by using linear programming, we get a bound on the probability of Eve correctly guessing Bob's bit. We then make use of an inequality that relates this guessing probability to the mutual information between Bob and a more general Eve, who is not binary-restricted. Putting our computed bound together with the Csiszár-Körner formula, we obtain a positive key generation rate. The optimal value of this rate agrees with known results, but was calculated in a more straightforward way, offering the potential of generalization to different scenarios.
Planning under uncertainty solving large-scale stochastic linear programs
Energy Technology Data Exchange (ETDEWEB)
Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research]|[Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft
1992-12-01
For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
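The structure of a two-stage stochastic linear program with recourse can be shown on a toy example. This sketch solves the deterministic (extensive-form) equivalent over a few sampled demand scenarios directly; the paper's contribution is solving much larger instances by decomposition and importance sampling, which this does not attempt. All numbers are invented.

```python
# Hedged sketch: extensive-form equivalent of a two-stage stochastic LP
# (a newsvendor model). First stage: order quantity x. Second stage:
# per-scenario sales s_s limited by both x and the scenario demand d_s.
import numpy as np
from scipy.optimize import linprog

demands = [5.0, 10.0, 15.0]          # sampled demand scenarios (assumed)
probs = [1/3, 1/3, 1/3]
cost, price = 1.0, 2.0               # unit purchase cost, unit sale price
S = len(demands)

# Variables [x, s_1..s_S]; maximize expected profit, i.e.
# minimize  cost*x - price * sum_s p_s * s_s
c = np.array([cost] + [-price * p for p in probs])
A_ub, b_ub = [], []
for s in range(S):
    row = np.zeros(S + 1)
    row[0], row[s + 1] = -1.0, 1.0   # s_s <= x   (cannot sell unordered stock)
    A_ub.append(row); b_ub.append(0.0)
    row = np.zeros(S + 1)
    row[s + 1] = 1.0                  # s_s <= d_s (cannot sell beyond demand)
    A_ub.append(row); b_ub.append(demands[s])
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=[(0, None)] * (S + 1))
order, expected_profit = res.x[0], -res.fun
```

With thousands of scenarios the constraint matrix becomes block-angular, which is exactly what Benders-style decomposition exploits.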
Optimal selection for shielding materials by fuzzy linear programming
International Nuclear Information System (INIS)
Kanai, Y.; Miura, N.; Sugasawa, S.
1996-01-01
An application of fuzzy linear programming methods to optimization of a radiation shield is presented. The main purpose of the present study is the choice of materials and the search for the mixture-component ratio as the first stage of a methodology for optimum shielding design according to the individual requirements of a nuclear reactor, reprocessing facility, shipping cask for spent fuel, etc. The characteristic values considered in the shield optimization may include cost, spatial volume, weight and shielding qualities such as activation rate and total dose rate for neutrons and gamma rays (including secondary gamma rays). This new approach can reduce the huge combination calculations of conventional two-valued logic approaches to a single representative shielding calculation by group-wise optimization parameters determined in advance. Using the fuzzy linear programming method, possibilities for reducing radiation effects attainable with optimal compositions of hydrated, lead- and boron-containing materials are investigated
Algorithmic Trading with Developmental and Linear Genetic Programming
Wilson, Garnett; Banzhaf, Wolfgang
A developmental co-evolutionary genetic programming approach (PAM DGP) and a standard linear genetic programming (LGP) stock trading system are applied to a number of stocks across market sectors. Both GP techniques were found to be robust to market fluctuations and reactive to opportunities associated with stock price rise and fall, with PAM DGP generating notably greater profit in some stock trend scenarios. Both algorithms were very accurate at buying to achieve profit and selling to protect assets, while exhibiting both moderate trading activity and the ability to maximize or minimize investment as appropriate. The content of the trading rules produced by both algorithms is also examined in relation to stock price trend scenarios.
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
Energy Technology Data Exchange (ETDEWEB)
Hamadameen, Abdulqader Othman [Optimization, Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia); Zainuddin, Zaitul Marlizawati [Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia)
2014-06-19
This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and the two solution strategies are: the fuzzy transformation via a ranking function, and the stochastic transformation in which the α-cut technique and linguistic hedges are used on the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
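The α-cut operation referred to above has a simple closed form for triangular fuzzy numbers: at level α, a triangular number (a, b, c) admits the interval [a + α(b − a), c − α(c − b)]. A minimal sketch with invented values:

```python
# Hedged sketch of the alpha-cut of a triangular fuzzy number (a, b, c).
def alpha_cut(a, b, c, alpha):
    """Return the closed interval of a triangular fuzzy number at level alpha."""
    return (a + alpha * (b - a), c - alpha * (c - b))

lo, hi = alpha_cut(2.0, 3.0, 5.0, 0.5)   # -> (2.5, 4.0)
```

At α = 1 the interval collapses to the peak b, and at α = 0 it spans the full support [a, c], which is what lets the cut parameterize "how uncertain" the coefficient is allowed to be.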
MAGDM linear-programming models with distinct uncertain preference structures.
Xu, Zeshui S; Chen, Jian
2008-10-01
Group decision making with preference information on alternatives is an interesting and important research topic which has been receiving more and more attention in recent years. The purpose of this paper is to investigate multiple-attribute group decision-making (MAGDM) problems with distinct uncertain preference structures. We develop some linear-programming models for dealing with the MAGDM problems, where the information about attribute weights is incomplete, and the decision makers have their preferences on alternatives. The provided preference information can be represented in the following three distinct uncertain preference structures: 1) interval utility values; 2) interval fuzzy preference relations; and 3) interval multiplicative preference relations. We first establish some linear-programming models based on decision matrix and each of the distinct uncertain preference structures and, then, develop some linear-programming models to integrate all three structures of subjective uncertain preference information provided by the decision makers and the objective information depicted in the decision matrix. Furthermore, we propose a simple and straightforward approach in ranking and selecting the given alternatives. It is worth pointing out that the developed models can also be used to deal with the situations where the three distinct uncertain preference structures are reduced to the traditional ones, i.e., utility values, fuzzy preference relations, and multiplicative preference relations. Finally, we use a practical example to illustrate in detail the calculation process of the developed approach.
Federal Laboratory Consortium — The ARDEC TPS Laboratory provides an organic Test Program Set (TPS) development, maintenance, and life cycle management capability for DoD LCMC materiel developers....
Evaluation of the linear power of HANARO test fuel bundles
Energy Technology Data Exchange (ETDEWEB)
Lee, Choong Sung; Seo, C. G.; Lee, B. C.; Kim, H. R
2001-02-01
The HANARO fuel was developed by AECL and is configured in a bundle of rods containing uranium silicide. AECL conducted a variety of tests using specimens in order to achieve its qualification and licensing, and the highest linear power was evaluated to be 112.8 kW/m. In the design stage of HANARO, the best estimated maximum linear power at the hot spot was found to occur in the transition core from the initial to the equilibrium core, and its value was 108 kW/m, which exceeds 112.8 kW/m if the physics uncertainty of the HANARO nuclear design model is taken into account. Consequently, the licensing body issued a conditional permit to operate HANARO, and confirmation of the fuel integrity at linear powers higher than 112.8 kW/m through repeated irradiation tests was requested. Accordingly, KAERI designed uninstrumented and instrumented test fuel bundles and conducted their burnup tests. In parallel with the tests, the nuclear design model has been revised and updated to enable pursuit of the pin-by-pin power history. This report describes the best estimated power history of the test fuel bundles using the revised model. In conclusion, HANARO fuel keeps its integrity at power conditions greater than 120 kW/m.
Nevada Test Site closure program
International Nuclear Information System (INIS)
Shenk, D.P.
1994-08-01
This report is a summary of the history, design and development, procurement, fabrication, installation and operation of the closures used as containment devices on underground nuclear tests at the Nevada Test Site. It also addresses the closure program mothball and start-up procedures. The Closure Program Document Index and equipment inventories, included as appendices, serve as location directories for future document reference and equipment use
An algorithm for the solution of dynamic linear programs
Psiaki, Mark L.
1989-01-01
The algorithm's objective is to efficiently solve Dynamic Linear Programs (DLP) by taking advantage of their special staircase structure. This algorithm constitutes a stepping stone to an improved algorithm for solving Dynamic Quadratic Programs, which, in turn, would make the nonlinear programming method of Successive Quadratic Programs more practical for solving trajectory optimization problems. The ultimate goal is to bring trajectory optimization solution speeds into the realm of real-time control. The algorithm exploits the staircase nature of the large constraint matrix of the equality-constrained DLPs encountered when solving inequality-constrained DLPs by an active set approach. A numerically stable, staircase QL factorization of the staircase constraint matrix is carried out starting from its last rows and columns. The resulting recursion is like the time-varying Riccati equation from multi-stage LQR theory. The resulting factorization increases the efficiency of all of the typical LP solution operations over that of a dense matrix LP code, while ensuring numerical stability. The algorithm also takes advantage of dynamic programming ideas about the cost-to-go by relaxing active pseudo constraints in a backwards sweeping process. This further decreases the cost per update of the LP rank-1 updating procedure, although it may result in more changes of the active set than if pseudo constraints were relaxed in a non-stagewise fashion. The usual stability of closed-loop Linear/Quadratic optimally-controlled systems, if it carries over to strictly linear cost functions, implies that the savings due to reduced factor update effort may outweigh the cost of an increased number of updates. An aerospace example is presented in which a ground-to-ground rocket's distance is maximized. This example demonstrates the applicability of this class of algorithms to aerospace guidance. It also sheds light on the efficacy of the proposed pseudo constraint relaxation
Matzke, Orville R.
The purpose of this study was to formulate a linear programming model to simulate a foundation type support program and to apply this model to a state support program for the public elementary and secondary school districts in the State of Iowa. The model was successful in producing optimal solutions to five objective functions proposed for…
Directory of Open Access Journals (Sweden)
Claudio Roberto Fóffano Vasconcelos
2016-01-01
The aim of this study is to examine empirically the validity of PPP in the context of unit root tests based on linear and non-linear models of the real effective exchange rate of Argentina, Brazil, Chile, Colombia, Mexico, Peru and Venezuela. For this purpose, we apply the Harvey et al. (2008) linearity test and the non-linear unit root test (Kruse, 2011). The results show that the series with linear characteristics are Argentina, Brazil, Chile, Colombia and Peru, and those with non-linear characteristics are Mexico and Venezuela. The linear unit root tests indicate that the real effective exchange rate is stationary for Chile and Peru, and the non-linear unit root tests show that Mexico's is stationary. In the period analyzed, the results support the validity of PPP in only three of the seven countries.
A linear programming model of diet choice of free-living beavers
Nolet, BA; VanderVeer, PJ; Evers, EGJ; Ottenheim, MM
1995-01-01
Linear programming has been remarkably successful in predicting the diet choice of generalist herbivores. We used this technique to test the diet choice of free-living beavers (Castor fiber) in the Biesbosch (The Netherlands) under different foraging goals, i.e. maximization of intake of energy,
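The LP diet-choice framework the abstract refers to is the classic diet problem: minimize a cost (or maximize an intake) subject to nutrient floors. A minimal sketch with two invented foods and two invented nutrient requirements, not the beaver data:

```python
# Hedged sketch of a linear-programming diet model: choose food
# quantities minimizing cost subject to nutrient minimums.
from scipy.optimize import linprog

costs = [2.0, 3.0]                   # cost per unit of food A, food B
# each row: (nutrient per unit of A, per unit of B, required minimum)
requirements = [(1.0, 2.0, 8.0),     # protein: x + 2y >= 8
                (2.0, 1.0, 10.0)]    # energy:  2x + y >= 10

A_ub = [[-a, -b] for a, b, _ in requirements]   # flip >= into <= for linprog
b_ub = [-req for _, _, req in requirements]
res = linprog(costs, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
# Optimum at the constraint intersection: x = 4, y = 2, cost 14.
```

For a foraging animal the "cost" row is typically handling time and the objective can be flipped to maximize energy intake under time and digestive-capacity constraints.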
A Primal-Dual Interior Point-Linear Programming Algorithm for MPC
DEFF Research Database (Denmark)
Edlund, Kristian; Sokoler, Leo Emil; Jørgensen, John Bagterp
2009-01-01
Constrained optimal control problems for linear systems with linear constraints and an objective function consisting of linear and l1-norm terms can be expressed as linear programs. We develop an efficient primal-dual interior point algorithm for solution of such linear programs. The algorithm...
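The reformulation this abstract relies on, expressing an l1-norm objective term as a linear program, is standard: each |x_i − r_i| is replaced by a slack t_i with −t_i ≤ x_i − r_i ≤ t_i, and the sum of the slacks is minimized. A minimal sketch with an invented reference and constraint, not the MPC formulation itself:

```python
# Hedged sketch: minimize ||x - r||_1 subject to a linear constraint,
# posed as an LP over the stacked variable [x, t].
import numpy as np
from scipy.optimize import linprog

r = np.array([1.0, 2.0])                 # reference to track (assumed)
n = len(r)
c = np.concatenate([np.zeros(n), np.ones(n)])   # minimize sum(t)
A_ub = np.vstack([
    np.hstack([np.eye(n), -np.eye(n)]),         #  x - t <= r
    np.hstack([-np.eye(n), -np.eye(n)]),        # -x - t <= -r
    [[1.0, 1.0, 0.0, 0.0]],                     # example constraint x1 + x2 <= 2
])
b_ub = np.concatenate([r, -r, [2.0]])
bounds = [(None, None)] * n + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
# r sums to 3, the constraint caps the sum at 2, so the minimal
# l1 deviation is 1.
```

In the MPC setting x stacks inputs and states over the horizon, and the resulting LP has the banded structure that the paper's primal-dual interior point method exploits.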
Accelerated leach test development program
International Nuclear Information System (INIS)
Fuhrmann, M.; Pietrzak, R.F.; Heiser, J.; Franz, E.M.; Colombo, P.
1990-11-01
In FY 1989, a draft accelerated leach test for solidified waste was written. Combined test conditions that accelerate leaching were validated through experimental and modeling efforts. A computer program was developed that calculates test results and models leaching mechanisms. This program allows the user to determine if diffusion controls leaching and, if this is the case, to make projections of releases. Leaching mechanisms other than diffusion (diffusion plus source-term partitioning and solubility-limited leaching) are included in the program as indicators of other processes that may control leaching. Leach test data are presented and modeling results are discussed for laboratory-scale waste forms composed of portland cement containing sodium sulfate salt, portland cement containing incinerator ash, and vinyl ester-styrene containing sodium sulfate. 16 refs., 38 figs., 5 tabs
Optimal traffic control in highway transportation networks using linear programming
Li, Yanning
2014-06-01
This article presents a framework for the optimal control of boundary flows on transportation networks. The state of the system is modeled by a first order scalar conservation law (Lighthill-Whitham-Richards PDE). Based on an equivalent formulation of the Hamilton-Jacobi PDE, the problem of controlling the state of the system on a network link over a finite horizon can be posed as a linear program. Assuming all intersections in the network are controllable, we show that the optimization approach can be extended to an arbitrary transportation network, preserving linear constraints. Unlike previously investigated transportation network control schemes, this framework leverages the intrinsic properties of the Hamilton-Jacobi equation, and does not require any discretization or boolean variables on the link. Hence this framework is computationally efficient and provides the globally optimal solution. The feasibility of this framework is illustrated by an on-ramp metering control example.
A recurrent neural network for solving bilevel linear programming problem.
He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie; Huang, Junjian
2014-04-01
In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by means of a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with the existing NNs for BLPP, the model has the least number of state variables and simple structure. Using nonsmooth analysis, the theory of differential inclusions, and Lyapunov-like method, the equilibrium point sequence of the proposed NNs can approximately converge to an optimal solution of BLPP under certain conditions. Finally, the numerical simulations of a supply chain distribution model have shown excellent performance of the proposed recurrent NNs.
A scalable parallel algorithm for multiple objective linear programs
Wiecek, Malgorzata M.; Zhang, Hong
1994-01-01
This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLP's). Job balance, speedup and scalability are of primary interest in evaluating efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLP's, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLP's are also included.
The MARX Modulator Development Program for the International Linear Collider
International Nuclear Information System (INIS)
Leyh, G.E.
2006-01-01
The International Linear Collider (ILC) Marx Modulator Development Program at SLAC is working towards developing a full-scale ILC Marx 'Reference Design' modulator prototype, with the goal of significantly reducing the size and cost of the ILC modulator while improving overall modulator efficiency and availability. The ILC Reference Design prototype will provide a proof-of-concept model to industry in advance of Phase II SBIR funding, and also allow operation of the new 10 MW L-Band klystron prototypes immediately upon their arrival at SLAC
Marginal cost of electricity conservation: an application of linear program
International Nuclear Information System (INIS)
Silveira, A.M. da; Hollanda, J.B. de
1987-01-01
This paper addresses the planning of the electricity industry when the use of energy-efficient appliances (conservation) is financed by the utilities. It is based on the linear programming model proposed by Massé and Boiteux for the planning of conventional energy sources, where one unit of electricity (kW/kWh) saved is treated as if it were a generator of equivalent size. In spite of the formal simplicity of the model, it can support interesting conclusions on the subject of an electrical energy conservation policy. (author)
Thurstonian models for sensory discrimination tests as generalized linear models
DEFF Research Database (Denmark)
Brockhoff, Per B.; Christensen, Rune Haubo Bojesen
2010-01-01
Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data, and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard…
Refining and end use study of coal liquids II - linear programming analysis
Energy Technology Data Exchange (ETDEWEB)
Lowe, C.; Tam, S.
1995-12-31
A DOE-funded study is underway to determine the optimum refinery processing schemes for producing transportation fuels that will meet CAAA regulations from direct and indirect coal liquids. The study consists of three major parts: pilot plant testing of critical upgrading processes, linear programming analysis of different processing schemes, and engine emission testing of final products. Currently, fractions of a direct coal liquid produced from bituminous coal are being tested in a sequence of pilot plant upgrading processes. This work is discussed in a separate paper. The linear programming model, which is the subject of this paper, has been completed for the petroleum refinery and is being modified to handle coal liquids based on the pilot plant test results. Preliminary coal liquid evaluation studies indicate that, if a refinery expansion scenario is adopted, the marginal value of the coal liquid (over the base petroleum crude) is $3-4/bbl.
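The "marginal value" quoted above is what an LP solver reports as a shadow price (dual variable) on a resource constraint. A minimal sketch with SciPy's `linprog`, using invented toy profits and capacities rather than the study's refinery model:

```python
from scipy.optimize import linprog

# Toy refinery LP (illustrative numbers, not from the study):
# maximize 3*gasoline + 2*diesel   (profit per bbl)
# s.t.     gasoline + diesel <= 10 (crude availability, bbl)
#          gasoline          <= 6  (unit capacity, bbl)
c = [-3.0, -2.0]                   # linprog minimizes, so negate profits
A_ub = [[1.0, 1.0], [1.0, 0.0]]
b_ub = [10.0, 6.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")

profit = -res.fun
# Dual (shadow) prices: marginal value of one extra unit of each resource.
shadow = -res.ineqlin.marginals
print(profit, shadow)   # optimal profit 26; shadow prices (2, 1)
```

Relaxing the crude constraint by one barrel would raise profit by the first shadow price, which is exactly the "marginal value over the base crude" notion used in the abstract.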
Polymorphic Uncertain Linear Programming for Generalized Production Planning Problems
Directory of Open Access Journals (Sweden)
Xinbo Zhang
2014-01-01
A polymorphic uncertain linear programming (PULP) model is constructed to formulate a class of generalized production planning problems. In accordance with the practical environment, factors such as the consumption of raw material, the limitation of resources and the demand for products are incorporated into the model as interval parameters and fuzzy subsets, respectively. Based on the theory of fuzzy interval programming and a modified possibility degree for the order of interval numbers, a deterministic equivalent formulation for this model is derived such that a robust solution for the uncertain optimization problem is obtained. A case study indicates that the constructed model and the proposed solution are useful for finding an optimal production plan for polymorphic uncertain generalized production planning problems.
Zörnig, Peter
2015-08-01
We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
Adaptive linear rank tests for eQTL studies.
Szymczak, Silke; Scheinhardt, Markus O; Zeller, Tanja; Wild, Philipp S; Blankenberg, Stefan; Ziegler, Andreas
2013-02-10
Expression quantitative trait loci (eQTL) studies are performed to identify single-nucleotide polymorphisms that modify average expression values of genes, proteins, or metabolites, depending on the genotype. As expression values are often not normally distributed, statistical methods for eQTL studies should be valid and powerful in these situations. Adaptive tests are promising alternatives to standard approaches, such as the analysis of variance or the Kruskal-Wallis test. In a two-stage procedure, skewness and tail length of the distributions are estimated and used to select one of several linear rank tests. In this study, we compare two adaptive tests that were proposed in the literature using extensive Monte Carlo simulations of a wide range of different symmetric and skewed distributions. We derive a new adaptive test that combines the advantages of both literature-based approaches. The new test does not require the user to specify a distribution. It is slightly less powerful than the locally most powerful rank test for the correct distribution and at least as powerful as the maximin efficiency robust rank test. We illustrate the application of all tests using two examples from different eQTL studies. Copyright © 2012 John Wiley & Sons, Ltd.
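The two-stage idea, first estimating distribution shape and then selecting a test, can be sketched as follows. The selector statistic, cut-off and fallback tests here are invented for illustration; they are not the adaptive tests compared in the study:

```python
import numpy as np
from scipy import stats

# Illustrative two-stage selection only: the published adaptive tests use
# specific rank-score functions and calibrated cut-offs; the 0.5 skewness
# threshold and the ANOVA/Kruskal-Wallis fallback pair are assumptions.
def adaptive_test(values, genotypes):
    groups = [values[genotypes == g] for g in np.unique(genotypes)]
    # Stage 1: estimate skewness from pooled within-genotype residuals.
    centered = np.concatenate([g - g.mean() for g in groups])
    # Stage 2: pick a test based on the estimated shape.
    if abs(stats.skew(centered)) < 0.5:       # roughly symmetric
        return "anova", stats.f_oneway(*groups).pvalue
    return "kruskal", stats.kruskal(*groups).pvalue

rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=300)            # genotypes 0/1/2
expr = rng.lognormal(mean=0.5 * geno, sigma=1.0)  # skewed expression values
name, p = adaptive_test(expr, geno)
print(name, p)
```

Because the simulated expression values are log-normal, the selector lands on the rank-based branch, mirroring the motivation of the paper: non-normal expression values call for a different test than ANOVA.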
AN APPLICATION FOR EFFICIENT TELECOMMUNICATION NETWORKS PROVISIONING USING LINEAR PROGRAMMING
Directory of Open Access Journals (Sweden)
Maria Augusta Soares Machado
2015-03-01
This paper presents a practical proposition for the application of the linear programming quantitative method to assist planning and control of customer circuit delivery activities in telecommunications companies serving the corporate market. Based upon data provided by a telecom company operating in Brazil, the linear programming method was employed for one of the classical problems of determining the optimum production mix for a set of five of the company's products: Private Telephone Network, Internet Network, Intranet Network, Low Speed Data Network, and High Speed Data Network, in the face of several limitations of the productive resources, seeking to maximize the company's monthly revenue. By fitting the available production data into a primary model, the number of monthly activations of each product that maximizes the company's revenue was determined. The study considered not the final delivery of a complete network but the delivery of the circuits that make it up, which was a limiting factor; nevertheless, it brings an innovative proposition for the planning of private telecommunications network provisioning.
Assembling networks of microbial genomes using linear programming.
Holloway, Catherine; Beiko, Robert G
2010-11-20
Microbial genomes exhibit complex sets of genetic affinities due to lateral genetic transfer. Assessing the relative contributions of parent-to-offspring inheritance and gene sharing is a vital step in understanding the evolutionary origins and modern-day function of an organism, but recovering and showing these relationships is a challenging problem. We have developed a new approach that uses linear programming to find between-genome relationships, by treating tables of genetic affinities (here, represented by transformed BLAST e-values) as an optimization problem. Validation trials on simulated data demonstrate the effectiveness of the approach in recovering and representing vertical and lateral relationships among genomes. Application of the technique to a set comprising Aquifex aeolicus and 75 other thermophiles showed an important role for large genomes as 'hubs' in the gene sharing network, and suggested that genes are preferentially shared between organisms with similar optimal growth temperatures. We were also able to discover distinct and common genetic contributors to each sequenced representative of genus Pseudomonas. The linear programming approach we have developed can serve as an effective inference tool in its own right, and can be an efficient first step in a more-intensive phylogenomic analysis.
Linear programming based on neural networks for radiotherapy treatment planning
International Nuclear Information System (INIS)
Xingen Wu; Limin Luo
2000-01-01
In this paper, we propose a neural network model for linear programming that is designed to optimize radiotherapy treatment planning (RTP). This kind of neural network can be easily implemented by using a kind of 'neural' electronic system in order to obtain an optimization solution in real time. We first give an introduction to the RTP problem and construct a non-constraint objective function for the neural network model. We adopt a gradient algorithm to minimize the objective function and design the structure of the neural network for RTP. Compared to traditional linear programming methods, this neural network model can reduce the time needed for convergence, the size of problems (i.e., the number of variables to be searched) and the number of extra slack and surplus variables needed. We obtained a set of optimized beam weights that result in a better dose distribution as compared to that obtained using the simplex algorithm under the same initial condition. The example presented in this paper shows that this model is feasible in three-dimensional RTP. (author)
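The penalty-function dynamics behind such "neural" LP solvers can be imitated numerically by plain gradient descent on a penalized objective. A minimal sketch on a toy LP (not the RTP formulation), assuming a quadratic penalty with a fixed weight `mu`:

```python
import numpy as np

# Penalty-function sketch of the neural-network LP idea (toy problem):
# minimize c@x  s.t.  A@x = b, x >= 0,
# approximated by gradient descent on the unconstrained energy
#   E(x) = c@x + (mu/2)*(||A@x - b||^2 + ||min(x, 0)||^2)
c = np.array([2.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
mu, step = 100.0, 0.005

x = np.zeros(2)
for _ in range(5000):
    grad = c + mu * A.T @ (A @ x - b) + mu * np.minimum(x, 0.0)
    x -= step * grad

# x approaches the true optimum (0, 1) as mu grows; with finite mu the
# equilibrium slightly violates the constraints, as in the NN literature.
print(np.round(x, 2))
```

The descent iteration plays the role of the network's continuous dynamics; an electronic implementation integrates the same vector field in analog hardware.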
How to Use Linear Programming for Information System Performances Optimization
Directory of Open Access Journals (Sweden)
Hell Marko
2014-09-01
Background: Organisations nowadays operate in a very dynamic environment, and therefore their ability to continuously adjust the strategic plan to new conditions is a must for achieving their strategic objectives. BSC is a well-known methodology for measuring performances, enabling organizations to learn how well they are doing. In this paper, a "BSC for IS" is proposed in order to measure the IS impact on the achievement of organizations' business goals. Objectives: The objective of this paper is to present an original procedure used to enhance the BSC methodology in planning the optimal targets of IS performance values in order to maximize the organization's effectiveness. Methods/Approach: The method used in this paper is a quantitative methodology: linear programming. In the case study, linear programming is used for optimizing the organization's strategic performance. Results: Results are shown on the example of a case study of a national park. An optimal performance value has been calculated for the strategic objective, as well as for each derived objective (DO). Results are calculated in Excel, using the Solver add-in. Conclusions: The presentation of the methodology through the case study of a national park shows that this methodology, though it requires a high level of formalisation, provides a very transparent performance calculation.
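The target-planning step can be sketched as a small LP. All weights, effort coefficients and limits below are invented; the paper's case-study model, built with Excel's Solver, is larger:

```python
from scipy.optimize import linprog

# Hypothetical miniature of the target-planning LP: choose performance
# targets d1, d2 (0-100) for two derived objectives (DOs) to maximize the
# weighted strategic objective 0.6*d1 + 0.4*d2 under an effort budget.
c = [-0.6, -0.4]                  # negate weights: linprog minimizes
A_ub = [[5.0, 3.0]]               # analyst-hours per performance point
b_ub = [450.0]                    # hours available
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 100), (0, 100)], method="highs")
print(res.x, -res.fun)            # optimal DO targets and strategic value
```

The solver fills the DO with the better value-per-hour ratio first, which is the same behaviour the Excel Solver exhibits on the full model.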
Testing for one Generalized Linear Single Order Parameter
DEFF Research Database (Denmark)
Ellegaard, Niels Langager; Christensen, Tage Emil; Dyre, Jeppe
We examine a linear single-order-parameter model for thermoviscoelastic relaxation in viscous liquids, allowing for a distribution of relaxation times. In this model the relaxation of volume and enthalpy is completely described by the relaxation of one internal order parameter. In contrast to prior work, the order parameter may be chosen to have a non-exponential relaxation. The model predictions contradict the general consensus of the properties of viscous liquids in two ways: (i) the model predicts that following a linear isobaric temperature step, the normalized volume and enthalpy relaxation … responses or extrapolate from measurements of a glassy state away from equilibrium. Starting from a master-equation description of inherent dynamics, we calculate the complex thermodynamic response functions. We devise a way of testing for the generalized single-order-parameter model by measuring three complex…
Assessment of the Roche Linear Array HPV Genotyping Test within the VALGENT framework.
Xu, Lan; Oštrbenk, Anja; Poljak, Mario; Arbyn, Marc
2018-01-01
Cervical cancer screening programs are switching from cytology-based screening to high-risk (hr) HPV testing. Only clinically validated tests should be used in clinical practice. The aim was to assess the clinical performance of the Roche Linear Array HPV genotyping test (Linear Array) within the VALGENT-3 framework. The VALGENT framework is designed for comprehensive comparison and clinical validation of HPV tests that have limited to extended genotyping capacity. The Linear Array enables type-specific detection of 37 HPV types. For the purpose of this study, Linear Array results were designated as positive only if one of the 13 hrHPV types also included in the Hybrid Capture 2 (HC2) test was detected. The VALGENT-3 framework comprised 1600 samples obtained from Slovenian women (1300 sequential cases from routine cervical cancer screening enriched with 300 cytologically abnormal samples). Sensitivity for cervical intraepithelial neoplasia of grade 2 or worse (CIN2+) (n=127) and specificity were assessed for Linear Array and for HC2, and non-inferiority of Linear Array relative to HC2 was checked. In addition, the prevalence of separate hrHPV types in the screening population, as well as the concordance for the presence of HPV16, HPV18 and other hrHPV types between Linear Array and the Abbott RealTime High Risk HPV test (RealTime), were assessed. The clinical sensitivity and specificity for CIN2+ of the Linear Array in the total study population were 97.6% (95% CI, 93.3-99.5%) and 91.7% (95% CI, 90.0-93.2%), respectively. The relative sensitivity of Linear Array vs HC2 was 1.02 (95% CI, 0.98-1.05) … The hrHPV prevalence by Linear Array in the screening population was 10.5% (95% CI, 8.9-12.3%), with HPV16 and HPV18 detected in 2.3% and 0.9% of the samples, respectively. Excellent agreement for the presence or absence of HPV16, HPV18 and other hrHPV between Linear Array and RealTime was observed. Linear Array showed similar sensitivity and higher specificity for detecting CIN2+ compared with HC2. Detection of 13 hrHPV types…
Testing Linear Temporal Logic Formulae on Finite Execution Traces
Havelund, Klaus; Rosu, Grigore; Norvig, Peter (Technical Monitor)
2001-01-01
We present an algorithm for efficiently testing Linear Temporal Logic (LTL) formulae on finite execution traces. The standard models of LTL are infinite traces, reflecting the behavior of reactive and concurrent systems which conceptually may be continuously alive. In most past applications of LTL, theorem provers and model checkers have been used to formally prove that down-scaled models satisfy such LTL specifications. Our goal is instead to use LTL for up-scaled testing of real software applications. Such tests correspond to analyzing the conformance of finite traces against LTL formulae. We first describe what it means for a finite trace to satisfy an LTL property. We then suggest an optimized algorithm based on transforming LTL formulae. The work is done using the Maude rewriting system, which turns out to provide a perfect notation and an efficient rewriting engine for performing these experiments.
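The finite-trace semantics described above can be sketched directly as a recursive checker. The paper's implementation instead rewrites formulae in Maude; the tuple encoding of formulae here is an assumption made for illustration:

```python
# Direct recursive sketch of finite-trace LTL semantics. A trace is a list
# of states; a state is a set of atomic propositions. Formulae are nested
# tuples: ("ap", p), ("not", f), ("and", f, g), ("next", f),
# ("always", f), ("eventually", f), ("until", f, g).

def holds(formula, trace, i=0):
    op = formula[0]
    if op == "ap":
        return i < len(trace) and formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "next":
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == "always":
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "eventually":
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "until":
        return any(holds(formula[2], trace, j)
                   and all(holds(formula[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(op)

trace = [{"req"}, {"req"}, {"ack"}]
print(holds(("until", ("ap", "req"), ("ap", "ack")), trace))  # True
print(holds(("always", ("ap", "req")), trace))                # False
```

This naive checker re-evaluates subformulae at every position; the formula-transforming algorithm of the paper avoids that by consuming the trace one state at a time.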
Single event upset test programs
International Nuclear Information System (INIS)
Russen, L.C.
1984-11-01
It has been shown that the heavy ions in cosmic rays can give rise to single event upsets in VLSI random access memory devices (RAMs). Details are given of the programs written to test 1K, 4K, 16K and 64K memories during their irradiation with heavy charged ions, in order to simulate the effects of cosmic rays in space. The test equipment, which is used to load the memory device to be tested with a known bit pattern, and subsequently interrogate it for upsets, or 'flips', is fully described. (author)
International Nuclear Information System (INIS)
Murao, Y.; Iguchi, T.; Sugimoto, J.; Akimoto, H.; Okubo, T.; Okabe, K.
1984-01-01
The cylindrical core test facility (CCTF) is one of the facilities of the large scale reflood test program, which was initiated in April 1976. The first series of CCTF tests (CCTF Core I Test) was completed in April 1981 and the second series (CCTF Core II Test) has been conducted since April 1982. The tests are intended to examine: (1) the conservativeness of the assumptions of the safety analysis with the evaluation model (EM) code; (2) the refill and reflood phenomena, for analytical modeling of the thermo-hydrodynamics in the core and the system; (3) the validity of the models in the EM code and their application to best estimate code development. In this paper, a quantitative evaluation of the REFLA code and a discussion of some CCTF Core II Test results are presented. The REFLA code consists of the REFLA-1D core code, developed from the results of small scale tests, and a simple system model developed from the results of the CCTF Core I Test. The CCTF Core II Test was performed to develop a more realistic model for the alternative ECCS as well as for the cold leg injection type ECCS.
Automated [inservice testing] IST program
International Nuclear Information System (INIS)
Wright, W.M.
1990-01-01
There are two methods used to manage a Section XI program: manual and automated. The manual method usually consists of hand-written records of test results and scheduling requirements. This method, while initially lower in cost, results in problems later in the life of a plant as data continue to accumulate. Automation allows instant access to forty years of test results. Due to the lower cost and higher performance of today's personal computers, an automated method via a computer program provides an excellent way of managing the vast amount of data that accumulates over the forty-year life of a plant. Through the use of a computer, special functions involving these data are available which would not be practical with a manual method. This paper describes some of the advantages of using a computer program to manage the Section XI IST program. ISTBASE consists of program code and numerous databases. The source code is written and compiled in the CLIPPER (tm) language. Graphing routines are performed by the dGE (tm) graphics library; graphs are displayed in EGA form. Since it was estimated that the total compiled code would exceed 640K of RAM, overlays through the use of modular programming were used to work within the DOS restriction of 640K RAM. The use of overlays still requires the user to gain access to ISTBASE through the PASSWORD module. The database files are designed to be compatible with the dBASE III+ (tm) data structure. This allows transfer of data between ISTBASE and other database managers/applications. A math co-processor is utilized to speed up calculations on graphs and other mathematical calculations. Program code and data files require a hard disk drive with at least 28 MB capacity. While ISTBASE will execute on an 8088-based computer, an 80286 computer with a 12 MHz operating speed should be considered the minimum system configuration.
Proceedings of the 2. International Linear Collider Test-beam workshop - LCTW'09
International Nuclear Information System (INIS)
Wormser, G.; Poeschl, R.; Takeshi, M.; Yu, J.; Hauptman, J.; Jeans, D.; Velthuis, J.; Repond, J.; Stanitzki, M.; Chefdeville, M.; Pauletta, G.; Hauptman, J.; Kulis, S.; Charpy, A.; Rivera, R.; Turchetti, M.; Vos, M.; Dehmelt, K.; Settles, R.; Decotigny, D.; Killenberg, M.; Haas, D.; Gaede, F.; Graf, N.; Wing, M.; Gaede, F.; Karstensen, S.; Meyners, N.; Hast, C.; Vrba, V.; Takeshita, T.; Kawagoe, K.; Linssen, L.; Ramberg, E.; Demarteau, M.; Fisk, H.E.; Savoy-Navarro, A.; Videau, H.; Boudry, V.; Hauptman, J.; Lipton, R.; Nelson, T.
2009-01-01
At this workshop detector and simulation experts have described and discussed the necessary ILC (International Linear Collider) detector research and development program in view of its need for test beams. This workshop has provided an opportunity to evaluate the capabilities and shortcomings of existing facilities in the context of planned test beam activities. This document gathers together the slides of the presentations. The presentations have been classified into 4 topics: -) plans of sub-detectors - calorimetry, silicon and gaseous tracking, -) data acquisition, -) test beam facilities, and -) resources and infrastructure for future test beams
Portfolio selection problem: a comparison of fuzzy goal programming and linear physical programming
Directory of Open Access Journals (Sweden)
Fusun Kucukbay
2016-04-01
Investors have limited budgets and try to maximize their return with minimum risk; this study therefore deals with the portfolio selection problem. Two criteria are considered: expected return and risk. In this respect, the linear physical programming (LPP) technique is applied to BIST 100 stocks to find the optimum portfolio. The analysis covers the period April 2009 - March 2015, divided into two parts: April 2009 - March 2014 and April 2014 - March 2015. The April 2009 - March 2014 period is used as data to find an optimal solution, and the April 2014 - March 2015 period is used to test the real performance of the portfolios. The performance of the obtained portfolio is compared with that obtained from fuzzy goal programming (FGP). Then the performances of both methods, LPP and FGP, are compared with the BIST 100 in terms of their Sharpe indexes. The findings reveal that LPP is a good alternative to FGP for the portfolio selection problem.
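A drastically simplified linear portfolio model conveys the flavor of such formulations. The returns and risk scores below are invented, and neither the LPP nor the FGP machinery of the study is reproduced:

```python
from scipy.optimize import linprog

# Toy linear portfolio model: maximize expected return subject to a
# full-investment constraint and a linear risk-score budget.
returns = [0.12, 0.09, 0.05]          # expected return per asset (invented)
risk = [[0.8, 0.4, 0.1]]              # risk score per asset (invented)
res = linprog([-r for r in returns],  # negate: linprog minimizes
              A_ub=risk, b_ub=[0.5],            # portfolio risk score <= 0.5
              A_eq=[[1.0, 1.0, 1.0]], b_eq=[1.0],  # weights sum to 1
              method="highs")
weights, exp_ret = res.x, -res.fun
print(weights, exp_ret)   # optimal: w = (0.25, 0.75, 0), return 0.0975
```

The binding risk budget forces part of the high-return asset out of the portfolio, which is the return-versus-risk trade-off both LPP and FGP formalize in richer ways.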
Aether: leveraging linear programming for optimal cloud computing in genomics.
Luber, Jacob M; Tierney, Braden T; Cofer, Evan M; Patel, Chirag J; Kostic, Aleksandar D
2018-05-01
Across biology, we are seeing rapid developments in scale of data production without a corresponding increase in data analysis capabilities. Here, we present Aether (http://aether.kosticlab.org), an intuitive, easy-to-use, cost-effective and scalable framework that uses linear programming to optimally bid on and deploy combinations of underutilized cloud computing resources. Our approach simultaneously minimizes the cost of data analysis and provides an easy transition from users' existing HPC pipelines. Data utilized are available at https://pubs.broadinstitute.org/diabimmune and with EBI SRA accession ERP005989. Source code is available at (https://github.com/kosticlab/aether). Examples, documentation and a tutorial are available at http://aether.kosticlab.org. chirag_patel@hms.harvard.edu or aleksandar.kostic@joslin.harvard.edu. Supplementary data are available at Bioinformatics online.
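The core optimization, choosing the cheapest mix of resources that covers aggregate requirements, can be sketched as an LP. The instance types, prices and requirements below are invented, not Aether's actual bidding model:

```python
from scipy.optimize import linprog

# Toy cost-optimal provisioning LP. Fractional instance-hours are allowed,
# so the problem stays a pure LP (no integrality needed for the sketch).
costs = [0.10, 0.12]            # $/h for hypothetical instance types A and B
A_ub = [[-4.0, -8.0],           # -(CPUs per instance)   <= -32 CPUs required
        [-16.0, -8.0]]          # -(GB RAM per instance) <= -64 GB required
b_ub = [-32.0, -64.0]
res = linprog(costs, A_ub=A_ub, b_ub=b_ub, method="highs")
print(res.x, round(res.fun, 4))  # cheapest mix meeting both requirements
```

Because type A is RAM-heavy and type B is CPU-heavy, the optimum blends both rather than over-provisioning with either alone.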
Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming
Energy Technology Data Exchange (ETDEWEB)
Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo
2013-05-23
This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that using a stochastic approach provides a conservative yet more lucrative battery schedule: given fuel cell outages, lower expected energy bills result, with potential savings exceeding 6 percent.
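The stochastic-programming idea, a single first-stage decision hedged against probability-weighted scenarios, can be sketched with a two-scenario toy. All prices, probabilities and capacities are invented and far simpler than DER-CAM:

```python
from scipy.optimize import linprog

# Two-scenario stochastic LP sketch. First-stage decision: energy pre-stored
# in the battery off-peak; recourse: peak-price grid purchases per scenario.
# Scenario 1 (prob 0.8): fuel cell delivers 8 kWh -> residual demand  2 kWh.
# Scenario 2 (prob 0.2): outage, delivers 0 kWh   -> residual demand 10 kWh.
c = [0.05,            # $/kWh to charge the battery off-peak
     0.8 * 0.20,      # probability-weighted grid price, scenario 1
     0.2 * 0.20]      # probability-weighted grid price, scenario 2
A_ub = [[-1.0, -1.0, 0.0],    # battery + grid_1 >= 2
        [-1.0, 0.0, -1.0]]    # battery + grid_2 >= 10
b_ub = [-2.0, -10.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 10), (0, None), (0, None)], method="highs")
charge, cost = res.x[0], res.fun
print(charge, cost)   # hedge: pre-charge only the always-needed 2 kWh
```

The stochastic optimum charges conservatively: beyond the residual demand common to all scenarios, extra storage only pays off in the low-probability outage and is not worth its charging cost, mirroring the "conservative yet more lucrative" schedule reported above.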
CONTRIBUTION OF A LINEAR PROGRAMMING VBA MODULE TO STUDENTS PEFORMANCE
Directory of Open Access Journals (Sweden)
KUČÍRKOVÁ Lenka
2010-12-01
This paper deals with the application of freeware modules as a teaching support for Operations Research methods at the Department of Systems Engineering, Czech University of Life Sciences (CULS) Prague. In particular, we concentrated on a linear programming module and measured its impact on student performance. The motivation for this evaluation is the current development of a new module that focuses on the Traveling Salesman Problem. First, we explain the current situation both worldwide and in the Czech Republic and at CULS Prague. Subsequently, we describe the content of the students' exams and the statistical methods applied to the evaluation. Finally, we analyze and generalize the obtained results. The students' exams have shown a positive impact of the modules, and our analysis has proven that this impact is statistically significant. The findings motivate us to create new modules for other methods.
Local beam angle optimization with linear programming and gradient search
International Nuclear Information System (INIS)
Craft, David
2007-01-01
The optimization of beam angles in IMRT planning is still an open problem, with literature focusing on heuristic strategies and exhaustive searches on discrete angle grids. We show how a beam angle set can be locally refined in a continuous manner using gradient-based optimization in the beam angle space. The gradient is derived using linear programming duality theory. Applying this local search to 100 random initial angle sets of a phantom pancreatic case demonstrates the method, and highlights the many-local-minima aspect of the BAO problem. Due to this function structure, we recommend a search strategy of a thorough global search followed by local refinement at promising beam angle sets. Extensions to nonlinear IMRT formulations are discussed. (note)
Optimization of refinery product blending by using linear programming
International Nuclear Information System (INIS)
Ristikj, Julija; Tripcheva-Trajkovska, Loreta; Rikaloski, Ice; Markovska, Liljana
1999-01-01
The product slate of a simple refinery consists mainly of liquefied petroleum gas, leaded and unleaded gasoline, jet fuel, diesel fuel, extra light heating oil and fuel oil. The quality of the oil products (fuels) for sale has to comply with the adopted standards for liquid fuels, and the produced quantities have to comply with market needs. The oil products are manufactured by blending two or more different fractions whose quantities and physical-chemical properties depend on the crude oil type and the way and conditions of processing; at the same time, each fraction may be used to blend one or more products. It is in the producer's interest to do the blending in an optimal way, namely to satisfy the requirements for oil product quality and quantity with maximal usage of the available fractions and, of course, maximal profit from the sold products. This can be accomplished by applying linear programming, that is, by using a linear model for oil product blending optimization. (Author)
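A minimal blending LP illustrates the approach, assuming the quality index (here octane number) blends linearly by volume; the stream properties and availabilities are invented, not the refinery's:

```python
from scipy.optimize import linprog

# Toy octane-blending LP: maximize gasoline volume from two fractions
# subject to a 91-octane product specification.
# Spec: 95*x1 + 85*x2 >= 91*(x1 + x2)  <=>  -4*x1 + 6*x2 <= 0
c = [-1.0, -1.0]                 # maximize total volume x1 + x2
A_ub = [[-4.0, 6.0]]
b_ub = [0.0]
bounds = [(0, 60), (0, 80)]      # available volumes of each fraction
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, -res.fun)           # uses all 60 of the 95-octane cut
```

The high-octane fraction is exhausted first because it "carries" the low-octane one up to the specification, which is exactly the trade-off a full refinery blending model resolves across many products at once.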
Towards lexicographic multi-objective linear programming using grossone methodology
Cococcioni, Marco; Pappalardo, Massimo; Sergeyev, Yaroslav D.
2016-10-01
Lexicographic Multi-Objective Linear Programming (LMOLP) problems can be solved in two ways: preemptive and nonpreemptive. The preemptive approach requires the solution of a series of LP problems, with changing constraints (each time the next objective is added, a new constraint appears). The nonpreemptive approach is based on a scalarization of the multiple objectives into a single-objective linear function by a weighted combination of the given objectives. It requires the specification of a set of weights, which is not straightforward and can be time consuming. In this work we present both mathematical and software ingredients necessary to solve LMOLP problems using a recently introduced computational methodology (allowing one to work numerically with infinities and infinitesimals) based on the concept of grossone. The ultimate goal of such an attempt is an implementation of a simplex-like algorithm, able to solve the original LMOLP problem by solving only one single-objective problem and without the need to specify finite weights. The expected advantages are therefore obvious.
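The preemptive approach mentioned above (a series of LPs, each adding the previous optimum as a constraint) can be sketched with SciPy on a toy two-objective problem; the grossone methodology of the paper is designed precisely to avoid this sequence of solves:

```python
from scipy.optimize import linprog

# Preemptive lexicographic LP sketch (toy objectives and constraints).
A_ub = [[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
b_ub = [4.0, 3.0, 3.0]

# Stage 1: maximize the top-priority objective f1 = x + y.
r1 = linprog([-1.0, -1.0], A_ub=A_ub, b_ub=b_ub, method="highs")
f1_opt = -r1.fun

# Stage 2: maximize f2 = x among all points where f1 stays optimal
# (the stage-1 optimum becomes an equality constraint).
r2 = linprog([-1.0, 0.0], A_ub=A_ub, b_ub=b_ub,
             A_eq=[[1.0, 1.0]], b_eq=[f1_opt], method="highs")
print(f1_opt, r2.x)   # f1 optimum 4; lexicographic optimum x=3, y=1
```

With k objectives this costs k LP solves; the grossone-weighted scalarization collapses them into one single-objective problem without choosing finite weights.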
C-program LINOP for the evaluation of film dosemeters by linear optimization. User manual
International Nuclear Information System (INIS)
Kragh, P.
1995-11-01
Linear programming yields an optimal measuring value for film dosemeters. The Linop program was developed for this linear programming task. The program permits the evaluation and control of film dosemeters and of all other multi-component dosemeters. This user manual for the Linop program contains the source program, a description of the program, and installation and use instructions. The data sets with programs and examples are available upon request. (orig.)
One testing method of dynamic linearity of an accelerometer
Directory of Open Access Journals (Sweden)
Lei Jing-Yu
2015-01-01
To effectively test the dynamic linearity of an accelerometer over a wide range of 10⁴ g to about 20 × 10⁴ g, a published patent technology is first experimentally verified and analysed, and its deficiency is presented. Then, based on the theory of stress wave propagation in a thin long bar, the relation between the strain signal and the corresponding acceleration signal is obtained, and a special link of two coaxial projectiles is developed. Two coaxial metal cylinders (an inner cylinder and a circular tube) are used as projectiles; to prevent their mutual slip inside the gun barrel during movement, one end of the two projectiles is fastened by small screws. A Ti-6Al-4V bar with a diameter of 30 mm is used to propagate the loading stress pulse. The resulting compression wave can be measured by the strain gauges on the bar, yielding a half-sine strain pulse. The accelerometer under test is attached to the other end of the bar by a vacuum clamp; in this clamp the accelerometer bears only the compression wave, and the reflected tension pulse detaches the accelerometer from the bar. Using this system, the dynamic linearity of an accelerometer can easily be tested over a wider range of acceleration values, and actual measurement results are presented.
Guo, Sangang
2017-09-01
There are two stages in solving security-constrained unit commitment (SCUC) problems within the Lagrangian framework: one is to obtain feasible unit states (UC), the other is the economic dispatch (ED) of power for each unit. For fixed feasible unit states, an accurate solution of ED is the more important factor in enhancing the efficiency of the solution to SCUC. Two novel methods, named the Convex Combinatorial Coefficient Method and the Power Increment Method, are proposed for solving ED; both are based on linear programming problems obtained by piecewise linear approximation of the nonlinear convex fuel cost functions. Numerical testing results show that the methods are effective and efficient.
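The piecewise linear approximation that turns ED into an LP can be sketched on a toy two-unit system. The segment data are invented; convexity guarantees cheaper segments fill first, so no extra ordering constraints are needed:

```python
from scipy.optimize import linprog

# Piecewise-linear economic dispatch sketch. Each unit's convex quadratic
# fuel cost is approximated by two linear segments of 2 MW each.
# Variables: [u1_seg1, u1_seg2, u2_seg1, u2_seg2].
slopes = [2.0, 6.0, 1.0, 3.0]     # incremental $/MWh of each segment
bounds = [(0.0, 2.0)] * 4         # segment capacities
A_eq = [[1.0, 1.0, 1.0, 1.0]]     # total output must meet demand
b_eq = [6.0]                      # 6 MW demand
res = linprog(slopes, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
p1 = res.x[0] + res.x[1]          # dispatch of unit 1
p2 = res.x[2] + res.x[3]          # dispatch of unit 2
print(p1, p2, res.fun)            # dispatch p1=2 MW, p2=4 MW, cost 12
```

Because the segment slopes increase within each unit, the LP automatically loads a unit's first segment before its second, so the piecewise model stays consistent with the underlying convex cost curve.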
Minimum Leakage Condenser Test Program
International Nuclear Information System (INIS)
1978-05-01
This report presents the results and analysis of tests performed on four critical areas of large surface condensers: the tubes, tubesheets, tube/tubesheet joints and the water chambers. Significant changes in operation, service duty and the reliability considerations require that certain existing design criteria be verified and that improved design features be developed. The four critical areas were treated analytically and experimentally. The ANSYS finite element computer program was the basic analytical method and strain gages were used for obtaining experimental data. The results of test and analytical data are compared and recommendations made regarding potential improvement in condenser design features and analytical techniques
A test to evaluate non-linear soil-structure interaction
International Nuclear Information System (INIS)
Hagiwara, T.; Kitada, Y.
2005-01-01
JNES is planning a new project to study non-linear soil-structure interaction (SSI) effects under large earthquake ground motions equivalent to, or exceeding, the S2 design earthquake ground motion. For an SSI test, handling the scale effect of the specimen, taking into account the surrounding soil, when extrapolating the earthquake response to the actual structure is an essential issue in scaled model testing. The test therefore requires the largest specimen and the largest input motion practically achievable. Taking these issues into account, a new test methodology that utilizes an artificial earthquake ground motion is considered desirable if it can be performed at a realistic cost. With this motivation, we have studied a test methodology that applies blasting power as a substitute for a large earthquake ground motion. Information from a coalmine company in the U.S.A. indicates that the blasting carried out in surface coal mining to break the rock covering a coal layer generates a large artificial ground motion similar to an earthquake ground motion. Applying this artificial ground motion to the SSI test is considered very promising because the blasting work is carried out periodically for mining coal, so the motions it generates can be used if a building model is constructed at a point close to the blasting area. The major purposes of the test are to understand (a) the basic earthquake response characteristics of a Nuclear Power Plant (NPP) reactor building when a large earthquake strikes the NPP site and (b) the nonlinear characteristics of the SSI phenomenon during a large earthquake. In the ICONE-13 paper, we introduce the test method and the basic characteristics of measured artificial ground motions generated by blasting works at an actual site. (authors)
Split diversity in constrained conservation prioritization using integer linear programming.
Chernomor, Olga; Minh, Bui Quang; Forest, Félix; Klaere, Steffen; Ingram, Travis; Henzinger, Monika; von Haeseler, Arndt
2015-01-01
Phylogenetic diversity (PD) is a measure of biodiversity based on the evolutionary history of species. Here, we discuss several optimization problems related to the use of PD, and the more general measure split diversity (SD), in conservation prioritization. Depending on the conservation goal and the information available about species, one can construct optimization routines that incorporate various conservation constraints. We demonstrate how this information can be used to select sets of species for conservation action. Specifically, we discuss the use of species' geographic distributions, the choice of candidates under economic pressure, and the use of predator-prey interactions between the species in a community to define viability constraints. Although such optimization problems fall into the class of NP-hard problems, it is possible to solve them in a reasonable amount of time using integer programming. We apply integer linear programming to a variety of models for conservation prioritization that incorporate the SD measure. We show results for two exemplary data sets: the Cape region of South Africa and a Caribbean coral reef community. Finally, we provide user-friendly software at http://www.cibiv.at/software/pda.
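The flavor of such constrained prioritization can be sketched as a tiny binary ILP. This knapsack-style toy (illustrative conservation values, costs, and budget; much simpler than the split-diversity objective itself) uses SciPy's `milp`:

```python
# Toy budget-constrained taxon selection as a binary ILP
# (made-up weights and costs, not the paper's data or objective).
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

weights = np.array([4.0, 3.0, 5.0, 2.0])   # conservation value per species
costs   = np.array([3.0, 2.0, 4.0, 1.0])   # cost of protecting each species
budget  = 6.0

res = milp(c=-weights,                      # milp minimizes, so negate
           constraints=LinearConstraint(costs[np.newaxis, :], -np.inf, budget),
           integrality=np.ones(4),          # all variables binary (with bounds)
           bounds=Bounds(0, 1))
selected = [i for i, v in enumerate(res.x) if v > 0.5]
print(selected, -res.fun)
```

Real PD/SD objectives add constraints linking species to the tree or split network, but the selection variables and budget constraint take exactly this binary-LP form.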
A linear programming approach for placement of applicants to academic programs
Kassa, Biniyam Asmare
2013-01-01
This paper reports a linear programming approach for the placement of applicants to study programs, developed and implemented at the College of Business & Economics, Bahir Dar University, Bahir Dar, Ethiopia. The approach is estimated to significantly streamline the placement decision process at the college by reducing the required man-hours as well as the time it takes to announce placement decisions. Compared to the previous manual system, where only one or two placement criteria were considered, the ...
An Improved Search Approach for Solving Non-Convex Mixed-Integer Non Linear Programming Problems
Sitopu, Joni Wilson; Mawengkang, Herman; Syafitri Lubis, Riri
2018-01-01
The nonlinear mathematical programming problem addressed in this paper has a structure characterized by a subset of variables restricted to assume discrete values, which are linear and separable from the continuous variables. The strategy of releasing nonbasic variables from their bounds, combined with the “active constraint” method, has been developed. This strategy is used to force the appropriate non-integer basic variables to move to their neighbouring integer points. Successful implementation of these algorithms was achieved on various test problems.
Next Linear Collider Test Accelerator conceptual design report
International Nuclear Information System (INIS)
1993-08-01
This document presents the scientific justification and the conceptual design for the "Next Linear Collider Test Accelerator" (NLCTA) at SLAC. The goals of the NLCTA are to integrate the new technologies of X-band accelerator structures and rf systems being developed for the Next Linear Collider, to measure the growth of the "dark current" generated by rf field emission in the accelerator, to demonstrate multi-bunch beam-loading energy compensation and suppression of higher-order deflecting modes, and to measure any transverse components of the accelerating field. The NLCTA will be a 42-meter-long beam line consisting, consecutively, of a thermionic-cathode gun, an X-band buncher, a magnetic chicane, six 1.8-meter-long sections of 11.4-GHz accelerator structure, and a magnetic spectrometer. Initially, the unloaded accelerating gradient will be 50 MV/m; a higher-gradient upgrade option eventually would increase the unloaded gradient to 100 MV/m.
Linear shrinkage test: justification for its reintroduction as a standard South African test method
CSIR Research Space (South Africa)
Sampson, LR
2009-06-04
Several problems with the linear shrinkage test specified in Method A4 of TMH 1 of 1979 were addressed as part of this investigation in an effort to improve the alleged poor reproducibility of the test and justify its reintroduction into TMH 1. A...
An overview of solution methods for multi-objective mixed integer linear programming programs
DEFF Research Database (Denmark)
Andersen, Kim Allan; Stidsen, Thomas Riis
Multiple objective mixed integer linear programming (MOMIP) problems are notoriously hard to solve to optimality, i.e. to find the complete set of non-dominated solutions. We will give an overview of existing methods. Among these are interactive methods, the two-phases method and enumeration...... methods. In particular we will discuss the existing branch and bound approaches for solving multiple objective integer programming problems. Despite the fact that branch and bound methods have been applied successfully to integer programming problems with one criterion, only a few attempts have been made...
Approximate labeling via graph cuts based on linear programming.
Komodakis, Nikos; Tziritas, Georgios
2007-08-01
A new framework is presented for both understanding and developing graph-cut-based combinatorial algorithms suitable for the approximate optimization of a very wide class of Markov Random Fields (MRFs) that are frequently encountered in computer vision. The proposed framework utilizes tools from the duality theory of linear programming in order to provide an alternative and more general view of state-of-the-art techniques like the α-expansion algorithm, which is included merely as a special case. Moreover, contrary to α-expansion, the derived algorithms generate solutions with guaranteed optimality properties for a much wider class of problems, for example, even for MRFs with nonmetric potentials. In addition, they are capable of providing per-instance suboptimality bounds in all occasions, including discrete MRFs with an arbitrary potential function. These bounds prove to be very tight in practice (that is, very close to 1), which means that the resulting solutions are almost optimal. Our algorithms' effectiveness is demonstrated by presenting experimental results on a variety of low-level vision tasks, such as stereo matching, image restoration, image completion, and optical flow estimation, as well as on synthetic problems.
Flow discharge prediction in compound channels using linear genetic programming
Azamathulla, H. Md.; Zahiri, A.
2012-08-01
Flow discharge determination in rivers is one of the key elements of mathematical modelling in the design of river engineering projects. Because of the inundation of floodplains and sudden changes in river geometry, flow resistance equations are not applicable to compound channels, and many approaches have therefore been developed to modify flow discharge computations. Most of these methods give satisfactory results only in laboratory flumes. Due to their ability to model complex phenomena, artificial intelligence methods have recently been employed for wide applications in various fields of water engineering. Linear genetic programming (LGP), a branch of artificial intelligence methods, is able to optimise the model structure and its components and to derive an explicit equation based on the variables of the phenomenon. In this paper, a precise dimensionless equation has been derived for the prediction of flood discharge using LGP. The proposed model was developed using 394 published stage-discharge data sets compiled from laboratory and field studies of 30 compound channels. The results indicate that the LGP model performs better than the existing models.
Periodic inventory system in cafeteria using linear programming
Usop, Mohd Fais; Ishak, Ruzana; Hamdan, Ahmad Ridhuan
2017-11-01
Inventory management is an important factor in running a business, and it plays a big role in managing the stock of a cafeteria. If inventory is not managed wisely, it will affect the cafeteria's profit. The purpose of this study is therefore to find a solution for inventory management in cafeterias, since most cafeterias in Malaysia do not manage their stock well. This study proposes a database system for inventory management and develops an inventory model for cafeteria management. A new database system to improve the management of stock on a weekly basis is provided, using a linear programming model to obtain the optimal range of inventory needed for selected categories. Data collected using the periodic inventory system at the end of each week over a three-month period were analyzed using the Food Stock-take Database. The inventory model was developed from the collected data according to the inventory categories in the cafeteria. Results showed the effectiveness of using the periodic inventory system, which will be very helpful to cafeteria management in organizing the inventory. Moreover, the findings of this study can reduce the cost of operation and increase profit.
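A weekly inventory LP of this kind can be sketched in a few lines. The demands, costs, and supplier capacity below are hypothetical stand-ins for the study's stock-take data:

```python
# Minimal sketch of a multi-period (weekly) inventory LP
# (illustrative numbers, not the cafeteria study's data).
from scipy.optimize import linprog

demand = [30, 50, 40]          # weekly demand, units
order_cap = 40                 # supplier limit per week
c_order, c_hold = 1.0, 0.2     # unit order and holding costs

# Variables: orders o1..o3, then end-of-week stocks s1..s3.
c = [c_order] * 3 + [c_hold] * 3
A_eq = [
    [1, 0, 0, -1,  0,  0],     # o1 - s1 = d1 (start with no stock)
    [0, 1, 0,  1, -1,  0],     # s1 + o2 - s2 = d2
    [0, 0, 1,  0,  1, -1],     # s2 + o3 - s3 = d3
]
bounds = [(0, order_cap)] * 3 + [(0, None)] * 3
res = linprog(c, A_eq=A_eq, b_eq=demand, bounds=bounds, method="highs")
orders, stocks = res.x[:3], res.x[3:]
print(orders, stocks, res.fun)
```

With these numbers the week-2 demand exceeds the order cap, so the LP pre-stocks in week 1, which is exactly the kind of "optimal range" behavior the study describes.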
Near-Regular Structure Discovery Using Linear Programming
Huang, Qixing
2014-06-02
Near-regular structures are common in manmade and natural objects. Algorithmic detection of such regularity greatly facilitates our understanding of shape structures, leads to compact encoding of input geometries, and enables efficient generation and manipulation of complex patterns on both acquired and synthesized objects. Such regularity manifests itself both in the repetition of certain geometric elements, as well as in the structured arrangement of the elements. We cast the regularity detection problem as an optimization and efficiently solve it using linear programming techniques. Our optimization has a discrete aspect, that is, the connectivity relationships among the elements, as well as a continuous aspect, namely the locations of the elements of interest. Both these aspects are captured by our near-regular structure extraction framework, which alternates between discrete and continuous optimizations. We demonstrate the effectiveness of our framework on a variety of problems including near-regular structure extraction, structure-preserving pattern manipulation, and markerless correspondence detection. Robustness results with respect to geometric and topological noise are presented on synthesized, real-world, and also benchmark datasets. © 2014 ACM.
Maximum likelihood pedigree reconstruction using integer linear programming.
Cussens, James; Bartlett, Mark; Jones, Elinor M; Sheehan, Nuala A
2013-01-01
Large population biobanks of unrelated individuals have been highly successful in detecting common genetic variants affecting diseases of public health concern. However, they lack the statistical power to detect more modest gene-gene and gene-environment interaction effects or the effects of rare variants for which related individuals are ideally required. In reality, most large population studies will undoubtedly contain sets of undeclared relatives, or pedigrees. Although a crude measure of relatedness might sometimes suffice, having a good estimate of the true pedigree would be much more informative if this could be obtained efficiently. Relatives are more likely to share longer haplotypes around disease susceptibility loci and are hence biologically more informative for rare variants than unrelated cases and controls. Distant relatives are arguably more useful for detecting variants with small effects because they are less likely to share masking environmental effects. Moreover, the identification of relatives enables appropriate adjustments of statistical analyses that typically assume unrelatedness. We propose to exploit an integer linear programming optimisation approach to pedigree learning, which is adapted to find valid pedigrees by imposing appropriate constraints. Our method is not restricted to small pedigrees and is guaranteed to return a maximum likelihood pedigree. With additional constraints, we can also search for multiple high-probability pedigrees and thus account for the inherent uncertainty in any particular pedigree reconstruction. The true pedigree is found very quickly by comparison with other methods when all individuals are observed. Extensions to more complex problems seem feasible. © 2012 Wiley Periodicals, Inc.
Discovery of Boolean metabolic networks: integer linear programming based approach.
Qiu, Yushan; Jiang, Hao; Ching, Wai-Ki; Cheng, Xiaoqing
2018-04-11
Traditional drug discovery methods have focused on the efficacy of drugs rather than their toxicity. However, toxicity and/or lack of efficacy arise when unintended targets are affected in metabolic networks. Thus, the identification of biological targets which can be manipulated to produce the desired effect with minimum side-effects has become an important and challenging topic, and efficient computational methods are required to identify drug targets while incurring minimal side-effects. In this paper, we propose a graph-based computational damage model that summarizes the impact of enzymes on compounds in metabolic networks. An efficient method based on an Integer Linear Programming formalism is then developed to identify the optimal enzyme combination so as to minimize the side-effects. The identified target enzymes for known successful drugs are then verified by comparing the results with those in the existing literature. Side-effect reduction plays a crucial role in the study of drug development. A graph-based computational damage model is proposed, and the theoretical analysis shows that the captured problem is NP-complete. The proposed approaches can therefore contribute to the discovery of drug targets. Our developed software is available at http://hkumath.hku.hk/~wkc/APBC2018-metabolic-network.zip .
Storage and distribution/Linear programming for storage operations
Energy Technology Data Exchange (ETDEWEB)
Coleman, D
1978-07-15
The techniques of linear programming to solve storage problems as applied to a tank farm tied in with refinery throughput operations include: (1) the time-phased model, which works on storage and refinery operations input parameters, e.g., production, distribution, cracking, etc., and is capable of representing product stockpiling in slack periods to meet future peak demands, and of investigating alternative strategies such as exchange deals and the purchase and leasing of additional storage, and (2) the Monte Carlo simulation method, which treats input parameters, e.g., arrival of crude products at the refinery, tankage size, likely demand for products, etc., as probability distributions rather than single values, and is capable of showing the average utilization of facilities, potential bottlenecks, and the investment required to achieve an increase in utilization, and of enabling the user to predict the total investment, cash flow, and profit emanating from the original financing decision. The increasing use of computer techniques to solve refinery and storage problems is attributed to potential savings resulting from more effective planning, reduced computer costs, ease of access and more usable software. Diagrams.
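The Monte Carlo side of this approach can be sketched in a few lines: sample uncertain supply and demand and track tank level, overflow, and stockout frequency. All distributions, sizes, and units below are illustrative assumptions:

```python
# Sketch of the Monte Carlo idea for tankage sizing: sample uncertain
# supply and demand and estimate average utilization and bottleneck
# frequency (illustrative numbers, not from the article).
import random

random.seed(42)
capacity = 1000.0              # tank size, m^3
level = 500.0                  # starting stock
overflow_days = stockout_days = 0
levels = []

for day in range(10_000):
    supply = random.gauss(100.0, 25.0)    # crude arrivals at refinery
    demand = random.gauss(100.0, 30.0)    # product demand
    level += supply - demand
    if level > capacity:
        overflow_days += 1                # a bottleneck: tank too small
        level = capacity                  # excess diverted or deferred
    elif level < 0.0:
        stockout_days += 1                # demand unmet
        level = 0.0
    levels.append(level)

utilization = sum(levels) / (len(levels) * capacity)
print(utilization, overflow_days, stockout_days)
```

Re-running the simulation over a grid of candidate tank sizes gives the utilization-versus-investment trade-off the article describes.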
Mixed integer linear programming for maximum-parsimony phylogeny inference.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2008-01-01
Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
Fitting boxes to Manhattan scenes using linear integer programming
Li, Minglei
2016-02-19
We propose an approach for automatic generation of building models by assembling a set of boxes using a Manhattan-world assumption. The method first aligns the point cloud with a per-building local coordinate system, and then fits axis-aligned planes to the point cloud through an iterative regularization process. The refined planes partition the space of the data into a series of compact cubic cells (candidate boxes) spanning the entire 3D space of the input data. We then choose to approximate the target building by the assembly of a subset of these candidate boxes using a binary linear programming formulation. The objective function is designed to maximize the point cloud coverage and the compactness of the final model. Finally, all selected boxes are merged into a lightweight polygonal mesh model, which is suitable for interactive visualization of large scale urban scenes. Experimental results and a comparison with state-of-the-art methods demonstrate the effectiveness of the proposed framework.
Pilkey, W. D.; Chen, Y. H.
1974-01-01
An indirect synthesis method is used in the efficient optimal design of multi-degree of freedom, multi-design element, nonlinear, transient systems. A limiting performance analysis which requires linear programming for a kinematically linear system is presented. The system is selected using system identification methods such that the designed system responds as closely as possible to the limiting performance. The efficiency is a result of the method avoiding the repetitive systems analyses accompanying other numerical optimization methods.
NP-Hardness of optimizing the sum of Rational Linear Functions over an Asymptotic-Linear-Program
Chermakani, Deepak Ponvel
2012-01-01
We convert, within polynomial time and sequential processing, an NP-Complete Problem into a real-variable problem of minimizing a sum of Rational Linear Functions constrained by an Asymptotic-Linear-Program. The coefficients and constants in the real-variable problem are 0, 1, -1, K, or -K, where K is the time parameter that tends to positive infinity. The number of variables, constraints, and rational linear functions in the objective of the real-variable problem is bounded by a polynomial ...
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Frison, Gianluca; Skajaa, Anders
2015-01-01
We develop an efficient homogeneous and self-dual interior-point method (IPM) for the linear programs arising in economic model predictive control of constrained linear systems with linear objective functions. The algorithm is based on a Riccati iteration procedure, which is adapted to the linear...... system of equations solved in homogeneous and self-dual IPMs. Fast convergence is further achieved using a warm-start strategy. We implement the algorithm in MATLAB and C. Its performance is tested using a conceptual power management case study. Closed loop simulations show that 1) the proposed algorithm...
Linear programming model can explain respiration of fermentation products
Möller, Philip; Liu, Xiaochen; Schuster, Stefan; Boley, Daniel
2018-01-01
Many differentiated cells rely primarily on mitochondrial oxidative phosphorylation for generating the energy, in the form of ATP, needed for cellular metabolism. In contrast, most tumor cells rely on aerobic glycolysis, leading to lactate, to about the same extent as on respiration. Warburg found that cancer cells tend to ferment glucose or other energy sources into lactate even in the presence of sufficient oxygen to support oxidative phosphorylation, which is an inefficient way to generate ATP. This effect also occurs in striated muscle cells, activated lymphocytes and microglia, endothelial cells and several other mammalian cell types, a phenomenon termed the “Warburg effect”. The effect is paradoxical at first glance because the ATP production rate of aerobic glycolysis is much slower than that of respiration, so the energy demands would be better met by pure oxidative phosphorylation. We tackle this question by building a minimal model comprising three combined reactions. The new aspect, in extension to earlier models, is that we take into account the possible uptake and oxidation of the fermentation products. We examine the case where the cell can allocate protein to several enzymes in a varying distribution, and model this by a linear programming problem in which the objective is to maximize the ATP production rate under different combinations of constraints on enzymes. Depending on the cost of reactions and the limitation of the substrates, this leads to pure respiration, pure fermentation, or a mixture of respiration and fermentation. The model predicts that fermentation products are only oxidized when glucose is scarce or its uptake is severely limited. PMID:29415045
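The core idea, allocating a limited enzyme budget between a fast low-yield pathway and a slow high-yield one, can be sketched as a two-variable LP. The ATP yields and enzyme costs below are illustrative assumptions, not the paper's parameters:

```python
# Minimal enzyme-allocation LP in the spirit of the model: fermentation
# is enzymatically cheap but yields little ATP per glucose; respiration
# yields much more ATP but consumes far more enzyme per unit flux.
# All numbers are illustrative assumptions.
from scipy.optimize import linprog

def optimal_fluxes(glucose, enzyme):
    # variables: [fermentation flux, respiration flux]
    atp = [2.0, 32.0]                 # ATP yield per unit flux
    A_ub = [[1.0, 1.0],               # glucose uptake limit
            [1.0, 20.0]]              # enzyme budget (respiration costly)
    res = linprog([-a for a in atp],  # linprog minimizes, so negate
                  A_ub=A_ub, b_ub=[glucose, enzyme],
                  bounds=[(0, None)] * 2, method="highs")
    return res.x, -res.fun

# Glucose abundant, enzyme scarce -> pure fermentation (Warburg-like)
flux_warburg, atp_w = optimal_fluxes(glucose=100.0, enzyme=20.0)
# Glucose scarce, enzyme plentiful -> pure respiration
flux_resp, atp_r = optimal_fluxes(glucose=2.0, enzyme=100.0)
print(flux_warburg, flux_resp)
```

With these toy numbers, fermentation gives more ATP per unit of enzyme while respiration gives more ATP per unit of glucose, so the binding constraint determines which pure strategy the LP selects, mirroring the regimes the paper reports.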
Optimizing Biorefinery Design and Operations via Linear Programming Models
Energy Technology Data Exchange (ETDEWEB)
Talmadge, Michael; Batan, Liaw; Lamers, Patrick; Hartley, Damon; Biddy, Mary; Tao, Ling; Tan, Eric
2017-03-28
The ability to assess and optimize economics of biomass resource utilization for the production of fuels, chemicals and power is essential for the ultimate success of a bioenergy industry. The team of authors, consisting of members from the National Renewable Energy Laboratory (NREL) and the Idaho National Laboratory (INL), has developed simple biorefinery linear programming (LP) models to enable the optimization of theoretical or existing biorefineries. The goal of this analysis is to demonstrate how such models can benefit the developing biorefining industry. It focuses on a theoretical multi-pathway, thermochemical biorefinery configuration and demonstrates how the biorefinery can use LP models for operations planning and optimization in comparable ways to the petroleum refining industry. Using LP modeling tools developed under U.S. Department of Energy's Bioenergy Technologies Office (DOE-BETO) funded efforts, the authors investigate optimization challenges for the theoretical biorefineries such as (1) optimal feedstock slate based on available biomass and prices, (2) breakeven price analysis for available feedstocks, (3) impact analysis for changes in feedstock costs and product prices, (4) optimal biorefinery operations during unit shutdowns / turnarounds, and (5) incentives for increased processing capacity. These biorefinery examples are comparable to crude oil purchasing and operational optimization studies that petroleum refiners perform routinely using LPs and other optimization models. It is important to note that the analyses presented in this article are strictly theoretical and they are not based on current energy market prices. The pricing structure assigned for this demonstrative analysis is consistent with $4 per gallon gasoline, which clearly assumes an economic environment that would favor the construction and operation of biorefineries. The analysis approach and examples provide valuable insights into the usefulness of analysis tools for
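One of the LP exercises described above, choosing an optimal feedstock slate under capacity and availability limits, can be sketched directly. The feedstocks, margins, and capacities below are illustrative assumptions, not NREL/INL data:

```python
# Toy feedstock-slate LP: choose a biomass blend maximizing margin
# subject to plant capacity and local availability (all numbers are
# illustrative assumptions, not from the NREL/INL analysis).
from scipy.optimize import linprog

feedstocks = ["corn stover", "forest residue", "switchgrass"]
margin = [12.0, 18.0, 15.0]        # $ per dry ton (product value - cost)
available = [500.0, 200.0, 400.0]  # dry tons/day available locally
capacity = 800.0                   # plant throughput, dry tons/day

res = linprog([-m for m in margin],             # maximize total margin
              A_ub=[[1.0, 1.0, 1.0]], b_ub=[capacity],
              bounds=list(zip([0.0] * 3, available)), method="highs")
slate = dict(zip(feedstocks, res.x))
print(slate, -res.fun)
```

Re-solving while sweeping a feedstock's cost until it leaves the optimal slate gives exactly the breakeven-price analysis mentioned above, the same way petroleum refiners use crude-purchasing LPs.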
Parameter estimation and hypothesis testing in linear models
Koch, Karl-Rudolf
1999-01-01
The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have been also added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility of errors remains with the author. I also want to express my thanks...
Very Low-Cost Nutritious Diet Plans Designed by Linear Programming.
Foytik, Jerry
1981-01-01
Provides procedural details of the linear programming approach used by the U.S. Department of Agriculture to devise a dietary guide for consumers that minimizes food costs without sacrificing nutritional quality. Compares linear programming with the Thrifty Food Plan, which has been a basis for allocating coupons under the Food Stamp Program. (CS)
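The diet problem described here is the classic LP: minimize food cost subject to nutrient floors. The foods, prices, and nutrient contents below are made-up illustrative values, not USDA data:

```python
# Classic diet LP sketch: minimize cost subject to nutrient minimums
# (hypothetical foods, prices, and nutrient contents).
from scipy.optimize import linprog

foods = ["bread", "milk", "beans"]
cost = [0.25, 0.60, 0.50]            # $ per serving
nutrients = [[4.0, 8.0, 10.0],       # protein (g) per serving
             [120.0, 150.0, 200.0]]  # calories (kcal) per serving
minimum = [70.0, 2000.0]             # daily protein and calorie floors

# linprog uses A_ub x <= b_ub, so negate the >= nutrient constraints.
A_ub = [[-n for n in row] for row in nutrients]
b_ub = [-m for m in minimum]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 3, method="highs")
print(dict(zip(foods, res.x)), res.fun)
```

With these numbers the optimum mixes bread (cheap calories) with a serving of beans (cheap protein), illustrating how the LP trades foods off against each other rather than picking a single cheapest item.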
The use of linear programming in optimization of HDR implant dose distributions
International Nuclear Information System (INIS)
Jozsef, Gabor; Streeter, Oscar E.; Astrahan, Melvin A.
2003-01-01
The introduction of high dose rate brachytherapy enabled optimization of dose distributions to be used on a routine basis. The objective of optimization is to homogenize the dose distribution within the implant while simultaneously satisfying dose constraints on certain points. This is accomplished by varying the times the source dwells at different locations. As the dose at any point is a linear function of the dwell times, a linear programming approach seems to be a natural choice. The dose constraints are inherently linear inequalities, and the homogeneity requirement is linearized by minimizing the maximum deviation of the doses at points inside the implant from a prescribed dose. The revised simplex method was applied to solve this linear programming problem. In the homogenization process, the possible source locations were chosen as optimization points. To avoid the singularity of the dose at a source location caused by the source itself, we define the 'self-contribution' as the dose at a small distance from the source; the effect of varying this distance is discussed. Test cases were optimized for planar, biplanar and cylindrical implants. A semi-irregular, fan-like implant with diverging needles was also investigated. Mean central dose calculation based on 3D Delaunay triangulation of the source locations was used to evaluate the dose distributions. The optimization method resulted in homogeneous distributions (for brachytherapy), and additional dose constraints, when applied, were satisfied. The method is flexible enough to include other linear constraints, such as the inclusion of the centroids of the Delaunay triangulation for homogenization, or limiting the maximum allowable dwell time.
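The "minimize the maximum deviation from a prescribed dose" objective linearizes in the standard minimax way: introduce a variable z with |K t - D| <= z and minimize z. The 3x2 dose kernel below is a made-up toy, not a real brachytherapy kernel, and SciPy's HiGHS solver stands in for the revised simplex method:

```python
# Minimax dwell-time LP sketch: minimize the largest deviation of
# point doses from a prescription D, with nonnegative dwell times.
# The dose kernel K is an illustrative toy matrix.
import numpy as np
from scipy.optimize import linprog

K = np.array([[10.0, 2.0],     # dose to point i per unit dwell time j
              [5.0, 5.0],
              [2.0, 8.0]])
D = 100.0                      # prescribed dose at every point
m, n = K.shape

# Variables: dwell times t (n of them) plus the max deviation z.
# Constraints: K t - z <= D  and  -K t - z <= -D, i.e. |K t - D| <= z.
c = np.r_[np.zeros(n), 1.0]    # minimize z only
A_ub = np.block([[ K, -np.ones((m, 1))],
                 [-K, -np.ones((m, 1))]])
b_ub = np.r_[np.full(m, D), np.full(m, -D)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (n + 1), method="highs")
t, z = res.x[:n], res.x[n]
doses = K @ t
print(t, z, doses)
```

At the optimum the deviation alternates in sign across the active points, the usual Chebyshev-approximation behavior; dose constraints at additional points would simply be appended as further rows of A_ub.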
AAPM Medical Physics Practice Guideline 8.a.: Linear accelerator performance tests.
Smith, Koren; Balter, Peter; Duhon, John; White, Gerald A; Vassy, David L; Miller, Robin A; Serago, Christopher F; Fairobent, Lynne A
2017-07-01
The purpose of this guideline is to provide a list of critical performance tests in order to assist the Qualified Medical Physicist (QMP) in establishing and maintaining a safe and effective quality assurance (QA) program. The performance tests on a linear accelerator (linac) should be selected to fit the clinical patterns of use of the accelerator, and care should be given to performing tests that are relevant to detecting errors related to the specific use of the accelerator. A risk assessment was performed on tests from current task group reports on linac QA to highlight those tests that are most effective at maintaining safety and quality for the patient. Recommendations are made on the acquisition of reference or baseline data, the establishment of machine isocenter on a routine basis, basing performance tests on clinical use of the linac, working with vendors to establish QA tests, and performing tests after maintenance. The recommended tests proposed in this guideline were chosen based on the results from the risk analysis and the consensus of the guideline's committee. The tests are grouped together by class of test (e.g., dosimetry, mechanical, etc.) and clinical parameter tested. Implementation notes are included for each test so that the QMP can understand the overall goal of each test. This guideline will assist the QMP in developing a comprehensive QA program for linacs in the external beam radiation therapy setting. The committee sought to prioritize tests by their implications for quality and patient safety. The QMP is ultimately responsible for implementing appropriate tests. In the spirit of the report from American Association of Physicists in Medicine Task Group 100, individual institutions are encouraged to analyze the risks involved in their own clinical practice and determine which performance tests are relevant in their own radiotherapy clinics. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on
Energy Technology Data Exchange (ETDEWEB)
Berdaky, Mafalda Feliciano
2000-07-01
This work presents the operational part of the final process of the establishment of a radiotherapy service with a linear accelerator (6 MeV photon beams), including the acceptance tests, commissioning tests and the implementation of a quality control program through routine mechanical and radiation tests. All acceptance tests were satisfactory, showing results within the limits allowed by the manufacturer; the commissioning tests presented results within international recommendations. The quality control program was performed during 34 months and showed excellent stability of this accelerator. (author)
Varadharajan, Ekambaram; Ramasubramanian, Velayudham
2013-01-01
Aim The RapidArc commissioning and acceptance testing program tests and ensures accuracy of DMLC position, precise dose-rate control during gantry rotation and accurate control of gantry speed. Background Recently, we upgraded our linear accelerator, which had been performing IMRT since 2007, with an image-guided RapidArc facility. Installing VMAT on an existing linear accelerator is a tedious process that requires many quality assurance procedures before proper commissioning of the facility; these procedures are discussed in this study. Materials and methods Output of the machine was measured to verify its consistency at different dose rates. Monitor and chamber linearity at different dose rates were checked. DMLC QA, comprising MLC transmission factor measurement and dosimetric leaf gap measurement, was performed using 0.13 cm3 and 0.65 cm3 Farmer-type ionization chambers, a Dose 1 dosimeter, and an IAEA 30 cm × 30 cm × 30 cm water phantom. The picket fence test, garden fence test, tests of leaf positioning accuracy due to carriage movement, calibration of the leaves, leaf speed stability effects due to the acceleration and deceleration of leaves, accuracy and calibration of leaves in producing complex fields, effects of interleaf friction, etc. were verified using EDR2 therapy films, a Vidar scanner, Omnipro Accept software, an amorphous-silicon electronic portal imaging device and EPIQA software [1-8]. Results All the DMLC-related quality assurance tests were performed and evaluated by film dosimetry, portal dosimetry and EPIQA [7]. Conclusion Results confirmed that the linear accelerator is capable of performing accurate VMAT. PMID:24416566
A versatile program for the calculation of linear accelerator room shielding.
Hassan, Zeinab El-Taher; Farag, Nehad M; Elshemey, Wael M
2018-03-22
This work aims at designing a computer program to calculate the necessary amount of shielding for a given or proposed linear accelerator room design in radiotherapy. The program (Shield Calculation in Radiotherapy, SCR) has been developed using Microsoft Visual Basic. It applies the treatment room shielding calculations of NCRP report no. 151 to calculate proper shielding thicknesses for a given linear accelerator treatment room design. The program is composed of six main user-friendly interfaces. The first enables the user to upload their choice of treatment room design and to measure the distances required for shielding calculations. The second interface enables the user to calculate the primary barrier thickness for three-dimensional conformal radiotherapy (3D-CRT), intensity modulated radiotherapy (IMRT) and total body irradiation (TBI). The third interface calculates the required secondary barrier thickness due to both scattered and leakage radiation. The fourth and fifth interfaces provide a means to calculate the photon dose equivalent for low and high energy radiation, respectively, in door and maze areas. The sixth interface enables the user to calculate the skyshine radiation for photons and neutrons. The SCR program has been successfully validated, precisely reproducing all of the calculated examples presented in NCRP report no. 151 in a simple and fast manner. Moreover, it easily performed the same calculations for a test design that was also calculated manually, and produced the same results. The program includes a new and important feature: the ability to calculate the required treatment room thickness for IMRT and TBI. It is characterised by simplicity, precision, data saving, printing and retrieval, in addition to providing a means for uploading and testing any proposed treatment room shielding design. The SCR program provides comprehensive, simple, fast and accurate room shielding calculations in radiotherapy.
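The core of the NCRP 151 primary-barrier calculation that SCR automates is short enough to sketch. The numbers below are illustrative assumptions, not values taken from the report or from the paper:

```python
import math

def primary_barrier_thickness(P, d, W, U, T, tvl1, tvle):
    """Primary-barrier thickness via the NCRP 151 method the program applies.

    P: shielding design goal at the protected point (Sv/week)
    d: target-to-point distance (m); W: workload (Gy/week at 1 m)
    U: use factor; T: occupancy factor
    tvl1: first tenth-value layer; tvle: equilibrium TVL (metres)
    """
    B = P * d ** 2 / (W * U * T)    # required barrier transmission factor
    n = math.log10(1.0 / B)         # number of tenth-value layers needed
    return tvl1 + (n - 1.0) * tvle  # barrier thickness (metres)

# Assumed example: concrete TVLs of 0.37 m / 0.33 m for a 6 MV beam,
# controlled-area design goal of 0.1 mSv/week
t = primary_barrier_thickness(P=1e-4, d=6.0, W=450.0, U=0.25, T=1.0,
                              tvl1=0.37, tvle=0.33)
```

Secondary-barrier, door and skyshine calculations follow the same pattern with their own transmission formulas and TVL tables.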
An innovative approach for testing bioinformatics programs using metamorphic testing
Directory of Open Access Journals (Sweden)
Liu Huai
2009-01-01
Abstract Background Recent advances in experimental and computational technologies have fueled the development of many sophisticated bioinformatics programs. The correctness of such programs is crucial, as incorrectly computed results may lead to wrong biological conclusions or misguide downstream experimentation. Common software testing procedures involve executing the target program with a set of test inputs and then verifying the correctness of the test outputs. However, due to the complexity of many bioinformatics programs, it is often difficult to verify the correctness of the test outputs. Therefore our ability to perform systematic software testing is greatly hindered. Results We propose to use a novel software testing technique, metamorphic testing (MT), to test a range of bioinformatics programs. Instead of requiring a mechanism to verify whether an individual test output is correct, the MT technique verifies whether a pair of test outputs conform to a set of domain specific properties, called metamorphic relations (MRs), thus greatly increasing the number and variety of test cases that can be applied. To demonstrate how MT is used in practice, we applied MT to test two open-source bioinformatics programs, namely GNLab and SeqMap. In particular, we show that MT is simple to implement, and is effective in detecting faults in a real-life program and some artificially fault-seeded programs. Further, we discuss how MT can be applied to test programs from various domains of bioinformatics. Conclusion This paper describes the application of a simple, effective and automated technique to systematically test a range of bioinformatics programs. We show how MT can be implemented in practice through two real-life case studies. Since many bioinformatics programs, particularly those for large scale simulation and data analysis, are hard to test systematically, their developers may benefit from using MT as part of the testing strategy. Therefore our work…
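As a minimal illustration of the MT idea (unrelated to GNLab or SeqMap), two metamorphic relations for a simple numeric program can be checked without any oracle for individual outputs:

```python
import random

def mean(xs):
    """Program under test; imagine a complex pipeline with no easy oracle."""
    return sum(xs) / len(xs)

def mr_permutation(xs):
    """MR1: permuting the input must leave the output unchanged."""
    ys = xs[:]
    random.shuffle(ys)
    return abs(mean(xs) - mean(ys)) < 1e-9

def mr_scaling(xs, k=3.0):
    """MR2: scaling every input by k must scale the output by k."""
    return abs(mean([k * x for x in xs]) - k * mean(xs)) < 1e-9

# a source test case; each MR turns it into a follow-up case for free
source_input = [random.uniform(0, 100) for _ in range(50)]
ok = mr_permutation(source_input) and mr_scaling(source_input)
```

A violated relation signals a fault even though no single "correct" output was ever known, which is precisely what makes MT attractive for hard-to-verify bioinformatics pipelines.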
Fleming, P.
1985-01-01
A design technique is proposed for linear regulators in which a feedback controller of fixed structure is chosen to minimize an integral quadratic objective function subject to the satisfaction of integral quadratic constraint functions. Application of a non-linear programming algorithm to this mathematically tractable formulation results in an efficient and useful computer-aided design tool. Particular attention is paid to computational efficiency and various recommendations are made. Two design examples illustrate the flexibility of the approach and highlight the special insight afforded to the designer.
Sensitivity analysis of linear programming problem through a recurrent neural network
Das, Raja
2017-11-01
In this paper we study the recurrent neural network for solving linear programming problems. An algorithm is presented to achieve optimality in both accuracy and computational effort. We investigate the sensitivity analysis of linear programming problems through the neural network. A detailed example is also presented to demonstrate the performance of the recurrent neural network.
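The kind of sensitivity information discussed above can be illustrated on a toy problem. The sketch below brute-forces a two-variable LP by vertex enumeration, standing in for the paper's recurrent-network solver, and estimates a shadow price by perturbing a right-hand side (all problem data are invented):

```python
def solve_lp2(c, A, b):
    """Maximize c·x for a 2-variable LP (A x <= b, x >= 0) by enumerating
    intersections of constraint boundaries; fine for tiny problems only."""
    lines = A + [[1.0, 0.0], [0.0, 1.0]]   # include the axes x = 0, y = 0
    rhs = b + [0.0, 0.0]
    best = (float("-inf"), None)
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (a1, b1), (a2, b2) = lines[i], lines[j]
            det = a1 * b2 - a2 * b1
            if abs(det) < 1e-12:
                continue                    # parallel boundaries
            x = (rhs[i] * b2 - rhs[j] * b1) / det
            y = (a1 * rhs[j] - a2 * rhs[i]) / det
            feasible = (x >= -1e-9 and y >= -1e-9 and
                        all(A[k][0] * x + A[k][1] * y <= b[k] + 1e-9
                            for k in range(len(A))))
            if feasible and c[0] * x + c[1] * y > best[0]:
                best = (c[0] * x + c[1] * y, (x, y))
    return best

c, A, b = [3.0, 5.0], [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]], [4.0, 12.0, 18.0]
z0, x_opt = solve_lp2(c, A, b)

# Shadow price of constraint 3: change in the optimum per unit of b[2]
eps = 1e-4
b_pert = b[:]
b_pert[2] += eps
z1, _ = solve_lp2(c, A, b_pert)
shadow_price = (z1 - z0) / eps
```

For this instance the optimum is z = 36 at (2, 6), and the estimated shadow price of the third constraint is 1, i.e. one extra unit of that resource is worth one unit of objective.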
Schmitt, M. A.; And Others
1994-01-01
Compares traditional manure application planning techniques calculated to meet agronomic nutrient needs on a field-by-field basis with plans developed using computer-assisted linear programming optimization methods. Linear programming provided the most economical and environmentally sound manure application strategy. (Contains 15 references.) (MDH)
Fundamental solution of the problem of linear programming and method of its determination
Petrunin, S. V.
1978-01-01
The idea of a fundamental solution to a problem in linear programming is introduced. A method of determining the fundamental solution and of applying this method to the solution of a problem in linear programming is proposed. Numerical examples are cited.
DEFF Research Database (Denmark)
Ren, Jingzheng; Dong, Liang; Sun, Lu
2015-01-01
in this model, and the price of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear programming was developed, and a method for solving interval linear programming was presented. An illustrative case was studied by the proposed...
The essential multiobjectivity of linear programming | Stewart | ORiON
African Journals Online (AJOL)
It is argued that all non-trivial real-world problems involve multiple objectives. The simplistic approach of combining objectives in linear form can generate highly misleading and biased results, and is poor operational research practice. Such biases are illustrated by means of a simple example, and it is demonstrated that ...
International Nuclear Information System (INIS)
Piacentino, A.; Cardona, F.
2008-01-01
The optimization of synthesis, design and operation in trigeneration systems for building applications is a quite complex task, due to the high number of decision variables, the presence of irregular heat, cooling and electric load profiles and the variable electricity price. Consequently, computer-aided techniques are usually adopted to achieve the optimal solution, based either on iterative techniques, linear or non-linear programming or evolutionary search. Large efforts have been made in improving algorithm efficiency, which have resulted in an increasingly rapid convergence to the optimal solution and in reduced calculation time; robust algorithms have also been formulated, assuming stochastic behaviour for energy loads and prices. This paper is based on the assumption that margins for improvement in the optimization of trigeneration systems still exist, which require an in-depth understanding of the plant's energetic behaviour. Robustness in the optimization of trigeneration systems has more to do with 'correct and comprehensive' than with 'efficient' modelling, with larger efforts required from energy specialists rather than from experts in efficient algorithms. With reference to a mixed integer linear programming model implemented in MatLab for a trigeneration system including a pressurized (medium temperature) heat storage, the relevant contributions of thermoeconomics and energo-environmental analysis in the phases of mathematical modelling and code testing are shown
International Nuclear Information System (INIS)
Cullen, D.E.
1979-01-01
Program LINEAR converts evaluated cross sections in the ENDF/B format into a tabular form that is subject to linear-linear interpolation in energy and cross section. The code also thins tables of cross sections already in that form (i.e., removes points not needed for linear interpolability). The main advantage of the code is that it allows subsequent codes to consider only linear-linear data. A listing of the source deck is available on request
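The thinning step (removing points recoverable by linear interpolation) can be sketched as follows; this is an assumed reimplementation for illustration, not the ENDF/B utility itself:

```python
def thin(xs, ys, tol=1e-3):
    """Drop interior points that linear interpolation between the kept
    endpoints reproduces to within a fractional tolerance."""
    keep = [0]
    i = 0
    while i < len(xs) - 1:
        j = i + 1
        while j + 1 < len(xs):
            x0, y0, x1, y1 = xs[i], ys[i], xs[j + 1], ys[j + 1]
            # every skipped point must be linearly interpolable from (x0, x1)
            if all(abs(y0 + (y1 - y0) * (xs[k] - x0) / (x1 - x0) - ys[k])
                   <= tol * max(abs(ys[k]), 1e-30)
                   for k in range(i + 1, j + 1)):
                j += 1
            else:
                break
        keep.append(j)
        i = j
    return [xs[k] for k in keep], [ys[k] for k in keep]

# Points on a straight line: every interior point is redundant
xs = [0, 1, 2, 3, 4]
ys = [0.0, 2.0, 4.0, 6.0, 8.0]
tx, ty = thin(xs, ys)
```

On this example only the two endpoints survive, since linear interpolation between them reproduces the interior points exactly.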
Effective radiological safety program for electron linear accelerators
International Nuclear Information System (INIS)
Swanson, W.P.
1980-10-01
An outline is presented of some of the main elements of an electron accelerator radiological safety program. The discussion includes types of accelerator facilities, types of radiations to be anticipated, activity induced in components, air and water, and production of toxic gases. Concepts of radiation shielding design are briefly discussed and organizational aspects are considered as an integral part of the overall safety program
Portfolio optimization by using linear programming models based on genetic algorithm
Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.
2018-01-01
In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that the portfolio risk is measured by absolute standard deviation, and each investor has a risk tolerance on the investment portfolio. To solve the investment portfolio optimization problem, it is formulated as a linear programming model. Furthermore, determination of the optimum solution for the linear programming is done by using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. Based on the analysis, it is shown that the portfolio optimization performed by the genetic algorithm approach produces a more efficient portfolio, compared to the portfolio optimization performed by a linear programming algorithm approach. Therefore, genetic algorithms can be considered as an alternative for determining the investment portfolio optimization, particularly using linear programming models.
Shen, Peiping; Zhang, Tongli; Wang, Chunfeng
2017-01-01
This article presents a new approximation algorithm for globally solving a class of generalized fractional programming problems (P) whose objective functions are defined as an appropriate composition of ratios of affine functions. To solve this problem, the algorithm solves an equivalent optimization problem (Q) via an exploration of a suitably defined nonuniform grid. The main work of the algorithm involves checking the feasibility of linear programs associated with the interesting grid points. It is proved that the proposed algorithm is a fully polynomial time approximation scheme as the ratio terms are fixed in the objective function to problem (P), based on the computational complexity result. In contrast to existing results in literature, the algorithm does not require the assumptions on quasi-concavity or low-rank of the objective function to problem (P). Numerical results are given to illustrate the feasibility and effectiveness of the proposed algorithm.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear programming is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved and results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
Mixed-Integer Conic Linear Programming: Challenges and Perspectives
2013-10-01
The novel DCCs for MISOCO may be used in branch-and-cut algorithms when solving MISOCO problems. The experimental software CICLO was developed to...perform limited, but rigorous computational experiments. The CICLO solver utilizes continuous SOCO solvers, MOSEK, CPLEX or SeDuMi, builds on the open...submitted Fall 2013. Software: 1. CICLO: Integer conic linear optimization package. Authors: J.C. Góez, T.K. Ralphs, Y. Fu, and T. Terlaky
47 CFR 73.713 - Program tests.
2010-10-01
... International Broadcast Stations § 73.713 Program tests. (a) Upon completion of construction of an international.... The Commission reserves the right to change the date of the beginning of such tests or to suspend or revoke the authority for program tests as and when such action may appear to be in the public interest...
Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri
2016-01-01
This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality.
The PUMA test program and data analysis
International Nuclear Information System (INIS)
Han, J.T.; Morrison, D.L.
1997-01-01
The PUMA test program is sponsored by the U.S. Nuclear Regulatory Commission to provide data that are relevant to various Boiling Water Reactor phenomena. The author briefly describes the PUMA test program and facility, presents the objective of the program, provides data analysis for a large-break loss-of-coolant accident test, and compares the data with a RELAP5/MOD 3.1.2 calculation
International Nuclear Information System (INIS)
Zavaljevski, N.
1985-01-01
The proposed optimization procedure is fast due to the application of linear programming. Non-linear constraints, which demand iterative application of linear programming, slow down the calculation. Linearization can be done by different procedures, ranging from simple empirical rules for fuel in-core management to general perturbation theory with higher-order corrections. A mathematical model was formulated for optimization of an improved fuel cycle. A detailed algorithm for determining the minimum of fresh fuel at the beginning of each fuel cycle is shown; the problem is linearized by first-order perturbation theory and optimized by linear programming. Numerical illustration of the proposed method was done for an experimental reactor, chosen mainly to save computer time
Accelerated bridge paint test program.
2011-07-06
The accelerated bridge paint (AB-Paint) program evaluated a new Sherwin-Williams two-coat, fast-curing paint system. The system comprises an organic zinc-rich primer (SW Corothane I Galvapac One-Pack Zinc-Rich Primer B65 G11) and a polyurea...
A linear programming approach for placement of applicants to academic programs.
Kassa, Biniyam Asmare
2013-01-01
This paper reports a linear programming approach for placement of applicants to study programs developed and implemented at the college of Business & Economics, Bahir Dar University, Bahir Dar, Ethiopia. The approach is estimated to significantly streamline the placement decision process at the college by reducing required man hour as well as the time it takes to announce placement decisions. Compared to the previous manual system where only one or two placement criteria were considered, the new approach allows the college's management to easily incorporate additional placement criteria, if needed. Comparison of our approach against manually constructed placement decisions based on actual data for the 2012/13 academic year suggested that about 93 percent of the placements from our model concur with the actual placement decisions. For the remaining 7 percent of placements, however, the actual placements made by the manual system display inconsistencies of decisions judged against the very criteria intended to guide placement decisions by the college's program management office. Overall, the new approach proves to be a significant improvement over the manual system in terms of efficiency of the placement process and the quality of placement decisions.
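The placement problem can be illustrated with a tiny capacity-constrained assignment, solved here by exhaustive search in place of the college's actual LP model (the preference scores and capacities below are invented, not data from the paper):

```python
from itertools import product

def best_placement(scores, capacity):
    """Exhaustively search placements maximizing total preference score
    subject to program seat capacities; a toy stand-in for the LP model."""
    programs = range(len(capacity))
    best_val, best_assign = float("-inf"), None
    for assign in product(programs, repeat=len(scores)):
        if any(assign.count(p) > capacity[p] for p in programs):
            continue  # some program is over capacity
        val = sum(scores[a][p] for a, p in enumerate(assign))
        if val > best_val:
            best_val, best_assign = val, assign
    return best_val, best_assign

# 4 applicants, 2 programs with 2 seats each; scores[a][p] = fit of
# applicant a to program p (higher is better)
scores = [[9, 4], [8, 7], [3, 8], [2, 9]]
val, assign = best_placement(scores, capacity=[2, 2])
```

A real LP formulation replaces the exhaustive search with binary variables x[a][p] and seat-capacity constraints, which is what makes the approach scale to a whole college intake.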
Linearity and Misspecification Tests for Vector Smooth Transition Regression Models
DEFF Research Database (Denmark)
Teräsvirta, Timo; Yang, Yukai
The purpose of the paper is to derive Lagrange multiplier and Lagrange multiplier type specification and misspecification tests for vector smooth transition regression models. We report results from simulation studies in which the size and power properties of the proposed asymptotic tests in small...
TBM performance prediction in Yucca Mountain welded tuff from linear cutter tests
International Nuclear Information System (INIS)
Gertsch, R.; Ozdemir, L.; Gertsch, L.
1992-01-01
This paper discusses performance predictions which were developed for tunnel boring machines operating in welded tuff for the construction of the experimental study facility and the potential nuclear waste repository at Yucca Mountain. The predictions were based on test data obtained from an extensive series of linear cutting tests performed on samples of Topopah Spring welded tuff from the Yucca Mountain Project site. Using the cutter force, spacing, and penetration data from the experimental program, the thrust, torque, power, and rate of penetration were estimated for a 25 ft diameter tunnel boring machine (TBM) operating in welded tuff. The results show that the Topopah Spring welded tuff (TSw2) can be excavated at relatively high rates of advance with state-of-the-art TBMs. The results also show, however, that the TBM torque and power requirements will be higher than estimated based on rock physical properties and past tunneling experience in rock formations of similar strength
Bruhn, Peter; Geyer-Schulz, Andreas
2002-01-01
In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.
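The penalty-function baseline attributed to Michalewicz can be sketched for a small 0/1 knapsack instance (illustrative data, not the paper's benchmark; the genetic-programming-over-grammars method itself is more involved):

```python
import random

random.seed(1)

# 0/1 knapsack instance (invented)
values  = [10, 7, 4, 9, 6]
weights = [ 5, 4, 3, 6, 2]
cap = 10

def fitness(bits):
    """Objective with a stiff penalty for overweight (infeasible) solutions."""
    value  = sum(b * v for b, v in zip(bits, values))
    weight = sum(b * w for b, w in zip(bits, weights))
    return value - 100 * max(0, weight - cap)

def evolve(pop_size=30, gens=60, mut_rate=0.2):
    pop = [[random.randint(0, 1) for _ in values] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(values))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:          # bit-flip mutation
                i = random.randrange(len(values))
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

The grammar-based approach in the paper instead generates only feasible candidates by construction, which is why it needs no penalty term at all.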
International Nuclear Information System (INIS)
Hutchinson, W.
1983-04-01
The report takes the form of a user guide to a computer program using linear programming techniques to aid the assignment and scheduling of radioactive wastes for disposal to sea. The program is aimed at the identification of 'optimum' amounts of each waste stream for disposal to sea without violating specified constraint values and/or fairness parameters. (author)
The Yucca Mountain Project Prototype Testing Program
International Nuclear Information System (INIS)
1989-10-01
The Yucca Mountain Project is conducting a Prototype Testing Program to ensure that the Exploratory Shaft Facility (ESF) tests can be completed in the time available and to develop instruments, equipment, and procedures so the ESF tests can collect reliable and representative site characterization data. This report summarizes the prototype tests and their status and location and emphasizes prototype ESF and surface tests, which are required in the early stages of the ESF site characterization tests. 14 figs
Standard Test Method for Measuring Dose for Use in Linear Accelerator Pulsed Radiation Effects Tests
American Society for Testing and Materials. Philadelphia
2011-01-01
1.1 This test method covers a calorimetric measurement of the total dose delivered in a single pulse of electrons from an electron linear accelerator or a flash X-ray machine (FXR, e-beam mode) used as an ionizing source in radiation-effects testing. The test method is designed for use with pulses of electrons in the energy range from 10 to 50 MeV and is only valid for cases in which both the calorimeter and the test specimen to be irradiated are “thin” compared to the range of these electrons in the materials of which they are constructed. 1.2 The procedure described can be used in those cases in which (1) the dose delivered in a single pulse is 5 Gy (matl) (500 rd (matl)) or greater, or (2) multiple pulses of a lower dose can be delivered in a short time compared to the thermal time constant of the calorimeter. Matl refers to the material of the calorimeter. The minimum dose per pulse that can be acceptably monitored depends on the variables of the particular test, including pulse rate, pulse uniformity...
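The calorimetric principle behind the method is that absorbed dose is energy deposited per unit mass, so for a thin calorimeter D (Gy) ≈ c_p · ΔT. A minimal sketch with assumed values:

```python
def dose_from_temperature_rise(delta_t_kelvin, specific_heat):
    """Calorimetric dose: D (Gy = J/kg) = c_p * ΔT, assuming all deposited
    beam energy appears as heat (the thin-calorimeter approximation)."""
    return specific_heat * delta_t_kelvin

# Assumed example: 0.04 K rise in an aluminium body, c_p ≈ 900 J/(kg·K)
dose_gy = dose_from_temperature_rise(0.04, 900.0)
```

This is why the per-pulse dose floor matters: below a few gray the temperature rise becomes too small to resolve against thermal drift.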
Testing for marginal linear effects in quantile regression
Wang, Huixia Judy; McKeague, Ian W.; Qian, Min
2017-10-23
The paper develops a new marginal testing procedure to detect significant predictors that are associated with the conditional quantiles of a scalar response. The idea is to fit the marginal quantile regression on each predictor one at a time, and then to base the test on the t-statistics that are associated with the most predictive predictors. A resampling method is devised to calibrate this test statistic, which has non-regular limiting behaviour due to the selection of the most predictive variables. Asymptotic validity of the procedure is established in a general quantile regression setting in which the marginal quantile regression models can be misspecified. Even though a fixed dimension is assumed to derive the asymptotic results, the test proposed is applicable and computationally feasible for large dimensional predictors. The method is more flexible than existing marginal screening test methods based on mean regression and has the added advantage of being robust against outliers in the response. The approach is illustrated by using an application to a human immunodeficiency virus drug resistance data set.
Testing programs for the Multimission Modular Spacecraft
Greenwell, T. J.
1978-01-01
The Multimission Modular Spacecraft (MMS) provides a standard spacecraft bus to a user for a variety of space missions ranging from near-earth to synchronous orbits. The present paper describes the philosophy behind the MMS module test program and discusses the implementation of the test program. It is concluded that the MMS module test program provides an effective and comprehensive customer buy-off at the subsystem contractor's plant, is an optimum approach for checkout of the subsystems prior to use for on-orbit servicing in the Shuttle Cargo Bay, and is a cost-effective technique for environmental testing.
Implementing and testing program PLOTTAB
International Nuclear Information System (INIS)
Cullen, D.E.; McLaughlin, P.K.
1988-01-01
Enclosed is a description of the magnetic tape or floppy diskette containing the PLOTTAB code package. In addition detailed information is provided on implementation and testing of this code. See part I for mainframe computers; part II for personal computers. These codes are documented in IAEA-NDS-82. (author)
Adaptation of the surrogate methods for linear programming ...
African Journals Online (AJOL)
2005-08-02
Aug 2, 2005 ... inequality problem is made up of the primal and dual optimal solutions for the given primal ... KEYWORDS: Linear Programming, Duality Theory, Surrogate Methods. ..... replaces x and the process is repeated with the new x.
Development of demand functions and their inclusion in linear programming forecasting models
International Nuclear Information System (INIS)
Chamberlin, J.H.
1976-05-01
The purpose of the paper is to present a method for including demand directly within a linear programming model, and to use this method to analyze the effect of the Liquid Metal Fast Breeder Reactor upon the nuclear energy system
An introduction to fuzzy linear programming problems theory, methods and applications
Kaur, Jagdeep
2016-01-01
The book presents a snapshot of the state of the art in the field of fully fuzzy linear programming. The main focus is on showing current methods for finding the fuzzy optimal solution of fully fuzzy linear programming problems in which all the parameters and decision variables are represented by non-negative fuzzy numbers. It presents new methods developed by the authors, as well as existing methods developed by others, and their application to real-world problems, including fuzzy transportation problems. Moreover, it compares the outcomes of the different methods and discusses their advantages/disadvantages. As the first work to collect at one place the most important methods for solving fuzzy linear programming problems, the book represents a useful reference guide for students and researchers, providing them with the necessary theoretical and practical knowledge to deal with linear programming problems under uncertainty.
Fuzzy Multi Objective Linear Programming Problem with Imprecise Aspiration Level and Parameters
Directory of Open Access Journals (Sweden)
Zahra Shahraki
2015-07-01
This paper considers multi-objective linear programming problems with a fuzzy goal for each of the objective functions and constraints. Most existing works deal with linear membership functions for fuzzy goals. In this paper, an exponential membership function is used.
International Nuclear Information System (INIS)
Ureba, A.; Palma, B. A.; Leal, A.
2011-01-01
To develop a more time-efficient optimization method, based on linear programming, that implements a multi-objective penalty function and also permits an integrated solution of simultaneous-boost situations considering two target volumes at once.
Maillot, Matthieu; Ferguson, Elaine L; Drewnowski, Adam; Darmon, Nicole
2008-06-01
Nutrient profiling ranks foods based on their nutrient content; such rankings may help identify foods with a good nutritional quality for their price. This hypothesis was tested using diet modeling with linear programming. Analyses were undertaken using food intake data from the nationally representative French INCA (enquête Individuelle et Nationale sur les Consommations Alimentaires) survey and its associated food composition and price database. For each food, a nutrient profile score was defined as the ratio between the previously published nutrient density score (NDS) and the limited nutrient score (LIM); a nutritional-quality-for-price indicator was developed and calculated from the relationship between each food's NDS:LIM and energy cost (in euro/100 kcal). We developed linear programming models to design diets that fulfilled increasing levels of nutritional constraints at a minimal cost. The median NDS:LIM values of foods selected in modeled diets increased as the levels of nutritional constraints increased (P = 0.005). In addition, the proportion of foods with a good nutritional-quality-for-price indicator was higher in those diets. The congruence between the linear programming and nutrient profiling approaches indicates that nutrient profiling can help identify foods of good nutritional quality for their price. Linear programming is a useful tool for testing nutrient profiling systems and validating the concept of nutrient profiling.
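Diet models of this kind minimize cost subject to lower bounds on nutrients. A minimal two-food sketch (the foods, prices, and nutrient contents are hypothetical, not the INCA data) can be solved exactly by enumerating the vertices of the feasible region, since an LP optimum always lies at a vertex:

```python
from itertools import combinations

# Hypothetical data: two foods, cost in euro per 100 g, and two nutrient
# requirements. Each constraint is a*x + b*y >= c (x, y = amounts of food).
constraints = [
    (6.0, 2.0, 50.0),   # protein requirement
    (1.0, 4.0, 25.0),   # fiber requirement
    (1.0, 0.0, 0.0),    # x >= 0
    (0.0, 1.0, 0.0),    # y >= 0
]
cost = (0.30, 0.20)     # euro per 100 g of each food

def intersect(c1, c2):
    """Solve a1*x + b1*y = r1, a2*x + b2*y = r2; None if parallel."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] >= c - 1e-9 for a, b, c in constraints)

# Candidate vertices = feasible intersections of constraint boundaries.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = min(vertices, key=lambda p: cost[0] * p[0] + cost[1] * p[1])
min_cost = cost[0] * best[0] + cost[1] * best[1]
```

Here the cheapest diet mixes both foods at the intersection of the two nutrient constraints; real diet models such as the one above simply scale this idea to hundreds of foods and dozens of constraints, solved with a production LP solver.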
International Nuclear Information System (INIS)
Gertsch, R.; Ozdemir, L.
1992-09-01
The performances of mechanical excavators are predicted for excavations in welded tuff. Emphasis is given to tunnel boring machine evaluations based on linear cutting machine test data obtained on samples of Topopah Spring welded tuff. The tests involve measurement of forces as cutters are applied to the rock surface at certain spacing and penetrations. Two disc and two point-attack cutters representing currently available technology are thus evaluated. The performance predictions based on these direct experimental measurements are believed to be more accurate than any previous values for mechanical excavation of welded tuff. The calculations of performance are predicated on minimizing the amount of energy required to excavate the welded tuff. Specific energy decreases with increasing spacing and penetration, and reaches its lowest at the widest spacing and deepest penetration used in this test program. Using the force, spacing, and penetration data from this experimental program, the thrust, torque, power, and rate of penetration are calculated for several types of mechanical excavators. The results of this study show that the candidate excavators will require higher torque and power than heretofore estimated
Linear Accelerator Test Facility at LNF Conceptual Design Report
Valente, Paolo; Bolli, Bruno; Buonomo, Bruno; Cantarella, Sergio; Ceccarelli, Riccardo; Cecchinelli, Alberto; Cerafogli, Oreste; Clementi, Renato; Di Giulio, Claudio; Esposito, Adolfo; Frasciello, Oscar; Foggetta, Luca; Ghigo, Andrea; Incremona, Simona; Iungo, Franco; Mascio, Roberto; Martelli, Stefano; Piermarini, Graziano; Sabbatini, Lucia; Sardone, Franco; Sensolini, Giancarlo; Ricci, Ruggero; Rossi, Luis Antonio; Rotundo, Ugo; Stella, Angelo; Strabioli, Serena; Zarlenga, Raffaele
2016-01-01
Test beam and irradiation facilities are the key enabling infrastructures for research in high energy physics (HEP) and astro-particle physics. In the last 11 years the Beam-Test Facility (BTF) of the DAΦNE accelerator complex in the Frascati laboratory has gained an important role in the European infrastructures devoted to the development and testing of particle detectors. At the same time the BTF operation has been largely shadowed, in terms of resources, by the running of the DAΦNE electron-positron collider. The present proposal is aimed at improving the present performance of the facility from two different points of view: extending the range of application for the LINAC beam extracted to the BTF lines, in particular in the (in some sense opposite) directions of hosting fundamental physics and providing electron irradiation also for industrial users; and extending the life of the LINAC beyond, or independently from, its use as injector of the DAΦNE collider, as it is also a key element of the electron/...
Structural Dynamic Analyses And Test Predictions For Spacecraft Structures With Non-Linearities
Vergniaud, Jean-Baptiste; Soula, Laurent; Newerla, Alfred
2012-07-01
The overall objective of the mechanical development and verification process is to ensure that the spacecraft structure is able to sustain the mechanical environments encountered during launch. In general, spacecraft structures are a priori assumed to behave linearly, i.e. the responses to a static load or dynamic excitation, respectively, will increase or decrease proportionally to the amplitude of the load or excitation induced. However, past experience has shown that various non-linearities may exist in spacecraft structures, and the consequences of their dynamic effects can significantly affect the development and verification process. Current processes are mainly adapted to linear spacecraft structure behaviour. No clear rules exist for dealing with major structural non-linearities. They are handled outside the process by individual analysis and margin policy, and by analyses after tests to justify the CLA coverage. Non-linearities primarily affect the current spacecraft development and verification process in two respects. Prediction of flight loads by launcher/satellite coupled loads analyses (CLA): only linear satellite models are delivered for performing CLA, and no well-established rules exist for properly linearizing a model when non-linearities are present. The potential impact of the linearization on the results of the CLA has not yet been properly analyzed. It is thus difficult to assess that CLA results will cover actual flight levels. Management of satellite verification tests: the CLA results generated with a linear satellite FEM are assumed flight representative. If internal non-linearities are present in the tested satellite, there might be difficulties in determining which input level must be applied to cover satellite internal loads. The non-linear behaviour can also disturb the shaker control, putting the satellite at risk by potentially imposing levels that are too high. This paper presents the results of a test campaign performed in...
MHD diffuser model test program
Energy Technology Data Exchange (ETDEWEB)
Idzorek, J J
1976-07-01
Experimental results of the aerodynamic performance of seven candidate diffusers are presented to assist in determining their suitability for joining an MHD channel to a steam generator at minimum spacing. The three dimensional diffusers varied in area ratio from 2 to 3.8 and wall half angle from 2 to 5 degrees. The program consisted of five phases: (1) tailoring a diffuser inlet nozzle to a 15 percent blockage; (2) comparison of isolated diffusers at enthalpy ratios 0.5 to 1.0 with respect to separation characteristics and pressure recovery coefficients; (3) recording the optimum diffuser exit flow distribution; (4) recording the internal flow distribution within the steam generator when attached to the diffuser; and (5) observing isolated diffuser exhaust dynamic characteristics. The 2 and 2-1/3 degree half angle rectangular diffusers showed recovery coefficients equal to 0.48 with no evidence of flow separation or instability. Diffusion at angles greater than these produced flow instabilities and with angles greater than 3 degrees random flow separation and reattachment.
Diet models with linear goal programming: impact of achievement functions.
Gerdessen, J C; de Vries, J H M
2015-11-01
Diet models based on goal programming (GP) are valuable tools in designing diets that comply with nutritional, palatability and cost constraints. Results derived from GP models are usually very sensitive to the type of achievement function that is chosen. This paper aims to provide a methodological insight into several achievement functions. It describes the extended GP (EGP) achievement function, which enables the decision maker to use either a MinSum achievement function (which minimizes the sum of the unwanted deviations), a MinMax achievement function (which minimizes the largest unwanted deviation), or a compromise between both. An additional advantage of EGP models is that multiple solutions can be obtained from one set of data and weights. We use small numerical examples to illustrate the 'mechanics' of achievement functions. Then, the EGP achievement function is demonstrated on a diet problem with 144 foods, 19 nutrients and several types of palatability constraints, in which the nutritional constraints are modeled with fuzzy sets. The choice of achievement function affects the results of diet models. MinSum achievement functions can give rise to solutions that are sensitive to weight changes and that pile all unwanted deviations on a limited number of nutritional constraints. MinMax achievement functions spread the unwanted deviations as evenly as possible, but may create many (small) deviations. EGP comprises both types of achievement functions, as well as compromises between them. It can thus, from one data set, find a range of solutions with various properties.
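The 'mechanics' of these achievement functions can be shown on a toy example with hypothetical, pre-weighted deviations: MinSum may prefer a diet that piles its deviation on one nutrient, MinMax prefers the evenly spread one, and an EGP-style compromise interpolates between the two:

```python
# Hypothetical unwanted deviations (e.g., weighted shortfalls from
# nutrient goals) for three candidate diets across four nutrients.
deviations = {
    "diet_A": [0.0, 0.0, 0.0, 7.0],   # piles all deviation on one nutrient
    "diet_B": [2.0, 2.0, 2.0, 2.0],   # spreads deviations evenly
    "diet_C": [1.0, 1.0, 1.0, 6.0],
}

def minsum(d):                 # MinSum: total unwanted deviation
    return sum(d)

def minmax(d):                 # MinMax: largest single unwanted deviation
    return max(d)

def egp(d, lam=0.5):           # EGP-style compromise between the two
    return lam * max(d) + (1 - lam) * sum(d)

best_minsum = min(deviations, key=lambda k: minsum(deviations[k]))
best_minmax = min(deviations, key=lambda k: minmax(deviations[k]))
best_egp = min(deviations, key=lambda k: egp(deviations[k]))
```

With these numbers MinSum selects diet_A (total 7 vs 8 and 9) even though one nutrient is badly violated, while MinMax and the compromise select diet_B, which is exactly the trade-off the abstract describes.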
Multi-Objective Fuzzy Linear Programming In Agricultural Production Planning
Directory of Open Access Journals (Sweden)
H.M.I.U. Herath
2015-08-01
Modern agriculture is characterized by a series of conflicting optimization criteria that obstruct the decision-making process in the planning of agricultural production. Such criteria are usually net profit, total cost, total production, etc. At the same time, the decision-making process in agricultural production planning is often conducted with data that occur in nature accidentally or that are fuzzy, not deterministic. Such data are the yields of various crops, the prices of products and raw materials, demand for the product, and the available quantities of production factors such as water and labor. In this paper a fuzzy multi-criteria mathematical programming model is presented. This model is applied in a region of 10 districts in Sri Lanka where paddy is cultivated under irrigated and rain-fed water in the two main seasons, called Yala and Maha, and the optimal production plan is achieved. This study was undertaken to find the optimal allocation of land for paddy to get a better yield while satisfying two conflicting objectives, profit maximization and cost minimization, subject to a water-utilization constraint and a demand constraint. Only the land-availability constraint is considered crisp in nature, while the objectives and other constraints are treated as fuzzy. It is observed that MOFLP is an effective method to handle more than a single objective in an uncertain, vague environment.
In-situ thermal testing program strategy
International Nuclear Information System (INIS)
1995-06-01
In the past year the Yucca Mountain Site Characterization Project has implemented a new Program Approach to the licensing process. The Program Approach suggests a step-wise approach to licensing in which the early phases will require less site information than previously planned and necessitate a lesser degree of confidence in the longer-term performance of the repository. Under the Program Approach, the thermal test program is divided into two principal phases: (1) short-term in situ tests (in the 1996 to 2000 time period) and laboratory thermal tests to obtain preclosure information, parameters, and data along with bounding information for postclosure performance; and (2) longer-term in situ tests to obtain additional data regarding postclosure performance. This effort necessitates a rethinking of the testing program because the amount of information needed for the initial licensing phase is less than previously planned. This document proposes a revised and consolidated in situ thermal test program (including supporting laboratory tests) that is structured to meet the needs of the Program Approach. A customer-supplier model is used to define the Project data needs. These data needs, along with other requirements, were then used to define a set of conceptual experiments that will provide the required data within the constraints of the Program Approach schedule. The conceptual thermal tests presented in this document represent a consolidation and update of previously defined tests that should result in a more efficient use of Project resources. This document focuses on defining the requirements and tests needed to satisfy the goal of a successful license application in 2001, should the site be found suitable
FSILP: fuzzy-stochastic-interval linear programming for supporting municipal solid waste management.
Li, Pu; Chen, Bing
2011-04-01
Although many studies on municipal solid waste (MSW) management were conducted under coexisting fuzzy, stochastic, and interval uncertainties, solving the resulting problems by integrating the fuzzy method with the other two within conventional linear programming was inefficient. In this study, a fuzzy-stochastic-interval linear programming (FSILP) method is developed by integrating Nguyen's method with conventional linear programming for supporting municipal solid waste management. Nguyen's method was used to convert the fuzzy and fuzzy-stochastic linear programming problems into conventional linear programs, by measuring the attainment values of fuzzy numbers and/or fuzzy random variables, as well as superiority and inferiority between triangular fuzzy numbers/triangular fuzzy-stochastic variables. The developed method can effectively tackle uncertainties described in terms of probability density functions, fuzzy membership functions, and discrete intervals. Moreover, the method improves upon the conventional interval fuzzy programming and two-stage stochastic programming approaches, requiring fewer constraints and significantly less computation time. The developed model was applied to a case study of a municipal solid waste management system in a city. The results indicated that reasonable solutions were generated. The solution can help quantify the relationship between the change of system cost and the uncertainties, which could support further analysis of tradeoffs between the waste management cost and the system failure risk.
A linear goal programming model for urban energy-economy-environment interaction
Energy Technology Data Exchange (ETDEWEB)
Kambo, N.S.; Handa, B.R. (Indian Inst. of Tech., New Delhi (India). Dept. of Mathematics); Bose, R.K. (Tata Energy Research Inst., New Delhi (India))
1991-01-01
This paper provides a comprehensive and systematic analysis of energy and pollution problems interconnected with the economic structure, by using a multi-objective sectoral end-use model for addressing regional energy policy issues. The multi-objective model proposed for the study is a 'linear goal programming (LGP)' technique for analysing a 'reference energy system (RES)' in a framework within which alternative policies and technical strategies may be evaluated. The model so developed has further been tested for the city of Delhi (India) for the period 1985-86, and a scenario analysis has been carried out by assuming different policy options. (orig./BWJ).
Highlights of the SLD Physics Program at the SLAC Linear Collider
International Nuclear Information System (INIS)
Willocq, Stephane
2001-01-01
Starting in 1989, and continuing through the 1990s, high-energy physics witnessed a flowering of precision measurements in general and tests of the standard model in particular, led by e⁺e⁻ collider experiments operating at the Z⁰ resonance. Key contributions to this work came from the SLD collaboration at the SLAC Linear Collider. By exploiting the unique capabilities of this pioneering accelerator and the SLD detector, including a polarized electron beam, exceptionally small beam dimensions, and a CCD pixel vertex detector, SLD produced a broad array of electroweak, heavy-flavor, and QCD measurements. Many of these results are one of a kind or represent the world's standard in precision. This article reviews the highlights of the SLD physics program, with an eye toward associated advances in experimental technique, and the contribution of these measurements to our dramatically improved present understanding of the standard model and its possible extensions.
Study and program implementation of transient curves' piecewise linearization
International Nuclear Information System (INIS)
Shi Yang; Zu Hongbiao
2014-01-01
Background: Transient curves are essential for the stress analysis of related equipment in a nuclear power plant (NPP). The actual operating data or the design transient data of a NPP usually consist of a large number of data points with very short time intervals. To simplify the analysis, transient curves are generally piecewise linearized in advance. Up to now, the piecewise linearization of transient curves has been accomplished manually. Purpose: The aim is to develop a method for the piecewise linearization of transient curves, and to implement it by programming. Methods: First of all, the fitting line of a number of data points is obtained by the least squares method. A segment of the fitting line is fixed once the accumulated linearization error exceeds the preset limit as the number of points increases. Then the linearization of subsequent data points begins from the last point of the preceding curve segment to get the next segment in the same way, and this continues until the final data point is included. Finally, junction points are averaged to connect the segments. Results: A computer program named PLTC (Piecewise Linearization for Transient Curves) was implemented and verified by the linearization of the standard sine curve and typical transient curves of a NPP. Conclusion: The method and the PLTC program can be well applied to the piecewise linearization of transient curves, improving efficiency and precision. (authors)
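The segmentation scheme described (grow a least-squares fit point by point, close the segment when the error limit is exceeded, and start the next segment from the last point of the previous one) can be sketched as follows; the tolerance and the test curve are hypothetical, not the PLTC implementation:

```python
def lsq_line(pts):
    """Least-squares line y = a*t + b through (t, y) points."""
    n = len(pts)
    st = sum(t for t, _ in pts)
    sy = sum(y for _, y in pts)
    stt = sum(t * t for t, _ in pts)
    sty = sum(t * y for t, y in pts)
    det = n * stt - st * st
    if det == 0:
        return 0.0, sy / n
    return (n * sty - st * sy) / det, (sy * stt - st * sty) / det

def piecewise_linearize(points, tol=0.05):
    """Greedy segmentation: extend the current segment one point at a
    time, closing it when the max fit error would exceed tol. Returns
    (first_index, last_index) pairs; adjacent segments share a point."""
    segs, start = [], 0
    while start < len(points) - 1:
        end = start + 1
        while end + 1 < len(points):
            a, b = lsq_line(points[start:end + 2])
            err = max(abs(y - (a * t + b)) for t, y in points[start:end + 2])
            if err > tol:
                break
            end += 1
        segs.append((start, end))
        start = end          # next segment begins at this segment's end
    return segs

# Test curve: a ramp up to t = 5, then a ramp down (two true segments).
points = [(float(t), float(t if t <= 5 else 10 - t)) for t in range(11)]
segs = piecewise_linearize(points, tol=0.05)
```

On this toy transient the routine recovers the two underlying linear pieces; the averaging of junction points mentioned in the abstract would be a small post-processing step on the shared endpoints.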
Interpretation of a seismic test of the IPIRG2 program
International Nuclear Information System (INIS)
Blay, N.; Gantenbein, F.
1995-01-01
In the framework of the linear and non-linear analysis of PWR cracked pipes under seismic loading, calculations of the 1.2 seismic test of the IPIRG2 program (International Piping Integrity Research Group) were undertaken. This seismic test was performed on a pipe with a surface crack, loaded by an imposed displacement. A low level and a high level of excitation were applied to the pipe. The calculations are made with a global model including a through-wall-crack pipe finite element. The modal analysis made for the non-cracked pipe and the real geometrical characteristics gives a first frequency of the pipe with pressure and temperature in good agreement with the test. For the cracked pipe, the first-frequency decrease is less than 0.5%. The low-level response was calculated with a linear model by modal combination in order to study the importance of both the inertial and the differential-displacement responses in the total response. For both configurations, non-cracked and cracked, the inertial contribution to the moment at the crack location is approximately equal to 80% of the total moment. For linear behaviour, the influence of the crack appears weak. The non-linear calculations are performed with the equivalent crack previously defined, up to penetration. To study the behaviour after penetration, various hypotheses for the crack size are taken. (authors). 3 refs., 6 figs., 4 tabs
Kocjan, Boštjan; Poljak, Mario; Oštrbenk, Anja
2015-01-01
Introduction: HPV-52 is one of the most frequent human papillomavirus (HPV) genotypes causing significant cervical pathology. The most widely used HPV genotyping assay, the Roche Linear Array HPV Genotyping Test (Linear Array), is unable to identify HPV-52 status in samples containing HPV-33, HPV-35, and/or HPV-58. Methods: Linear Array HPV-52 analytical specificity was established by testing 100 specimens reactive with the Linear Array HPV-33/35/52/58 cross-reactive probe, but not with the...
Combinatorial therapy discovery using mixed integer linear programming.
Pang, Kaifang; Wan, Ying-Wooi; Choi, William T; Donehower, Lawrence A; Sun, Jingchun; Pant, Dhruv; Liu, Zhandong
2014-05-15
Combinatorial therapies play increasingly important roles in combating complex diseases. Owing to the huge cost associated with experimental methods for identifying optimal drug combinations, computational approaches can provide a guide to limit the search space and reduce cost. However, few computational approaches have been developed for this purpose, and thus there is a great need for new algorithms for drug combination prediction. Here we propose to formulate the optimal combinatorial therapy problem as two complementary mathematical problems, Balanced Target Set Cover (BTSC) and Minimum Off-Target Set Cover (MOTSC). Given a disease gene set, BTSC seeks a balanced solution that maximizes the coverage of the disease genes and minimizes the off-target hits at the same time. MOTSC seeks full coverage of the disease gene set while minimizing the off-target set. Through simulation, both BTSC and MOTSC demonstrated a much faster running time than exhaustive search with the same accuracy. When applied to real disease gene sets, our algorithms not only identified known drug combinations, but also predicted novel drug combinations that are worth further testing. In addition, we developed a web-based tool to allow users to iteratively search for optimal drug combinations given a user-defined gene set. Our tool is freely available for noncommercial use at http://www.drug.liuzlab.org/. Contact: zhandong.liu@bcm.edu. Supplementary data are available at Bioinformatics online.
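BTSC and MOTSC are formulated as mixed integer linear programs in the paper; a simple greedy heuristic in the same spirit (reach full coverage of the disease genes while penalizing off-target hits at each pick) can be sketched with hypothetical drug-target data:

```python
# Hypothetical drug -> target-gene mapping and disease gene set.
drug_targets = {
    "drug1": {"g1", "g2", "g7"},
    "drug2": {"g2", "g3"},
    "drug3": {"g3", "g4", "g8", "g9"},
    "drug4": {"g1", "g4"},
}
disease_genes = {"g1", "g2", "g3", "g4"}

def greedy_cover(targets, disease, off_target_weight=1.0):
    """Greedy heuristic in the spirit of MOTSC: at each step pick the
    drug scoring highest on (new disease genes hit) minus a penalty
    per off-target gene, until the disease set is covered."""
    chosen, covered = [], set()
    while covered != disease:
        def score(d):
            new = targets[d] & (disease - covered)
            off = targets[d] - disease
            return len(new) - off_target_weight * len(off)
        candidates = [d for d in targets if d not in chosen]
        best = max(candidates, key=score)
        if not targets[best] & (disease - covered):
            break  # remaining disease genes are not coverable
        chosen.append(best)
        covered |= targets[best] & disease
    return chosen, covered

combo, covered = greedy_cover(drug_targets, disease_genes)
```

On this toy instance the heuristic selects drug2 and drug4, covering all four disease genes with no off-target hits; the actual MILP formulations in the paper instead guarantee optimality over all combinations.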
A Nutritional Analysis of the Food Basket in BIH: A Linear Programming Approach
Directory of Open Access Journals (Sweden)
Arnaut-Berilo Almira
2017-04-01
This paper presents linear and goal programming optimization models for determining and analyzing the food basket in Bosnia and Herzegovina (BiH) in terms of adequate nutritional needs according to World Health Organization (WHO) standards and World Bank (WB) recommendations. A linear programming (LP) model and a goal linear programming (GLP) model are adequate since price and nutrient contents are linearly related to food weight. The LP model provides information about the minimal value and the structure of the food basket for an average person in BiH based on nutrient needs. GLP models are designed to give us information on minimal deviations from nutrient needs if the budget is fixed. Based on these results, poverty analysis can be performed. The data used for the models consisted of 158 food items from the general consumption of the population of BiH according to COICOP classifications, with average prices in 2015 for these products.
The Computer Program LIAR for Beam Dynamics Calculations in Linear Accelerators
International Nuclear Information System (INIS)
Assmann, R.W.; Adolphsen, C.; Bane, K.; Raubenheimer, T.O.; Siemann, R.H.; Thompson, K.
2011-01-01
Linear accelerators are the central components of the proposed next generation of linear colliders. They need to provide acceleration of up to 750 GeV per beam while maintaining very small normalized emittances. Standard simulation programs, mainly developed for storage rings, do not meet the specific requirements for high energy linear accelerators. We present a new program, LIAR ('LInear Accelerator Research code'), that includes wakefield effects, a 6D coupled beam description, specific optimization algorithms and other advanced features. Its modular structure allows it to be used and extended easily for different purposes. The program is available for UNIX workstations and Windows PCs. It can be applied to a broad range of accelerators. We present examples of simulations for the SLC and NLC.
Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing.
Yang, Changju; Kim, Hyongsuk
2016-08-19
A linearized programming method for memristor-based neural weights is proposed. The memristor is known as an ideal element for implementing a neural synapse due to its embedded functions of analog memory and analog multiplication. Its resistance variation under a voltage input is generally a nonlinear function of time. Linearization of the memristance variation over time is very important for the ease of memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming is proposed. The anti-serial architecture is composed of two memristors with opposite polarities. It linearizes the variation of memristance due to the complementary actions of the two memristors. For programming a memristor, an additional memristor with opposite polarity is employed. The linearization effect of weight programming in an anti-serial architecture is investigated, and the memristor bridge synapse, which is built with two sets of the anti-serial memristor architecture, is taken as an application example of the proposed method. Simulations are performed with memristors of both the linear drift model and a nonlinear model.
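The linearization effect can be reproduced with a toy simulation of the linear dopant-drift memristor model (all parameters hypothetical): for a single device the programming current depends on its own changing memristance, so R(t) is curved, while in the anti-serial pair the total resistance, and hence the current, stays constant, making each device's memristance change linearly in time:

```python
# Linear dopant-drift model, Euler integration under constant voltage.
# RON/ROFF are the limiting resistances, K a drift-rate constant;
# all values are illustrative, not fitted to a real device.
RON, ROFF, K, V, DT, STEPS = 100.0, 16000.0, 1e4, 1.0, 1e-5, 2000

def memristance(w):                 # state w in [0, 1]
    return RON * w + ROFF * (1.0 - w)

def single_device(w0=0.1):
    """One memristor: current depends on its own (changing) memristance,
    so the state drifts at a varying rate -> nonlinear R(t)."""
    w, trace = w0, []
    for _ in range(STEPS):
        i = V / memristance(w)
        w = min(1.0, w + K * i * DT)
        trace.append(memristance(w))
    return trace

def anti_serial(w0=0.1):
    """Two opposite-polarity memristors in series: their state changes
    cancel, the total resistance and current stay constant, and each
    device's R(t) is linear."""
    w1, w2, trace = w0, 1.0 - w0, []
    for _ in range(STEPS):
        i = V / (memristance(w1) + memristance(w2))
        w1 = min(1.0, w1 + K * i * DT)
        w2 = max(0.0, w2 - K * i * DT)
        trace.append(memristance(w1))
    return trace

def max_second_diff(trace):
    """Discrete curvature proxy: ~0 for a perfectly linear trace."""
    return max(abs(trace[j + 1] - 2 * trace[j] + trace[j - 1])
               for j in range(1, len(trace) - 1))
```

Comparing the two curvature proxies shows the single device's trace bending measurably while the anti-serial trace is straight to floating-point precision, which is the linearization effect the abstract describes.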
Quantum tests for the linearity and permutation invariance of Boolean functions
Energy Technology Data Exchange (ETDEWEB)
Hillery, Mark [Department of Physics, Hunter College of the City University of New York, 695 Park Avenue, New York, New York 10021 (United States); Andersson, Erika [SUPA, School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom)
2011-12-15
The goal in function property testing is to determine whether a black-box Boolean function has a certain property or is ε-far from having that property. The performance of the algorithm is judged by how many calls need to be made to the black box in order to determine, with high probability, which of the two alternatives is the case. Here we present two quantum algorithms, the first to determine whether the function is linear and the second to determine whether it is symmetric (invariant under permutations of the arguments). Both require on the order of ε^(-2/3) calls to the oracle, which is better than known classical algorithms. In addition, in the case of linearity testing, if the function is linear, the quantum algorithm identifies which linear function it is. The linearity test combines the Bernstein-Vazirani algorithm and amplitude amplification, while the test to determine whether a function is symmetric uses projective measurements and amplitude amplification.
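The classical baseline these quantum algorithms improve on is the BLR-style linearity check: a GF(2)-linear function satisfies f(x) ⊕ f(y) = f(x ⊕ y) for all pairs, so probing random pairs catches functions far from linear. A sketch with hypothetical example functions:

```python
import random

def blr_linearity_test(f, n, trials=200, seed=1):
    """Classical BLR-style test on an n-bit Boolean function: any
    GF(2)-linear f satisfies f(x) ^ f(y) == f(x ^ y); each trial
    probes one random pair, so nonlinear functions are rejected
    with probability growing in their distance from linear."""
    rng = random.Random(seed)
    for _ in range(trials):
        x, y = rng.getrandbits(n), rng.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False          # witness of nonlinearity found
    return True

def parity_f(x):                  # linear: parity of a fixed bit subset
    return bin(x & 0b1011).count("1") % 2

def and_f(x):                     # nonlinear: AND of the two lowest bits
    return (x & 1) & ((x >> 1) & 1)
```

The parity function always passes, while the AND function fails a random pair with constant probability per trial; note that, unlike the quantum test in the abstract, this classical sketch only detects linearity rather than identifying which linear function f is.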
Patt, P. J.
1985-01-01
The design of a coaxial linear magnetic spring which incorporates a linear motor to control axial motion and overcome system damping is presented, and the results of static and dynamic tests are reported. The system has a nominal stiffness of 25,000 N/m and is designed to oscillate a 900-g component over a 4.6-mm stroke in a Stirling-cycle cryogenic refrigerator being developed for long-service (5-10-yr) space applications (Stolfi et al., 1983). Mosaics of 10 radially magnetized high-coercivity SmCo5 segments enclosed in Ti cans are employed, and the device is found to have a quality factor of 70-100, corresponding to an energy-storage efficiency of 91-94 percent. Drawings, diagrams, and graphs are provided.
The NRU blowdown test facility commissioning program
Energy Technology Data Exchange (ETDEWEB)
Walsworth, J A; Zanatta, R J; Yamazaki, A R; Semeniuk, D D; Wong, W; Dickson, L W; Ferris, C E; Burton, D H [Atomic Energy of Canada Ltd., Chalk River, ON (Canada). Chalk River Nuclear Labs.
1990-12-31
A major experimental program has been established at the Chalk River Nuclear Laboratories (CRL) that will provide essential data on the thermal and mechanical behaviour of nuclear fuel under abnormal reactor operating conditions and on the transient release, transport and deposition of fission product activity from severely degraded fuel. A number of severe fuel damage (SFD) experiments will be conducted within the Blowdown Test Facility (BTF) at CRL. A series of experiments are being conducted to commission this new facility prior to the SFD program. This paper describes the features and the commissioning program for the BTF. A development and testing program is described for critical components used on the reactor test section. In-reactor commissioning with a fuel assembly simulator commenced in 1989 June and preliminary results are given. The paper also outlines plans for future all-effects, in-reactor tests of CANDU-designed fuel. (author). 11 refs., 3 tabs., 7 figs.
Programmed Death-Ligand 1 Immunohistochemistry Testing
DEFF Research Database (Denmark)
Büttner, Reinhard; Gosney, John R; Skov, Birgit Guldhammer
2017-01-01
Purpose Three programmed death-1/programmed death-ligand 1 (PD-L1) inhibitors are currently approved for treatment of non-small-cell lung cancer (NSCLC). Treatment with pembrolizumab in NSCLC requires PD-L1 immunohistochemistry (IHC) testing. Nivolumab and atezolizumab are approved without PD-L1...
Microsoft Excel Sensitivity Analysis for Linear and Stochastic Program Feed Formulation
Sensitivity analysis is a part of mathematical programming solutions and is used in making nutritional and economic decisions for a given feed formulation problem. The terms, shadow price and reduced cost, are familiar linear program (LP) terms to feed formulators. Because of the nonlinear nature of...
Design and performance testing of an ultrasonic linear motor with dual piezoelectric actuators.
Smithmaitrie, Pruittikorn; Suybangdum, Panumas; Laoratanakul, Pitak; Muensit, Nantakan
2012-05-01
In this work, design and performance testing of an ultrasonic linear motor with dual piezoelectric actuator patches are studied. The motor system consists of a linear stator, a pre-load weight, and two piezoelectric actuator patches. The piezoelectric actuators are bonded with the linear elastic stator at specific locations. The stator generates propagating waves when the piezoelectric actuators are subjected to harmonic excitations. Vibration characteristics of the linear stator are analyzed and compared with finite element and experimental results. The analytical, finite element, and experimental results show agreement. In the experiments, performance of the ultrasonic linear motor is tested. Relationships between velocity and pre-load weight, velocity and applied voltage, driving force and applied voltage, and velocity and driving force are reported. The design of the dual piezoelectric actuators yields a simpler structure with a smaller number of actuators and lower stator stiffness compared with a conventional design of an ultrasonic linear motor with fully laminated piezoelectric actuators.
Program Helps Design Tests Of Developmental Software
Hops, Jonathan
1994-01-01
Computer program called "A Formal Test Representation Language and Tool for Functional Test Designs" (TRL) provides automatic software tool and formal language used to implement category-partition method and produce specification of test cases in testing phase of development of software. Category-partition method useful in defining input, outputs, and purpose of test-design phase of development and combines benefits of choosing normal cases having error-exposing properties. Traceability maintained quite easily by creating test design for each objective in test plan. Effort to transform test cases into procedures simplified by use of automatic software tool to create cases based on test design. Method enables rapid elimination of undesired test cases from consideration and facilitates review of test designs by peer groups. Written in C language.
Optimisation of substrate blends in anaerobic co-digestion using adaptive linear programming.
García-Gen, Santiago; Rodríguez, Jorge; Lema, Juan M
2014-12-01
Anaerobic co-digestion of multiple substrates has the potential to enhance biogas productivity by making use of the complementary characteristics of different substrates. A blending strategy based on a linear programming optimisation method is proposed aiming at maximising COD conversion into methane, but simultaneously maintaining a digestate and biogas quality. The method incorporates experimental and heuristic information to define the objective function and the linear restrictions. The active constraints are continuously adapted (by relaxing the restriction boundaries) such that further optimisations in terms of methane productivity can be achieved. The feasibility of the blends calculated with this methodology was previously tested and accurately predicted with an ADM1-based co-digestion model. This was validated in a continuously operated pilot plant, treating for several months different mixtures of glycerine, gelatine and pig manure at organic loading rates from 1.50 to 4.93 gCOD/Ld and hydraulic retention times between 32 and 40 days at mesophilic conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
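The blending optimisation described above can be illustrated with a toy linear program. All numbers below (COD and nitrogen contents, the nitrogen cap, the glycerine cap) are invented for illustration and are not taken from the study; scipy's `linprog` stands in for whatever solver the authors used.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical substrate data (illustrative values only):
# columns = [glycerine, gelatine, pig manure]
cod = np.array([1.2, 0.9, 0.3])         # gCOD per g of substrate
nitrogen = np.array([0.0, 0.12, 0.05])  # gN per g (digestate quality proxy)

c = -cod                                # maximise COD fed (linprog minimises)
A_ub = np.array([nitrogen])             # blend nitrogen <= 0.06 gN/g
b_ub = np.array([0.06])
A_eq = np.array([[1.0, 1.0, 1.0]])      # blend fractions sum to 1
b_eq = np.array([1.0])
# Heuristic restriction (invented): at most 40% glycerine in the blend.
bounds = [(0.0, 0.4), (0.0, 1.0), (0.0, 1.0)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
blend = res.x                           # optimal substrate fractions
```

Relaxing `b_ub` between successive solves mimics the adaptive relaxation of active constraint boundaries that the abstract describes.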
Linear Aerospike SR-71 Experiment (LASRE) dumps water after first in-flight cold flow test
1998-01-01
The NASA SR-71A successfully completed its first cold flow flight as part of the NASA/Rocketdyne/Lockheed Martin Linear Aerospike SR-71 Experiment (LASRE) at NASA's Dryden Flight Research Center, Edwards, California on March 4, 1998. During a cold flow flight, gaseous helium and liquid nitrogen are cycled through the linear aerospike engine to check the engine's plumbing system for leaks and to check the engine operating characteristics. Cold-flow tests must be accomplished successfully before firing the rocket engine experiment in flight. The SR-71 took off at 10:16 a.m. PST. The aircraft flew for one hour and fifty-seven minutes, reaching a maximum speed of Mach 1.58 before landing at Edwards at 12:13 p.m. PST. 'I think all in all we had a good mission today,' Dryden LASRE Project Manager Dave Lux said. Flight crew member Bob Meyer agreed, saying the crew 'thought it was a really good flight.' Dryden Research Pilot Ed Schneider piloted the SR-71 during the mission. Lockheed Martin LASRE Project Manager Carl Meade added, 'We are extremely pleased with today's results. This will help pave the way for the first in-flight engine data-collection flight of the LASRE.' The LASRE experiment was designed to provide in-flight data to help Lockheed Martin evaluate the aerodynamic characteristics and the handling of the SR-71 linear aerospike experiment configuration. The goal of the project was to provide in-flight data to help Lockheed Martin validate the computational predictive tools it was using to determine the aerodynamic performance of a future reusable launch vehicle. The joint NASA, Rocketdyne (now part of Boeing), and Lockheed Martin Linear Aerospike SR-71 Experiment (LASRE) completed seven initial research flights at Dryden Flight Research Center. Two initial flights were used to determine the aerodynamic characteristics of the LASRE apparatus (pod) on the back of the SR-71. Five later flights focused on the experiment itself. Two were used to cycle gaseous
Object matching using a locally affine invariant and linear programming techniques.
Li, Hongsheng; Huang, Xiaolei; He, Lei
2013-02-01
In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires a lot fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
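The key idea above (each template point represented as an affine combination of its neighbours, with weights solved by least squares) can be sketched in a few lines; the points here are random stand-ins, not data from the paper:

```python
import numpy as np

def affine_weights(point, neighbors):
    """Weights w (summing to 1) that reconstruct `point` as an affine
    combination of the rows of `neighbors` (k x 2), via least squares."""
    k = len(neighbors)
    # Stack the affine constraint sum(w) = 1 under the coordinate equations.
    A = np.vstack([neighbors.T, np.ones(k)])   # 3 x k system
    b = np.append(point, 1.0)                  # [x, y, 1]
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

rng = np.random.default_rng(0)
p = np.array([1.0, 2.0])
N = rng.normal(size=(5, 2))                    # 5 neighbouring points
w = affine_weights(p, N)

# Local affine invariance: the SAME weights reconstruct the point
# after any affine map x -> M x + t applied to point and neighbours.
M, t = np.array([[2.0, 1.0], [0.0, 3.0]]), np.array([5.0, -1.0])
p2, N2 = M @ p + t, N @ M.T + t
```

In the paper these reconstruction weights penalize geometric disagreement between template and matched points; the sketch only shows why they are invariant to affine transformations.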
Analysis of separation test for automatic brake adjuster based on linear radon transformation
Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi
2015-01-01
The linear Radon transformation is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transformation has a strong ability of anti-noise and anti-interference by fitting the online test curve in several parts, which makes it easy to handle consecutive inflection points. We applied the linear Radon transformation to the separation test system to solve for the separating clearance of an automatic brake adjuster. The experimental results show that the feature point extraction error of the gradient maximum optimal method is approximately ±0.100, while the feature point extraction error of the linear Radon transformation method can reach ±0.010, a lower error than the former. In addition, the linear Radon transformation is robust.
An experimental test of the linear no-threshold theory of radiation carcinogenesis
International Nuclear Information System (INIS)
Cohen, B.L.
1990-01-01
There is a substantial body of quantitative information on radiation-induced cancer at high dose, but there are no data at low dose. The usual method for estimating effects of low-level radiation is to assume a linear no-threshold dependence. If this linear no-threshold assumption were not used, essentially all fears about radiation would disappear. Since these fears are costing tens of billions of dollars, it is most important that the linear no-threshold theory be tested at low dose. An opportunity for possibly testing the linear no-threshold concept is now available at low dose due to radon in homes. The purpose of this paper is to attempt to use these data to test the linear no-threshold theory.
The advanced test reactor strategic evaluation program
International Nuclear Information System (INIS)
Buescher, B.J.
1989-01-01
Since the Chernobyl accident, the safety of test reactors and irradiation facilities has been critically evaluated from the public's point of view. A systematic evaluation of all safety, environmental, and operational issues must be made in an integrated manner to prioritize actions to maximize benefits while minimizing costs. Such a proactive program has been initiated at the Advanced Test Reactor (ATR). This program, called the Strategic Evaluation Program (STEP), is being conducted for the ATR to provide integrated safety and operational reviews of the reactor against the standards applied to licensed commercial power reactors. This has taken into consideration the lessons learned by the US Nuclear Regulatory Commission (NRC) in its Systematic Evaluation Program (SEP) and the follow-on effort known as the Integrated Safety Assessment Program (ISAP). The SEP was initiated by the NRC to review the designs of older operating nuclear power plants to confirm and document their safety. The ATR STEP objectives are discussed.
Radiation testing of thick-wall objects using a linear accelerator or Co-60
International Nuclear Information System (INIS)
Depending on the energy required, a 60Co source or various types of betatrons and linear accelerators may be used for radiation testing of thick-walled metal parts. While 60Co sources are easily transported, accelerators generally are not; a transportable linear accelerator is described.
Periodical test program in depth revision
International Nuclear Information System (INIS)
Feltin, C.; Zermizoglou, R.
1987-11-01
Inspection visits made to different sites during 1980 and 1981 evidenced the need to extend and define more precisely the periodical tests performed on safety-related systems; thus Electricite de France was requested by the Safety Authorities to re-examine the periodical test program for all safety-related systems. This paper presents the methodology adopted by Electricite de France to perform an exhaustive analysis of the periodical test program for the 900 and 1300 MWe plants, and the organization set up at IPSN on the one hand and at Electricite de France on the other for the purpose of elaborating a periodical test program that would be ratified by the Safety Authorities.
Lyubetsky, Vassily; Gershgorin, Roman; Gorbunov, Konstantin
2017-12-06
The problems can be reduced to integer linear programming formulations, which allows them to be recast as a very special case of the integer linear programming tool. The results were tested on synthetic and biological samples. Three well-known problems were reduced to a very special case of integer linear programming, which constitutes a new method for their solution. Integer linear programming is clearly among the main computational methods and, as generally accepted, is fast on average; in particular, computation systems specifically targeted at it are available. The challenges are to reduce the size of the corresponding integer linear programming formulations and to incorporate a more detailed biological concept into our model of the reconstruction.
Recommended well drilling and testing program
International Nuclear Information System (INIS)
Long, J.; Wilson, C.
1978-07-01
A well drilling and testing program is recommended by Lawrence Berkeley Laboratory to identify the hydrology of deep basalts in the Pasco Basin. The ultimate objective of this program is to assist in determining the feasibility of locating a nuclear waste repository on the Hanford Reservation. The recommended program has been staged for maximum effectiveness. In the first stage, six wells have been identified for drilling and testing which, when coupled with existing wells, will provide sufficient data for a preliminary overview of basin hydrology and a preliminary determination of the hydrologic suitability of the deep basalt for a repository site. The rate at which the first stage wells are drilled and tested will depend upon the date at which a preliminary determination of site suitability is required. It was assumed that a preliminary determination of suitability would be required in 1980, in which case all six first stage wells would be drilled in FY 1979. If the results of the first stage analysis are favorable for repository siting, tentative repository sites can be identified and a second stage hydrology program can be implemented to provide the necessary details of the flow system. To accomplish this stage, a number of deep wells would be required at locations both inside and outside the basin, with specific sites to be identified as the work progresses to obtain maximum utility of existing data. A program is recommended for testing in each new well and for completion of testing in each existing well. Recommended tests include borehole geophysics, pressure and permeability testing, geochemical sampling, tracer testing, hydrofracturing and borehole fracture logging. The entire data collection program is oriented toward providing the information required to establish and verify an accurate numerical model of the Pasco Basin
Directory of Open Access Journals (Sweden)
Animesh Biswas
2016-04-01
Full Text Available This paper deals with a fuzzy goal programming approach to solving fuzzy linear bilevel integer programming problems with fuzzy probabilistic constraints following Pareto and Frechet distributions. In the proposed approach, a new chance-constrained programming methodology is developed from the viewpoint of managing those probabilistic constraints in a hybrid fuzzy environment. A method of defuzzification of fuzzy numbers using the α-cut has been adopted to reduce the problem to a linear bilevel integer programming problem. The individual optimal value of the objective of each decision maker is found in isolation to construct the fuzzy membership goals. Finally, the fuzzy goal programming approach is used to achieve the maximum degree of each of the membership goals by minimizing the under-deviational variables in the decision-making environment. To demonstrate the efficiency of the proposed approach, a numerical example is provided.
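As a generic illustration of the α-cut defuzzification mentioned in the abstract (not the paper's full bilevel method), the α-cut of a triangular fuzzy number (a, b, c) is the interval obtained by slicing its membership function at height α:

```python
def alpha_cut(tfn, alpha):
    """Interval [lo, hi] of the triangular fuzzy number (a, b, c) at level alpha."""
    a, b, c = tfn
    return (a + alpha * (b - a), c - alpha * (c - b))

# At alpha = 1 the interval collapses to the peak b; at alpha = 0 it is [a, c].
lo, hi = alpha_cut((1.0, 2.0, 4.0), 0.5)
```

Replacing each fuzzy coefficient by such an interval (or a representative point in it) is what reduces the fuzzy model to a crisp linear bilevel integer program.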
Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen
2018-01-01
With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l 1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP
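The hard/soft-constraint LP with an l1 budget on the beam weights can be sketched as follows. The influence matrix, prescription window and l1 budget are random or invented toy values, and the paper's SVD compression and MIP validation steps are omitted; only the LP-with-sparsity core is shown.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_vox, n_beams = 40, 30
A = rng.random((n_vox, n_beams))   # hypothetical dose-influence matrix
d_lo, d_hi = 1.0, 1.4              # toy prescription window (hard low, soft high)

# minimise sum of slacks s (soft over-dose)  s.t.  d_lo <= A w <= d_hi + s,
# w >= 0, and an l1 budget on w to encourage a sparse plan.
# Variable vector z = [w (n_beams), s (n_vox)].
c = np.concatenate([np.zeros(n_beams), np.ones(n_vox)])
A_ub = np.block([
    [-A,                     np.zeros((n_vox, n_vox))],  # A w >= d_lo (hard)
    [ A,                    -np.eye(n_vox)],             # A w <= d_hi + s (soft)
    [np.ones((1, n_beams)),  np.zeros((1, n_vox))],      # l1 budget on w
])
b_ub = np.concatenate([-d_lo * np.ones(n_vox), d_hi * np.ones(n_vox), [4.0]])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (n_beams + n_vox), method="highs")
w = res.x[:n_beams]
kept = w > 1e-6 * w.max()          # beam reduction: drop near-zero weights
```

In the paper, the surviving beams would then be re-optimized with the same model, and the reduced plan compared against a mixed integer programming solution.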
DESIGN OF EDUCATIONAL PROBLEMS ON LINEAR PROGRAMMING USING SYSTEMS OF COMPUTER MATHEMATICS
Directory of Open Access Journals (Sweden)
Volodymyr M. Mykhalevych
2013-11-01
Full Text Available From the perspective of the theory of educational problems, the substitution, under conditions of ICT use, of an educational problem of one discipline by a problem of another discipline is considered. Through the example of mathematical problems of linear programming, it is shown that the student's method of operation in the course of solving an educational problem is the determining factor in attributing the problem to a specific discipline: linear programming, informatics, mathematical modeling, optimization methods, automatic control theory, calculus, etc. The necessity of renovating linear programming educational problems is substantiated, with the purpose of freeing students from bulky, repetitive arithmetic calculations and notes, which often become a barrier to a deeper understanding of the key ideas underlying the algorithms they use.
Power properties of invariant tests for spatial autocorrelation in linear regression
Martellosio, F.
2006-01-01
Many popular tests for residual spatial autocorrelation in the context of the linear regression model belong to the class of invariant tests. This paper derives a number of exact properties of the power function of such tests. In particular, we extend the work of Krämer (2005, Journal of Statistical
Accommodation of practical constraints by a linear programming jet select. [for Space Shuttle
Bergmann, E.; Weiler, P.
1983-01-01
An experimental spacecraft control system will be incorporated into the Space Shuttle flight software and exercised during a forthcoming mission to evaluate its performance and handling qualities. The control system incorporates a 'phase space' control law to generate rate change requests and a linear programming jet select to compute jet firings. Posed as a linear programming problem, jet selection must represent the rate change request as a linear combination of jet acceleration vectors, where the coefficients are the jet firing times, while minimizing the fuel expended in satisfying that request. This problem is solved in real time using a revised Simplex algorithm. In order to implement the jet selection algorithm in the Shuttle flight control computer, it was modified to accommodate certain practical features of the Shuttle such as limited computer throughput, lengthy firing times, and a large number of control jets. To the authors' knowledge, this is the first such application of linear programming. It was made possible by careful consideration of the jet selection problem in terms of the properties of linear programming and the Simplex algorithm. These modifications to the jet select algorithm may be useful for the design of reaction-controlled spacecraft.
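In LP terms, the jet select chooses nonnegative firing times t that reproduce the requested rate change as a combination of jet acceleration vectors at minimum fuel. A minimal sketch with an invented two-axis, four-jet geometry (scipy's `linprog` standing in for the flight-code revised Simplex):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-axis example: 4 jets, columns are acceleration vectors.
A = np.array([[ 1.0, -1.0, 0.0,  0.0],
              [ 0.0,  0.0, 1.0, -1.0]])
fuel = np.array([1.0, 1.0, 1.0, 1.0])  # fuel rate per second of firing
r = np.array([0.5, -0.2])              # requested rate change

# minimise fuel . t  subject to  A t = r,  t >= 0
res = linprog(fuel, A_eq=A, b_eq=r, bounds=[(0, None)] * 4, method="highs")
t = res.x                              # firing time per jet
```

With opposed jet pairs along each axis, the optimum fires only the jets aligned with the request (here jets 1 and 4).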
Energy Technology Data Exchange (ETDEWEB)
Lee, C. S.; Seo, C. K.; Lee, B. C.; Kim, H. N.; Kang, B. W. [KAERI, Taejon (Korea, Republic of)
2000-10-01
The HANARO fuel, U3Si-Al, was developed by AECL and tested in the NRU reactor. Due to the lack of data at high power, repeated irradiation tests were required at powers greater than 108 kW/m, the estimated maximum linear power at the design stage. Accordingly, an instrumented test bundle with SPNDs (Self-Powered Neutron Detectors) was fabricated and its irradiation test was performed in IR2 of HANARO. The thermal neutron flux measured with the SPNDs is compared with calculation results from HANAFMS (HANARO Fuel Management System). The differences between the measured and calculated thermal flux values are below ±11%, confirming the accuracy of the linear power predicted by HANAFMS. It is therefore believed that a maximum linear power above 120 kW/m was achieved during the irradiation test of the test bundle.
Quality assurance in the nuclear test program
International Nuclear Information System (INIS)
Shearer, J.N.
1979-01-01
In February 1979, the Test Program laid the groundwork for a new quality assurance structure. The new approach was based on the findings and recommendations of the Ad Hoc QA Program Review panel, which are summarized in this report. The new structure places the responsibility for quality assurance in the hands of the line organizations, in both the programmatic and functional elements of the LLL matrix.
Method for solving fully fuzzy linear programming problems using deviation degree measure
Institute of Scientific and Technical Information of China (English)
Haifang Cheng; Weilai Huang; Jianhu Cai
2013-01-01
A new fully fuzzy linear programming (FFLP) problem with fuzzy equality constraints is discussed. Using deviation degree measures, the FFLP problem is transformed into a crisp δ-parametric linear programming (LP) problem. Given the value of the deviation degree in each constraint, the δ-fuzzy optimal solution of the FFLP problem can be obtained by solving this LP problem. An algorithm is also proposed to find a balance-fuzzy optimal solution between two goals in conflict: to improve the values of the objective function and to decrease the values of the deviation degrees. A numerical example is solved to illustrate the proposed method.
A novel recurrent neural network with finite-time convergence for linear programming.
Liu, Qingshan; Cao, Jinde; Chen, Guanrong
2010-11-01
In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
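The letter's network comes with a finite-time convergence proof. As a rough flavour of gradient-based neural dynamics for LP — a plain penalty-gradient flow, explicitly not the authors' finite-time scheme — consider the toy LP min x1+x2 subject to x1+x2 >= 1, x >= 0, whose optimum value is 1:

```python
import numpy as np

# Toy LP: minimise x1 + x2  subject to  x1 + x2 >= 1,  x >= 0.
c = np.array([1.0, 1.0])
K, dt = 100.0, 0.001            # penalty gain and Euler step (assumed values)

x = np.array([2.0, 2.0])        # initial state of the "network"
for _ in range(20000):
    g1 = 1.0 - x.sum()          # violation of x1 + x2 >= 1 when positive
    # Penalty-gradient dynamics:
    # dx/dt = -grad[ c.x + (K/2)(max(0,g1)^2 + sum(max(0,-xi)^2)) ]
    dxdt = -c + K * max(0.0, g1) * np.ones(2) + K * np.maximum(0.0, -x)
    x = x + dt * dxdt
```

The flow settles near the constraint boundary with a bias of order 1/K (here x1+x2 converges to 0.99 for K = 100); finite-time exact convergence, as proved in the letter, removes precisely this kind of penalty bias.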
Fault detection and initial state verification by linear programming for a class of Petri nets
Rachell, Traxon; Meyer, David G.
1992-01-01
The authors present an algorithmic approach to determining when the marking of a LSMG (live safe marked graph) or a LSFC (live safe free choice) net is in the set of live safe markings M. Hence, once the marking of a net is determined to be in M, then if at some time thereafter the marking of this net is determined not to be in M, this indicates a fault. It is shown how linear programming can be used to determine whether m is an element of M. The worst-case computational complexity of each algorithm is bounded by the number of linear programs that must be solved.
BEAMPATH: a program library for beam dynamics simulation in linear accelerators
International Nuclear Information System (INIS)
Batygin, Y.K.
1992-01-01
A structured programming technique was used to develop software for the investigation of space-charge-dominated beams in linear accelerators. The method includes hierarchical program design using program-independent modules and a flexible combination of modules to provide the most effective program structure for every specific case of simulation. A modular program, BEAMPATH, was developed for 2D and 3D particle-in-cell simulation of beam dynamics in a structure containing RF gaps, radio-frequency quadrupoles (RFQ), multipole lenses, waveguides, bending magnets and solenoids. (author) 5 refs.; 2 figs
Error Analysis Of Students Working About Word Problem Of Linear Program With NEA Procedure
Santoso, D. A.; Farid, A.; Ulum, B.
2017-06-01
Evaluation and assessment are an important part of learning. In the evaluation of learning, written tests are still commonly used. However, the tests are usually not followed up by further evaluation: the process stops at the grading stage and does not examine the solution process or the errors made by students. Yet if a student exhibits a pattern of errors, remedial actions can be focused on the fault and on why it happens. The NEA procedure provides a way for educators to evaluate student progress more comprehensively. In this study, students' mistakes in working on word problems about linear programming have been analyzed. As a result, the mistakes most often made by students occur in the modeling (transformation) phase and in process skills, with overall percentage distributions of 20% and 15%, respectively. According to the observations, these errors occur most commonly due to students' lack of precision in modeling and to hasty calculation. Through this error analysis, it is expected that educators can determine or use the right way to address these faults in the next lesson.
Tritium systems test assembly quality assurance program
International Nuclear Information System (INIS)
Kerstiens, F.L.; Wilhelm, R.C.
1986-07-01
A quality assurance program should establish the planned and systematic actions necessary to provide adequate confidence that fusion facilities and their subsystems will perform satisfactorily in service. The Tritium Systems Test Assembly (TSTA) Quality Assurance Program has been designed to assure that the designs, tests, data, and interpretive reports developed at TSTA are valid, accurate, and consistent with formally specified procedures and reviews. The quality consideration in all TSTA activities is directed toward the early detection of quality problems, coupled with timely and positive disposition and corrective action
Virtual instrumention-based linearity test platform for DCCT of digital power supply at SSRF
International Nuclear Information System (INIS)
Tang Junlong; Li Deming; Shen Tianjian; Liu Hong
2008-01-01
Based on virtual instrumentation, a reliable and effective test platform, performing instrument control, data acquisition and data recording, has been established to evaluate linearity of high performance DCCT (DC current transducer) for digital power supply at Shanghai Synchrotron Radiation Facility (SSRF). The software in LabVIEW language was developed to perform computer communication via serial communication (RS232) and GPIB, providing a friendly user interface to the linearity test platform. This makes it easy to test the linearity and control power on or off and current output of high-precision and high-current DC constant current output power supply. The experimental data, stored in an EXCEL file, can be processed to obtain DCCT linearity, and provide basis to further analyze DCCT performance in the future. (authors)
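A minimal version of the data-processing end of such a platform: fit a straight line to programmed-versus-measured current and report the worst residual as a linearity figure. The calibration points below are synthetic stand-ins, not SSRF data:

```python
import numpy as np

# Hypothetical calibration points: programmed current vs DCCT reading (A).
i_set = np.linspace(0.0, 200.0, 9)
i_meas = 1.0002 * i_set + 0.01 + 0.003 * np.sin(i_set / 40.0)  # toy readings

# Best-fit straight line; linearity error = worst residual over full scale.
coef = np.polyfit(i_set, i_meas, 1)            # [slope, intercept]
resid = i_meas - np.polyval(coef, i_set)
linearity_ppm = 1e6 * np.max(np.abs(resid)) / i_meas.max()
```

In the actual platform these arrays would come from the RS232/GPIB acquisition loop and be archived to the Excel file before analysis.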
TBM performance prediction in Yucca Mountain welded tuff from linear cutter tests
International Nuclear Information System (INIS)
Gertsch, R.; Ozdemir, L.; Gertsch, L.
1992-01-01
Performance predictions were developed for tunnel boring machines operating in welded tuff for the construction of the experimental study facility and the potential nuclear waste repository at Yucca Mountain. The predictions were based on test data obtained from an extensive series of linear cutting tests performed on samples of Topopah Spring welded tuff from the Yucca Mountain Project site. Using the cutter force, spacing, and penetration data from the experimental program, the thrust, torque, power, and rate of penetration were estimated for a 25 ft diameter tunnel boring machine (TBM) operating in welded tuff. Guidelines were developed for the optimal design of the TBM cutterhead to achieve high production rates at the lowest possible excavation costs. The results show that the Topopah Spring welded tuff (TSw2) can be excavated at relatively high rates of advance with state-of-the-art TBMs. The results also show, however, that the TBM torque and power requirements will be higher than estimated based on rock physical properties and past tunneling experience in rock formations of similar strength
Performance Testing of a High Temperature Linear Alternator for Stirling Convertors
Metscher, Jonathan F.; Geng, Steven M.
2016-01-01
The NASA Glenn Research Center has conducted performance testing of a high temperature linear alternator (HTLA) in support of Stirling power convertor development for potential future Radioisotope Power Systems (RPS). The high temperature linear alternator is a modified version of that used in Sunpower's Advanced Stirling Convertor (ASC), and is capable of operation at temperatures up to 200 °C. Increasing the temperature capability of the linear alternator could expand the mission set of future Stirling RPS designs. High temperature Neodymium-Iron-Boron (Nd-Fe-B) magnets were selected for the HTLA application, and were fully characterized and tested prior to use. A higher-temperature epoxy for alternator assembly was also selected and tested for thermal stability and strength. A characterization test was performed on the HTLA to measure its performance at various amplitudes, loads, and temperatures. HTLA endurance testing at 200 °C is currently underway.
International Nuclear Information System (INIS)
Reibel, R.R.; Barber, Z.W.; Fischer, J.A.; Tian, M.; Babbitt, W.R.
2004-01-01
Linear sideband chirped (LSC) programming is introduced as a means of configuring spatial-spectral holographic gratings for optical coherent transient processors. Similar to linear frequency chirped programming, LSC programming allows the use of broadband integrated electro-optic phase modulators to produce chirps instead of using elaborate broadband chirped lasers. This approach has several advantages including the ability to use a stabilized laser for the optical carrier as well as stable, reproducible chirped optical signals when the modulator is driven digitally. Using LSC programming, we experimentally demonstrate broadband true-time delay as a proof of principle for the optical control of phased array radars. Here both cw phase modulated and binary phase shift keyed probe signals are true-time delayed with bandwidths of 1 GHz and delay resolutions better than 60 ps.
Directory of Open Access Journals (Sweden)
Chandra Nagasuma R
2009-02-01
Full Text Available Abstract Background A genetic network can be represented as a directed graph in which a node corresponds to a gene and a directed edge specifies the direction of influence of one gene on another. The reconstruction of such networks from transcript profiling data remains an important yet challenging endeavor. A transcript profile specifies the abundances of many genes in a biological sample of interest. Prevailing strategies for learning the structure of a genetic network from high-dimensional transcript profiling data assume sparsity and linearity. Many methods consider relatively small directed graphs, inferring graphs with up to a few hundred nodes. This work examines large undirected graph representations of genetic networks, graphs with many thousands of nodes where an undirected edge between two nodes does not indicate the direction of influence, and the problem of estimating the structure of such a sparse linear genetic network (SLGN) from transcript profiling data. Results The structure learning task is cast as a sparse linear regression problem which is then posed as a LASSO (l1-constrained) fitting problem and finally solved by formulating a Linear Program (LP). A bound on the Generalization Error of this approach is given in terms of the Leave-One-Out Error. The accuracy and utility of LP-SLGNs is assessed quantitatively and qualitatively using simulated and real data. The Dialogue for Reverse Engineering Assessments and Methods (DREAM) initiative provides gold standard data sets and evaluation metrics that enable and facilitate the comparison of algorithms for deducing the structure of networks. The structures of LP-SLGNs estimated from the INSILICO1, INSILICO2 and INSILICO3 simulated DREAM2 data sets are comparable to those proposed by the first and/or second ranked teams in the DREAM2 competition. The structures of LP-SLGNs estimated from two published Saccharomyces cerevisiae cell cycle transcript profiling data sets capture known…
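The LASSO-to-LP reduction described in this abstract can be sketched in a few lines: splitting each coefficient into nonnegative parts makes the l1 objective linear. The sketch below uses synthetic data and invented sizes and tolerances; it illustrates the reduction, not the LP-SLGN implementation itself.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p = 30, 10                      # samples, genes (illustrative sizes)
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:2] = [1.5, -2.0]           # sparse ground-truth network weights
y = X @ w_true

eps = 1e-6                         # per-sample residual tolerance
# Split w = u - v with u, v >= 0, so that ||w||_1 = sum(u) + sum(v).
c = np.ones(2 * p)
A_ub = np.vstack([np.hstack([X, -X]),     #  X(u - v) - y <= eps
                  np.hstack([-X, X])])    # -(X(u - v) - y) <= eps
b_ub = np.concatenate([y + eps, eps - y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * p))
w_hat = res.x[:p] - res.x[p:]
print(np.round(w_hat, 3))          # recovers the sparse coefficients
```

Because the synthetic response is noise-free and X has full column rank, the minimum-l1 solution coincides with the sparse ground truth.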
A novel approach based on preference-based index for interval bilevel linear programming problem
Aihong Ren; Yuping Wang; Xingsi Xue
2017-01-01
This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only through normal variation of interval number and chance-constrain...
The WIPP research and development test program
International Nuclear Information System (INIS)
Tyler, L.D.
1985-01-01
The WIPP (Waste Isolation Pilot Plant) is a DOE R&D facility whose purpose is to develop the technology needed for the safe disposal of the United States' defense-related radioactive waste. The in-situ test program is defined for the thermal-structural interactions, plugging and sealing, and waste package interactions in a salt environment. An integrated series of large-scale underground tests addresses the issues of both system performance and long-term isolation performance of a repository.
Cooperative field test program for wind systems
Energy Technology Data Exchange (ETDEWEB)
Bollmeier, W.S. II; Dodge, D.M.
1992-03-01
The objectives of the Federal Wind Energy Program, managed by the US Department of Energy (DOE), are (1) to assist industry and utilities in achieving a multi-regional US market penetration of wind systems, and (2) to establish the United States as the world leader in the development of advanced wind turbine technology. In 1984, the program conducted a series of planning workshops with representatives from the wind energy industry to obtain input on the Five-Year Research Plan then being prepared by DOE. One specific suggestion that came out of these meetings was that the federal program should conduct cooperative research tests with industry to enhance the technology transfer process. It was also felt that the active involvement of industry in DOE-funded research would improve the state of the art of wind turbine technology. DOE established the Cooperative Field Test Program (CFTP) in response to that suggestion. This program was one of the first in DOE to feature joint industry-government research test teams working toward common objectives.
47 CFR 73.1620 - Program tests.
2010-10-01
Title 47 (Telecommunication), Section 73.1620, Program tests, revised as of 2010-10-01. Federal Communications Commission (continued), Broadcast Radio Services, Radio Broadcast Services. The rule text references the integration of ownership and management and the diversification of the media of mass communication.
The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.
Pang, Haotian; Liu, Han; Vanderbei, Robert
2014-02-01
We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
A linear programming model for protein inference problem in shotgun proteomics.
Huang, Ting; He, Zengyou
2012-11-15
Assembling peptides identified from tandem mass spectra into a list of proteins, referred to as protein inference, is an important issue in shotgun proteomics. The objective of protein inference is to find a subset of proteins that are truly present in the sample. Although many methods have been proposed for protein inference, several issues such as peptide degeneracy still remain unsolved. In this article, we present a linear programming model for protein inference. In this model, we use a transformation of the joint probability that each peptide/protein pair is present in the sample as the variable. Then, both the peptide probability and protein probability can be expressed as a formula in terms of the linear combination of these variables. Based on this simple fact, the protein inference problem is formulated as an optimization problem: minimize the number of proteins with non-zero probabilities under the constraint that the difference between the calculated peptide probability and the peptide probability generated from peptide identification algorithms should be less than some threshold. This model addresses the peptide degeneracy issue by forcing some joint probability variables involving degenerate peptides to be zero in a rigorous manner. The corresponding inference algorithm is named ProteinLP. We test the performance of ProteinLP on six datasets. Experimental results show that our method is competitive with the state-of-the-art protein inference algorithms. The source code of our algorithm is available at: https://sourceforge.net/projects/prolp/. Contact: zyhe@dlut.edu.cn. Supplementary data are available at Bioinformatics online.
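A hedged sketch of the kind of LP this abstract describes: protein scores are minimized (an l1 relaxation of counting nonzero proteins) subject to reproducing the peptide probabilities within a threshold. The peptide-protein map, probabilities, and tolerance below are toy values, not ProteinLP's actual model or data.

```python
import numpy as np
from scipy.optimize import linprog

# Toy peptide-protein map: A[i, j] = 1 if peptide i maps to protein j.
A = np.array([[1, 0, 0],
              [1, 1, 0],       # a degenerate peptide shared by proteins 0, 1
              [0, 0, 1]], dtype=float)
q = np.array([0.9, 0.9, 0.8])  # peptide probabilities from identification
tol = 0.05                     # allowed deviation threshold

m, n = A.shape
c = np.ones(n)                 # l1 surrogate for "fewest proteins present"
A_ub = np.vstack([A, -A])      # |A @ x - q| <= tol, row by row
b_ub = np.concatenate([q + tol, tol - q])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
print(np.round(res.x, 2))      # the degenerate peptide adds no new protein
```

The minimizer drives the score of the redundant protein to zero: peptide 1 is already explained by protein 0, which mirrors how degeneracy is resolved in the abstract's formulation.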
Mass Optimization of Battery/Supercapacitors Hybrid Systems Based on a Linear Programming Approach
Fleury, Benoit; Labbe, Julien
2014-08-01
The objective of this paper is to show that, on a specific launcher-type mission profile, a 40% gain of mass is expected using a battery/supercapacitors active hybridization instead of a single battery solution. This result is based on the use of a linear programming optimization approach to perform the mass optimization of the hybrid power supply solution.
Optimal local dimming for LED-backlit LCD displays via linear programming
DEFF Research Database (Denmark)
Shu, Xiao; Wu, Xiaolin; Forchhammer, Søren
2012-01-01
and the attenuations of LCD pixels. The objective is to minimize the distortion in luminance reproduction due to the leakage of LCD and the coarse granularity of the LED lights. The optimization problem is formulated as one of linear programming, and both exact and approximate algorithms are proposed. Simulation...
DEFF Research Database (Denmark)
Hernández, Adriana Carolina Luna; Aldana, Nelson Leonardo Diaz; Graells, Moises
2017-01-01
-side strategy, defined as a general mixed-integer linear programming by taking into account two stages for proper charging of the storage units. This model is considered as a deterministic problem that aims to minimize operating costs and promote self-consumption based on 24-hour ahead forecast data...
Linear Programming Approaches for Power Savings in Software-defined Networks
Moghaddam, F.A.; Grosso, P.
2016-01-01
Software-defined networks have been proposed as a viable solution to decrease the power consumption of the networking component in data center networks. Still the question remains on which scheduling algorithms are most suited to achieve this goal. We propose 4 different linear programming
Velazquez-Marti, B.; Annevelink, E.
2009-01-01
Much bio-energy can be obtained from wood pruning operations in forests and fruit orchards. Several spatial studies have been carried out for biomass surveys, and many linear programming models have been developed to model the logistics of bio-energy chains. These models can assist in determining
Discounted semi-Markov decision processes : linear programming and policy iteration
Wessels, J.; van Nunen, J.A.E.E.
1975-01-01
For semi-Markov decision processes with discounted rewards we derive the well known results regarding the structure of optimal strategies (nonrandomized, stationary Markov strategies) and the standard algorithms (linear programming, policy iteration). Our analysis is completely based on a primal
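The LP approach referenced here can be illustrated on an ordinary discounted MDP (the semi-Markov case additionally discounts over random sojourn times): minimize the sum of state values subject to the Bellman inequalities, then read off a greedy policy. All numbers below are invented.

```python
import numpy as np
from scipy.optimize import linprog

beta = 0.9                                   # discount factor
R = np.array([[1.0, 0.0],                    # R[s, a]: expected rewards
              [0.0, 2.0]])
P = np.array([[[0.8, 0.2], [0.2, 0.8]],      # P[s, a, s']: transitions
              [[0.5, 0.5], [0.1, 0.9]]])
S, A = R.shape

# Primal LP: minimize sum_s v(s)  s.t.  v(s) >= R[s, a] + beta * P[s, a] @ v.
c = np.ones(S)
rows, rhs = [], []
for s in range(S):
    for a in range(A):
        rows.append(beta * P[s, a] - np.eye(S)[s])   # beta*P@v - v(s) <= -R
        rhs.append(-R[s, a])
res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
              bounds=[(None, None)] * S)
v = res.x
policy = [int(np.argmax([R[s, a] + beta * P[s, a] @ v for a in range(A)]))
          for s in range(S)]
print(np.round(v, 3), policy)
```

At the LP optimum each v(s) attains the maximum over actions, so the greedy policy extracted from v is optimal and stationary, matching the structural result in the abstract.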
Fuzzy chance constrained linear programming model for scrap charge optimization in steel production
DEFF Research Database (Denmark)
Rong, Aiying; Lahdelma, Risto
2008-01-01
the uncertainty based on fuzzy set theory and constrain the failure risk based on a possibility measure. Consequently, the scrap charge optimization problem is modeled as a fuzzy chance constrained linear programming problem. Since the constraints of the model mainly address the specification of the product...
Visual, Algebraic and Mixed Strategies in Visually Presented Linear Programming Problems.
Shama, Gilli; Dreyfus, Tommy
1994-01-01
Identified and classified solution strategies of (n=49) 10th-grade students who were presented with linear programming problems in a predominantly visual setting in the form of a computerized game. Visual strategies were developed more frequently than either algebraic or mixed strategies. Appendix includes questionnaires. (Contains 11 references.)…
Discounted semi-Markov decision processes : linear programming and policy iteration
Wessels, J.; van Nunen, J.A.E.E.
1974-01-01
For semi-Markov decision processes with discounted rewards we derive the well known results regarding the structure of optimal strategies (nonrandomized, stationary Markov strategies) and the standard algorithms (linear programming, policy iteration). Our analysis is completely based on a primal
Eric J. Gustafson; L. Jay Roberts; Larry A. Leefers
2006-01-01
Forest management planners require analytical tools to assess the effects of alternative strategies on the sometimes disparate benefits from forests such as timber production and wildlife habitat. We assessed the spatial patterns of alternative management strategies by linking two models that were developed for different purposes. We used a linear programming model (...
Nutrient density score of typical Indonesian foods and dietary formulation using linear programming.
Jati, Ignasius Radix A P; Vadivel, Vellingiri; Nöhr, Donatus; Biesalski, Hans Konrad
2012-12-01
The present research aimed to analyse the nutrient density (ND), nutrient adequacy score (NAS) and energy density (ED) of Indonesian foods and to formulate a balanced diet using linear programming. Data on typical Indonesian diets were obtained from the Indonesian Socio-Economic Survey 2008. ND was investigated for 122 Indonesian foods. NAS was calculated for single nutrients such as Fe, Zn and vitamin A. Correlation analysis was performed between ND and ED, as well as between monthly expenditure class and food consumption pattern in Indonesia. Linear programming calculations were performed using the software POM-QM for Windows version 3. Republic of Indonesia, 2008. Public households (n = 68 800). Vegetables had the highest ND of the food groups, followed by animal-based foods, fruits and staple foods. Based on NAS, the top ten food items for each food group were identified. Most of the staple foods had high ED and contributed towards daily energy fulfillment, followed by animal-based foods, vegetables and fruits. Commodities with high ND tended to have low ED. Linear programming could be used to formulate a balanced diet. In contrast to staple foods, purchases of fruit, vegetables and animal-based foods increased with the rise of monthly expenditure. People should select food items based on ND and NAS to alleviate micronutrient deficiencies in Indonesia. Dietary formulation calculated using linear programming to achieve RDA levels for micronutrients could be recommended for different age groups of the Indonesian population.
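The diet-formulation use of LP in this abstract reduces to the classic diet problem: minimize cost subject to nutrient adequacy constraints. The foods, prices, and nutrient values below are hypothetical placeholders, not the Indonesian survey data.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical per-100 g values for three foods (rice, spinach, fish).
cost   = np.array([0.10, 0.30, 0.80])     # price per 100 g
energy = np.array([130.0, 23.0, 200.0])   # kcal
iron   = np.array([0.4, 2.7, 1.0])        # mg
vit_a  = np.array([0.0, 470.0, 30.0])     # ug

# Minimize cost subject to meeting illustrative daily targets.
targets = np.array([2000.0, 18.0, 700.0])
N = np.vstack([energy, iron, vit_a])
# ">=" targets become "<=" rows after a sign flip.
res = linprog(cost, A_ub=-N, b_ub=-targets, bounds=[(0, None)] * 3)
print(res.success, np.round(res.x, 2))    # amounts in 100 g units
```

The same structure scales directly to the 122-food, multi-nutrient setting the paper works with; only the coefficient matrix grows.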
Secret Message Decryption: Group Consulting Projects Using Matrices and Linear Programming
Gurski, Katharine F.
2009-01-01
We describe two short group projects for finite mathematics students that incorporate matrices and linear programming into fictional consulting requests presented as a letter to the students. The students are required to use mathematics to decrypt secret messages in one project involving matrix multiplication and inversion. The second project…
Research and evaluation of the effectiveness of e-learning in the case of linear programming
Directory of Open Access Journals (Sweden)
Ljiljana Miletić
2016-04-01
Full Text Available The paper evaluates the effectiveness of the e-learning approach to linear programming. The goal was to investigate how proper use of information and communication technologies (ICT) and interactive learning helps to improve high school students' understanding, learning and retention of advanced non-curriculum material. The hypothesis was that ICT and e-learning are helpful in teaching linear programming methods. In the first phase of the research, a module of lessons for linear programming (LP) was created using the software package Loomen Moodle and other interactive software packages such as Geogebra. In the second phase, the LP module was taught as a short course to two groups of high school students. These two groups of students were second-grade students in a Croatian high school. In Class 1, the module was taught using ICT and e-learning, while the module was taught using classical methods in Class 2. The action research methodology was an integral part of delivering the course to both student groups. The sample student groups were carefully selected to ensure that differences in background knowledge and learning potential were statistically negligible. Relevant data was collected while delivering the course. Statistical analysis of the collected data showed that the student group using the e-learning method produced better results than the group using a classical learning method. These findings support previous results on the effectiveness of e-learning, and also establish a specific approach to e-learning in linear programming.
Korman, Jonathan; McCann, Robert J.; Seis, Christian
2013-01-01
A new approach to linear programming duality is proposed which relies on quadratic penalization, so that the relation between solutions to the penalized primal and dual problems becomes affine. This yields a new proof of Levin's duality theorem for capacity-constrained optimal transport as an infinite-dimensional application.
The effect of workload constraints in linear programming models for production planning
Jansen, M.M.; Kok, de A.G.; Adan, I.J.B.F.
2011-01-01
Linear programming (LP) models for production planning incorporate a model of the manufacturing system that is necessarily deterministic. Although these deterministic models are the current state-of-the-art, it should be recognized that they are used in an environment that is inherently stochastic.
NRC valve performance test program - check valve testing
International Nuclear Information System (INIS)
Jeanmougin, N.M.
1987-01-01
The Valve Performance Test Program addresses the current requirements for testing of pressure isolation valves (PIVs) in light water reactors. Leak rate monitoring is the current method used by operating commercial power plants to survey the condition of their PIVs. ETEC testing of three check valves (4-inch, 6-inch, and 12-inch nominal diameters) indicates that leak rate testing is not a reliable method for detecting impending valve failure. Acoustic emission monitoring of check valves shows promise as a method of detecting loosened internals damage. Future efforts will focus on evaluation of acoustic emission monitoring as a technique for determining check valve condition. Three gate valves also will be tested to evaluate whether the check valve results are applicable to gate type PIVs
Cooke, C. H.
1975-01-01
STICAP (Stiff Circuit Analysis Program) is a FORTRAN IV computer program written for the CDC-6400-6600 computer series and SCOPE 3.0 operating system. It provides the circuit analyst a tool for automatically computing the transient responses and frequency responses of large linear time invariant networks, both stiff and nonstiff (algorithms and numerical integration techniques are described). The circuit description and user's program input language is engineer-oriented, making simple the task of using the program. Engineering theories underlying STICAP are examined. A user's manual is included which explains user interaction with the program and gives results of typical circuit design applications. Also, the program structure from a systems programmer's viewpoint is depicted and flow charts and other software documentation are given.
Dyehouse, Melissa; Bennett, Deborah; Harbor, Jon; Childress, Amy; Dark, Melissa
2009-01-01
Logic models are based on linear relationships between program resources, activities, and outcomes, and have been used widely to support both program development and evaluation. While useful in describing some programs, the linear nature of the logic model makes it difficult to capture the complex relationships within larger, multifaceted…
International Nuclear Information System (INIS)
Aref'ev, A.V.; Blokhov, M.V.; Gerasimov, V.F.
1981-01-01
A program of physical investigations and the corresponding requirements for accelerated-beam parameters are discussed in brief. The state and working capacity of individual units, and of the accelerator as a whole, over its 8-year operating period are analyzed. The aims and principal points of the linear electron accelerator modernization program are defined. The program of accelerator modernization assumes: electron beam energy increase up to 100-120 MeV; mounting of three additional accelerating sections; klystron efficiency increase; development of a highly reliable modulator; stabilized power supply sources; a system of synchronous start-up; a focusing system; and a beam separation system.
The High Level Vibration Test Program
International Nuclear Information System (INIS)
Hofmayer, C.H.; Curreri, J.R.; Park, Y.J.; Kato, W.Y.; Kawakami, S.
1989-01-01
As part of cooperative agreements between the United States and Japan, tests have been performed on the seismic vibration table at the Tadotsu Engineering Laboratory of Nuclear Power Engineering Test Center (NUPEC) in Japan. The objective of the test program was to use the NUPEC vibration table to drive large diameter nuclear power piping to substantial plastic strain with an earthquake excitation and to compare the results with state-of-the-art analysis of the problem. The test model was designed by modifying the 1/2.5 scale model of the PWR primary coolant loop. Elastic and inelastic seismic response behavior of the test model was measured in a number of test runs with an increasing excitation input level up to the limit of the vibration table. In the maximum input condition, large dynamic plastic strains were obtained in the piping. Crack initiation was detected following the second maximum excitation run. The test model was subjected to a maximum acceleration well beyond what nuclear power plants are designed to withstand. This paper describes the overall plan, input motion development, test procedure, test results and comparisons with pre-test analysis. 4 refs., 16 figs., 2 tabs
The High Level Vibration Test program
International Nuclear Information System (INIS)
Hofmayer, C.H.; Curreri, J.R.; Park, Y.J.; Kato, W.Y.; Kawakami, S.
1990-01-01
As part of cooperative agreements between the United States and Japan, tests have been performed on the seismic vibration table at the Tadotsu Engineering Laboratory of Nuclear Power Engineering Test Center (NUPEC) in Japan. The objective of the test program was to use the NUPEC vibration table to drive large diameter nuclear power piping to substantial plastic strain with an earthquake excitation and to compare the results with state-of-the-art analysis of the problem. The test model was designed by modifying the 1/2.5 scale model of the pressurized water reactor primary coolant loop. Elastic and inelastic seismic response behavior of the test model was measured in a number of test runs with an increasing excitation input level up to the limit of the vibration table. In the maximum input condition, large dynamic plastic strains were obtained in the piping. Crack initiation was detected following the second maximum excitation run. The test model was subjected to a maximum acceleration well beyond what nuclear power plants are designed to withstand. This paper describes the overall plan, input motion development, test procedure, test results and comparisons with pre-test analysis
1/3-scale model testing program
International Nuclear Information System (INIS)
Yoshimura, H.R.; Attaway, S.W.; Bronowski, D.R.; Uncapher, W.L.; Huerta, M.; Abbott, D.G.
1989-01-01
This paper describes the drop testing of a one-third scale model transport cask system. Two casks were supplied by Transnuclear, Inc. (TN) to demonstrate dual purpose shipping/storage casks. These casks will be used to ship spent fuel from DOE's West Valley demonstration project in New York to the Idaho National Engineering Laboratory (INEL) for a long-term spent fuel dry storage demonstration. As part of the certification process, one-third scale model tests were performed to obtain experimental data. Two 9-m (30-ft) drop tests were conducted on a mass model of the cask body and scaled balsa and redwood filled impact limiters. In the first test, the cask system was tested in an end-on configuration. In the second test, the system was tested in a slap-down configuration where the axis of the cask was oriented at a 10 degree angle with the horizontal. Slap-down occurs for shallow angle drops where the primary impact at one end of the cask is followed by a secondary impact at the other end. The objectives of the testing program were to (1) obtain deceleration and displacement information for the cask and impact limiter system, (2) obtain dynamic force-displacement data for the impact limiters, (3) verify the integrity of the impact limiter retention system, and (4) examine the crush behavior of the limiters. This paper describes both test results in terms of measured deceleration, post test deformation measurements, and the general structural response of the system
Testing program for burning plasma experiment vacuum vessel bolted joint
International Nuclear Information System (INIS)
Hsueh, P.K.; Khan, M.Z.; Swanson, J.; Feng, T.; Dinkevich, S.; Warren, J.
1992-01-01
As presently designed, the Burning Plasma Experiment vacuum vessel will be segmentally fabricated and assembled by bolted joints in the field. Due to geometry constraints, most of the bolted joints have significant eccentricity, which causes the joint behavior to be sensitive to joint clamping forces. Experience indicates that as a result of this eccentricity, the joint will tend to open at the side closest to the applied load, with the extent of the opening being dependent on the initial preload. In this paper, analytical models coupled with a confirmatory testing program are developed to investigate and predict the non-linear behavior of the vacuum vessel bolted joint
Linear programming models and methods of matrix games with payoffs of triangular fuzzy numbers
Li, Deng-Feng
2016-01-01
This book addresses two-person zero-sum finite games in which the payoffs in any situation are expressed with fuzzy numbers. The purpose of this book is to develop a suite of effective and efficient linear programming models and methods for solving matrix games with payoffs in fuzzy numbers. Divided into six chapters, it discusses the concepts of solutions of matrix games with payoffs of intervals, along with their linear programming models and methods. Furthermore, it is directly relevant to the research field of matrix games under uncertain economic management. The book offers a valuable resource for readers involved in theoretical research and practical applications from a range of different fields including game theory, operational research, management science, fuzzy mathematical programming, fuzzy mathematics, industrial engineering, business and social economics.
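For crisp (non-fuzzy) payoffs, the LP behind matrix-game solutions is standard: maximize the guaranteed value v over mixed strategies on the simplex. A sketch with an invented payoff matrix follows; the fuzzy case treated in the book replaces each entry with a triangular fuzzy number and solves related parametric LPs.

```python
import numpy as np
from scipy.optimize import linprog

# Payoff matrix for the row player (invented numbers).
M = np.array([[3.0, -1.0],
              [-2.0, 4.0]])
m, n = M.shape

# Variables (x_1..x_m, v). Maximize v s.t. (M^T x)_j >= v, x on the simplex.
c = np.zeros(m + 1)
c[-1] = -1.0                               # linprog minimizes, so min -v
A_ub = np.hstack([-M.T, np.ones((n, 1))])  # v - (M^T x)_j <= 0
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])   # sum(x) = 1
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[-1]
print(np.round(x, 3), round(v, 3))
```

For this matrix the equalizing strategy is x = (0.6, 0.4) with game value 1, which the LP recovers exactly.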
User's Guide to the Weighted-Multiple-Linear Regression Program (WREG version 1.0)
Eng, Ken; Chen, Yin-Yu; Kiang, Julie E.
2009-01-01
Streamflow is not measured at every location in a stream network. Yet hydrologists, State and local agencies, and the general public still seek to know streamflow characteristics, such as mean annual flow or flood flows with different exceedance probabilities, at ungaged basins. The goals of this guide are to introduce and familiarize the user with the weighted multiple-linear regression (WREG) program, and to also provide the theoretical background for program features. The program is intended to be used to develop a regional estimation equation for streamflow characteristics that can be applied at an ungaged basin, or to improve the corresponding estimate at continuous-record streamflow gages with short records. The regional estimation equation results from a multiple-linear regression that relates the observable basin characteristics, such as drainage area, to streamflow characteristics.
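The weighted regression at the core of a program like WREG can be sketched with the normal equations (X'WX)b = X'Wy. The basin characteristic, weights, and coefficients below are synthetic stand-ins; WREG's actual weighting (e.g. generalized least squares accounting for record length and cross-correlation) is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
drainage = rng.uniform(10, 500, n)                   # basin characteristic
X = np.column_stack([np.ones(n), np.log(drainage)])  # design matrix
beta_true = np.array([0.5, 1.2])
y = X @ beta_true + rng.normal(0, 0.01, n)           # e.g. log annual flow
w = rng.uniform(0.5, 2.0, n)                         # per-site weights

# Weighted least squares via the normal equations: (X'WX) b = X'Wy.
W = np.diag(w)
b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(np.round(b, 2))
```

The fitted equation can then be applied at an ungaged basin by plugging its characteristics into X, which is the estimation workflow the guide describes.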
SLFP: a stochastic linear fractional programming approach for sustainable waste management.
Zhu, H; Huang, G H
2011-12-01
A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk. Copyright © 2011 Elsevier Ltd. All rights reserved.
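The ratio objective in a linear fractional program is typically linearized with the Charnes-Cooper transformation: substituting y = t*x with t = 1/(d@x + beta) turns the fractional program into an ordinary LP. Below is a deterministic sketch with invented coefficients; the stochastic, chance-constrained layer of SLFP is omitted.

```python
import numpy as np
from scipy.optimize import linprog

# Maximize (2*x1 + 3*x2) / (x1 + x2 + 1)  s.t.  x1 + x2 <= 4, x >= 0.
c_num = np.array([2.0, 3.0])       # numerator coefficients
d_den = np.array([1.0, 1.0])       # denominator coefficients
alpha, beta = 0.0, 1.0             # numerator / denominator constants
A, b = np.array([[1.0, 1.0]]), np.array([4.0])

# Charnes-Cooper: with y = t*x and t = 1/(d@x + beta), the ratio is linear.
c = -np.concatenate([c_num, [alpha]])        # minimize -(c_num@y + alpha*t)
A_ub = np.hstack([A, -b[:, None]])           # A@y - b*t <= 0
b_ub = np.zeros(len(b))
A_eq = np.array([np.concatenate([d_den, [beta]])])   # d@y + beta*t = 1
b_eq = np.array([1.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3)
y_opt, t = res.x[:2], res.x[2]
x = y_opt / t                                # recover the original variables
print(np.round(x, 3))
```

Here the optimal ratio 2.4 is attained at x = (0, 4); the same transformation underlies ratio objectives such as the system-efficiency measures in the waste-management model.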
An improved exploratory search technique for pure integer linear programming problems
Fogle, F. R.
1990-01-01
The development is documented of a heuristic method for the solution of pure integer linear programming problems. The procedure draws its methodology from the ideas of Hooke and Jeeves type 1 and 2 exploratory searches, greedy procedures, and neighborhood searches. It uses an efficient rounding method to obtain its first feasible integer point from the optimal continuous solution obtained via the simplex method. Since this method is based entirely on simple addition or subtraction of one to each variable of a point in n-space and the subsequent comparison of candidate solutions to a given set of constraints, it facilitates significant complexity improvements over existing techniques. It also obtains the same optimal solution found by the branch-and-bound technique in 44 of 45 small to moderate size test problems. Two example problems are worked in detail to show the inner workings of the method. Furthermore, using an established weighted scheme for comparing computational effort involved in an algorithm, a comparison of this algorithm is made to the more established and rigorous branch-and-bound method. A computer implementation of the procedure, in PC compatible Pascal, is also presented and discussed.
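The rounding-plus-neighborhood idea can be sketched as follows: solve the continuous relaxation, round down to an integer point, then test +/-1 moves on each coordinate and keep the best feasible neighbor. This is a one-pass simplification of the exploratory search described, on an invented two-variable problem.

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

# max 4x1 + 3x2  s.t.  2x1 + x2 <= 4,  x1 + 3x2 <= 6,  x integer >= 0
c = np.array([-4.0, -3.0])              # linprog minimizes, so negate
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])

rel = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
x0 = np.floor(rel.x)                    # here the floored point is feasible

def feasible(x):
    return np.all(A @ x <= b + 1e-9) and np.all(x >= -1e-9)

# One exploratory pass: examine all +/-1 neighbours of x0 and keep the
# best feasible integer point found (a full search would iterate).
best, best_val = x0, c @ x0
for step in product([-1, 0, 1], repeat=2):
    x = x0 + np.array(step)
    if feasible(x) and c @ x < best_val:
        best, best_val = x, c @ x
print(best, -best_val)
```

The relaxation optimum is (1.2, 1.6); flooring gives (1, 1), and the neighborhood pass moves to the integer optimum (2, 0) with value 8, matching what branch-and-bound would return on this instance.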
A depth-first search algorithm to compute elementary flux modes by linear programming.
Quek, Lake-Ee; Nielsen, Lars K
2014-07-30
The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible, and remains impractical even for moderately sized models. Here, a depth-first search algorithm uses linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints.
Integer Linear Programming for Constrained Multi-Aspect Committee Review Assignment
Karimzadehgan, Maryam; Zhai, ChengXiang
2011-01-01
Automatic review assignment can significantly improve the productivity of many people such as conference organizers, journal editors and grant administrators. A general setup of the review assignment problem involves assigning a set of reviewers on a committee to a set of documents to be reviewed under the constraint of review quota so that the reviewers assigned to a document can collectively cover multiple topic aspects of the document. No previous work has addressed such a setup of committee review assignments while also considering matching multiple aspects of topics and expertise. In this paper, we tackle the problem of committee review assignment with multi-aspect expertise matching by casting it as an integer linear programming problem. The proposed algorithm can naturally accommodate any probabilistic or deterministic method for modeling multiple aspects to automate committee review assignments. Evaluation using a multi-aspect review assignment test set constructed using ACM SIGIR publications shows that the proposed algorithm is effective and efficient for committee review assignments based on multi-aspect expertise matching. PMID:22711970
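A committee review assignment of this kind reduces to a small integer linear program. The sketch below uses invented match scores and `scipy.optimize.milp` (HiGHS) as a stand-in solver, not the paper's formulation or data: assign exactly two reviewers per paper under a quota of at most two papers per reviewer, maximizing total topical match.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical data: match[r, p] = how well reviewer r covers paper p's topic aspects.
match = np.array([[0.9, 0.1],
                  [0.4, 0.8],
                  [0.2, 0.7]])
R, P = match.shape
n = R * P                       # binary variables x[r, p], flattened row-major

# Each paper needs exactly 2 reviewers (sum over r of x[r, p] == 2).
paper_rows = np.zeros((P, n))
for p in range(P):
    paper_rows[p, p::P] = 1.0
# Each reviewer's quota: at most 2 papers (sum over p of x[r, p] <= 2).
quota_rows = np.zeros((R, n))
for r in range(R):
    quota_rows[r, r * P:(r + 1) * P] = 1.0

cons = [LinearConstraint(paper_rows, 2, 2),
        LinearConstraint(quota_rows, 0, 2)]
res = milp(c=-match.ravel(),     # milp minimizes, so negate the match score
           constraints=cons,
           integrality=np.ones(n),
           bounds=Bounds(0, 1))
assignment = res.x.reshape(R, P).round().astype(int)
print(assignment, -res.fun)
```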
Plutonium Immobilization Program cold pour tests
International Nuclear Information System (INIS)
Hovis, G.L.; Stokes, M.W.; Smith, M.E.; Wong, J.W.
1999-01-01
The Plutonium Immobilization Program (PIP) is a joint venture between the Savannah River Site, Lawrence Livermore National Laboratory, Argonne National Laboratory, and Pacific Northwest National Laboratory to carry out the disposition of excess weapons-grade plutonium. This program uses the can-in-canister (CIC) approach. CIC involves encapsulating plutonium in ceramic forms (or pucks), placing the pucks in sealed stainless steel cans, placing the cans in long cylindrical magazines, latching the magazines to racks inside Defense Waste Processing Facility (DWPF) canisters, and filling the DWPF canisters with high-level waste glass. This process puts the plutonium in a stable form and makes it unattractive for reuse. At present, the DWPF pours glass into empty canisters. In the CIC approach, the addition of a stainless steel rack, magazines, cans, and ceramic pucks to the canisters introduces a new set of design and operational challenges: All of the hardware installed in the canisters must maintain structural integrity at elevated (molten-glass) temperatures. This suggests that a robust design is needed. However, the amount of material added to the DWPF canister must be minimized to prevent premature glass cooling and excessive voiding caused by a large internal thermal mass. High metal temperatures, minimized thermal mass, and glass flow paths are examples of the technical considerations in the equipment design process. To determine the effectiveness of the design in terms of structural integrity and glass-flow characteristics, full-scale testing will be conducted. A cold (nonradioactive) pour test program is planned to assist in the development and verification of a baseline design for the immobilization canister to be used in the PIP process. The baseline design resulting from the cold pour test program and CIC equipment development program will provide input to Title 1 design for second-stage immobilization. The cold pour tests will be conducted in two
Nevada Test Site Radiation Protection Program
Energy Technology Data Exchange (ETDEWEB)
Radiological Control Managers' Council, Nevada Test Site
2007-08-09
Title 10 Code of Federal Regulations (CFR) 835, 'Occupational Radiation Protection', establishes radiation protection standards, limits, and program requirements for protecting individuals from ionizing radiation resulting from the conduct of U.S. Department of Energy (DOE) activities. 10 CFR 835.101(a) mandates that DOE activities be conducted in compliance with a documented Radiation Protection Program (RPP) as approved by DOE. This document promulgates the RPP for the Nevada Test Site (NTS), related (onsite or offsite) DOE National Nuclear Security Administration Nevada Site Office (NNSA/NSO) operations, and environmental restoration offsite projects.
RF power source for the compact linear collider test facility (CTF3)
McMonagle, G; Brown, Peter; Carron, G; Hanni, R; Mourier, J; Rossat, G; Syratchev, I V; Tanner, L; Thorndahl, L
2004-01-01
The CERN CTF3 facility will test and demonstrate many vital components of CLIC (Compact Linear Collider). This paper describes the pulsed RF power source at 2998.55 MHz for the drive-beam accelerator (DBA), which produces a beam with an energy of 150 MeV and a current of 3.5 Amps. Where possible, existing equipment from the LEP preinjector, especially the modulators and klystrons, is being used and upgraded to achieve this goal. A high power RF pulse compression system is used at the output of each klystron, which requires sophisticated RF phase programming on the low level side to achieve the required RF pulse. In addition to the 3 GHz system two pulsed RF sources operating at 1.5 GHz are being built. The first is a wide-band, low power, travelling wave tube (TWT) for the subharmonic buncher (SHB) system that produces a train of "phase coded" subpulses as part of the injector scheme. The second is a high power narrow band system to produce 20 MW RF power to the 1.5 GHz RF deflectors in the delay loop situate...
The high level vibration test program
International Nuclear Information System (INIS)
Hofmayer, C.H.; Curreri, J.R.; Park, Y.J.; Kato, W.Y.; Kawakami, S.
1989-01-01
As part of cooperative agreements between the US and Japan, tests have been performed on the seismic vibration table at the Tadotsu Engineering Laboratory of Nuclear Power Engineering Test Center (NUPEC) in Japan. The objective of the test program was to use the NUPEC vibration table to drive large diameter nuclear power piping to substantial plastic strain with an earthquake excitation and to compare the results with state-of-the-art analysis of the problem. The test model was subjected to a maximum acceleration well beyond what nuclear power plants are designed to withstand. A modified earthquake excitation was applied and the excitation level was increased carefully to minimize the cumulative fatigue damage due to the intermediate level excitations. Since the piping was pressurized, and the high level earthquake excitation was repeated several times, it was possible to investigate the effects of ratchetting and fatigue as well. Elastic and inelastic seismic response behavior of the test model was measured in a number of test runs with an increasing excitation input level up to the limit of the vibration table. In the maximum input condition, large dynamic plastic strains were obtained in the piping. Crack initiation was detected following the second maximum excitation run. Crack growth was carefully monitored during the next two additional maximum excitation runs. The final test resulted in a maximum crack depth of approximately 94% of the wall thickness. The HLVT (high level vibration test) program has enhanced understanding of the behavior of piping systems under severe earthquake loading. As in other tests to failure of piping components, it has demonstrated significant seismic margin in nuclear power plant piping
American Society for Testing and Materials. Philadelphia
2005-01-01
1.1 This test method determines the degree of linearity of a photovoltaic device parameter with respect to a test parameter, for example, short-circuit current with respect to irradiance. 1.2 The linearity determined by this test method applies only at the time of testing, and implies no past or future performance level. 1.3 This test method applies only to non-concentrator terrestrial photovoltaic devices. 1.4 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.5 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
The Advanced Test Reactor Strategic Evaluation Program
International Nuclear Information System (INIS)
Buescher, B.J.
1990-01-01
A systematic evaluation of safety, environmental, and operational issues has been initiated at the Advanced Test Reactor (ATR). This program, the Strategic Evaluation Program (STEP), provides an integrated review of safety and operational issues against the standards applied to licensed commercial facilities. In the review of safety issues, 18 deviations were identified which required prompt attention. Resolution of these items has been accelerated in the program. An integrated living schedule is being developed to address the remaining findings. A risk evaluation is being performed on the proposed corrective actions and these actions will then be formally ranked in order of priority based on considerations of safety and operational significance. Once the final ranking is completed, an integrated schedule will be developed, which will include considerations of availability of funding and operating schedule. 3 refs., 2 figs
Testing the existence of optical linear polarization in young brown dwarfs
Manjavacas, E.; Miles-Páez, P. A.; Zapatero-Osorio, M. R.; Goldman, B.; Buenzli, E.; Henning, T.; Pallé, E.; Fang, M.
2017-07-01
Linear polarization can be used as a probe of the existence of atmospheric condensates in ultracool dwarfs. Models predict that the observed linear polarization increases with the degree of oblateness, which is inversely proportional to the surface gravity. We aimed to test the existence of optical linear polarization in a sample of bright young brown dwarfs, with spectral types between M6 and L2, observable from the Calar Alto Observatory, and cataloged previously as low gravity objects using spectroscopy. Linear polarimetric images were collected in the I and R bands using CAFOS at the 2.2-m telescope of the Calar Alto Observatory (Spain). The flux ratio method was employed to determine the linear polarization degrees. With a confidence of 3σ, our data indicate that all targets have a linear polarization degree on average below 0.69 per cent in the I band, and below 1.0 per cent in the R band, at the time they were observed. We detected significant (i.e., P/σ ≥ 3) linear polarization for the young M6 dwarf 2MASS J04221413+1530525 in the R band, with a degree of p* = 0.81 ± 0.17 per cent.
Stability of multi-objective bi-level linear programming problems under fuzziness
Directory of Open Access Journals (Sweden)
Abo-Sinna Mahmoud A.
2013-01-01
This paper deals with multi-objective bi-level linear programming problems under a fuzzy environment. In the proposed method, tentative solutions are obtained and evaluated using the partial information on the preferences of the decision-makers at each level. The existing results concerning the qualitative analysis of some basic notions in parametric linear programming problems are reformulated to study the stability of multi-objective bi-level linear programming problems. An algorithm for obtaining any subset of the parametric space, which has the same corresponding Pareto optimal solution, is presented. Also, this paper establishes a model for the supply-demand interaction in the age of electronic commerce (EC). First of all, the study uses the individual objectives of both parties as the foundation of the supply-demand interaction. Subsequently, it divides the interaction, in the age of electronic commerce, into the following two classifications: (i) market transactions, with the primary focus on the supply-demand relationship in the marketplace; and (ii) information service, with the primary focus on the provider and the user of the information service. By applying the bi-level programming technique to the interaction process, the study develops an analytical process to explain how supply-demand interaction achieves a compromise or why the process fails. Finally, a numerical example of an information service is provided for the sake of illustration.
NRC Confirmatory Testing Program for SBWR
International Nuclear Information System (INIS)
Han, J.T.; Bessette, D.E.; Shotkin, L.M.
1994-01-01
The objective of the NRC Confirmatory Testing Program for SBWR is to provide integral data for code assessment, which reasonably reproduce the important phenomena and processes expected in the SBWR under various loss-of-coolant accident (LOCA) and transient conditions. To achieve this objective, the Program consists of four coupled elements: (1) to design and construct an integral, carefully-scaled SBWR test facility at Purdue Univ., (2) to provide pre-construction RELAP5/CONTAIN predictions of the facility design, (3) to provide confirmatory data for code assessment, and (4) to assess the RELAP5/CONTAIN code with data. A description of the "preliminary design" of the Purdue test facility and test matrix is presented. The facility is scheduled to be built by December 1994. Approximately 50 tests will be performed from April 1995 through April 1996 and documented by interim data reports. A final and complete data report is scheduled to be published by July 31, 1996
Tharrey, Marion; Olaya, Gilma A; Fewtrell, Mary; Ferguson, Elaine
2017-12-01
The aim of the study was to use linear programming (LP) analyses to adapt New Complementary Feeding Guidelines (NCFg) designed for infants aged 6 to 12 months living in poor socioeconomic circumstances in Bogota to ensure dietary adequacy for young children aged 12 to 23 months. A secondary data analysis was performed using dietary and anthropometric data collected from 12-month-old infants (n = 72) participating in a randomized controlled trial. LP analyses were performed to identify nutrients whose requirements were difficult to achieve using local foods as consumed; and to test and compare the NCFg and alternative food-based recommendations (FBRs) on the basis of dietary adequacy, for 11 micronutrients, at the population level. Thiamine recommended nutrient intakes for these young children could not be achieved given local foods as consumed. NCFg focusing only on meat, fruits, vegetables, and breast milk ensured dietary adequacy at the population level for only 4 micronutrients, increasing to 8 of 11 modelled micronutrients when the FBRs promoted legumes, dairy, vitamin A-rich vegetables, and chicken giblets. None of the FBRs tested ensured population-level dietary adequacy for thiamine, niacin, and iron unless a fortified infant food was recommended. The present study demonstrated the value of using LP to adapt NCFg for a different age group than the one for which they were designed. Our analyses suggest that to ensure dietary adequacy for 12- to 23-month-olds, these adaptations should include legumes, dairy products, vitamin A-rich vegetables, organ meat, and a fortified food.
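LP analyses of this kind follow the classic diet-problem pattern: minimize cost subject to nutrient lower bounds. A minimal sketch with invented foods, prices, and requirements (not the study's Bogota data), using `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Toy diet problem in the spirit of the LP analyses described above
# (made-up foods, costs, and nutrient contents; NOT the study's data).
foods = ["legumes", "dairy", "vegetables", "fortified cereal"]
cost = np.array([0.30, 0.50, 0.40, 0.70])          # cost per serving
# Rows: iron (mg), thiamine (mg) per serving; columns follow `foods`.
nutrients = np.array([[2.0, 0.1, 1.0, 4.0],
                      [0.2, 0.1, 0.1, 0.5]])
need = np.array([6.0, 0.6])                        # daily requirements

# linprog handles A_ub x <= b_ub, so flip signs to express "nutrients >= need".
res = linprog(cost, A_ub=-nutrients, b_ub=-need,
              bounds=[(0, 3)] * len(foods))        # at most 3 servings of each food
print(res.x, res.fun)
```

Here the cheapest adequate diet is three servings of legumes, which binds both nutrient constraints exactly; real analyses add palatability and serving-frequency constraints on top of this skeleton.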
Design of Linear Control System for Wind Turbine Blade Fatigue Testing
DEFF Research Database (Denmark)
Toft, Anders; Roe-Poulsen, Bjarke Nørskov; Christiansen, Rasmus
2016-01-01
This paper proposes a linear method for wind turbine blade fatigue testing at Siemens Wind Power. The setup consists of a blade, an actuator (motor and load mass) that acts on the blade with a sinusoidal moment, and a distribution of strain gauges to measure the blade flexure. Based...... difficult to control. To make a linear controller, a different approach has been chosen, namely making a controller which is not regulating on the input frequency, but on the input amplitude. A non-linear mechanical model for the blade and the motor has been constructed. This model has been simplified based...... on the desired output, namely the amplitude of the blade. Furthermore, the model has been linearised to make it suitable for linear analysis and control design methods. The controller is designed based on a simplified and linearised model, and its gain parameter determined using pole placement. The model
Polzin, K. A.; Pearson, J. B.; Webster, K.; Godfoy, T. J.; Bossard, J. A.
2013-01-01
Results of performance testing of an annular linear induction pump that has been designed for integration into a fission surface power technology demonstration unit are presented. The pump electromagnetically pushes liquid metal (NaK) through a specially-designed apparatus that permits quantification of pump performance over a range of operating conditions. Testing was conducted for frequencies of 40, 55, and 70 Hz, liquid metal temperatures of 125, 325, and 525 C, and input voltages from 30 to 120 V. Pump performance spanned a range of flow rates from roughly 0.3 to 3.1 L/s (4.8 to 49 gpm), and pressure heads of <1 to 104 kPa (<0.15 to 15 psi). The maximum efficiency measured during testing was 5.4%. At the technology demonstration unit operating temperature of 525 C the pump operated over a narrower envelope, with flow rates from 0.3 to 2.75 L/s (4.8 to 43.6 gpm), developed pressure heads from <1 to 55 kPa (<0.15 to 8 psi), and a maximum efficiency of 3.5%. The pump was supplied with three-phase power at 40 and 55 Hz using a variable-frequency motor drive, while power at 55 and 70 Hz was supplied using a variable-frequency power supply. Measured performance of the pump at 55 Hz using either supply exhibited good quantitative agreement. For a given temperature, the peak in efficiency occurred at different flow rates as the frequency was changed, but the maximum value of efficiency was relatively insensitive, varying within 0.3% over the frequency range tested, including a scan from 45 to 78 Hz. The objectives of the FSP technology project are as follows: • Develop FSP concepts that meet expected surface power requirements at reasonable cost with added benefits over other options. • Establish a nonnuclear hardware-based technical foundation for FSP design concepts to reduce overall development risk. • Reduce the cost uncertainties for FSP and establish greater credibility for flight system cost estimates. • Generate the key nonnuclear products to allow Agency
Linear programming to build food-based dietary guidelines: Romanian food baskets
DEFF Research Database (Denmark)
Parlesak, Alexandr; Robertson, Aileen; Hondru, Gabriela
approach using linear programming methodology to design national dietary recommendations which aim to prevent both NCDs and micronutrient deficiencies and still be affordable by low income groups. This new approach is applied within the context of food availability in Romania in 2014. Eating the same food...... every day is unrealistic and too monotonous to be maintained, so this novel approach is used to select a wide range of diverse foods that can be recommended for a period of up to, for example, one month. The following are the key findings of this report. • The simplest version of the Romanian food.......65 lei (~€ 4.46) for a day. • Key nutrients, primarily vitamin D, calcium, potassium and iron, were found to control the overall price. • The least expensive basket (one day’s rations) is monotonous and the linear programming approach is used to select a wide range of foods that can be recommended...
International Nuclear Information System (INIS)
Shimizu, Yoshiaki
1981-01-01
A mathematical procedure is proposed for comprehensive radioactive waste management planning. Since such planning is relevant to several different management goals, decision making has to be formulated as a multiobjective optimization problem. A mathematical programming method was introduced to make decisions in an interactive manner, which enables us to assess the preferences of the decision maker step by step among the conflicting objectives. The reference system taken as an example is the radioactive waste management system at the Research Reactor Institute of Kyoto University (KUR). Its linear model was built based on experience from actual management at KUR. The best-compromise model was then formulated as a multiobjective linear program with the aid of computational analysis through conventional optimization. The numerical results showed that the proposed approach can provide useful information for making an actual management plan. (author)
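The interactive multiobjective step can be illustrated with a weighted-sum scalarization: each choice of preference weight yields one compromise solution of the underlying LP, and stepping the weight mimics the interactive assessment described above. The coefficients below are hypothetical, not the KUR model.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative two-objective LP (hypothetical waste-management trade-off:
# minimize processing cost f1 and stored volume f2 over two treatment routes).
f1 = np.array([1.0, 3.0])      # cost coefficients
f2 = np.array([4.0, 1.0])      # volume coefficients
A_ub = [[-1.0, -1.0]]          # x1 + x2 >= 4 (throughput requirement)
b_ub = [-4.0]

for w in (0.2, 0.5, 0.8):      # interactively varied preference weight
    c = w * f1 + (1 - w) * f2  # weighted-sum scalarization
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 4), (0, 4)])
    print(w, res.x, f1 @ res.x, f2 @ res.x)
```

As the weight on cost grows, the compromise solution jumps from the volume-friendly vertex (0, 4) to the cost-friendly vertex (4, 0), exposing the trade-off to the decision maker.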
Mixed integer linear programming model for dynamic supplier selection problem considering discounts
Directory of Open Access Journals (Sweden)
Adi Wicaksono Purnawan
2018-01-01
Supplier selection is one of the most important elements in supply chain management. This function involves the evaluation of many factors such as material costs, transportation costs, quality, delays, supplier capacity, storage capacity and others. Each of these factors varies with time; therefore, a supplier identified for one period is not necessarily the best supplier of the same product for the next period. A mixed integer linear programming (MILP) model was therefore developed to overcome the dynamic supplier selection problem (DSSP). In this paper, a mixed integer linear programming model is built to solve the lot-sizing problem with multiple suppliers, multiple periods, multiple products and quantity discounts. The buyer has to decide which products will be supplied by which suppliers in which periods, taking discounts into account. The MILP model is validated with randomly generated data and solved using Lingo 16.
Visualizing measurement for 3D smooth density distributions by means of linear programming
International Nuclear Information System (INIS)
Tayama, Norio; Yang, Xue-dong
1994-01-01
This paper is concerned with the theoretical possibility of a new visualizing measurement method based on optimum 3D reconstruction from a few selected projections. A theory of optimum 3D reconstruction by linear programming is discussed, utilizing a few projections of a sampled 3D smooth-density-distribution model which satisfies the condition of the 3D sampling theorem. First, by use of the sampling theorem, it is shown that we can set up simultaneous simple equations which correspond to the case of parallel beams. Then we solve the simultaneous simple equations by means of a linear programming algorithm, obtaining optimum 3D density-distribution images with minimum reconstruction error. The results of computer simulation with the algorithm are presented. (author)
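The reconstruction idea (fit sampled densities to a few projections via linear programming) can be sketched by minimizing the total absolute projection error, which an LP expresses with slack variables. A toy 2x2 example with invented parallel-beam projections, not the paper's 3D model:

```python
import numpy as np
from scipy.optimize import linprog

# A 2x2 "image" observed through parallel-beam style sums: two row sums,
# two column sums, and the main diagonal (toy stand-in for the sampled model).
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 1]], dtype=float)
y = A @ np.array([1.0, 2.0, 3.0, 4.0])   # projections of the true densities

n, m = A.shape[1], A.shape[0]
# Variables: 4 densities x, then 5 slack magnitudes t with |A x - y| <= t.
c = np.concatenate([np.zeros(n), np.ones(m)])   # minimize total absolute error
A_ub = np.block([[A, -np.eye(m)],
                 [-A, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + m))
recon = res.x[:n]
print(recon.reshape(2, 2))
```

Because this toy system of projection equations has a unique nonnegative solution, the LP drives the error to zero and recovers the densities exactly; with fewer projections the same LP returns a minimum-error compromise.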
Ren, Jingzheng; Dong, Liang; Sun, Lu; Goodsite, Michael Evan; Tan, Shiyu; Dong, Lichun
2015-01-01
The aim of this work was to develop a model for optimizing the life cycle cost of biofuel supply chain under uncertainties. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the price of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear programming was developed, and a method for solving interval linear programming was presented. An illustrative case was studied by the proposed model, and the results showed that the proposed model is feasible for designing biofuel supply chain under uncertainties. Copyright © 2015 Elsevier Ltd. All rights reserved.
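One simple way to read an interval linear program, as in the model above, is that solving the LP at the endpoints of an interval coefficient brackets the optimal cost. A hypothetical two-zone sketch (invented numbers, not the paper's supply chain model):

```python
from scipy.optimize import linprog

# Hypothetical mini supply chain: ship grain from 2 zones to 1 plant.
# min c1*x1 + c2*x2  s.t.  x1 + x2 >= 10 (demand), x1 <= 6, x2 <= 8.
A_ub = [[-1.0, -1.0]]           # -(x1 + x2) <= -10 encodes the demand
b_ub = [-10.0]

def solve(c):
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 6), (0, 8)])
    return res.fun

# Transport cost from zone 1 is the interval [2, 3]; zone 2 costs a fixed 2.5.
lo, hi = solve([2.0, 2.5]), solve([3.0, 2.5])
print(lo, hi)    # the realized optimal cost lies in the interval [lo, hi]
```

Since the optimal value is nondecreasing in the cost of a nonnegative variable, the two endpoint solves bound the cost for any realization inside the interval; full interval LP methods generalize this to intervals in the constraints as well.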
International Nuclear Information System (INIS)
Romeijn, H Edwin; Ahuja, Ravindra K; Dempsey, James F; Kumar, Arvind; Li, Jonathan G
2003-01-01
We present a novel linear programming (LP) based approach for efficiently solving the intensity modulated radiation therapy (IMRT) fluence-map optimization (FMO) problem to global optimality. Our model overcomes the apparent limitations of a linear-programming approach by approximating any convex objective function by a piecewise linear convex function. This approach allows us to retain the flexibility offered by general convex objective functions, while allowing us to formulate the FMO problem as an LP problem. In addition, a novel type of partial-volume constraint that bounds the tail averages of the differential dose-volume histograms of structures is imposed while retaining linearity, as an alternative approach to improve dose homogeneity in the target volumes and to attempt to spare as many critical structures as possible. The goal of this work is to develop a very rapid global optimization approach that finds high quality dose distributions. Implementation of this model has demonstrated excellent results. We found globally optimal solutions for eight 7-beam head-and-neck cases in less than 3 min of computational time on a single processor personal computer without the use of partial-volume constraints. Adding such constraints increased the running times by a factor of 2-3, but improved the sparing of critical structures. All cases demonstrated excellent target coverage (>95%), target homogeneity (<10% overdosing and <7% underdosing) and organ sparing using at least one of the two models.
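The piecewise-linear approximation the authors describe is the standard epigraph trick: a convex penalty is replaced by the maximum of a family of tangent lines, which an LP can minimize. An illustrative one-variable sketch (hypothetical numbers, not the clinical model):

```python
import numpy as np
from scipy.optimize import linprog

# Epigraph trick for LP-based convex minimization (illustrative, one variable):
# replace a convex penalty f(d) = d**2 by the max of its tangent lines, so
# "minimize f(d)" becomes "minimize z with z >= tangent_k(d) for all k".
pts = np.linspace(0, 10, 21)              # supporting points for the tangents
slopes = 2 * pts                          # f'(d) = 2 d
intercepts = pts**2 - slopes * pts        # tangent: f(p) + f'(p) * (d - p)

# Variables: [d, z]; minimize z subject to d >= 3 (a lower bound on d, say).
c = np.array([0.0, 1.0])
A_ub = np.column_stack([slopes, -np.ones_like(slopes)])   # slope*d - z <= -intercept
b_ub = -intercepts
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(3, 10), (None, None)])
d, z = res.x
print(d, z)
```

At the optimum the LP sits on the tangent touching d = 3, so z equals the true penalty value 9 there; adding more supporting points tightens the piecewise-linear underestimate of the convex objective.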
Directory of Open Access Journals (Sweden)
HERBERT POINSTINGL
2009-06-01
Based on the demand for new verbal reasoning tests to enrich the psychological test inventory, a pilot version of a new test was analysed: the 'Family Relation Reasoning Test' (FRRT; Poinstingl, Kubinger, Skoda & Schechtner, forthcoming), in which several basic cognitive operations (logical rules) have been embedded/implemented. Given family relationships of varying complexity embedded in short stories, testees had to logically conclude the correct relationship between two individuals within a family. Using empirical data, the linear logistic test model (LLTM; Fischer, 1972), a special case of the Rasch model, was used to test the construct validity of the test: the hypothetically assumed basic cognitive operations had to explain the Rasch model's item difficulty parameters. After being shaped into the LLTM's matrix of weights (q_ij), none of these operations were corroborated by means of Andersen's Likelihood Ratio Test.
Efficient Market Hypothesis in South Africa: Evidence from Linear and Nonlinear Unit Root Tests
Directory of Open Access Journals (Sweden)
Andrew Phiri
2015-12-01
This study investigates the weak form efficient market hypothesis (EMH) for five generalized stock indices in the Johannesburg Stock Exchange (JSE) using weekly data collected from 31st January 2000 to 16th December 2014. In particular, we test for weak form market efficiency using a battery of linear and nonlinear unit root testing procedures comprising the classical augmented Dickey-Fuller (ADF) tests, the two-regime threshold autoregressive (TAR) unit root tests described in Enders and Granger (1998) as well as the three-regime unit root tests described in Bec, Salem, and Carrasco (2004). Based on our empirical analysis, we are able to demonstrate that whilst the linear unit root tests advocate for unit roots within the time series, the nonlinear unit root tests suggest that most stock indices are threshold stationary processes. These results bridge two opposing contentions obtained from previous studies by concluding that under a linear framework the JSE stock indices offer support in favour of weak form market efficiency, whereas when nonlinearity is accounted for, a majority of the indices violate the weak form EMH.
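The ADF test at the core of this battery reduces to regressing the differenced series on its lagged level and comparing the t-statistic of the lag coefficient against Dickey-Fuller critical values. A simplified, numpy-only sketch (no lagged-difference terms, and simulated data rather than the JSE indices):

```python
import numpy as np

def df_tstat(y):
    """t-statistic of beta in: diff(y)_t = alpha + beta * y_{t-1} + e_t.
    A simplified Dickey-Fuller regression (no lagged differences)."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ coef
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return coef[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(0)
e = rng.standard_normal(2000)
random_walk = np.cumsum(e)            # unit root: expect a t-stat near zero
ar1 = np.empty(2000)                  # stationary AR(1): expect a large negative t-stat
ar1[0] = 0.0
for t in range(1, 2000):
    ar1[t] = 0.5 * ar1[t - 1] + e[t]

print(df_tstat(random_walk), df_tstat(ar1))
# Compare each statistic with the 5% Dickey-Fuller critical value of about -2.86.
```

A statistic well below roughly -2.86 rejects the unit root (stationarity, as for the AR(1) series); the random walk typically fails to reject, which is the pattern the linear tests report for the JSE series.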
Directory of Open Access Journals (Sweden)
Weihua Jin
2013-01-01
This paper proposes a genetic-algorithms-based approach as an all-purpose problem-solving method for operation programming problems under uncertainty. The proposed method was applied for the management of a municipal solid waste treatment system. Compared to the traditional interactive binary analysis, this approach has fewer limitations and is able to reduce the complexity in solving the inexact linear programming problems and inexact quadratic programming problems. The implementation of this approach was performed using the Genetic Algorithm Solver of MATLAB (trademark of MathWorks). The paper explains the genetic-algorithms-based method and presents details on the computation procedures for each type of inexact operation programming problem. A comparison of the results generated by the proposed method based on genetic algorithms with those produced by the traditional interactive binary analysis method is also presented.
Significance tests for the wavelet cross spectrum and wavelet linear coherence
Directory of Open Access Journals (Sweden)
Z. Ge
2008-12-01
This work attempts to develop significance tests for the wavelet cross spectrum and the wavelet linear coherence as a follow-up study to Ge (2007). Conventional approaches used by Torrence and Compo (1998), based on stationary background noise time series, were used here to estimate the sampling distributions of the wavelet cross spectrum and the wavelet linear coherence. The sampling distributions are then used for establishing significance levels for these two wavelet-based quantities. In addition to these two wavelet quantities, properties of the phase angle of the wavelet cross spectrum of, or the phase difference between, two Gaussian white noise series are discussed. It is found that the tangent of the principal part of the phase angle approximately has a standard Cauchy distribution and the phase angle is uniformly distributed, which makes it impossible to establish significance levels for the phase angle. The simulated signals clearly show that, when there is no linear relation between the two analysed signals, the phase angle disperses into the entire range of [−π,π] with fairly high probabilities for values close to ±π to occur. Conversely, when linear relations are present, the phase angle of the wavelet cross spectrum settles around an associated value with considerably reduced fluctuations. When two signals are linearly coupled, their wavelet linear coherence will attain values close to one. The significance test of the wavelet linear coherence can therefore be used to complement the inspection of the phase angle of the wavelet cross spectrum. The developed significance tests are also applied to actual data sets, simultaneously recorded wind speed and wave elevation series measured from a NOAA buoy on Lake Michigan. Significance levels of the wavelet cross spectrum and the wavelet linear coherence between the winds and the waves reasonably separated meaningful peaks from those generated by randomness in the data set. As
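The uniform-phase result for independent noise series is easy to reproduce numerically. The sketch below uses the Fourier cross spectrum as a simple stand-in for the wavelet cross spectrum (an analogy, not the paper's wavelet machinery), with simulated Gaussian white noise:

```python
import numpy as np

# Monte-Carlo illustration of the phase-angle result described above: for two
# independent white noise series, the cross-spectrum phase angle is
# approximately uniform on [-pi, pi].
rng = np.random.default_rng(1)
phases = []
for _ in range(200):
    x = rng.standard_normal(256)
    y = rng.standard_normal(256)
    cross = np.fft.rfft(x) * np.conj(np.fft.rfft(y))   # cross spectrum
    phases.append(np.angle(cross[1:-1]))               # drop DC and Nyquist bins
phases = np.concatenate(phases)

# Uniform on [-pi, pi] implies mean ~ 0 and variance ~ pi**2 / 3.
print(phases.mean(), phases.var(), np.pi**2 / 3)
```

The dispersion of the phase over the full range, including values near ±π, is exactly why no useful significance level can be attached to the phase angle alone; coupling the two series would instead concentrate the phase around a fixed value.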
DEFF Research Database (Denmark)
Cetin, Bilge Kartal; Prasad, Neeli R.; Prasad, Ramjee
2011-01-01
In wireless sensor networks, one of the key challenge is to achieve minimum energy consumption in order to maximize network lifetime. In fact, lifetime depends on many parameters: the topology of the sensor network, the data aggregation regime in the network, the channel access schemes, the routing...... protocols, and the energy model for transmission. In this paper, we tackle the routing challenge for maximum lifetime of the sensor network. We introduce a novel linear programming approach to the maximum lifetime routing problem. To the best of our knowledge, this is the first mathematical programming...
Mixed Integer Linear Programming model for Crude Palm Oil Supply Chain Planning
Sembiring, Pasukat; Mawengkang, Herman; Sadyadharma, Hendaru; Bu'ulolo, F.; Fajriana
2018-01-01
The production process of crude palm oil (CPO) can be defined as the milling of raw materials, called fresh fruit bunches (FFB), into the end product, palm oil. The process usually runs through a series of steps producing and consuming intermediate products. The CPO milling industry considered in this paper does not have its own oil palm plantation; therefore, the FFB are supplied by several public oil palm plantations. Due to the limited availability of FFB, it is necessary to choose which plantations would be appropriate. This paper proposes a mixed integer linear programming model for the integrated supply chain problem, which includes waste processing. The mathematical programming model is solved using a neighborhood search approach.
A new neural network model for solving random interval linear programming problems.
Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza
2017-05-01
This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.
CiOpt: a program for optimization of the frequency response of linear circuits
Miró Sans, Joan Maria; Palà Schönwälder, Pere
1991-01-01
An interactive personal-computer program for optimizing the frequency response of linear lumped circuits (CiOpt) is presented. CiOpt has proved to be an efficient tool in improving designs where the inclusion of more accurate device models distorts the desired frequency response, as well as in device modeling. The outputs of CiOpt are the element values which best match the obtained and the desired frequency response. The optimization algorithms used (the Fletcher-Powell and Newton's methods,...
Roslee Rajikan; Nurul Izza Ahmad Zaidi; Siti Masitah Elias; Suzana Shahar; Zahara Abd Manaf; Noor Aini Md Yusoff
2017-01-01
Differences in socioeconomic profile may influence healthy food choices, particularly among individuals with low socioeconomic status. Thus, energy-dense foods become the preference over foods of high nutritional content due to their cheaper price. The present study aims to develop a healthy and palatable diet at minimum cost based on the Malaysian Dietary Guidelines 2010 and the Recommended Nutrient Intake 2005 via linear programming. A total of 96 female adults from low socioeconomic f...
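The minimum-cost diet is the classic linear programming application. A two-food, two-nutrient sketch with made-up per-100g figures for "rice" and "beans" (not the study's data): since a 2-variable LP attains its optimum at a vertex, enumerating intersections of constraint boundaries replaces a simplex solver here.

```python
from itertools import combinations

# Constraints a*x + b*y >= r, with x = 100g units of rice, y = of beans.
constraints = [(360, 340, 2000),   # kcal:    360x + 340y >= 2000
               (7, 21, 50),        # protein:   7x +  21y >= 50
               (1, 0, 0),          # x >= 0
               (0, 1, 0)]          # y >= 0
cost = (0.10, 0.25)                # price per 100g (hypothetical)

def intersect(c1, c2):
    # Solve the 2x2 system a1*x + b1*y = r1, a2*x + b2*y = r2.
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

# Keep the cheapest feasible vertex.
best = None
for c1, c2 in combinations(constraints, 2):
    pt = intersect(c1, c2)
    if pt and all(a * pt[0] + b * pt[1] >= r - 1e-9 for a, b, r in constraints):
        val = cost[0] * pt[0] + cost[1] * pt[1]
        if best is None or val < best[0]:
            best = (val, pt)

print(round(best[0], 3), [round(v, 2) for v in best[1]])
```

A realistic model like the study's simply scales this up to dozens of foods and the full set of nutrient constraints, where a proper LP solver is required.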
A linear programming approach to characterizing norm bounded uncertainty from experimental data
Scheid, R. E.; Bayard, D. S.; Yam, Y.
1991-01-01
The linear programming spectral overbounding and factorization (LPSOF) algorithm, an algorithm for finding a minimum phase transfer function of specified order whose magnitude tightly overbounds a specified nonparametric function of frequency, is introduced. This method has direct application to transforming nonparametric uncertainty bounds (available from system identification experiments) into parametric representations required for modern robust control design software (i.e., a minimum-phase transfer function multiplied by a norm-bounded perturbation).
A Unique Technique to get Kaprekar Iteration in Linear Programming Problem
Sumathi, P.; Preethy, V.
2018-04-01
This paper explores a frivolous number popularly known as the Kaprekar constant, and Kaprekar numbers. A large number of courses and the different classroom capacities, with differences in study periods, make the assignment between classrooms and courses complicated. An approach to obtaining the minimum and maximum number of iterations needed to reach the Kaprekar constant for four-digit numbers is developed through linear programming techniques.
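The Kaprekar iteration itself is easy to state: sort the digits of a four-digit number descending and ascending, subtract, and repeat until 6174 appears. A direct sketch (the paper's LP formulation of the min/max iteration counts is not reproduced; brute force over all inputs recovers the same bounds):

```python
def kaprekar_steps(n):
    """Iterations for a 4-digit number (digits not all equal, leading
    zeros kept) to reach Kaprekar's constant 6174."""
    steps = 0
    while n != 6174:
        digits = f"{n:04d}"
        hi = int("".join(sorted(digits, reverse=True)))
        lo = int("".join(sorted(digits)))
        n = hi - lo
        steps += 1
    return steps

print(kaprekar_steps(3524))  # 3524 -> 3087 -> 8352 -> 6174, i.e. 3 steps

# Every valid input converges; brute force recovers the min/max counts.
counts = [kaprekar_steps(n) for n in range(1000, 10000)
          if len(set(f"{n:04d}")) > 1]
print(min(counts), max(counts))  # -> 0 7  (0 is 6174 itself)
```

The known worst case is 7 iterations, which is the maximum value the paper's LP approach targets.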
Stress-constrained truss topology optimization problems that can be solved by linear programming
DEFF Research Database (Denmark)
Stolpe, Mathias; Svanberg, Krister
2004-01-01
We consider the problem of simultaneously selecting the material and determining the area of each bar in a truss structure in such a way that the cost of the structure is minimized subject to stress constraints under a single load condition. We show that such problems can be solved by linear...... programming to give the global optimum, and that two different materials are always sufficient in an optimal structure....
Fuzzy solution of the linear programming problem with interval coefficients in the constraints
Dorota Kuchta
2005-01-01
A fuzzy concept of solving the linear programming problem with interval coefficients is proposed. For each optimism level of the decision maker (where the optimism concerns the certainty that no errors have been committed in the estimation of the interval coefficients and the belief that optimistic realisations of the interval coefficients will occur) another interval solution of the problem will be generated and the decision maker will be able to choose the final solution having a complete v...
A Mixed Integer Linear Programming Model for the North Atlantic Aircraft Trajectory Planning
Sbihi , Mohammed; Rodionova , Olga; Delahaye , Daniel; Mongeau , Marcel
2015-01-01
This paper discusses the trajectory planning problem for flights in the North Atlantic oceanic airspace (NAT). We develop a mathematical optimization framework in view of better utilizing available capacity by re-routing aircraft. The model is constructed by discretizing the problem parameters. A mixed integer linear program (MILP) is proposed. Based on the MILP, a heuristic to solve real-size instances is also introduced.
Learning Bayesian network structure: towards the essential graph by integer linear programming tools
Czech Academy of Sciences Publication Activity Database
Studený, Milan; Haws, D.
2014-01-01
Roč. 55, č. 4 (2014), s. 1043-1071 ISSN 0888-613X R&D Projects: GA ČR GA13-20012S Institutional support: RVO:67985556 Keywords : learning Bayesian network structure * integer linear programming * characteristic imset * essential graph Subject RIV: BA - General Mathematics Impact factor: 2.451, year: 2014 http://library.utia.cas.cz/separaty/2014/MTR/studeny-0427002.pdf
Accelerated Leach Test(s) Program. Annual report
International Nuclear Information System (INIS)
Dougherty, D.R.; Fuhrmann, M.; Colombo, P.
1985-09-01
This report summarizes the work performed for the Accelerated Leach Test(s) Program at Brookhaven National Laboratory in Fiscal Year 1985 under the sponsorship of the US Department of Energy's Low-Level Waste Management Program (LLWMP). Programmatic activities were concentrated in three areas, as listed and described in the following paragraphs. (1) A literature survey of reported leaching mechanisms, available mathematical models and factors that affect leaching of LLW forms has been compiled. Mechanisms which have been identified include diffusion, dissolution, ion exchange, corrosion and surface effects. Available mathematical models are based on diffusion as the predominant mechanism. Although numerous factors that affect leaching have been identified, they have been conveniently categorized as factors related to the entire leaching system, to the leachant or to the waste form. A report has been published on the results of this literature survey. (2) A computerized data base of LLW leaching data and mathematical models is being developed. The data are being used for model evaluation by curve fitting and statistical analysis according to standard procedures of statistical quality control. (3) Long-term tests on portland cement, bitumen and vinyl ester-styrene (VES) polymer waste forms are underway which are designed to identify and evaluate factors that accelerate leaching without changing the mechanisms. Results on the effect of temperature on leachability indicate that the leach rates of cement and VES waste forms increase with increasing temperature, whereas the leach rate of bitumen is little affected.
APPLYING ROBUST RANKING METHOD IN TWO PHASE FUZZY OPTIMIZATION LINEAR PROGRAMMING PROBLEMS (FOLPP)
Directory of Open Access Journals (Sweden)
Monalisha Pattnaik
2014-12-01
Background: This paper explores the solutions to fuzzy optimization linear programming problems (FOLPP) where some parameters are fuzzy numbers. In practice, there are many problems in which all decision parameters are fuzzy numbers, and such problems are usually solved by either probabilistic programming or multi-objective programming methods. Methods: In this paper, using the concept of comparison of fuzzy numbers, a very effective method is introduced for solving these problems. This paper extends the linear programming based problem to a fuzzy environment. With the problem assumptions, the optimal solution can still be theoretically obtained using the two-phase simplex based method in the fuzzy environment. The fuzzy decision variables can be initially generated, then solved and improved sequentially using the fuzzy decision approach by introducing a robust ranking technique. Results and conclusions: The model is illustrated with an application, and a post optimal analysis approach is obtained. The proposed procedure was programmed with MATLAB (R2009a) for plotting the four dimensional slice diagram of the application. Finally, a numerical example is presented to illustrate the effectiveness of the theoretical results and to gain additional managerial insights.
International Nuclear Information System (INIS)
Kushwaha, Pratishtha; Jaiswal, Deeksha; Dheera, A.; Upreti, Udita; Chaudhari, Suresh; Kinhikar, Rajesh; Deshpande, Deepak; Shrivastava, Shyam
2016-01-01
Daily quality assurance (QA) for high precision radiotherapy equipment is very important to maintain the mechanical and dosimetric accuracy for patient treatments. Gross deviations in these parameters may have an adverse impact on the delivery of the treatments to patients. We report the results of daily QA tests performed over a period of three months for two Varian linear accelerators and a Tomotherapy machine.
Testing linear growth rate formulas of non-scale endogenous growth models
Ziesemer, Thomas
2017-01-01
Endogenous growth theory has produced formulas for steady-state growth rates of income per capita which are linear in the growth rate of the population. Depending on the details of the models, slopes and intercepts are positive, zero or negative. Empirical tests have taken over the assumption of
Application of range-test in multiple linear regression analysis in ...
African Journals Online (AJOL)
Application of range-test in multiple linear regression analysis in the presence of outliers is studied in this paper. First, the plot of the explanatory variables (i.e. Administration, Social/Commercial, Economic services and Transfer) on the dependent variable (i.e. GDP) was done to identify the statistical trend over the years.
DEFF Research Database (Denmark)
Christensen, Bent Jesper; Kruse, Robinson; Sibbertsen, Philipp
We consider hypothesis testing in a general linear time series regression framework when the possibly fractional order of integration of the error term is unknown. We show that the approach suggested by Vogelsang (1998a) for the case of integer integration does not apply to the case of fractional...
Space and frequency-multiplexed optical linear algebra processor - Fabrication and initial tests
Casasent, D.; Jackson, J.
1986-01-01
A new optical linear algebra processor architecture is described. Space and frequency-multiplexing are used to accommodate bipolar and complex-valued data. A fabricated laboratory version of this processor is described, the electronic support system used is discussed, and initial test data obtained on it are presented.
How My Program Passed the Turing Test
Humphrys, Mark
In 1989, the author put an ELIZA-like chatbot on the Internet. The conversations this program had can be seen - depending on how one defines the rules (and how seriously one takes the idea of the test itself) - as a passing of the Turing Test. This is the first time this event has been properly written up. This chatbot succeeded due to profanity, relentless aggression, prurient queries about the user, and implying that they were a liar when they responded. The element of surprise was also crucial. Most chatbots exist in an environment where people expect to find some bots among the humans. Not this one. What was also novel was the online element. This was certainly one of the first AI programs online. It seems to have been the first (a) AI real-time chat program, which (b) had the element of surprise, and (c) was on the Internet. We conclude with some speculation that the future of all of AI is on the Internet, and a description of the "World-Wide-Mind" project that aims to bring this about.
The Linear Programming to evaluate the performance of Oral Health in Primary Care.
Colussi, Claudia Flemming; Calvo, Maria Cristina Marino; Freitas, Sergio Fernando Torres de
2013-01-01
To show the use of Linear Programming to evaluate the performance of Oral Health in Primary Care. This study used data from 19 municipalities of the state of Santa Catarina that participated in the state evaluation in 2009 and have more than 50,000 inhabitants. A total of 40 indicators were evaluated, calculated using Microsoft Excel 2007, and converted to the interval [0, 1] in ascending order (one indicating the best situation and zero indicating the worst situation). Applying the Linear Programming technique, municipalities were assessed and compared according to a performance curve named the "quality estimated frontier". Municipalities included in the frontier were classified as excellent. Indicators were aggregated into synthetic indicators. The majority of municipalities not included in the quality frontier (values different from 1.0) had values lower than 0.5, indicating poor performance. The model applied to the municipalities of Santa Catarina assessed municipal management and local priorities rather than goals imposed by pre-defined parameters. In the final analysis, three municipalities were included in the "perceived quality frontier". The Linear Programming technique made it possible to identify gaps that must be addressed by city managers to enhance the actions taken. It also made it possible to observe each municipality's performance and to compare results among similar municipalities.
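The "quality estimated frontier" in this study is a DEA-style construction solved by linear programming. As a much simpler stand-in, a Pareto-dominance check with hypothetical normalized scores illustrates the frontier idea: a municipality stays on the frontier when no other is at least as good on every indicator and strictly better on one.

```python
# Hypothetical indicators in [0, 1] (1 = best), one tuple per municipality.
scores = {"M1": (0.9, 0.8, 1.0), "M2": (0.6, 0.4, 0.5),
          "M3": (1.0, 0.7, 0.6), "M4": (0.5, 0.4, 0.4)}

def dominated(a, b):
    # b dominates a: no worse everywhere, strictly better somewhere.
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

frontier = [m for m, s in scores.items()
            if not any(dominated(s, t) for n, t in scores.items() if n != m)]
print(sorted(frontier))  # -> ['M1', 'M3']
```

The DEA linear programs used in the paper go further by also scoring how far each dominated unit sits from the frontier, which is what allows the gap analysis described above.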
An Improved Method for Solving Multiobjective Integer Linear Fractional Programming Problem
Directory of Open Access Journals (Sweden)
Meriem Ait Mehdi
2014-01-01
We describe an improvement of Chergui and Moulaï's method (2008) that generates the whole efficient set of a multiobjective integer linear fractional program based on the branch and cut concept. The general step of this method consists in optimizing (maximizing, without loss of generality) one of the fractional objective functions over a subset of the original continuous feasible set; then, if necessary, a branching process is carried out until an integer feasible solution is obtained. At this stage, an efficient cut is built from the criteria's growth directions in order to discard a part of the feasible domain containing only nonefficient solutions. Our contribution concerns firstly the optimization process, where a linear program that we define later is solved at each step rather than a fractional linear program. Secondly, local ideal and nadir points are used as bounds to prune branches leading to nonefficient solutions. The computational experiments show that the new method outperforms the old one in all the treated instances.
Schipper, R.A.; Stoorvogel, J.J.; Jansen, D.M.
1995-01-01
The paper deals with linear programming as a tool for land use analysis at the sub-regional level. A linear programming model of a case study area, the Neguev settlement in the Atlantic zone of Costa Rica, is presented. The matrix of the model includes five submatrices each encompassing a different
Implementation of a quality control program for a 6 MeV linear photon accelerator
International Nuclear Information System (INIS)
Berdaky, Mafalda F.; Caldas, Linda V.E.
2001-01-01
This paper describes the operational characteristics of the final process of implementation of a quality control program using routine mechanical and radiation tests. The quality control program was performed during 35 months and demonstrated the excellent stability of this accelerator. (author)
International Nuclear Information System (INIS)
Sergienko, I.V.; Golodnikov, A.N.
1984-01-01
This article applies the methods of decompositions, which are used to solve continuous linear problems, to integer and partially integer problems. The fall-vector method is used to solve the obtained coordinate problems. An algorithm of the fall-vector is described. The Kornai-Liptak decomposition principle is used to reduce the integer linear programming problem to integer linear programming problems of a smaller dimension and to a discrete coordinate problem with simple constraints
Directory of Open Access Journals (Sweden)
Selcuk Gumus
2016-12-01
Farm tractor skidding is one of the common methods of timber extraction in Turkey. However, the absence of an optimal skidding plan covering the entire production area can result in time loss and negative environmental impacts. In this study, the timber extraction by farm tractors was analyzed, and a new skid trail pattern design was developed using Linear Programming (LP) and Geographical Information Systems (GIS). First, a sample skidding operation was evaluated with a time study, and an optimum skidding model was generated with LP. Then, the new skidding pattern was developed by an optimum skidding model and GIS analysis. At the end of the study, the developed new skid trail pattern was implemented in the study area and tested by running a time study. Using the newly developed "Direct Skid Trail Pattern (DSTP)" model, a 16.84% increase in working time performance was observed when the products were extracted by farm tractors compared to the existing practices. On the other hand, the average soil compaction value measured in the study area at depths of 0–5 cm and 5–10 cm was found to be greater in the sample area skid trails than in the control points. The average density of the skid trails was 281 m/ha, while it decreased to 187 m/ha by using the developed pattern. It was also found that 44,829 ton/ha of soil losses were prevented by using the DSTP model; therefore, environmental damages were decreased.
Automatic identification of epileptic seizures from EEG signals using linear programming boosting.
Hassan, Ahnaf Rashik; Subasi, Abdulhamit
2016-11-01
Computerized epileptic seizure detection is essential for expediting epilepsy diagnosis and research and for assisting medical professionals. Moreover, the implementation of an epilepsy monitoring device that has low power and is portable requires a reliable and successful seizure detection scheme. In this work, the problem of automated epilepsy seizure detection using single-channel EEG signals has been addressed. At first, segments of EEG signals are decomposed using a newly proposed signal processing scheme, namely complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN). Six spectral moments are extracted from the CEEMDAN mode functions, and train and test matrices are formed afterward. These matrices are fed into the classifier to identify epileptic seizures from EEG signal segments. In this work, we implement an ensemble learning based machine learning algorithm, namely linear programming boosting (LPBoost), to perform classification. The efficacy of spectral features in the CEEMDAN domain is validated by graphical and statistical analyses. The performance of CEEMDAN is compared to those of its predecessors to further inspect its suitability. The effectiveness and the appropriateness of LPBoost are demonstrated as opposed to the commonly used classification models. Resubstitution and 10-fold cross-validation error analyses confirm the superior algorithm performance of the proposed scheme. The algorithmic performance of our epilepsy seizure identification scheme is also evaluated against state-of-the-art works in the literature. Experimental outcomes manifest that the proposed seizure detection scheme performs better than the existing works in terms of accuracy, sensitivity, specificity, and Cohen's Kappa coefficient. It can be anticipated that owing to its use of only one channel of EEG signal, the proposed method will be suitable for device implementation, eliminate the onus of clinicians for analyzing a large bulk of data manually, and
Ji, Zhiwei; Wang, Bing; Yan, Ke; Dong, Ligang; Meng, Guanmin; Shi, Lei
2017-12-21
In recent years, the integration of 'omics' technologies, high performance computation, and mathematical modeling of biological processes marks that systems biology has started to fundamentally impact the way of approaching drug discovery. The LINCS public data warehouse provides detailed information about cell responses to various genetic and environmental stressors. It can be greatly helpful in developing new drugs and therapeutics, as well as improving situations such as the lack of effective drugs, drug resistance and relapse in cancer therapies, etc. In this study, we developed a Ternary status based Integer Linear Programming (TILP) method to infer cell-specific signaling pathway networks and predict compounds' treatment efficacy. The novelty of our study is that phosphoproteomic data and prior knowledge are combined for modeling and optimizing the signaling network. To test the power of our approach, a generic pathway network was constructed for the human breast cancer cell line MCF7, and the TILP model was used to infer MCF7-specific pathways with a set of phosphoproteomic data collected from ten representative small molecule chemical compounds (most of them studied in breast cancer treatment). Cross-validation indicated that the MCF7-specific pathway network inferred by TILP was reliable in predicting a compound's efficacy. Finally, we applied TILP to re-optimize the inferred cell-specific pathways and predict the outcomes of five small compounds (carmustine, doxorubicin, GW-8510, daunorubicin, and verapamil), which were rarely used in clinic for breast cancer. In the simulation, the proposed approach enables us to identify a compound's treatment efficacy qualitatively and quantitatively, and the cross validation analysis indicated good accuracy in predicting the effects of the five compounds. In summary, the TILP model is useful for discovering new drugs for clinic use, and also for elucidating the potential mechanisms of a compound on its targets.
Non-Linear Numerical Modeling and Experimental Testing of a Point Absorber Wave Energy Converter
DEFF Research Database (Denmark)
Zurkinden, Andrew Stephen; Ferri, Francesco; Beatty, S.
2014-01-01
the calculation of the non-linear hydrostatic restoring moment by a cubic polynomial function fit to laboratory test results. Moreover, moments due to viscous drag are evaluated on the oscillating hemisphere considering the horizontal and vertical drag force components. The influence on the motions of this non.......e. H/λ≤0.02. For steep waves, H/λ≥0.04 however, the relative velocities between the body and the waves increase thus requiring inclusion of the non-linear hydrostatic restoring moment to effectively predict the dynamics of the wave energy converter. For operation of the device with a passively damping...
IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS
Fogle, F. R.
1994-01-01
IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
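The integer-adapted Hooke and Jeeves exploratory move described above can be sketched briefly. The objective and constraints here are an assumed toy problem (IESIP itself is Pascal and its exact move rules are not reproduced): start from a rounded feasible point and take unit steps that improve the objective while staying feasible.

```python
def feasible(x):
    # Toy constraints: 2x + 3y <= 12, x + y <= 5, x, y >= 0 (assumed).
    return 2 * x[0] + 3 * x[1] <= 12 and x[0] + x[1] <= 5 and min(x) >= 0

def objective(x):
    # Toy objective to maximize: 3x + 4y (assumed).
    return 3 * x[0] + 4 * x[1]

def exploratory_move(x):
    """Try a unit step in each coordinate, keeping any feasible
    improvement (Hooke-Jeeves exploratory move with step size 1)."""
    x = list(x)
    for i in range(len(x)):
        for step in (1, -1):
            trial = x[:]
            trial[i] += step
            if feasible(trial) and objective(trial) > objective(x):
                x = trial
                break
    return tuple(x)

# Start from a feasible integer point (e.g. a rounded LP solution)
# and repeat until no unit move improves the objective.
x = (1, 1)
while True:
    nxt = exploratory_move(x)
    if nxt == x:
        break
    x = nxt
print(x, objective(x))  # -> (3, 2) 17
```

On this toy instance the search reaches the true integer optimum; in general, such local moves can stall at non-optimal points, which is why IESIP combines them with rounding and neighborhood ideas.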
International Nuclear Information System (INIS)
Knudson, D.L.; Rempe, J.L.; Daw, J.E.
2009-01-01
The United States (U.S.) Department of Energy (DOE) designated the Advanced Test Reactor (ATR) as a National Scientific User Facility (NSUF) in April 2007 to promote nuclear science and technology in the U.S. Given this designation, the ATR is supporting new users from universities, laboratories, and industry as they conduct basic and applied nuclear research and development to advance the nation's energy security needs. A fundamental component of the ATR NSUF program is to develop in-pile instrumentation capable of providing real-time measurements of key parameters during irradiation experiments. Dimensional change is a key parameter that must be monitored during irradiation of new materials being considered for fuel, cladding, and structures in next generation and existing nuclear reactors. Such materials can experience significant changes during high temperature irradiation. Currently, dimensional changes are determined by repeatedly irradiating a specimen for a defined period of time in the ATR and then removing it from the reactor for evaluation. The time and labor to remove, examine, and return irradiated samples for each measurement makes this approach very expensive. In addition, such techniques provide limited data (i.e., only characterizing the end state when samples are removed from the reactor) and may disturb the phenomena of interest. To address these issues, the Idaho National Laboratory (INL) recently initiated efforts to evaluate candidate linear variable displacement transducers (LVDTs) for use during high temperature irradiation experiments in typical ATR test locations. Two nuclear grade LVDT vendor designs were identified for consideration - a smaller diameter design qualified for temperatures up to 350 C and a larger design with capabilities to 500 C. Initial evaluation efforts include collecting calibration data as a function of temperature, long duration testing of LVDT response while held at high temperature, and the assessment of changes
Energy Technology Data Exchange (ETDEWEB)
Ellis, J H; McBean, E A; Farquhar, G J
1985-01-01
A Linear Programming model is presented for development of acid rain abatement strategies in eastern North America. For a system comprised of 235 large controllable point sources and 83 uncontrolled area sources, it determines the least-cost method of reducing SO/sub 2/ emissions to satisfy maximum wet sulfur deposition limits at 20 sensitive receptor locations. In this paper, the purely deterministic model is extended to a probabilistic form by incorporating the effects of meteorologic variability on the long-range pollutant transport processes. These processes are represented by source-receptor-specific transfer coefficients. Experiments for quantifying the spatial variability of transfer coefficients showed their distributions to be approximately lognormal with logarithmic standard deviations consistently about unity. Three methods of incorporating second-moment random variable uncertainty into the deterministic LP framework are described: Two-Stage Programming Under Uncertainty, Chance-Constrained Programming and Stochastic Linear Programming. A composite CCP-SLP model is developed which embodies the two-dimensional characteristics of transfer coefficient uncertainty. Two probabilistic formulations are described involving complete colinearity and complete noncolinearity for the transfer coefficient covariance-correlation structure. The completely colinear and noncolinear formulations are considered extreme bounds in a meteorologic sense and yield abatement strategies of largely didactic value. Such strategies can be characterized as having excessive costs and undesirable deposition results in the completely colinear case and absence of a clearly defined system risk level (other than expected-value) in the noncolinear formulation.
Optimal placement of capacitors in a radial network using conic and mixed integer linear programming
Energy Technology Data Exchange (ETDEWEB)
Jabr, R.A. [Electrical, Computer and Communication Engineering Department, Notre Dame University, P.O. Box: 72, Zouk Mikhael, Zouk Mosbeh (Lebanon)
2008-06-15
This paper considers the problem of optimally placing fixed and switched type capacitors in a radial distribution network. The aim of this problem is to minimize the costs associated with capacitor banks, peak power, and energy losses whilst satisfying a pre-specified set of physical and technical constraints. The proposed solution is obtained using a two-phase approach. In phase-I, the problem is formulated as a conic program in which all nodes are candidates for placement of capacitor banks whose sizes are considered as continuous variables. A global solution of the phase-I problem is obtained using an interior-point based conic programming solver. Phase-II seeks a practical optimal solution by considering capacitor sizes as discrete variables. The problem in this phase is formulated as a mixed integer linear program based on minimizing the L1-norm of deviations from the phase-I state variable values. The solution to the phase-II problem is obtained using a mixed integer linear programming solver. The proposed method is validated via extensive comparisons with previously published results. (author)
Recovery coefficients as a test of system linearity of response in PET
International Nuclear Information System (INIS)
Geworski, L.; Munz, D.L.; Knoop, B.; Hofmann, M.; Knapp, W.H.
2002-01-01
Aim: New imaging protocols have created an increasing demand for quantitation in dedicated PET. Besides attenuation and scatter correction, recovery correction, accounting for the instrument's limited spatial resolution, has gained importance. For clinical practicability these corrections should work independently of the object, i.e. of the actual distribution of emitter and absorber. The aim of the study was to test this object independence, i.e. system linearity of response, by comparing recovery coefficients (RC) determined for different object geometries. In fact, this comparison may serve as a final test of system linearity of response, as measured by the quantitative accuracy with which the activity concentration in small lesions can be recovered. Method: For hot and cold spot imaging situations the spatial distribution of activity is different. Therefore, the scatter correction algorithm has to deal with different scatter distributions. If all factors disturbing system linearity, specifically scatter and attenuation, are corrected to a sufficient degree of accuracy, the system behaves linearly, resulting in the theoretical relationship CSRC = (1 - HSRC). Thus, this equation, applied to hot and cold spot measurements, will serve as a test of the effectiveness of the corrections and, hence, as a test of system linearity of response. Following IEC standard procedures (IEC 61675-1), measurements were done with and without interplane septa (2D/3D) on an ECAT EXACT 922 using a cylindrical phantom containing six spheres of different diameters (10 mm - 40 mm). All data were corrected for attenuation (transmission scan) and scatter (2D: deconvolution, 3D: scatter model), as implemented in the scanner's standard software. Recovery coefficients were determined for cold (CSRC) and hot (HSRC) lesions using both 2D and 3D acquisition modes. Results: CSRC directly measured versus CSRC calculated according to eq. (1) from HSRC resulted in excellent agreement for both 2D and 3D data
Automated design and optimization of flexible booster autopilots via linear programming, volume 1
Hauser, F. D.
1972-01-01
A nonlinear programming technique was developed for the automated design and optimization of autopilots for large flexible launch vehicles. This technique, which resulted in the COEBRA program, uses the iterative application of linear programming. The method deals directly with the three main requirements of booster autopilot design: (1) to provide good response to guidance commands; (2) to respond to external disturbances (e.g. wind) so as to minimize structural bending moment loads and trajectory dispersions; and (3) to provide stability with specified tolerances on the vehicle and flight control system parameters. The method is applicable to very high order systems (30th order and greater per flight condition). Examples are provided that demonstrate the successful application of the algorithm to the design of autopilots for both single and multiple flight conditions.
Directory of Open Access Journals (Sweden)
Hideki Katagiri
2017-10-01
Full Text Available This paper considers linear programming problems (LPPs) where the objective functions involve discrete fuzzy random variables (fuzzy set-valued discrete random variables). New decision making models, which are useful in fuzzy stochastic environments, are proposed based on both possibility theory and probability theory. In multi-objective cases, Pareto optimal solutions of the proposed models are newly defined. Computational algorithms for obtaining the Pareto optimal solutions of the proposed models are provided. It is shown that problems involving discrete fuzzy random variables can be transformed into deterministic nonlinear mathematical programming problems which can be solved through a conventional mathematical programming solver under practically reasonable assumptions. A numerical example of agriculture production problems is given to demonstrate the applicability of the proposed models to real-world problems in fuzzy stochastic environments.
Tests of the linearity assumption in the dose-effect relationship for radiation-induced cancer
International Nuclear Information System (INIS)
Cohen, A.F.; Cohen, B.L.
1980-01-01
The validity of the BEIR linear extrapolation to low doses of the dose-effect relationship for radiation-induced cancer is tested by use of natural radiation, making use of selectivity on type of cancer, smoking habits, sex, age group, geographic area and/or time period. For lung cancer, a linear interpolation between zero dose-zero effect and the data from radon-induced cancers in miners implies that the majority of all lung cancers among non-smokers are due to radon; since lung cancers in miners are mostly small-cell undifferentiated (SCU), a rather rare type in general, linearity overpredicts the frequency of SCU lung cancers among non-smokers by a factor of 10, and among non-smoking females aged 25-44 by a factor of 24. Similarly, linearity predicts that the majority of all lung cancers early in this century were due to radon, even after due consideration is given to cases missed by poor diagnostic efficiency (this matter is considered in some detail). For the 30-40 age range, linearity overpredicts the total lung cancer rate at that time by a factor of 3-6; for SCU lung cancer, the overprediction is by at least a factor of 10. Other causes of lung cancer are considered which further enhance the degree to which the linearity assumption overestimates the effects of low-level radiation. A similar analysis is applied to leukemia induced by natural radiation. It is concluded that the upper limit for this is not higher than estimates from the linearity hypothesis. (author)
Energy Technology Data Exchange (ETDEWEB)
Domingos, Roberto P.; Schirru, Roberto; Martinez, Aquilino S. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia
1997-12-01
This work presents the Genetic Programming paradigm and a nuclear application. Genetic Programming, a field of Artificial Intelligence based on the concepts of species evolution and natural selection, can be understood as a self-programming process in which the computer is the main agent responsible for discovering a program able to solve a given problem. In the present case, the problem was to find a mathematical expression, in symbolic form, for the relation between the equivalent ratio of a fuel cell, the enrichment of the fuel elements, and the multiplication factor. Such an expression would avoid repeated execution of reactor physics codes during core optimization. The results were compared with those obtained by different techniques such as Neural Networks and Linear Multiple Regression. Genetic Programming showed a performance as good as, and in some respects superior to, Neural Networks and Linear Multiple Regression. (author). 10 refs., 8 figs., 1 tab.
Test results for three prototype models of a linear induction launcher
International Nuclear Information System (INIS)
Zabar, Z.; Lu, X.N.; He, J.L.; Birenbaum, L.; Levi, E.; Kuznetsov, S.B.; Nahemow, M.D.
1991-01-01
This paper reports on work on the linear induction launcher (LIL), which started with an analytical study that was followed by computer simulations and then by tests on laboratory models. Two mathematical representations have been developed to describe the launcher. The first, based on the field approach with sinusoidal excitation, has been validated by static tests on a small-scale prototype fed at constant current and variable frequency. The second, a transient representation using computer simulation, allows consideration of energization by means of a capacitor bank and a power conditioner. Tests performed on three small-scale prototypes at muzzle velocities of up to 100 m/s show good agreement with predicted performance.
Estimation of the uncertainties associated with the linearity test of dose calibrators
International Nuclear Information System (INIS)
Sousa, Carlos H.S.; Peixoto, Jose G.P.
2013-01-01
Activimeters determine the activity of radioactive samples and are validated by performance tests. This research determined the expanded uncertainties associated with the linearity test. Three dose calibrators and three 99mTc sources were used for the test, following the protocol recommended by the IAEA, which takes the decay of the radioactive samples into account. The expanded uncertainties evaluated were not correlated with each other, and their analysis assumed a rectangular probability distribution. The results are also presented in graphical form as normalized measured activity versus the conventional true value. (author)
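The decay-based arithmetic behind such a linearity test can be sketched in a few lines. The half-life is the physical constant for 99mTc; the initial activity and the sample reading below are illustrative, not the paper's data:

```python
import math

# Linearity-test reference curve: a decaying 99mTc source follows
# A(t) = A0 * exp(-ln(2) * t / T_half), and each reading is expressed
# as a percentage deviation from that "conventional true" value.
T_HALF_H = 6.0067            # 99mTc half-life in hours
A0 = 62_000.0                # illustrative initial activity, MBq

def expected_activity(t_hours: float) -> float:
    """Theoretical activity after t_hours of decay."""
    return A0 * math.exp(-math.log(2) * t_hours / T_HALF_H)

def deviation_pct(measured: float, t_hours: float) -> float:
    """Percentage deviation of a reading from the decay curve."""
    ref = expected_activity(t_hours)
    return 100.0 * (measured - ref) / ref

# One half-life later the expected activity is 31,000 MBq, so a reading
# of 30,700 MBq deviates by about -1%:
print(round(deviation_pct(30_700.0, T_HALF_H), 2))
```

A real evaluation would then combine such deviations with the instrument and timing uncertainties (here assumed rectangular, as in the abstract) to form the expanded uncertainty.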
Wavelet-linear genetic programming: A new approach for modeling monthly streamflow
Ravansalar, Masoud; Rajaee, Taher; Kisi, Ozgur
2017-06-01
Streamflow is an important and effective factor in stream ecosystems, and its accurate prediction is an essential issue in water resources and environmental engineering systems. In this study, a hybrid wavelet-linear genetic programming (WLGP) model, which combines a discrete wavelet transform (DWT) with linear genetic programming (LGP), was used to predict monthly streamflow (Q) at two gauging stations, Pataveh and Shahmokhtar, on the Beshar River near Yasuj, Iran. In the proposed WLGP model, wavelet analysis was linked to the LGP model: the original streamflow time series were decomposed into sub-time series comprising wavelet coefficients. The results were compared with single LGP, artificial neural network (ANN), hybrid wavelet-ANN (WANN) and multiple linear regression (MLR) models, using commonly applied statistical performance measures. The Nash coefficients (E) were found to be 0.877 and 0.817 for the WLGP model at the Pataveh and Shahmokhtar stations, respectively. The comparison showed that the WLGP model could significantly increase streamflow prediction accuracy at both stations. Since the WLGP model approximates peak streamflow values more closely, it could be utilized for predicting cumulative streamflow one month ahead.
Standard test method for linear-elastic plane-strain fracture toughness KIc of metallic materials
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 This test method covers the determination of fracture toughness (KIc) of metallic materials under predominantly linear-elastic, plane-strain conditions using fatigue precracked specimens having a thickness of 1.6 mm (0.063 in.) or greater subjected to slowly, or in special (elective) cases rapidly, increasing crack-displacement force. Details of test apparatus, specimen configuration, and experimental procedure are given in the Annexes. Note 1—Plane-strain fracture toughness tests of thinner materials that are sufficiently brittle (see 7.1) can be made using other types of specimens (1). There is no standard test method for such thin materials. 1.2 This test method is divided into two parts. The first part gives general recommendations and requirements for KIc testing. The second part consists of Annexes that give specific information on displacement gage and loading fixture design, special requirements for individual specimen configurations, and detailed procedures for fatigue precracking. Additional a...
Dose calibrator linearity test: 99mTc versus 18F radioisotopes
Energy Technology Data Exchange (ETDEWEB)
Willegaignon, Jose; Coura-Filho, George Barberio; Garcez, Alexandre Teles, E-mail: willegaignon@hotmail.com [Instituto do Cancer do Estado de Sao Paulo Octavio Frias de Oliveira (ICESP), Sao Paulo, SP (Brazil); Sapienza, Marcelo Tatit; Buchpiguel, Carlos Alberto [Universidade de Sao Paulo (FM/USP), Sao Paulo, SP (Brazil). Fac. de Medicina; Alves, Carlos Eduardo Gonzalez Ribeiro; Cardona, Marissa Anabel Rivera; Gutterres, Ricardo Fraga [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil)
2015-01-15
Objective: the present study was aimed at evaluating the viability of replacing 18F with 99mTc in dose calibrator linearity testing. Materials and methods: the test was performed with sources of 99mTc (62 GBq) and 18F (12 GBq) whose activities were measured down to values lower than 1 MBq. Ratios and deviations between experimental and theoretical 99mTc and 18F source activities were calculated and subsequently compared. Results: mean deviations between experimental and theoretical 99mTc and 18F source activities were 0.56 (± 1.79)% and 0.92 (± 1.19)%, respectively. The mean ratio between the activities indicated by the device for the 99mTc source, as measured with the equipment precalibrated for 99mTc and for 18F, was 3.42 (± 0.06); for the 18F source this ratio was 3.39 (± 0.05), values considered constant over the measurement time. Conclusion: the results of the linearity test using 99mTc were compatible with those obtained with the 18F source, indicating the viability of utilizing either radioisotope in dose calibrator linearity testing. This information, in association with the high potential for radiation exposure and the costs involved in 18F acquisition, suggests 99mTc as the element of choice for dose calibrator linearity tests in centers that use 18F, without any detriment to the procedure or to the quality of the nuclear medicine service. (author)
Definition of the linearity loss of the surface temperature in static tensile tests
Directory of Open Access Journals (Sweden)
A. Risitano
2014-10-01
Full Text Available Static tensile tests on materials for mechanical constructions have pointed out the loss of linearity of the surface temperature with the application of load. This phenomenon is due to heat generation caused by local microplasticizations which cause the material to deviate from its completely thermoelastic behavior. The identification of the static load which determines the loss of linearity of the temperature under stress becomes extremely important for defining a first dynamic characterization of the material. The temperature variations that can be recorded during a static test are often very limited (a few tenths of a degree for every 100 MPa in steels) and they require special sensors able to measure very low temperature variations. Experience acquired in such analyses has highlighted that, even with highly accurate sensors or with particular materials, the identification of the first loss of linearity (often done by eye on the temperature curves) can be influenced by the sensitivity of the investigator and can lead to incorrect estimates. The aim of this work is to validate the above observations on different steels by applying the autocorrelation function to the data collected during the application of a static load, in order to free the results of the thermal analysis from the sensitivity of the operator, to make them as objective as possible, and to define as precisely as possible the time of linearity loss in the temperature-time function.
Optimization of production planning in Czech agricultural co-operative via linear programming
Directory of Open Access Journals (Sweden)
Jitka Janová
2009-01-01
Full Text Available Production planning is one of the key managerial decisions in agricultural business, which must be made periodically every year. A correct decision must respect agronomic requirements such as crop rotation restrictions or water resource scarcity, while the decision maker aims to plan the crop design in the most profitable way, in the sense of maximizing the total profit from the crop yield. This decision problem represents the optimization of the crop design and can be treated by the methods of linear programming, which began to be used extensively in agricultural production planning in the USA during the 1950s. There is ongoing research on mathematical programming applications in agriculture worldwide, but the results are not easily transferable to other localities due to the specific local restrictions in each country. In the Czech Republic, farmers rely for production planning mainly on their expert knowledge and past experience. However, the mathematical programming approach makes it possible to find the true optimal solution of the problem, which, especially in problems with a great number of constraints, is not easy to find intuitively. One possible barrier to using general decision support systems (which are based on mathematical programming methods) for agricultural production planning in the Czech Republic is their cost: a small farmer cannot afford to buy expensive software or to employ a mathematical programming specialist. The aim of this paper is to present a user-friendly linear programming model of a typical agricultural production planning problem in the Czech Republic which can be solved via software tools commonly available on any farm (e.g. EXCEL). A linear programming model covering restrictions on total costs, crop rotation, thresholds for the total area sown with particular crops, the total amount of manure, and the need for feed crops is developed. The model is applied to a real-world problem of Czech agriculture.
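A sketch of the kind of model the paper describes, with entirely hypothetical crops, prices, and limits (not the paper's data), can be set up in a few lines with `scipy.optimize.linprog`:

```python
from scipy.optimize import linprog

# Hypothetical two-crop plan: maximize profit = 9000*wheat + 7000*barley (per ha)
# subject to: wheat + barley <= 100 ha   (total land),
#             wheat <= 60 ha             (rotation threshold),
#             12000*wheat + 8000*barley <= 1,000,000  (total-cost budget).
# linprog minimizes, so the profit coefficients are negated.
c = [-9000, -7000]
A_ub = [[1, 1], [1, 0], [12000, 8000]]
b_ub = [100, 60, 1_000_000]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
wheat, barley = res.x
print(round(wheat, 2), round(barley, 2), round(-res.fun, 2))
```

At these toy prices the land and budget constraints bind, giving 50 ha of each crop; the same structure extends directly to more crops and to the rotation, manure, and feed-crop constraints listed above, and an equivalent model can be entered in EXCEL Solver.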
Solutions to estimation problems for scalar hamilton-jacobi equations using linear programming
Claudel, Christian G.; Chamoin, Timothee; Bayen, Alexandre M.
2014-01-01
This brief presents new convex formulations for solving estimation problems in systems modeled by scalar Hamilton-Jacobi (HJ) equations. Using a semi-analytic formula, we show that the constraints resulting from a HJ equation are convex, and can be written as a set of linear inequalities. We use this fact to pose various (and seemingly unrelated) estimation problems related to traffic flow engineering as a set of linear programs. In particular, we solve data assimilation and data reconciliation problems for estimating the state of a system when the model and measurement constraints are incompatible. We also solve traffic estimation problems, such as travel time estimation or density estimation. For all these problems, a numerical implementation is performed using experimental data from the Mobile Century experiment. In the context of reproducible research, the code and data used to compute the results presented in this brief have been posted online and can be used to regenerate the results. © 2013 IEEE.
LPmerge: an R package for merging genetic maps by linear programming.
Endelman, Jeffrey B; Plomion, Christophe
2014-06-01
Consensus genetic maps constructed from multiple populations are an important resource for both basic and applied research, including genome-wide association analysis, genome sequence assembly and studies of evolution. The LPmerge software uses linear programming to efficiently minimize the mean absolute error between the consensus map and the linkage maps from each population. This minimization is performed subject to linear inequality constraints that ensure the ordering of the markers in the linkage maps is preserved. When marker order is inconsistent between linkage maps, a minimum set of ordinal constraints is deleted to resolve the conflicts. LPmerge is on CRAN at http://cran.r-project.org/web/packages/LPmerge. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
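The formulation described above can be sketched as a small LP. This is a toy instance with hypothetical marker positions, not LPmerge's actual code: consensus positions are decision variables, absolute errors are linearized with auxiliary variables, and marker order is preserved by inequality constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Three markers observed in two hypothetical linkage maps (positions in cM).
maps = np.array([[0.0, 5.0, 10.0],   # map 1
                 [0.0, 7.0, 12.0]])  # map 2
n_markers, n_maps = 3, 2
# Variables: consensus positions x[0..2], then one error term per (map, marker).
n_vars = n_markers + n_maps * n_markers
c = np.zeros(n_vars)
c[n_markers:] = 1.0                   # minimize total absolute error
A_ub, b_ub = [], []
for m in range(n_maps):
    for i in range(n_markers):
        e = n_markers + m * n_markers + i
        row = np.zeros(n_vars); row[i] = 1; row[e] = -1    # x_i - e <= p
        A_ub.append(row); b_ub.append(maps[m, i])
        row = np.zeros(n_vars); row[i] = -1; row[e] = -1   # -x_i - e <= -p
        A_ub.append(row); b_ub.append(-maps[m, i])
for i in range(n_markers - 1):                             # preserve marker order
    row = np.zeros(n_vars); row[i] = 1; row[i + 1] = -1    # x_i <= x_{i+1}
    A_ub.append(row); b_ub.append(0.0)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n_vars)
print(res.fun)   # total absolute error of the best consensus
```

LPmerge itself minimizes the mean (rather than the sum) of absolute errors and deletes a minimum set of ordinal constraints when maps conflict; the LP skeleton is the same.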
A primal-dual exterior point algorithm for linear programming problems
Directory of Open Access Journals (Sweden)
Samaras Nikolaos
2009-01-01
Full Text Available The aim of this paper is to present a new simplex-type algorithm for the Linear Programming Problem. The Primal-Dual method is a simplex-type pivoting algorithm that generates two paths in order to converge to the optimal solution. The first path is primal feasible, while the second one is dual feasible for the original problem. Specifically, we use a three-phase implementation. The first two phases construct the required primal and dual feasible solutions using the Primal Simplex algorithm. Finally, in the third phase the Primal-Dual algorithm is applied. Moreover, a computational study has been carried out, using randomly generated sparse linear problems with known optimal solutions, to compare its computational efficiency with the Primal Simplex algorithm and with MATLAB's Interior Point Method implementation. The algorithm appears to be very promising, since it clearly shows its superiority to the Primal Simplex algorithm as well as its robustness over the IPM algorithm.
Katz, Josh M; Winter, Carl K; Buttrey, Samuel E; Fadel, James G
2012-03-01
Western and guideline-based diets were compared to determine whether dietary improvements resulting from following dietary guidelines reduce acrylamide intake. Acrylamide forms in heat-treated foods and is a human neurotoxin and animal carcinogen. Acrylamide intake from the Western diet was estimated with probabilistic techniques using teenage (13-19 years) National Health and Nutrition Examination Survey (NHANES) food consumption estimates combined with FDA data on the levels of acrylamide in a large number of foods. Guideline-based diets were derived from NHANES data using linear programming techniques to comport with recommendations from the Dietary Guidelines for Americans, 2005. Whereas the guideline-based diets were more properly balanced and richer in fruits, vegetables, and other dietary components than the Western diets, acrylamide intake (mean ± SE) was significantly greater for the guideline-based diets. The results demonstrate that linear programming techniques can be used to model specific diets for the assessment of toxicological and nutritional dietary components. Copyright © 2011 Elsevier Ltd. All rights reserved.
Computer programs for the solution of systems of linear algebraic equations
Sequi, W. T.
1973-01-01
FORTRAN subprograms for the solution of systems of linear algebraic equations are described, listed, and evaluated in this report. Procedures considered are direct solution, iteration, and matrix inversion. Both in-core methods and those which utilize auxiliary data storage devices are considered. Some of the subroutines evaluated require the entire coefficient matrix to be in core, whereas others account for banding or sparseness of the system. General recommendations relative to equation solving are made, and on the basis of tests, specific subprograms are recommended.
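Two of the procedure classes mentioned, direct solution and iteration, can be illustrated side by side. This is a NumPy sketch on a tiny illustrative system, not the FORTRAN subprograms themselves:

```python
import numpy as np

# A small diagonally dominant system A x = b.
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([9.0, 16.0])

# Direct (in-core) solution via LU factorization:
x_direct = np.linalg.solve(A, b)

# Jacobi iteration, which touches one row of A at a time -- the kind of
# scheme that suits systems too large to hold fully in core:
x = np.zeros(2)
D = np.diag(A)                       # diagonal entries of A
for _ in range(100):
    x = (b - (A @ x - D * x)) / D    # x_i <- (b_i - sum_{j!=i} A_ij x_j) / A_ii
print(x_direct, x)
```

Jacobi converges here because the matrix is strictly diagonally dominant; for general matrices a direct method (or a more robust iteration) is the safer default.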
International Nuclear Information System (INIS)
Assmann, R.; Adolphsen, C.; Bane, K.; Raubenheimer, T.O.; Siemann, R.; Thompson, K.
1996-09-01
Linear accelerators are the central components of the proposed next generation of linear colliders. They need to provide acceleration of up to 750 GeV per beam while maintaining very small normalized emittances. Standard simulation programs, mainly developed for storage rings, do not meet the specific requirements of high energy linear accelerators. The authors present a new program, LIAR (LInear Accelerator Research code), that includes wakefield effects, a 4D coupled beam description, specific optimization algorithms and other advanced features. Its modular structure allows it to be used and extended easily for different purposes. Examples of simulations for the SLC and NLC are presented.
Material testing facilities and programs for plasma-facing component testing
Linsmeier, Ch.; Unterberg, B.; Coenen, J. W.; Doerner, R. P.; Greuner, H.; Kreter, A.; Linke, J.; Maier, H.
2017-09-01
Component development for operation in a large-scale fusion device requires thorough testing and qualification for the intended operational conditions. In particular, environments are necessary which are comparable to the real operating conditions, while allowing at the same time for in situ/in vacuo diagnostics and flexible operation, even beyond design limits during testing. Various electron and neutral particle devices provide the capabilities for high heat load tests, suited for material samples and components from lab-scale dimensions up to full-size parts, containing toxic materials like beryllium, and being activated by neutron irradiation. To simulate the conditions specific to a fusion plasma both at the first wall and in the divertor of fusion devices, linear plasma devices allow for tests of erosion and hydrogen isotope recycling behavior under well-defined and controlled conditions. Finally, the complex conditions in a fusion device (including the effects caused by magnetic fields) are exploited for component and material tests by exposing test mock-ups or material samples to a fusion plasma by manipulator systems. These allow for easy exchange of test pieces in a tokamak or stellarator device without opening the vessel. Such a chain of test devices and qualification procedures is required for the development of plasma-facing components which can then be successfully operated in future fusion power devices. The various available as well as newly planned devices and test stands, together with their specific capabilities, are presented in this manuscript. Results from experimental programs at the test facilities illustrate their significance for the qualification of plasma-facing materials and components. An extended set of references provides access to the current status of material and component testing capabilities in the international fusion programs.
Frega, Romeo; Lanfranco, Jose Guerra; De Greve, Sam; Bernardini, Sara; Geniez, Perrine; Grede, Nils; Bloem, Martin; de Pee, Saskia
2012-09-01
Linear programming has been used for analyzing children's complementary feeding diets, for optimizing nutrient adequacy of dietary recommendations for a population, and for estimating the economic value of fortified foods. To describe and apply a linear programming tool ("Cost of the Diet") with data from Mozambique to determine what could be cost-effective fortification strategies. Based on locally assessed average household dietary needs, seasonal market prices of available food products, and food composition data, the tool estimates the lowest-cost diet that meets almost all nutrient needs. The results were compared with expenditure data from Mozambique to establish the affordability of this diet by quintiles of the population. Three different applications were illustrated: identifying likely "limiting nutrients," comparing cost effectiveness of different fortification interventions at the household level, and assessing economic access to nutritious foods. The analysis identified iron, vitamin B2, and pantothenic acid as "limiting nutrients." Under the Mozambique conditions, vegetable oil was estimated as a more cost-efficient vehicle for vitamin A fortification than sugar; maize flour may also be an effective vehicle to provide other constraining micronutrients. Multiple micronutrient fortification of maize flour could reduce the cost of the "lowest-cost nutritious diet" by 18%, but even this diet can be afforded by only 20% of the Mozambican population. Within the context of fortification, linear programming can be a useful tool for identifying likely nutrient inadequacies, for comparing fortification options in terms of cost effectiveness, and for illustrating the potential benefit of fortification for improving household access to a nutritious diet.
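A minimal version of the lowest-cost-diet calculation can be sketched as an LP. The foods, prices, and nutrient values below are invented for illustration; they are not the Cost of the Diet dataset:

```python
from scipy.optimize import linprog

# Hypothetical food basket (quantities in units of 100 g):
# columns: maize flour, beans, vegetable oil
cost   = [0.05, 0.20, 0.15]   # illustrative price per 100 g
energy = [360,  340,  880]    # kcal per 100 g
iron   = [2.5,  7.0,  0.0]    # mg per 100 g
needs  = {"energy": 2100, "iron": 18}   # illustrative daily requirements

# linprog uses <= constraints, so "at least" requirements are negated.
A_ub = [[-e for e in energy], [-i for i in iron]]
b_ub = [-needs["energy"], -needs["iron"]]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(round(res.fun, 4), [round(v, 3) for v in res.x])
```

Under these toy numbers maize flour is the cheapest source of both energy and iron, so the LP buys maize only; with realistic food composition data and the full nutrient list, the binding constraints reveal the "limiting nutrients" exactly as the abstract describes.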
INFORMATION SECURITY RISK OPTIMIZATION IN CLOUD SERVICES ON THE BASIS OF LINEAR PROGRAMMING
Directory of Open Access Journals (Sweden)
I. A. Zikratov
2013-01-01
Full Text Available The paper discusses theoretical aspects of creating secure cloud services for processing information of various confidentiality degrees. A new approach to reasoning about the composition of information security in distributed computing structures is suggested, presenting the problem of risk assessment as an extreme problem of decision-making. Application of the linear programming method is shown to minimize the information security risk for a given security performance, in compliance with the economic balance between the maintenance costs of security facilities and the cost of services. An example is given to illustrate the obtained theoretical results.
An improved multiple linear regression and data analysis computer program package
Sidik, S. M.
1972-01-01
NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.
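The core computation of such a regression package, a double-precision least-squares fit with the residuals a program like this would plot, can be sketched as follows (illustrative data, not RAPIER/NEWRAP code):

```python
import numpy as np

# Ordinary least squares fit of y = b0 + b1*x1 + b2*x2, in double precision.
rng = np.random.default_rng(0)
x1 = rng.uniform(0, 10, 50)
x2 = rng.uniform(0, 5, 50)
y = 2.0 + 3.0 * x1 - 1.5 * x2                     # exact linear relation, no noise

X = np.column_stack([np.ones_like(x1), x1, x2])   # design matrix with intercept
coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ coef                          # what the program would plot
print(coef, float(np.max(np.abs(residuals))))
```

With noise-free data the recovered coefficients match (2.0, 3.0, -1.5) to machine precision; the t-statistics, ANOVA tables, and canonical reduction described above are all built on this same fit.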
Reactor Network Synthesis Using Coupled Genetic Algorithm with the Quasi-linear Programming Method
Soltani, H.; Shafiei, S.; Edraki, J.
2016-01-01
This research is an attempt to develop a new procedure for the synthesis of reactor networks (RNs) using a genetic algorithm (GA) coupled with the quasi-linear programming (LP) method. The GA is used to produce structural configuration, whereas continuous variables are handled using a quasi-LP formulation for finding the best objective function. Quasi-LP consists of LP together with a search loop to find the best reactor conversions (xi), as well as split and recycle ratios (yi). Quasi-LP rep...
Quadratic-linear pattern in cancer fractionated radiotherapy. Equations for a computer program
International Nuclear Information System (INIS)
Burgos, D.; Bullejos, J.; Garcia Puche, J.L.; Pedraza, V.
1990-01-01
Knowledge of the equivalence between different treatment schemes with the same iso-effect is essential in clinical cancer radiotherapy. For this purpose, the group of ideas derived from the quadratic-linear (Q-L) model proposed to analyze the cell survival curve under radiation is very useful. The iso-effect of several irradiation schedules is defined by the extrapolated tolerance dose (ETD). Because the equations for ETD are complex, a computer program has been developed. In this paper, the iso-effect equations for well-defined therapeutic situations and the flow diagram proposed for their solution are studied. (Author)
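The paper works with the extrapolated tolerance dose (ETD); a closely related, standard textbook form of the same Q-L (linear-quadratic) iso-effect arithmetic is the biologically effective dose, BED = n·d·(1 + d/(α/β)), under which two fractionation schemes are iso-effective when their BEDs match. The values below are illustrative, not the paper's equations:

```python
# Linear-quadratic iso-effect sketch: cell kill per schedule is
# E = n*(alpha*d + beta*d^2), so BED = E/alpha = n*d*(1 + d/(alpha/beta)).
def bed(n: int, d: float, alpha_beta: float) -> float:
    """Biologically effective dose for n fractions of d Gy."""
    return n * d * (1.0 + d / alpha_beta)

# Comparing 30 x 2 Gy against 15 x 3 Gy for a tumour-like alpha/beta of 10 Gy
# shows the hypofractionated scheme delivers a lower BED:
print(round(bed(30, 2.0, 10.0), 1), round(bed(15, 3.0, 10.0), 1))
```

Solving such equivalences for many schedules and tissue parameters is exactly the repetitive calculation the computer program described above automates.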
International Nuclear Information System (INIS)
Sun Wei; Huang, Guo H.; Lv Ying; Li Gongchen
2012-01-01
Highlights: ► Inexact piecewise-linearization-based fuzzy flexible programming is proposed. ► It’s the first application to waste management under multiple complexities. ► It tackles nonlinear economies-of-scale effects in interval-parameter constraints. ► It estimates costs more accurately than the linear-regression-based model. ► Uncertainties are decreased and more satisfactory interval solutions are obtained. - Abstract: To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both the capacities of waste treatment facilities and waste generation rates, can be reflected; and the nonlinear EOS effects transformed from the objective function to the constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the IPFP’s advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand sides of the fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, IPFP2 may underestimate the net system costs while IPFP can estimate the costs more accurately. The comparison results between IPFP and IPFP3 indicate that their solutions would be significantly different. The decreased system uncertainties in IPFP’s solutions demonstrate
Fleming, P.
1983-01-01
A design technique is proposed for linear regulators in which a feedback controller of fixed structure is chosen to minimize an integral quadratic objective function subject to the satisfaction of integral quadratic constraint functions. Application of a nonlinear programming algorithm to this mathematically tractable formulation results in an efficient and useful computer aided design tool. Particular attention is paid to computational efficiency and various recommendations are made. Two design examples illustrate the flexibility of the approach and highlight the special insight afforded to the designer. One concerns helicopter longitudinal dynamics and the other the flight dynamics of an aerodynamically unstable aircraft.
Directory of Open Access Journals (Sweden)
Darunee Hunwisai
2017-01-01
Full Text Available In this work, we consider two-person zero-sum games with fuzzy payoffs and matrix games with payoffs of trapezoidal intuitionistic fuzzy numbers (TrIFNs). The concepts of TrIFNs and their arithmetic operations are used. The cut-set based method for matrix games with payoffs of TrIFNs is also considered. The interval-type value of any alpha-cut strategy is computed by the simplex method for linear programming. The proposed method is illustrated with a numerical example.
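The crisp (non-fuzzy) construction such cut-set methods build on reduces a matrix game to an LP. A sketch with a hypothetical payoff matrix: for a matrix A with positive entries, solve min Σp subject to AᵀP ≥ 1, p ≥ 0; the game value is 1/Σp and the row player's optimal mixed strategy is p scaled by the value.

```python
import numpy as np
from scipy.optimize import linprog

# Symmetric 2x2 zero-sum game with positive payoffs (illustrative).
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# LP form: minimize sum(p) s.t. A^T p >= 1, p >= 0.
res = linprog(c=np.ones(2), A_ub=-A.T, b_ub=-np.ones(2),
              bounds=[(0, None)] * 2)
value = 1.0 / res.fun            # game value
strategy = res.x * value         # optimal mixed strategy for the row player
print(value, strategy)
```

In the fuzzy setting, the same LP is solved at each alpha level for the lower and upper payoff bounds, which is what produces the interval-type values the abstract mentions.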
International Nuclear Information System (INIS)
Kato, K.; Ihara, S.
1993-01-01
Hydrogen is expected to be an important energy carrier, especially in the frame of solving the global warming problem. The purpose of this study is to examine the conditions for market penetration of hydrogen technologies in reducing CO2 emissions. A multi-time-period linear programming model (MARKAL, Market Allocation) is used to explore technology options and costs for meeting the energy demands while reducing CO2 emissions from energy systems. The results show that hydrogen technologies become economical when CO2 emissions are stringently constrained. 9 figs., 2 refs
The evaluation of multi-element personal dosemeters using the linear programming method
International Nuclear Information System (INIS)
Kragh, P.; Ambrosi, P.; Boehm, J.; Hilgers, G.
1996-01-01
Multi-element dosemeters are frequently used in individual monitoring. Each element can be regarded as an individual dosemeter with its own individual dose measurement value. In general, the individual dose values of one dosemeter vary according to the exposure conditions, i.e. the energy and angle of incidence of the radiation. The (final) dose measurement value of the personal dosemeter is calculated from the individual dose values by means of an evaluation algorithm. The best possible dose value, i.e. that with the smallest systematic (type B) uncertainty when the exposure conditions vary within the dosemeter's rated range of use, is obtained by the method of linear programming. (author)
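A minimal sketch of the evaluation idea: weight the element readings so that the combined reading deviates as little as possible from the true dose over all rated exposure conditions. Here a coarse grid search stands in for the linear program, and the element response values are hypothetical.

```python
# Minimax weighting of two dosemeter elements: choose the weight so the
# combined reading stays closest to the true dose over all exposure
# conditions. A grid search approximates the LP's minimax solution.

# relative response (reading per unit true dose) of elements 1 and 2
# under three hypothetical exposure conditions (energy/angle pairs)
responses = [
    (1.20, 0.85),
    (1.00, 1.00),
    (0.70, 1.25),
]

best = None
for i in range(101):
    w = i / 100.0           # weight on element 1; element 2 gets 1 - w
    worst = max(abs(w * r1 + (1 - w) * r2 - 1.0) for r1, r2 in responses)
    if best is None or worst < best[1]:
        best = (w, worst)

w, dev = best
print(w, dev)
```

The optimum balances the over-response of one element against the under-response of the other, which is what the LP-based algorithm does over the full rated range of use.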
Energy Technology Data Exchange (ETDEWEB)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Institut de Mathématiques de Bordeaux, INRIA Bordeaux Sud Ouest, Team: CQFD, and IMB (France); Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es [UNED, Department of Statistics and Operations Research (Spain)
2016-08-15
We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
Directory of Open Access Journals (Sweden)
Sukhpreet Kaur Sidhu
2014-01-01
Full Text Available The drawbacks of the existing methods for obtaining the fuzzy optimal solution of linear programming problems, in which the coefficients of the constraints are represented by real numbers and all the other parameters as well as the variables are represented by symmetric trapezoidal fuzzy numbers, are pointed out. To resolve these drawbacks, a new method (named the Mehar method) is proposed for the same linear programming problems. Also, with the help of the proposed Mehar method, a new method, much easier than the existing ones, is proposed for the sensitivity analysis of the same type of linear programming problems.
Free-piston Stirling engine/linear alternator 1000-hour endurance test
Rauch, J.; Dochat, G.
1985-01-01
The Free Piston Stirling Engine (FPSE) has the potential to be a long lived, highly reliable, power conversion device attractive for many product applications such as space, residential or remote site power. The purpose of endurance testing the FPSE was to demonstrate its potential for long life. The endurance program was directed at obtaining 1000 operational hours under various test conditions: low power, full stroke, duty cycle and stop/start. Critical performance parameters were measured to note any change and/or trend. Inspections were conducted to measure and compare critical seal/bearing clearances. The engine performed well throughout the program, completing more than 1100 hours. Hardware inspection, including the critical clearances, showed no significant change in hardware or clearance dimensions. The performance parameters did not exhibit any increasing or decreasing trends. The test program confirms the potential for long life FPSE applications.
Tests of the linearity assumption in the dose-effect relationship for radiation-induced cancer
International Nuclear Information System (INIS)
Cohen, A.F.; Cohen, B.L.
1978-01-01
The validity of the BEIR linear extrapolation to low doses of the dose-effect relationship for radiation-induced cancer is tested by use of natural radiation making use of selectivity on type of cancer, sex, age group, geographic area, and time period. For lung cancer, a linear interpolation between zero dose-zero effect and the data from radon-induced cancers in miners over-estimates the total number of observed lung cancers in many countries in the early years of this century; the discrepancy is substantially increased if the 30-44 year age range and/or if only females are considered, and by the fact that many other causes of lung cancer are shown to have been important at that time. The degree to which changes of diagnostic efficiency with time can influence the analysis is considered at some length. It is concluded that the linear relationship substantially over-estimates the effects of low radiation doses. A similar analysis is applied to leukemia induced by natural radiation, applying selectivity by age, sex, natural background level, and date, and considering other causes. It is concluded that effects substantially larger than those obtained from linear extrapolation are excluded. The use of the selectivities mentioned above is justified by the fact that the incidence of cancer or leukemia is an upper limit on the rate at which it is caused by radiation effects; in determining upper limits it is justifiable to select situations which minimize it. (author)
DEFF Research Database (Denmark)
Parlesak, A.; Tetens, Inge; Dejgård Jensen, Jørgen
2016-01-01
programming. The FBs were defined using five different constraints: cultural acceptability (CA), or dietary guidelines (DG), or nutrient recommendations (N), or cultural acceptability and nutrient recommendations (CAN), or dietary guidelines and nutrient recommendations (DGN). The variety and number of foods in each of the resulting five baskets was increased through limiting the relative share of individual foods. The one-day version of N contained only 12 foods at the minimum cost of DKK 27 (€ 3.6). The CA, DG, and DGN were about twice of this and the CAN cost ~DKK 81 (€ 10.8). The baskets with the greater variety of foods contained from 70 (CAN) to 134 (DGN) foods and cost between DKK 60 (€ 8.1, N) and DKK 125 (€ 16.8, DGN). Ensuring that the food baskets cover both dietary guidelines and nutrient recommendations doubled the cost while cultural acceptability (CAN) tripled it. Use of linear programming...
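The food-basket optimization above is an instance of the classical diet problem in linear programming. A minimal two-food sketch, with hypothetical prices and nutrient contents, solved by vertex enumeration (with two variables the optimum lies at a vertex of the feasible region, so no LP solver is needed):

```python
# Diet-problem sketch: choose nonnegative amounts of two foods meeting
# nutrient minimums at least cost. Prices and nutrient contents are
# hypothetical; real food-basket models have hundreds of foods.

cost = (2.0, 3.0)                  # price per unit of food A, food B
# each row: (nutrient per unit of A, nutrient per unit of B, required minimum)
constraints = [
    (1.0, 2.0, 8.0),               # e.g. protein
    (3.0, 1.0, 9.0),               # e.g. energy
]

def feasible(x, y):
    return x >= -1e-9 and y >= -1e-9 and all(
        a * x + b * y >= r - 1e-9 for a, b, r in constraints)

# candidate vertices: axis intercepts plus the pairwise intersection
cands = []
for a, b, r in constraints:
    if a: cands.append((r / a, 0.0))
    if b: cands.append((0.0, r / b))
(a1, b1, r1), (a2, b2, r2) = constraints
det = a1 * b2 - a2 * b1
if det:
    cands.append(((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det))

best = min((c for c in cands if feasible(*c)),
           key=lambda c: cost[0] * c[0] + cost[1] * c[1])
total = cost[0] * best[0] + cost[1] * best[1]
print(best, total)
```

Here both constraints bind at the optimum (2 units of A, 3 of B, cost 13), mirroring how the cheapest baskets in the study satisfy the nutrient constraints with equality.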
Consideration in selecting crops for the human-rated life support system: a Linear Programming model
Wheeler, E. F.; Kossowski, J.; Goto, E.; Langhans, R. W.; White, G.; Albright, L. D.; Wilcox, D.; Henninger, D. L. (Principal Investigator)
1996-01-01
A Linear Programming model has been constructed which aids in selecting appropriate crops for CELSS (Controlled Environment Life Support System) food production. A team of Controlled Environment Agriculture (CEA) faculty, staff, graduate students and invited experts representing more than a dozen disciplines provided a wide range of expertise in developing the model and the crop production program. The model incorporates nutritional content and controlled-environment based production yields of carefully chosen crops into a framework where a crop mix can be constructed to suit the astronauts' needs. The crew's nutritional requirements can be adequately satisfied with only a few crops (assuming vitamin and mineral supplements are provided) but this will not be satisfactory from a culinary standpoint. The model is flexible enough that taste- and variety-driven food choices can be built into it.
Spacecraft Testing Programs: Adding Value to the Systems Engineering Process
Britton, Keith J.; Schaible, Dawn M.
2011-01-01
Testing has long been recognized as a critical component of spacecraft development activities - yet many major systems failures may have been prevented with more rigorous testing programs. The question is why is more testing not being conducted? Given unlimited resources, more testing would likely be included in a spacecraft development program. Striking the right balance between too much testing and not enough has been a long-term challenge for many industries. The objective of this paper is to discuss some of the barriers, enablers, and best practices for developing and sustaining a strong test program and testing team. This paper will also explore the testing decision factors used by managers; the varying attitudes toward testing; methods to develop strong test engineers; and the influence of behavior, culture and processes on testing programs. KEY WORDS: Risk, Integration and Test, Validation, Verification, Test Program Development
International Nuclear Information System (INIS)
Solarin, Sakiru Adebola; Lean, Hooi Hooi
2016-01-01
This paper examines the integration properties of the total oil consumption in 57 countries for the period of 1965–2012. A combination of new and powerful linear and nonlinear stationarity tests is employed to achieve the objectives of the study. We find that the oil consumption series in 21 countries follow a nonlinear path while those in the other countries are linear in nature. Evidence of the presence of a unit root is found for the total oil consumption series in 38 countries while the series is stationary in the remaining 19 countries. An important insight is that the blueprints that were designed to reduce oil consumption are likely to have a permanent effect in most of the countries. - Highlights: • We examine the integration properties of total oil consumption in 57 countries. • We apply new and powerful linear and nonlinear stationarity tests. • A unit root is found in two-thirds of the countries. • Blueprints designed to reduce oil consumption are likely to have a permanent effect.
Averaging and Linear Programming in Some Singularly Perturbed Problems of Optimal Control
Energy Technology Data Exchange (ETDEWEB)
Gaitsgory, Vladimir, E-mail: vladimir.gaitsgory@mq.edu.au [Macquarie University, Department of Mathematics (Australia); Rossomakhine, Sergey, E-mail: serguei.rossomakhine@flinders.edu.au [Flinders University, Flinders Mathematical Sciences Laboratory, School of Computer Science, Engineering and Mathematics (Australia)
2015-04-15
The paper aims at the development of an apparatus for the analysis and construction of near optimal solutions of singularly perturbed (SP) optimal control problems (that is, problems of optimal control of SP systems) considered on the infinite time horizon. We mostly focus on problems with time discounting criteria, but the possibility of extending the results to periodic optimization problems is discussed as well. Our consideration is based on earlier results on averaging of SP control systems and on linear programming formulations of optimal control problems. The idea that we exploit is to first asymptotically approximate a given problem of optimal control of the SP system by a certain averaged optimal control problem, then reformulate this averaged problem as an infinite-dimensional linear programming (LP) problem, and then approximate the latter by semi-infinite LP problems. We show that the optimal solutions of these semi-infinite LP problems and their duals (which can be found with the help of a modification of available LP software) allow one to construct near optimal controls of the SP system. We demonstrate the construction with two numerical examples.
Energy Technology Data Exchange (ETDEWEB)
Djukanovic, M.; Babic, B.; Milosevic, B. [Electrical Engineering Inst. Nikola Tesla, Belgrade (Yugoslavia); Sobajic, D.J. [EPRI, Palo Alto, CA (United States). Power System Control; Pao, Y.H. [Case Western Reserve Univ., Cleveland, OH (United States)]|[AI WARE, Inc., Cleveland, OH (United States)
1996-05-01
In this paper the blending/transloading facilities are modeled using interactive fuzzy linear programming (FLP), in order to allow the decision-maker to solve the problem of uncertainty of input information within the fuel scheduling optimization. An interactive decision-making process is formulated in which the decision-maker can learn to recognize good solutions by considering all possibilities of fuzziness. The application of the fuzzy formulation is accompanied by a careful examination of the definition of fuzziness, the appropriateness of the membership function and the interpretation of results. The proposed concept provides a decision support system with integration-oriented features, whereby the decision-maker can learn to recognize the relative importance of factors in the specific domain of the optimal fuel scheduling (OFS) problem. The formulation of a fuzzy linear programming problem to obtain a reasonable nonfuzzy solution under consideration of the ambiguity of parameters, represented by fuzzy numbers, is introduced. An additional advantage of the FLP formulation is its ability to deal with multi-objective problems.
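A minimal sketch of the fuzzy-LP idea: each fuzzy constraint gets a linear membership function over its tolerance interval, and the overall satisfaction degree λ (the minimum of the memberships) is maximized. This is a Zimmermann-style max-min reading of FLP, assumed here for illustration; the coefficients and the single decision variable are hypothetical.

```python
# Zimmermann-style fuzzy LP sketch: maximize lambda = min of linear
# membership functions of two conflicting fuzzy constraints.
# One decision variable x keeps the search a simple scan.

def clamp(u):
    return max(0.0, min(1.0, u))

def mu_cost(x):    # "cost 3x should be about <= 60", tolerance 20
    return clamp((80.0 - 3.0 * x) / 20.0)

def mu_output(x):  # "output 2x should be about >= 50", tolerance 10
    return clamp((2.0 * x - 40.0) / 10.0)

best_x, best_lam = max(
    ((x, min(mu_cost(x), mu_output(x)))
     for x in (i / 100.0 for i in range(4001))),
    key=lambda t: t[1])
print(best_x, best_lam)
```

The optimum sits where the two memberships cross (near x ≈ 22.86, λ ≈ 0.57): neither fuzzy goal is fully met, and λ quantifies the compromise, which is the quantity the interactive decision-maker inspects.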
Chen, Ruoying; Zhang, Zhiwang; Wu, Di; Zhang, Peng; Zhang, Xinyang; Wang, Yong; Shi, Yong
2011-01-21
Protein-protein interactions are fundamentally important in many biological processes and there is a pressing need to understand the principles of protein-protein interactions. Mutagenesis studies have found that only a small fraction of surface residues, known as hot spots, are responsible for the physical binding in protein complexes. However, revealing hot spots by mutagenesis experiments is usually time-consuming and expensive. In order to complement the experimental efforts, we propose a new computational approach in this paper to predict hot spots. Our method, Rough Set-based Multiple Criteria Linear Programming (RS-MCLP), integrates rough sets theory and multiple criteria linear programming to choose dominant features and computationally predict hot spots. Our approach is benchmarked by a dataset of 904 alanine-mutated residues and the results show that our RS-MCLP method performs better than other methods, e.g., MCLP, Decision Tree, Bayes Net, and the existing HotSprint database. In addition, we reveal several biological insights based on our analysis. We find that four features (the change of accessible surface area, percentage of the change of accessible surface area, size of a residue, and atomic contacts) are critical in predicting hot spots. Furthermore, we find that three residues (Tyr, Trp, and Phe) are abundant in hot spots through analyzing the distribution of amino acids. Copyright © 2010 Elsevier Ltd. All rights reserved.
Li, Yanning
2013-10-01
This article presents a new robust control framework for transportation problems in which the state is modeled by a first order scalar conservation law. Using an equivalent formulation based on a Hamilton-Jacobi equation, we pose the problem of controlling the state of the system on a network link, using boundary flow control, as a Linear Program. Unlike many previously investigated transportation control schemes, this method yields a globally optimal solution and is capable of handling shocks (i.e. discontinuities in the state of the system). We also demonstrate that the same framework can handle robust control problems, in which the uncontrollable components of the initial and boundary conditions are encoded in intervals on the right hand side of inequalities in the linear program. The lower bound of the interval which defines the smallest feasible solution set is used to solve the robust LP (or MILP if the objective function depends on boolean variables). Since this framework leverages the intrinsic properties of the Hamilton-Jacobi equation used to model the state of the system, it is extremely fast. Several examples are given to demonstrate the performance of the robust control solution and the trade-off between the robustness and the optimality. © 2013 IEEE.
International Nuclear Information System (INIS)
Shimizu, Yoshiaki
1988-01-01
Due to its simplicity and effectiveness, linear programming has been popular in actual optimization in various fields. In a previous study, the uncertainty involved in the model at different stages of optimization was dealt with by post-optimization analysis. But this often becomes insufficient for deciding how to deal with an uncertain system, especially one suffering large parameter deviations. Recently, in the field of processing systems, it has become desirable to obtain a flexible solution which can present a counterplan for a deviating system from a practical viewpoint. This preliminary note presents how to apply a methodology developed to obtain the flexible solution of a linear program. For this purpose, a simple example associated with nuclear reactor decommissioning is shown. The problem of maximizing a system performance given as an objective function under the constraint of the static behavior of the system is considered, and the flexible solution is determined. In Japan, the decommissioning of commercial nuclear power plants will begin in the near future, and a study using the retired research reactor JPDR is in progress. The planning of decontamination and the reuse of wastes is taken as the example. (Kako, I.)
Li, Yanning; Canepa, Edward S.; Claudel, Christian G.
2013-01-01
This article presents a new robust control framework for transportation problems in which the state is modeled by a first order scalar conservation law. Using an equivalent formulation based on a Hamilton-Jacobi equation, we pose the problem of controlling the state of the system on a network link, using boundary flow control, as a Linear Program. Unlike many previously investigated transportation control schemes, this method yields a globally optimal solution and is capable of handling shocks (i.e. discontinuities in the state of the system). We also demonstrate that the same framework can handle robust control problems, in which the uncontrollable components of the initial and boundary conditions are encoded in intervals on the right hand side of inequalities in the linear program. The lower bound of the interval which defines the smallest feasible solution set is used to solve the robust LP (or MILP if the objective function depends on boolean variables). Since this framework leverages the intrinsic properties of the Hamilton-Jacobi equation used to model the state of the system, it is extremely fast. Several examples are given to demonstrate the performance of the robust control solution and the trade-off between the robustness and the optimality. © 2013 IEEE.
A novel approach based on preference-based index for interval bilevel linear programming problem.
Ren, Aihong; Wang, Yuping; Xue, Xingsi
2017-01-01
This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only, through normal variation of interval number and chance-constrained programming. With the consideration of different preferences of different decision makers, the concept of the preference level that the interval objective function is preferred to a target interval is defined based on the preference-based index. Then a preference-based deterministic bilevel programming problem is constructed in terms of the preference level and the order relation ⪯mw. Furthermore, the concept of a preference δ-optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of an estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.
A novel approach based on preference-based index for interval bilevel linear programming problem
Directory of Open Access Journals (Sweden)
Aihong Ren
2017-05-01
Full Text Available This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only, through normal variation of interval number and chance-constrained programming. With the consideration of different preferences of different decision makers, the concept of the preference level that the interval objective function is preferred to a target interval is defined based on the preference-based index. Then a preference-based deterministic bilevel programming problem is constructed in terms of the preference level and the order relation $\preceq_{mw}$. Furthermore, the concept of a preference δ-optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of an estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.
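One common convention for such an order relation on intervals (assumed here for illustration; the paper's exact definition may differ) compares midpoints and widths: for a minimization problem, interval A is preferred-or-equal to B when its midpoint is no larger and it is no wider (no more uncertain).

```python
# Midpoint-width order on intervals (one common "mw" convention,
# assumed here): A <= B iff midpoint(A) <= midpoint(B) and
# width(A) <= width(B). This is a partial order: some pairs
# of intervals are incomparable.

def mw(interval):
    lo, hi = interval
    return ((lo + hi) / 2.0, hi - lo)

def preceq_mw(A, B):
    mA, wA = mw(A)
    mB, wB = mw(B)
    return mA <= mB and wA <= wB

print(preceq_mw((2.0, 4.0), (3.0, 5.0)))   # smaller midpoint, same width
print(preceq_mw((2.0, 6.0), (3.0, 5.0)))   # same midpoint, but wider
```

In the first case A = [2, 4] is preferred to B = [3, 5]; in the second, A = [2, 6] has the same midpoint as B but twice the width, so it is not preferred.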
Energy Technology Data Exchange (ETDEWEB)
Xu, Zhaoping; Chang, Siqin [School of Mechanical Engineering, Nanjing University of Science and Technology, Nanjing 210094 (China)
2010-04-15
A novel four-stroke free-piston engine equipped with a linear electric generator (namely internal combustion linear generator integrated power system) is proposed in this paper to achieve efficient energy conversion from fuel to electricity. Unique features of the novel power system are presented and their effects on the continuous running are discussed, along with potential advantages and disadvantages compared to conventional engines. A single cylinder, gasoline and spark ignition prototype is fabricated with reference to the geometric and control parameters of an existing conventional four-stroke engine. Stable running of the prototype is realized, and a 2.2 kW average output power with the generating efficiency of 32% has been obtained up to now. The feasibility and performance of the proposed design are verified. Detailed testing results from the continuous running prototype are analyzed in this paper for giving insight into the performance and dynamic behaviors of the novel power system. (author)
International Nuclear Information System (INIS)
Speliotopoulos, A.D.; Chiao, Raymond Y.
2004-01-01
The coupling of gravity to matter is explored in the linearized gravity limit. The usual derivation of gravity-matter couplings within the quantum-field-theoretic framework is reviewed. A number of inconsistencies between this derivation of the couplings and the known results of tidal effects on test particles according to classical general relativity are pointed out. As a step towards resolving these inconsistencies, a general laboratory frame fixed on the worldline of an observer is constructed. In this frame, the dynamics of nonrelativistic test particles in the linearized gravity limit is studied, and their Hamiltonian dynamics is derived. It is shown that for stationary metrics this Hamiltonian reduces to the usual Hamiltonian for nonrelativistic particles undergoing geodesic motion. For nonstationary metrics with long-wavelength gravitational waves (GWs) present, it reduces to the Hamiltonian for a nonrelativistic particle undergoing geodesic deviation motion. Arbitrary-wavelength GWs couple to the test particle through a vector-potential-like field N_a, the net result of the tidal forces that the GW induces in the system, namely, a local velocity field on the system induced by tidal effects, as seen by an observer in the general laboratory frame. Effective electric and magnetic fields, which are related to the electric and magnetic parts of the Weyl tensor, are constructed from N_a that obey equations of the same form as Maxwell's equations. A gedanken gravitational Aharonov-Bohm-type experiment using N_a to measure the interference of quantum test particles is presented.
DEFF Research Database (Denmark)
Escudero, Laureano F.; Monge, Juan Francisco; Morales, Dolores Romero
2015-01-01
In this paper we consider multiperiod mixed 0–1 linear programming models under uncertainty. We propose a risk averse strategy using stochastic dominance constraints (SDC) induced by mixed-integer linear recourse as the risk measure. The SDC strategy extends the existing literature to the multist...
Nakhanu, Shikuku Beatrice; Musasia, Amadalo Maurice
2015-01-01
The topic Linear Programming is included in the compulsory Kenyan secondary school mathematics curriculum at form four. The topic provides skills for determining best outcomes in a given mathematical model involving some linear relationship. This technique has found application in business, economics as well as various engineering fields. Yet many…
Aihong Ren
2016-01-01
This paper is concerned with a class of fully fuzzy bilevel linear programming problems where all the coefficients and decision variables of both objective functions and the constraints are fuzzy numbers. A new approach based on deviation degree measures and a ranking function method is proposed to solve these problems. We first introduce concepts of the feasible region and the fuzzy optimal solution of a fully fuzzy bilevel linear programming problem. In order to obtain a fuzzy optimal solut...
Pioneer Robot Testing Program and Status
International Nuclear Information System (INIS)
Herndon, J.N.
2001-01-01
The U.S. Department of Energy (USDOE) and Ukraine established a joint program in 1997 to address the need for remotely operated systems for unstructured environments in Ukraine such as the highly hazardous conditions inside the failed Chernobyl Nuclear Power Plant (ChNPP) Unit 4, or Shelter Object. The environment inside Shelter Object is extremely hazardous due to ionizing radiation fields, high airborne contamination, and major industrial safety issues. Although Ukrainian workers have explored and mapped much of the internals of Unit 4 in the time since the accident during the morning hours of April 26, 1986, there remain areas where humans have not entered to this date. Based on the agreement between USDOE and Ukraine, the USDOE, in cooperation with the U.S. National Aeronautics and Space Administration (NASA), developed the Pioneer Robot and has provided it to the ChNPP within the framework of international technical assistance. Pioneer is capable of mobile platform movement and manipulation under teleoperated control, 3-dimensional mapping, and environmental data collection. The Pioneer is radiation hardened for conditions like those of Shelter Object. Pioneer has been evaluated on site in Ukraine for use in both the Shelter Object environment and the more general conditions of ChNPP decommissioning. This paper summarizes the results of these testing activities and describes the status and near-term activities in support of the Pioneer Robot integration into Ukraine
Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li
2014-01-01
Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effect in observational studies. Built on structural mean models, there has been considerable work recently on consistent estimation of causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments. This has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provide valid and consistent tests of causality. For causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
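For intuition about instrumental-variable estimation in the simplest linear case (a far simpler setting than the structural mean models above), the single-instrument Wald/ratio estimator β = cov(z, y) / cov(z, x) is consistent when the instrument z affects the outcome only through the exposure x. The data below are synthetic, with a hidden confounder that biases ordinary least squares.

```python
# Simplest instrumental-variable estimate vs naive OLS on synthetic data
# with an unmeasured confounder u. True causal effect of x on y is 2.0.
import random

random.seed(1)
n = 5000
z = [random.gauss(0, 1) for _ in range(n)]                  # instrument
u = [random.gauss(0, 1) for _ in range(n)]                  # confounder
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]  # exposure
y = [2.0 * xi + 3.0 * ui + random.gauss(0, 1) for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

beta_iv = cov(z, y) / cov(z, x)    # Wald/ratio IV estimator
beta_ols = cov(x, y) / cov(x, x)   # naive regression, confounded
print(beta_iv, beta_ols)
```

The IV estimate lands near the true effect 2.0 while OLS is biased upward (toward 3 here) by the confounder, which is the motivation for Mendelian randomization with genetic instruments.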
A free-piston Stirling engine/linear alternator controls and load interaction test facility
Rauch, Jeffrey S.; Kankam, M. David; Santiago, Walter; Madi, Frank J.
1992-01-01
A test facility at LeRC was assembled for evaluating free-piston Stirling engine/linear alternator control options, and interaction with various electrical loads. This facility is based on a 'SPIKE' engine/alternator. The engine/alternator, a multi-purpose load system, a digital computer based load and facility control, and a data acquisition system with both steady-periodic and transient capability are described. Preliminary steady-periodic results are included for several operating modes of a digital AC parasitic load control. Preliminary results on the transient response to switching a resistive AC user load are discussed.
Borjigin, Sumuya; Yang, Yating; Yang, Xiaoguang; Sun, Leilei
2018-03-01
Many researchers have realized that there is a strong correlation between stock prices and macroeconomy. In order to make this relationship clear, a lot of studies have been done. However, the causal relationship between stock prices and macroeconomy has still not been well explained. A key point is that most of the existing research adopts linear and stable models to investigate the correlation of stock prices and macroeconomy, while the real causality of that may be nonlinear and dynamic. To fill this research gap, we investigate the nonlinear and dynamic causal relationships between stock prices and macroeconomy. Based on the case of China's stock prices and macroeconomy measures from January 1992 to March 2017, we compare the linear Granger causality test models with nonlinear ones. Results demonstrate that the nonlinear dynamic Granger causality is much stronger than linear Granger causality. From the perspective of nonlinear dynamic Granger causality, China's stock prices can be viewed as a "national economic barometer". On the one hand, this study will encourage researchers to take nonlinearity and dynamics into account when they investigate the correlation of stock prices and macroeconomy; on the other hand, our research can guide regulators and investors to make better decisions.
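A linear Granger-causality test of the kind compared above boils down to an F-test: does adding lagged x to an autoregression of y significantly reduce the residual sum of squares? A lag-1 sketch on synthetic data, with a small pure-Python OLS standing in for a statistics package:

```python
# Lag-1 linear Granger causality: compare SSR of y ~ y_{t-1} against
# y ~ y_{t-1} + x_{t-1}. Synthetic data in which x genuinely drives y.
import random

random.seed(0)
n = 300
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.0]
for t in range(1, n):
    y.append(0.5 * y[t - 1] + 0.8 * x[t - 1] + random.gauss(0, 0.3))

def ols_ssr(rows, target):
    """Residual sum of squares of an OLS fit via the normal equations."""
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * t for r, t in zip(rows, target)) for i in range(k)]
    for c in range(k):                       # Gaussian elimination, partial pivot
        p = max(range(c, k), key=lambda i: abs(A[i][c]))
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for i in range(c + 1, k):
            f = A[i][c] / A[c][c]
            A[i] = [aij - f * acj for aij, acj in zip(A[i], A[c])]
            b[i] -= f * b[c]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return sum((t - sum(bi * ri for bi, ri in zip(beta, r))) ** 2
               for r, t in zip(rows, target))

target = y[1:]
restricted = [[1.0, y[t - 1]] for t in range(1, n)]            # y on its own past
full = [[1.0, y[t - 1], x[t - 1]] for t in range(1, n)]        # plus lagged x
ssr_r, ssr_f = ols_ssr(restricted, target), ols_ssr(full, target)
F = (ssr_r - ssr_f) / (ssr_f / (len(target) - 3))              # one restriction
print(F)
```

A large F rejects "x does not Granger-cause y". The nonlinear and time-varying tests the paper favors generalize exactly this restricted-vs-full comparison.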
Kim, Seunggyu; Lee, Seokhun; Jeon, Jessie S.
2017-11-01
To determine the most effective antimicrobial treatments of an infectious pathogen, high-throughput antibiotic susceptibility testing (AST) is critically required. However, conventional AST requires at least 16 hours to reach the minimum observable population. Therefore, we developed a microfluidic system that allows maintenance of a linear antibiotic concentration and measurement of local bacterial density. Based on the Stokes-Einstein equation, the flow rate in the microchannel was optimized so that linearization was achieved within 10 minutes, taking into account the diffusion coefficient of each antibiotic in the agar gel. As a result, the minimum inhibitory concentration (MIC) of each antibiotic against P. aeruginosa could be immediately determined 6 hours after treatment with the linear antibiotic concentration. In conclusion, our system proved the efficacy of a high-throughput AST platform through MIC comparison with the Clinical and Laboratory Standards Institute (CLSI) ranges of antibiotics. This work was supported by the Climate Change Research Hub (Grant No. N11170060) of the KAIST and by the Brain Korea 21 Plus project.
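The Stokes-Einstein relation invoked above gives the diffusion coefficient D = kT / (6πηr) that sets how fast an antibiotic spreads across the channel. A back-of-the-envelope sketch with hypothetical values for the molecular radius, viscosity, and channel width:

```python
# Stokes-Einstein estimate: diffusion coefficient of an antibiotic-sized
# molecule and the characteristic time to diffuse across a channel,
# t ~ L^2 / (2D). All physical values below are hypothetical/illustrative.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # K, incubation temperature (~37 C)
eta = 1.0e-3         # Pa*s, water-like viscosity of the gel's fluid phase
r = 0.5e-9           # m, effective hydrodynamic radius of the antibiotic

D = k_B * T / (6 * math.pi * eta * r)   # m^2/s
L = 100e-6                              # m, channel width
t_diff = L ** 2 / (2 * D)               # s
print(D, t_diff)
```

With these numbers D is of order 10^-10 m²/s and diffusion across 100 µm takes on the order of seconds, consistent with establishing a linear gradient within minutes once the flow rate is tuned.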
International Nuclear Information System (INIS)
Fernandes, Marco A.R.; Fernandes, David M.; Florentino, Helenice O.
2010-01-01
The work highlights the importance of mathematical tools and computer systems for optimizing radiotherapy planning, seeking a radiation dose distribution in the target volume that provides the ideal therapeutic ratio between the tumor cells and the adjacent healthy tissues, as recommended in radiotherapy protocols. Examples of mathematically modeled target volumes are analyzed with the technique of linear programming, comparing the results obtained using the Simplex algorithm with those using the Interior Point algorithm. The Genesis II system was used to obtain the isodose curves for the field outlines and geometry idealized in the computer simulations, considering the parameters of a 10 MV photon beam. Both programming methods (Simplex and Interior Point) resulted in a distribution of the full dose in the tumor volume and allow the dose in the critical organs to be kept within the recommended restriction limits. The choice of one method or the other should take into account ease of use and the need to limit programming time. The isodose curves obtained with the Genesis II system illustrate that the healthy tissues adjacent to the tumor receive larger doses than those reached in the computer simulations. More consistent values can be obtained by altering the weights and some minimization factors of the objective function. The prohibitive costs of the computer planning systems currently available for radiotherapy motivate research into implementing simpler yet equally effective methods for optimizing the treatment plan. (author)
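The Simplex-versus-Interior-Point comparison can be illustrated with a toy dose-optimization LP, assuming SciPy is available. The dose-deposition matrices and dose limits below are hypothetical, not the planning data of the study:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_beamlets = 8
# Hypothetical dose-deposition matrices (Gy per unit beamlet weight).
A_tumor = rng.uniform(0.5, 1.0, size=(5, n_beamlets))    # 5 tumour voxels
A_healthy = rng.uniform(0.0, 0.4, size=(6, n_beamlets))  # 6 healthy voxels

c = A_healthy.sum(axis=0)          # minimise total dose to healthy tissue
A_ub = np.vstack([-A_tumor, A_healthy])
b_ub = np.concatenate([-60.0 * np.ones(5),   # tumour dose >= 60 Gy
                       35.0 * np.ones(6)])   # healthy dose <= 35 Gy

results = {}
for method in ("highs-ds", "highs-ipm"):     # dual simplex vs interior point
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method=method)
    results[method] = res
    print(method, res.status, round(res.fun, 4))
```

Both solvers reach the same optimal objective; as the abstract notes, the practical choice between them comes down to ease of use and run-time limits.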
Directory of Open Access Journals (Sweden)
Mbarek Elbounjimi
2015-11-01
Full Text Available Closed-loop supply chain network design is a critical issue due to its impact on both the economic and environmental performance of the supply chain. In this paper, we address the problem of designing a multi-echelon, multi-product, capacitated closed-loop supply chain network. First, a mixed-integer linear programming formulation is developed to maximize the total profit. The main contribution of the proposed model is addressing two economic viability issues of closed-loop supply chains. The first is ensuring the collection of a sufficient quantity of end-of-life products, which retailers assure in exchange for an acquisition price. The second is exploiting the benefits of co-locating forward and reverse facilities. The presented model is solved with LINGO for several test problems. Computational results and sensitivity analyses are presented to show the performance of the proposed model.
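A minimal sketch of a mixed-integer formulation of this kind (binary facility-opening decisions plus continuous flows), assuming SciPy's `milp`; all costs, demands, and capacities are invented for illustration:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical data: 2 candidate facilities, 2 retailers.
transport = np.array([[4.0, 6.0],   # cost facility 1 -> retailers 1, 2
                      [5.0, 3.0]])  # cost facility 2 -> retailers 1, 2
fixed = np.array([50.0, 60.0])      # facility opening costs
demand = np.array([10.0, 15.0])
capacity = 20.0

# Variables: x11, x12, x21, x22 (flows), y1, y2 (open/closed).
c = np.concatenate([transport.ravel(), fixed])

A_demand = np.array([[1, 0, 1, 0, 0, 0],
                     [0, 1, 0, 1, 0, 0]], dtype=float)
A_capacity = np.array([[1, 1, 0, 0, -capacity, 0],
                       [0, 0, 1, 1, 0, -capacity]], dtype=float)

constraints = [LinearConstraint(A_demand, demand, demand),   # meet demand
               LinearConstraint(A_capacity, -np.inf, 0.0)]   # respect capacity
integrality = np.array([0, 0, 0, 0, 1, 1])                   # y binary
bounds = Bounds(lb=0.0, ub=[np.inf] * 4 + [1.0, 1.0])

res = milp(c=c, constraints=constraints, integrality=integrality, bounds=bounds)
print(res.status, res.x.round(2), round(res.fun, 2))
```

With total demand exceeding a single facility's capacity, the solver opens both facilities and routes each retailer to its cheapest source.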
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can improve performance, relative to the feasible directions algorithm, when handling design optimization problems with a large number of design variables and constraints. The second is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with a single constraint will reduce the cost of the total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving in using the linear method or in using the KS function to replace constraints.
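The KS aggregation itself is simple to state: many constraints g_i(x) <= 0 are replaced by one smooth, conservative envelope. A minimal sketch, with an assumed aggregation parameter rho = 50:

```python
import numpy as np

def ks_aggregate(g, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of constraints g_i(x) <= 0:
    KS(g) = max(g) + (1/rho) * log(sum(exp(rho * (g - max(g))))),
    a smooth, conservative upper bound on max(g).  The shift by max(g)
    avoids overflow in the exponentials."""
    g = np.asarray(g, dtype=float)
    m = g.max()
    return m + np.log(np.sum(np.exp(rho * (g - m)))) / rho

g = np.array([-0.5, -0.1, 0.02])   # three constraint values; one is violated
ks = ks_aggregate(g, rho=50.0)
print(ks)
```

KS(g) always lies between max(g) and max(g) + ln(n)/rho, so driving the single KS constraint to zero conservatively enforces all the originals.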
Yoo, Yun Joo; Sun, Lei; Poirier, Julia G; Paterson, Andrew D; Bull, Shelley B
2017-02-01
By jointly analyzing multiple variants within a gene, instead of one at a time, gene-based multiple regression can improve power, robustness, and interpretation in genetic association analysis. We investigate multiple linear combination (MLC) test statistics for analysis of common variants under realistic trait models with linkage disequilibrium (LD) based on HapMap Asian haplotypes. MLC is a directional test that exploits LD structure in a gene to construct clusters of closely correlated variants recoded such that the majority of pairwise correlations are positive. It combines variant effects within the same cluster linearly, and aggregates cluster-specific effects in a quadratic sum of squares and cross-products, producing a test statistic with reduced degrees of freedom (df) equal to the number of clusters. By simulation studies of 1000 genes from across the genome, we demonstrate that MLC is a well-powered and robust choice among existing methods across a broad range of gene structures. Compared to minimum P-value, variance-component, and principal-component methods, the mean power of MLC is never much lower than that of other methods, and can be higher, particularly with multiple causal variants. Moreover, the variation in gene-specific MLC test size and power across 1000 genes is less than that of other methods, suggesting it is a complementary approach for discovery in genome-wide analysis. The cluster construction of the MLC test statistics helps reveal within-gene LD structure, allowing interpretation of clustered variants as haplotypic effects, while multiple regression helps to distinguish direct and indirect associations. © 2016 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.
Xia, Yin; Cai, Tianxi; Cai, T Tony
2018-01-01
Motivated by applications in genomics, we consider in this paper global and multiple testing for the comparisons of two high-dimensional linear regression models. A procedure for testing the equality of the two regression vectors globally is proposed and shown to be particularly powerful against sparse alternatives. We then introduce a multiple testing procedure for identifying unequal coordinates while controlling the false discovery rate and false discovery proportion. Theoretical justifications are provided to guarantee the validity of the proposed tests and optimality results are established under sparsity assumptions on the regression coefficients. The proposed testing procedures are easy to implement. Numerical properties of the procedures are investigated through simulation and data analysis. The results show that the proposed tests maintain the desired error rates under the null and have good power under the alternative at moderate sample sizes. The procedures are applied to the Framingham Offspring study to investigate the interactions between smoking and cardiovascular related genetic mutations important for an inflammation marker.
Ghadie, Mohamed A; Japkowicz, Nathalie; Perkins, Theodore J
2015-08-15
Stem cell differentiation is largely guided by master transcriptional regulators, but it also depends on the expression of other types of genes, such as cell cycle genes, signaling genes, metabolic genes, trafficking genes, etc. Traditional approaches to understanding gene expression patterns across multiple conditions, such as principal components analysis or K-means clustering, can group cell types based on gene expression, but they do so without knowledge of the differentiation hierarchy. Hierarchical clustering can organize cell types into a tree, but in general this tree is different from the differentiation hierarchy itself. Given the differentiation hierarchy and gene expression data at each node, we construct a weighted Euclidean distance metric such that the minimum spanning tree with respect to that metric is precisely the given differentiation hierarchy. We provide a set of linear constraints that are provably sufficient for the desired construction and a linear programming approach to identify sparse sets of weights, effectively identifying genes that are most relevant for discriminating different parts of the tree. We apply our method to microarray gene expression data describing 38 cell types in the hematopoiesis hierarchy, constructing a weighted Euclidean metric that uses just 175 genes. However, we find that there are many alternative sets of weights that satisfy the linear constraints. Thus, in the style of random-forest training, we also construct metrics based on random subsets of the genes and compare them to the metric of 175 genes. We then report on the selected genes and their biological functions. Our approach offers a new way to identify genes that may have important roles in stem cell differentiation. tperkins@ohri.ca Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
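The core idea, choosing nonnegative gene weights by LP so that a given hierarchy is the minimum spanning tree of the weighted distances, can be sketched on a toy example, assuming SciPy; the expression profiles and margin below are invented:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.sparse.csgraph import minimum_spanning_tree

# Toy expression profiles (2 genes) for a root cell type A and
# two children B and C; the known hierarchy has edges A-B and A-C.
expr = {"A": np.array([0.0, 0.0]),
        "B": np.array([1.0, 0.0]),
        "C": np.array([0.0, 1.0])}

def sq_diff(u, v):
    return (expr[u] - expr[v]) ** 2   # d_w(u, v) = sum_g w_g * (u_g - v_g)^2

# Require each tree edge to be shorter (by a margin of 1) than the
# non-tree pair B-C, and minimise sum(w) as a sparsity surrogate.
margin = 1.0
A_ub = np.vstack([sq_diff("A", "B") - sq_diff("B", "C"),
                  sq_diff("A", "C") - sq_diff("B", "C")])
b_ub = -margin * np.ones(2)
res = linprog(c=np.ones(2), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
w = res.x

# Verify: the MST of the weighted distances recovers the hierarchy.
nodes = ["A", "B", "C"]
D = np.array([[np.dot(w, sq_diff(a, b)) for b in nodes] for a in nodes])
mst = minimum_spanning_tree(D).toarray()
print(w, "\n", mst)
```

In the real setting there are many such edge-versus-non-edge constraints, and (as the abstract notes) many weight vectors satisfy them, which motivates the random-subset ensemble.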
International Nuclear Information System (INIS)
De Donato, O.; Parisi, M.A.
1977-01-01
When loads increase proportionally beyond the elastic limit in the presence of elastic-plastic piecewise-linear constitutive laws, the problem of finding the whole evolution of the plastic strains and displacements of structures was recently shown to be amenable to a parametric linear complementarity problem (PLCP), in which the parameter is the load factor, the matrix is symmetric positive definite or at least semi-definite (for perfect plasticity), and the variables with a direct mechanical meaning are the plastic multipliers. With reference to plane trusses and frames with elastic-plastic linear work-hardening material behaviour, numerical solutions were also obtained fairly efficiently using a recent mathematical programming algorithm (due to R.W. Cottle) which provides the whole deformation history of the structure and, at the same time, rules out local unloadings along the given proportional loading process by means of 'a priori' checks carried out before each pivotal step of the procedure. Hence it becomes possible to use the holonomic (reversible, path-independent) constitutive laws in finite terms and to benefit from all the relevant numerical and computational advantages despite the non-holonomic nature of plastic behaviour. In the present paper the method of solution is re-examined with a view to overcoming an important drawback of the algorithm deriving from the size of the fully populated PLCP matrix when structural problems with a large number of variables are considered, in which case the updating, storing or, generally, handling of the current tableau may become prohibitive. (Auth.)
Linear programming optimization of nuclear energy strategy with sodium-cooled fast reactors
International Nuclear Information System (INIS)
Lee, Je Whan; Jeong, Yong Hoon; Chang, Yoon Il; Chang, Soon Heung
2011-01-01
Nuclear power has become an essential part of electricity generation to meet the continuous growth of electricity demand. A Sodium-cooled Fast Reactor (SFR) was developed to extend uranium resource utilization under a growing nuclear energy scenario while concomitantly providing a nuclear waste management solution. Key questions in this scenario are when to introduce SFRs and how many reactors should be introduced. In this study, a methodology using Linear Programming is employed in order to quantify an optimized growth pattern of a nuclear energy system comprising light water reactors and SFRs. The optimization involves tradeoffs between SFR capital cost premiums and the total system U3O8 price premiums. Optimum nuclear growth patterns for several scenarios are presented, as well as sensitivity analyses of important input parameters.
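A one-period toy version of such an LP tradeoff (LWR versus SFR capacity under a uranium-supply limit), assuming SciPy; all figures below are illustrative, not the study's inputs:

```python
from scipy.optimize import linprog

# Hypothetical one-period sketch: choose LWR and SFR capacity (GWe)
# to meet demand while a limited natural-uranium supply constrains LWRs.
demand = 100.0          # GWe required
u_per_gwe = 150.0       # tU/yr consumed per GWe of LWR (assumed)
u_supply = 9000.0       # tU/yr available (assumed)

cost_lwr = 1.0          # normalised capital + fuel cost per GWe
cost_sfr = 1.3          # SFR carries a 30% capital-cost premium (assumed)

# Variables: [x_lwr, x_sfr]; minimise total cost.
res = linprog(c=[cost_lwr, cost_sfr],
              A_ub=[[-1.0, -1.0],          # meet demand
                    [u_per_gwe, 0.0]],     # uranium supply limit
              b_ub=[-demand, u_supply],
              bounds=(0, None))
x_lwr, x_sfr = res.x
print(round(x_lwr, 1), round(x_sfr, 1), round(res.fun, 2))
```

The LP builds LWRs up to the uranium limit and fills the rest of the demand with SFRs, mirroring (in miniature) the capital-cost-versus-U3O8-price tradeoff of the study.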
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
Energy Technology Data Exchange (ETDEWEB)
Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan); Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); CREST, JST, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012 (Japan); Shiro, Masanori [Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); Mathematical Neuroinformatics Group, Advanced Industrial Science and Technology, Tsukuba, Ibaraki 305-8568 (Japan); Takahashi, Nozomu; Mas, Paloma [Center for Research in Agricultural Genomics (CRAG), Consorci CSIC-IRTA-UAB-UB, Barcelona 08193 (Spain)
2015-01-15
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming, allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve the typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
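The barycentric-coordinate step can be written as a small LP: find convex-combination weights over reference points in phase space that reproduce a query point with minimal L1 error. A sketch assuming SciPy, with synthetic data:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 3))       # 6 reference points in 3-D phase space
q = X[:3].mean(axis=0)            # query point (inside the hull by design)

n, d = X.shape
# Variables: [w (n), e_plus (d), e_minus (d)]; minimise L1 approximation error.
c = np.concatenate([np.zeros(n), np.ones(2 * d)])
A_eq = np.vstack([np.hstack([X.T, np.eye(d), -np.eye(d)]),   # X^T w + e+ - e- = q
                  np.hstack([np.ones(n), np.zeros(2 * d)])])  # sum(w) = 1
b_eq = np.concatenate([q, [1.0]])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
w = res.x[:n]
print(res.status, round(res.fun, 6), w.round(3))
```

Because the explicit error variables e+ and e- are part of the LP, the formulation remains feasible even when the query point lies outside the convex hull of the references; here the query is inside the hull, so the optimal error is zero.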
Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs
Energy Technology Data Exchange (ETDEWEB)
Infanger, G.
1993-11-01
The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages, and problems easily get out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of both expected future costs and the gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, using importance path sampling for the upper bound estimation. Initial numerical results are promising.
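A minimal two-stage stochastic LP solved in extensive form over plain Monte Carlo samples (not the Benders decomposition or importance sampling that are the paper's actual contributions) illustrates why scenario counts blow up; the newsvendor-style data are invented:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N = 200                                  # sampled demand scenarios
demand = rng.uniform(50.0, 150.0, N)
cost, price = 1.0, 2.0

# Variables: [x, s_1..s_N]; minimise cost*x - (price/N) * sum(s_i),
# where s_i = units sold in scenario i (s_i <= x, s_i <= d_i).
c = np.concatenate([[cost], -price / N * np.ones(N)])
A_ub = np.hstack([-np.ones((N, 1)), np.eye(N)])   # s_i - x <= 0
b_ub = np.zeros(N)
bounds = [(0, None)] + [(0, d) for d in demand]   # 0 <= s_i <= d_i
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x_opt = res.x[0]
print(round(x_opt, 1), round(-res.fun, 1))
```

Even this toy has one recourse variable per scenario; with multiple stages the scenario tree multiplies out, which is exactly what decomposition plus sampling is designed to tame.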
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
International Nuclear Information System (INIS)
Hirata, Yoshito; Aihara, Kazuyuki; Suzuki, Hideyuki; Shiro, Masanori; Takahashi, Nozomu; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data
Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach
Energy Technology Data Exchange (ETDEWEB)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Bordeaux INP, IMB, UMR CNRS 5251 (France); Piunovskiy, A. B., E-mail: piunov@liv.ac.uk [University of Liverpool, Department of Mathematical Sciences (United Kingdom)
2016-08-15
In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.
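For the discounted, unconstrained special case, the classical LP over value functions (a standard companion to the LP over occupation measures discussed here) can be sketched with an invented two-state MDP, assuming SciPy:

```python
import numpy as np
from scipy.optimize import linprog

# Tiny discounted MDP: 2 states, 2 actions, gamma = 0.9 (all data assumed).
gamma = 0.9
# P[a][s, s'] transition probabilities; r[a][s] rewards.
P = [np.array([[0.8, 0.2], [0.3, 0.7]]),
     np.array([[0.1, 0.9], [0.6, 0.4]])]
r = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]

# Primal LP: minimise sum(v) s.t. v(s) >= r(s, a) + gamma * (P_a v)(s) for all a,
# rewritten as (gamma * P_a - I) v <= -r_a.
A_ub, b_ub = [], []
for a in range(2):
    A_ub.append(gamma * P[a] - np.eye(2))
    b_ub.append(-r[a])
res = linprog(c=np.ones(2), A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
              bounds=(None, None))
v = res.x
print(v.round(3))
```

The LP optimum is the optimal value function: at the solution, each state attains equality in the Bellman constraint for at least one action. The constrained, impulsive-control setting of the paper generalizes this by optimizing directly over occupation measures.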
A Mixed Integer Linear Programming Approach to Electrical Stimulation Optimization Problems.
Abouelseoud, Gehan; Abouelseoud, Yasmine; Shoukry, Amin; Ismail, Nour; Mekky, Jaidaa
2018-02-01
Electrical stimulation optimization is a challenging problem. Even when a single region is targeted for excitation, the problem remains a constrained multi-objective optimization problem. The constrained nature of the problem results from safety concerns while its multi-objectives originate from the requirement that non-targeted regions should remain unaffected. In this paper, we propose a mixed integer linear programming formulation that can successfully address the challenges facing this problem. Moreover, the proposed framework can conclusively check the feasibility of the stimulation goals. This helps researchers to avoid wasting time trying to achieve goals that are impossible under a chosen stimulation setup. The superiority of the proposed framework over alternative methods is demonstrated through simulation examples.
Xia, Bisheng; Qian, Xin; Yao, Hong
2017-11-01
Although the risk-explicit interval linear programming (REILP) model solves the problem of having interval solutions, it suffers from an equity problem that can lead to unbalanced allocation between different decision variables. Therefore, an improved REILP model is proposed. This model adds an equity objective function and three constraint conditions to overcome the equity problem. In this case, pollution reduction is in proportion to pollutant load, which supports balanced development between different regional economies. The model is used to solve a pollution load allocation problem in a small transboundary watershed. Compared with the original REILP model result, our model achieves equity between the upstream and downstream pollutant loads; it also avoids concentrating the greatest pollution reduction on the sources nearest to the control section. The model provides a better solution to the problem of pollution load allocation than previous versions.
A minimax technique for time-domain design of preset digital equalizers using linear programming
Vaughn, G. L.; Houts, R. C.
1975-01-01
A linear programming technique is presented for the design of a preset finite-impulse response (FIR) digital filter to equalize the intersymbol interference (ISI) present in a baseband channel with known impulse response. A minimax technique is used which minimizes the maximum absolute error between the actual received waveform and a specified raised-cosine waveform. Transversal and frequency-sampling FIR digital filters are compared as to the accuracy of the approximation, the resultant ISI and the transmitted energy required. The transversal designs typically have slightly better waveform accuracy for a given distortion; however, the frequency-sampling equalizer uses fewer multipliers and requires less transmitted energy. A restricted transversal design is shown to use the least number of multipliers at the cost of a significant increase in energy and loss of waveform accuracy at the receiver.
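The minimax criterion reduces to a standard LP: minimize a bound t with every absolute waveform error constrained by t. A sketch assuming SciPy, with a hypothetical channel impulse response in place of the paper's baseband channel:

```python
import numpy as np
from scipy.optimize import linprog

# Design a 5-tap transversal FIR equalizer h for a known channel impulse
# response, minimising the maximum absolute error to a desired response.
channel = np.array([1.0, 0.5, 0.2])    # assumed channel impulse response
n_taps = 5
n_out = len(channel) + n_taps - 1

# Convolution matrix: (A @ h) is the equalised impulse response.
A = np.zeros((n_out, n_taps))
for j in range(n_taps):
    A[j:j + len(channel), j] = channel
desired = np.zeros(n_out)
desired[2] = 1.0                       # target: pure delay of 2 samples

# Minimax via LP: minimise t subject to -t <= (A h - desired)_i <= t.
c = np.concatenate([np.zeros(n_taps), [1.0]])
ones = np.ones((n_out, 1))
A_ub = np.vstack([np.hstack([A, -ones]), np.hstack([-A, -ones])])
b_ub = np.concatenate([desired, -desired])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
h, t = res.x[:n_taps], res.x[-1]
print(round(t, 4), h.round(3))
```

The optimal t is exactly the worst-case residual ISI of the equalized response, which is the quantity the paper's designs trade off against transmitted energy and multiplier count.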
Zenis, F. M.; Supian, S.; Lesmana, E.
2018-03-01
Land is one of the most important assets for farmers in Sumedang Regency; therefore, agricultural land should be used optimally. This study aims to obtain the optimal land use composition that yields maximum income. The optimization method used in this research is a linear programming model. Based on the results of the analysis, the land use composition is 135.314 hectares for rice, 11.798 hectares for corn, 2.290 hectares for soy, and 2.818 hectares for peanuts, with a farmer income value of IDR 2.682.020.000.000,- per year. The results of this analysis can be used as a consideration in decision making about cropping patterns by farmers.
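A minimal sketch of such a land-allocation LP, assuming SciPy; the per-hectare incomes, total land, and minimum areas are invented and do not reproduce the study's data:

```python
from scipy.optimize import linprog

# Hypothetical sketch: allocate a fixed land area among four crops to
# maximise income (IDR millions per hectare per year, assumed figures).
income = [18.0, 9.0, 7.0, 8.0]         # rice, corn, soy, peanuts
total_land = 152.22                    # hectares available (assumed)
min_area = [100.0, 5.0, 2.0, 2.0]      # agronomic minimums (assumed)

# Maximise income = minimise its negative; one land-balance constraint.
res = linprog(c=[-v for v in income],
              A_ub=[[1.0, 1.0, 1.0, 1.0]],
              b_ub=[total_land],
              bounds=list(zip(min_area, [None] * 4)))
areas = res.x
print(areas.round(2), round(-res.fun, 1))
```

With a single binding land constraint, the LP gives every hectare above the minimums to the highest-income crop; real models add water, labour, and rotation constraints that spread the allocation.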
Directory of Open Access Journals (Sweden)
Tumpal Sihombing
2013-01-01
Full Text Available The world is entering an era of recession, when the trend is bearish and the market is not favorable. The capital markets in every major country experienced a great amount of loss, and people suffered in their investments. The Jakarta Composite Index (JCI) has shown a great downturn over the past year, reflecting this bearish trend. Therefore, rational investors should consider restructuring their portfolios to set a bigger proportion in bonds and cash instead of stocks. Investors can apply modern portfolio theory by Harry Markowitz to find the optimal asset allocation for their portfolio. Higher return is always associated with higher risk. This study shows investors how to find the lowest-risk portfolio investment by providing several structures of portfolio weighting. In this way, investors can compare and make decisions based on risk-return considerations and opportunity cost as well. Keywords: Modern portfolio theory, Monte Carlo, linear programming
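A simple Monte Carlo pass over random portfolio weightings, in the spirit described, can be sketched as follows; the return and covariance figures for stocks, bonds, and cash are assumed, not estimated from JCI data:

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical annual returns and covariance for [stocks, bonds, cash].
mu = np.array([0.10, 0.05, 0.02])
cov = np.array([[0.040, 0.004, 0.000],
                [0.004, 0.010, 0.000],
                [0.000, 0.000, 0.0001]])

# Monte Carlo over random long-only weight vectors on the simplex.
n = 10000
w = rng.dirichlet(np.ones(3), size=n)
rets = w @ mu
risks = np.sqrt(np.einsum("ij,jk,ik->i", w, cov, w))   # portfolio std. dev.

i_min = risks.argmin()                 # lowest-risk structure found
print(w[i_min].round(3), round(rets[i_min], 4), round(risks[i_min], 4))
```

The lowest-risk weighting found this way is dominated by cash, matching the bearish-market recommendation above; comparing several sampled weightings side by side is exactly the risk-return comparison the study proposes.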
Linear programming: an alternative approach for developing formulations for emergency food products.
Sheibani, Ershad; Dabbagh Moghaddam, Arasb; Sharifan, Anousheh; Afshari, Zahra
2018-03-01
To minimize the mortality rates of individuals affected by disasters, providing high-quality food relief during the initial stages of an emergency is crucial. The goal of this study was to develop a formulation for a high-energy, nutrient-dense prototype using a linear programming (LP) model as a novel method for developing food product formulations. The model consisted of the objective function and the decision variables, which were the formulation cost and the weights of the selected commodities, respectively. The LP constraints were the Institute of Medicine and World Health Organization specifications for the nutrient content of the product. Other constraints related to the product's sensory properties were also introduced into the model. Nonlinear constraints for the energy ratios of nutrients were linearized to allow their use in the LP. Three focus group studies were conducted to evaluate the palatability and other aspects of the optimized formulation. New constraints were introduced into the LP model based on the focus group evaluations to improve the formulation. LP is an appropriate tool for designing food product formulations to meet a set of nutritional requirements, and an excellent alternative to the traditional 'trial and error' method. © 2017 Society of Chemical Industry.
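A miniature version of such an LP formulation problem, assuming SciPy; the commodities, nutrient contents, and targets below are invented placeholders, not the study's IOM/WHO specifications:

```python
from scipy.optimize import linprog

# Hypothetical mini diet problem: grams of three commodities to meet
# energy and protein targets at minimum cost (all figures assumed).
#                  flour   oil    legume
cost_per_g    = [0.002, 0.004, 0.003]
energy_per_g  = [3.6,   9.0,   3.4]    # kcal per g
protein_per_g = [0.10,  0.00,  0.22]   # g protein per g

# Constraints: energy >= 500 kcal, protein >= 15 g per serving.
res = linprog(c=cost_per_g,
              A_ub=[[-e for e in energy_per_g],
                    [-p for p in protein_per_g]],
              b_ub=[-500.0, -15.0],
              bounds=(0, None))
grams = res.x
print(grams.round(1), round(res.fun, 3))
```

The full model in the study layers many more nutrient rows, linearized energy-ratio constraints, and sensory bounds onto exactly this structure.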
Baran, Richard; Northen, Trent R
2013-10-15
Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe an automated procedure for the robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints and both the spectra interpretation and chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses and positive and negative polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M + H](+) or [M - H](-)) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.
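The chemical-formula-calculation step can be illustrated as a small mixed-integer program, assuming SciPy's `milp`; this toy recovers atom counts from a single neutral monoisotopic mass and is far simpler than RAMSI's joint spectra interpretation:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Sketch of the formula-calculation step: find integer atom counts
# [C, H, N, O] whose monoisotopic mass matches a target within a
# tolerance, preferring the formula with the fewest atoms.
atom_mass = np.array([12.0, 1.007825, 14.003074, 15.994915])
target, tol = 75.0320, 0.002          # assumed neutral mass and tolerance

res = milp(c=np.ones(4),              # minimise total atom count
           constraints=LinearConstraint(atom_mass.reshape(1, -1),
                                        target - tol, target + tol),
           integrality=np.ones(4),
           bounds=Bounds(0, 20))
counts = res.x.round().astype(int)
mass = float(atom_mass @ counts)
print(counts, round(mass, 4))
```

RAMSI's advance is to solve many such integer constraints jointly across related ions (adducts, fragments, multimers) in a single optimization, rather than one mass at a time.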
Directory of Open Access Journals (Sweden)
Maoyuan Feng
2014-01-01
Full Text Available This study proposes a mixed integer linear programming (MILP) model to optimize spillway scheduling for reservoir flood control. Unlike the conventional reservoir operation model, the proposed MILP model specifies the spillway status (including the number of spillways to be open and the degree to which each is opened) instead of the reservoir release, since the release is actually controlled by using the spillways. A piecewise linear approximation is used to formulate the relationship between reservoir storage and water release for a spillway, which should be open/closed with a status depicted by a binary variable. The control order and symmetry rules of spillways are described and incorporated into the constraints to meet practical demands. Thus, a MILP model is set up to minimize the maximum reservoir storage. The General Algebraic Modeling System (GAMS) and IBM ILOG CPLEX Optimization Studio (CPLEX) software are used to find the optimal solution for the proposed MILP model. China's Three Gorges Reservoir, whose spillways are of five types with a total number of 80, is selected as the case study. It is shown that the proposed model decreases the flood risk compared with conventional operation and makes the operation more practical by specifying the spillway status directly.
150-MW S-Band klystron program at the Stanford Linear Accelerator Center
Energy Technology Data Exchange (ETDEWEB)
Sprehn, D.; Caryotakis, G.; Phillips, R.M. [Stanford Linear Accelerator Center, Stanford Univ., Stanford, CA (United States)
1997-04-01
Two S-Band klystrons operating at 150 MW have been designed, fabricated and tested at the Stanford Linear Accelerator Center (SLAC) during the past two years for use in an experimental accelerator at Deutsches Elektronen-Synchrotron (DESY) in Hamburg, Germany. Both klystrons operate at the design power, 60 Hz repetition rate and 3 μs pulsewidth, with an efficiency > 40%, and agreement between the experimental results and simulations is excellent. The 535 kV, 700 A electron gun was tested by constructing a solenoidally focused beam-stick, which identified a source of oscillation that was subsequently engineered out of the klystron guns. The design of the beam-stick and the two klystrons is discussed, along with the observation and suppression of spurious oscillations. Differences in the design and the resulting performance of the klystrons are emphasized. (author)
Directory of Open Access Journals (Sweden)
Guan Yu
Full Text Available Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the existence of a lot of missing data in many subjects. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for the incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all different classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve the similar estimated mean difference between the two classes (under classification) for those shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, instead of constraining different classification tasks to choose a common feature subset for those shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects. We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and
PAPR reduction in FBMC using an ACE-based linear programming optimization
van der Neut, Nuan; Maharaj, Bodhaswar TJ; de Lange, Frederick; González, Gustavo J.; Gregorio, Fernando; Cousseau, Juan
2014-12-01
This paper presents four novel techniques for peak-to-average power ratio (PAPR) reduction in filter bank multicarrier (FBMC) modulation systems. As its main contribution, the approach extends current PAPR-reduction active constellation extension (ACE) methods, as used in orthogonal frequency division multiplexing (OFDM), to an FBMC implementation. The four techniques introduced can be split into two groups: linear programming optimization ACE-based techniques and smart gradient-project (SGP) ACE techniques. The linear programming (LP)-based techniques compensate for the symbol overlaps by utilizing a frame-based approach and provide a theoretical upper bound on achievable performance for the overlapping ACE techniques. The overlapping ACE techniques, on the other hand, can handle symbol-by-symbol processing. Furthermore, as a result of FBMC properties, the proposed techniques do not require side-information transmission. The PAPR performance of the techniques is shown to match, or in some cases improve on, current PAPR techniques for FBMC. Initial analysis of the computational complexity of the SGP techniques indicates that the complexity issues with PAPR reduction in FBMC implementations can be addressed. The out-of-band interference introduced by the techniques is investigated, and it is shown that the interference can be compensated for whilst still maintaining decent PAPR performance. Additional results are provided by means of a study of the PAPR reduction of the proposed techniques at a fixed clipping probability. The bit error rate (BER) degradation is investigated to ensure that the trade-off in terms of BER degradation is not too severe. As illustrated by exhaustive simulations, the SGP ACE-based techniques proposed are ideal candidates for practical implementation in systems employing the low-complexity polyphase implementation of FBMC modulators. The methods are shown to offer significant PAPR reduction and increase the feasibility of FBMC as
Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang
2014-01-01
Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the large amount of missing data across subjects: in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, for example, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all the classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve similar estimated mean differences between the two classes (under classification) for the shared features. Unlike the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, which constrains different classification tasks to choose a common feature subset for the shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects. We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and PET images
Parlesak, Alexandr; Tetens, Inge; Dejgård Jensen, Jørgen; Smed, Sinne; Gabrijelčič Blenkuš, Mojca; Rayner, Mike; Darmon, Nicole; Robertson, Aileen
2016-01-01
Food-Based Dietary Guidelines (FBDGs) are developed to promote healthier eating patterns, but increasing food prices may make healthy eating less affordable. The aim of this study was to design a range of cost-minimized, nutritionally adequate, health-promoting food baskets (FBs) that help prevent both micronutrient inadequacy and diet-related non-communicable diseases at the lowest cost. Average prices for 312 foods were collected within the Greater Copenhagen area. The cost and nutrient content of five different cost-minimized FBs for a family of four were calculated per day using linear programming. The FBs were defined using five different constraints: cultural acceptability (CA), or dietary guidelines (DG), or nutrient recommendations (N), or cultural acceptability and nutrient recommendations (CAN), or dietary guidelines and nutrient recommendations (DGN). The variety and number of foods in each of the resulting five baskets were increased by limiting the relative share of individual foods. The one-day version of N contained only 12 foods at the minimum cost of DKK 27 (€3.6). The CA, DG, and DGN baskets cost about twice this, and the CAN basket cost ~DKK 81 (€10.8). The baskets with the greater variety of foods contained from 70 (CAN) to 134 (DGN) foods and cost between DKK 60 (€8.1, N) and DKK 125 (€16.8, DGN). Ensuring that the food baskets cover both dietary guidelines and nutrient recommendations doubled the cost, while adding cultural acceptability (CAN) tripled it. The use of linear programming facilitates the generation of low-cost food baskets that are nutritionally adequate, health promoting, and culturally acceptable.
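The cost minimization described here is a variant of the classic diet problem. A minimal sketch with made-up prices and nutrient values (not the study's 312-food Copenhagen data), using `scipy.optimize.linprog`; the per-food cap plays the role of the study's limit on the relative share of individual foods:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical mini diet problem; all numbers are invented.
# Foods (columns): bread, milk, lentils; quantities in units of 100 g.
cost = np.array([2.0, 5.0, 4.0])          # DKK per 100 g
nutrients = np.array([[250, 60, 350],      # energy, kcal per 100 g
                      [8, 3, 25]])         # protein, g per 100 g
floors = np.array([2000, 60])              # daily requirements

res = linprog(
    c=cost,
    A_ub=-nutrients, b_ub=-floors,         # nutrient totals >= floors
    bounds=[(0, 5)] * 3,                   # cap each food at 500 g/day
)
print(f"minimum cost: DKK {res.fun:.2f}, quantities: {res.x.round(2)}")
```

Adding constraint blocks for cultural acceptability or dietary guidelines would follow the same pattern, one inequality row per rule.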
International Nuclear Information System (INIS)
Shaban Boloukat, Mohammad Hadi; Akbari Foroud, Asghar
2016-01-01
This paper presents a stochastic approach for long-term optimal resource expansion planning of a grid-connected microgrid (MG) containing different technologies such as intermittent renewable energy resources, energy storage systems and thermal resources. Maximizing profit and reliability, along with minimizing investment and operation costs, are the major objectives considered in this model. The impacts of intermittency and uncertainty in renewable energy resources are also investigated. Interval linear programming (ILP) is applied for modelling the inherent stochastic nature of the renewable energy resources; ILP presents some advantages in the modelling of uncertainties in MG planning. The problem is formulated as a mixed-integer linear program. Benders decomposition (BD) has previously been demonstrated to be an effective tool for solving such problems; BD divides the original problem into a master (investment) problem and operation and reliability subproblems. In this paper a multiperiod MG planning is presented, considering lifetime, the maximum penetration limit of each technology, interest rate, capital recovery factor and investment fund. Real-time energy exchange with the utility is covered, with consideration of variable tariffs at different load blocks. The presented approach can help MG planners to adopt the best decision under various uncertainty levels based on their budgetary policies. - Highlights: • Considering the uncertain nature of renewable resources by applying ILP. • Considering the effect of the intermittency of renewables in MG planning. • Multiobjective MG planning problem which covers cost, profit and reliability. • Multiperiod approach for MG planning considering lifetime and MPL of technologies. • Presenting real-time energy exchange with the utility considering variable tariffs.
Energy Technology Data Exchange (ETDEWEB)
Jana, C. [Indian Inst. of Social Welfare and Business Management, Kolkata (India); Chattopadhyay, R.N. [Indian Inst. of Technology, Kharagpur (India). Rural Development Centre
2004-09-01
Creating provisions for domestic lighting is important for rural development. Its significance in the rural economy is unquestionable, since activities like literacy, education and the manufacture of craft items and other cottage products are largely dependent on domestic lighting facilities for their progress and prosperity. Thus, in rural energy planning, domestic lighting remains a key sector for the allocation of investments. For rational allocation, decision makers need alternative strategies for identifying an adequate and proper investment structure corresponding to appropriate sources and devices. The present study aims at designing a model of energy utilisation by developing a decision support frame for an optimised solution to the problem, taking into consideration four sources and six devices suitable for the study area, namely Narayangarh Block of Midnapore District in India. Since the data available from rural and unorganised sectors are often ill-defined and subjective in nature, many coefficients are fuzzy numbers, and hence several constraints appear as fuzzy expressions. In this study, the energy allocation model is initiated with three separate objectives for optimisation, namely minimising the total cost, minimising the use of non-local sources of energy and maximising the overall efficiency of the system. Since each of these objective-based solutions has relevance to the needs of the society and economy, it is necessary to build a model that makes a compromise among the three individual solutions. This multi-objective fuzzy linear programming (MOFLP) model, solved in a compromising decision support frame, appears to be a more rational alternative than a single-objective linear programming model in rural energy planning. (author)
Development and tests of fast 1-MA linear transformer driver stages
Directory of Open Access Journals (Sweden)
A. A. Kim
2009-05-01
In this article we present the design and test results of the most powerful fast linear transformer driver (LTD) stage developed to date. This 1-MA LTD stage consists of 40 parallel RLC (resistor R, inductor L, and capacitor C) circuits called “bricks” that are triggered simultaneously; it is able to deliver a ∼1-MA current pulse with a rise time of ∼100 ns into a ∼0.1-Ohm matched load. The electrical behavior of the stage can be predicted using a simple RLC circuit model, thus simplifying the design of various LTD-based accelerators. Five 1-MA LTD stages assembled in series into a module have been successfully tested with both resistive and vacuum electron-beam diode loads.
Q-Matrix Optimization Based on the Linear Logistic Test Model.
Ma, Lin; Green, Kelly E
This study explored optimization of item-attribute matrices with the linear logistic test model (Fischer, 1973), with optimal models explaining more variance in item difficulty due to identified item attributes. Data were 8th-grade mathematics test item responses from two TIMSS 2007 booklets. The study investigated three categories of attributes (content, cognitive process, and comprehensive cognitive process) at two grain levels (larger, smaller) and also compared results with random attribute matrices. The proposed attributes accounted for most of the variance in item difficulty for the two assessment booklets (81% and 65%). The variance explained by the content attributes alone was small (13% to 31%); the comprehensive cognitive process attributes explained much more variance than either the content or the cognitive process attributes. The variance explained at the two grain levels was similar. However, the attributes did not predict the item difficulties of the two assessment booklets equally well.
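The variance-explained criterion at the heart of the LLTM approach can be illustrated with a plain linear regression of item difficulties on a binary Q-matrix. The data below are synthetic, not TIMSS responses, and the attribute effects are invented:

```python
import numpy as np

# Synthetic illustration: item difficulty modeled as a linear combination
# of Q-matrix attribute effects, scored by explained variance (R^2).
rng = np.random.default_rng(3)
n_items, n_attr = 30, 4
Q = rng.integers(0, 2, size=(n_items, n_attr)).astype(float)
eta_true = np.array([0.8, -0.3, 0.5, 1.1])      # made-up attribute effects
b = Q @ eta_true + 0.1 * rng.standard_normal(n_items)  # item difficulties

X = np.column_stack([np.ones(n_items), Q])      # intercept + attributes
coef, *_ = np.linalg.lstsq(X, b, rcond=None)
resid = b - X @ coef
r2 = 1 - resid @ resid / ((b - b.mean()) @ (b - b.mean()))
print(f"R^2 = {r2:.3f}")
```

Comparing R^2 across competing Q-matrices (content vs. cognitive-process attributes, coarse vs. fine grain) is the optimization idea the abstract describes; the full LLTM additionally embeds this regression inside a Rasch-type item response model.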
Test of the linear-no threshold theory of radiation carcinogenesis
International Nuclear Information System (INIS)
Cohen, B.L.
1994-01-01
We recently completed a compilation of radon measurements from available sources which gives the average radon level in homes for 1730 counties, well over half of all U.S. counties, comprising about 90% of the total U.S. population. Epidemiologists normally study the relationship between mortality risks to individuals, m, and their personal exposure, r, whereas an ecological study like ours deals with the relationship between the average risk to groups of individuals (populations of counties) and their average exposure. It is well known to epidemiologists that, in general, the average dose does not determine the average risk, and to assume otherwise is called 'the ecological fallacy'. However, it is easy to show that, in testing a linear-no threshold theory, 'the ecological fallacy' does not apply: in that theory, the average dose does determine the average risk. This is widely recognized from the fact that 'person-rem' determines the number of deaths; dividing person-rem by population gives the average dose, and dividing the number of deaths by population gives the mortality rate. Because of the 'ecological fallacy', epidemiology textbooks often state that an ecological study cannot determine a causal relationship between risk and exposure. That may be true, but it is irrelevant here, because the purpose of our study is not to determine a causal relationship; it is rather to test the linear-no threshold dependence of m on r. (author)
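The key step in the argument, that linearity makes the group mean dose determine the group mean risk, can be checked numerically. The dose distribution and risk coefficients below are arbitrary:

```python
import numpy as np

# Under a linear model m = a + b*r, averaging commutes with the model,
# so the county's mean dose determines its mean risk exactly.
a, b = 1e-4, 2e-3                           # hypothetical linear risk model
rng = np.random.default_rng(4)
doses = rng.gamma(2.0, 0.5, size=1000)      # invented individual exposures

individual_risks = a + b * doses
avg_of_risks = individual_risks.mean()
risk_of_avg = a + b * doses.mean()
print(avg_of_risks, risk_of_avg)            # identical under linearity
```

For any nonlinear dose-response (e.g. m = a + b*r^2), the two quantities differ, which is exactly the "ecological fallacy" that linearity sidesteps.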
Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza
2017-09-27
Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives in each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter, in comparison to other computationally demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study, consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator-Activated Receptor Gamma (PPARG) gene associated with diabetes.
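The BLUP-ridge equivalence the authors exploit can be sketched for the Gaussian special case: treating the SNP coefficients as random effects beta ~ N(0, s2b*I) with residual variance s2e gives BLUP(beta) = (X'X + lam*I)^{-1} X'y, i.e. ridge regression with penalty lam = s2e/s2b. The variance components and data below are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 10
X = rng.standard_normal((n, p))             # synthetic SNP-like design
beta_true = rng.standard_normal(p) * 0.5
y = X @ beta_true + rng.standard_normal(n)

s2e, s2b = 1.0, 0.25                        # assumed variance components
lam = s2e / s2b                             # implied ridge penalty
ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
print(ridge.round(3))
```

The same coefficients come out of the mixed-model form s2b * X'(s2b*XX' + s2e*I)^{-1} y, which is what makes estimating the penalty via variance components cheaper than cross-validation.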
Using Virtual ATE Model to Migrate Test Programs
Institute of Scientific and Technical Information of China (English)
王晓明; 杨乔林
1995-01-01
Because of the high development costs of IC (Integrated Circuit) test programs, recycling existing test programs from one kind of ATE (Automatic Test Equipment) to another, or generating them directly from CAD simulation modules for ATE, is more and more valuable. In this paper, a new approach to migrating test programs is presented. A virtual ATE model based on the object-oriented paradigm is developed; it runs Test C++ (an intermediate test control language) programs and TeIF (Test Intermediate Format, an intermediate pattern format), migrates test programs among three kinds of ATE (Ando DIC8032, Schlumberger S15 and GenRad 1732), and generates test patterns from two kinds of CAD (Daisy and Panda) automatically.
DCS-Neural-Network Program for Aircraft Control and Testing
Jorgensen, Charles C.
2006-01-01
A computer program implements a dynamic-cell-structure (DCS) artificial neural network that can perform such tasks as learning selected aerodynamic characteristics of an airplane from wind-tunnel test data and computing real-time stability and control derivatives of the airplane for use in feedback linearized control. A DCS neural network is one of several types of neural networks that can incorporate additional nodes in order to rapidly learn increasingly complex relationships between inputs and outputs. In the DCS neural network implemented by the present program, the insertion of nodes is based on accumulated error. A competitive Hebbian learning rule (a supervised-learning rule in which connection weights are adjusted to minimize differences between actual and desired outputs for training examples) is used. A Kohonen-style learning rule (derived from a relatively simple training algorithm that implements a Delaunay triangulation layout of neurons) is used to adjust node positions during training. Neighborhood topology determines which nodes are used to estimate new values. The network learns, starting with two nodes, and adds new nodes sequentially in locations chosen to maximize reductions in global error. At any given time during learning, the error becomes homogeneously distributed over all nodes.
Reliability and Usefulness of Linear Sprint Testing in Adolescent Rugby Union and League Players.
Darrall-Jones, Joshua D; Jones, Ben; Roe, Gregory; Till, Kevin
2016-05-01
The purpose of this study was to evaluate (a) whether there were differences in sprint times at 5, 10, 20, 30, and 40 m between rugby union and rugby league players, and (b) the reliability and usefulness of linear sprint testing in adolescent rugby players. Data were collected on 28 rugby union and league academy players over 2 testing sessions, with 3 days' rest between sessions. Rugby league players were faster at 5 m than rugby union players, with further differences unclear. Sprint times at 10, 20, 30, and 40 m were all reliable (coefficient of variation [CV] = 3.1, 1.8, 2.0, and 1.3%) but greater than the smallest worthwhile change (SWC [0.2 × between-subject SD]), rating the test as marginal for usefulness. Although the test was incapable of detecting the SWC, we recommend that practitioners and researchers use Hopkins' proposed method, whereby the change score of the individual at each split (± typical error [TE], expressed as a CV) is plotted against the SWC, and visual inspection of whether the TE crosses into the SWC identifies whether a change is both real (greater than the noise of the test, i.e., >TE) and of practical significance (>SWC). Researchers and practitioners can use the TE and SWC from this study to assess changes in the performance of adolescent rugby players when using single-beam timing gates.
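The reliability bookkeeping described above, typical error from test-retest differences and SWC as 0.2 × between-subject SD, is straightforward to sketch. The sprint times below are invented, not the study's data:

```python
import numpy as np

# Invented 10 m sprint times (s) for 8 players over two sessions.
test1 = np.array([1.95, 2.05, 1.88, 2.10, 1.99, 2.02, 1.92, 2.07])
test2 = np.array([1.97, 2.02, 1.90, 2.12, 1.97, 2.04, 1.94, 2.05])

diff = test2 - test1
te = diff.std(ddof=1) / np.sqrt(2)     # typical error of measurement
swc = 0.2 * test1.std(ddof=1)          # smallest worthwhile change
marginal = te > swc                    # "marginal" usefulness, as in the study
print(f"TE = {te:.4f} s, SWC = {swc:.4f} s, marginal: {marginal}")
```

When TE exceeds the SWC, as the study reports, a measured change must be larger than the TE band before it can be called real, which is the visual comparison Hopkins' method formalizes.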
Modal Identification of A Tested Steel Frame using Linear ARX Model Structure
Directory of Open Access Journals (Sweden)
Yavuz Kaya
2009-07-01
This study contains the identification of the modal dynamic properties of a 3-story large-scale steel test frame structure through shaking table measurements. A shaking table test is carried out to estimate the modal properties of the test frame, such as natural frequencies, damping ratios and mode shapes. Among many different model structures, the ARX (AutoRegressive with eXogenous input) model structure is used for modal identification of the frame structure system. The unknown parameters of the ARX model structure are estimated by the least-squares method, minimizing the AIC criterion with the help of a program coded in the advanced computing software MATLAB®. The adopted model structure is then tested in the time domain to verify the validity of the model with the selected model parameters. The modal characteristics of the test frame and the story stiffness are then estimated using the white-noise shakings. An attempt is made to determine the change of the modal characteristics and the story stiffness of the test frame as a function of the velocity that the test frame structure experienced during the shaking schedule, and also during the input shaking of El Centro 1940 NS. Results show that there is an increase in damping ratio and a decrease in both story stiffness and natural frequency for all modes when damage forms at the cementitious device and in the test frame structure itself during the shaking schedule.
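The core ARX identification step, fitting autoregressive and exogenous coefficients by least squares, can be sketched on a simulated second-order system. The model orders and parameters here are assumed for illustration, not taken from the test frame:

```python
import numpy as np

# ARX(2,1) sketch:  y[t] = a1*y[t-1] + a2*y[t-2] + b1*u[t-1] + e[t]
rng = np.random.default_rng(2)
a1, a2, b1 = 1.5, -0.7, 0.5            # made-up stable "true" parameters
u = rng.standard_normal(500)            # white-noise excitation (as on the table)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = a1 * y[t-1] + a2 * y[t-2] + b1 * u[t-1] + 0.01 * rng.standard_normal()

# Stack lagged outputs and inputs as regressors; solve by least squares.
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(theta.round(3))
```

In practice the natural frequencies and damping ratios are then read off from the roots of the estimated AR polynomial, and the AIC criterion guides the choice of model order.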
Alternative filtration testing program: Pre-evaluation of test results
International Nuclear Information System (INIS)
Georgeton, G.K.; Poirier, M.R.
1990-01-01
Based on results of testing eight solids removal technologies and one pretreatment option, it is recommended that a centrifugal ultrafilter and polymeric ultrafilter undergo further testing as possible alternatives to the Norton Ceramic filters. Deep bed filtration should be considered as a third alternative, if a backwashable cartridge filter is shown to be inefficient in separate testing.
Wei, Peng; Sridhar, Banavar; Chen, Neil Yi-Nan; Sun, Dengfeng
2012-01-01
A class of strategies has been proposed to reduce contrail formation in the United States airspace. A 3D grid based on weather data and the cruising altitude level of aircraft is adjusted to avoid the persistent contrail potential area with consideration of fuel efficiency. In this paper, the authors introduce a contrail avoidance strategy on a 3D grid that considers additional operationally feasible constraints from an air traffic controller's perspective. First, shifting too many aircraft to the same cruising level would make the miles-in-trail at this level smaller than the safety separation threshold. Furthermore, a high density of aircraft at one cruising level may produce excessive workload for the traffic controller. Therefore, in our new model we restrict the total number of aircraft at each level. Second, the aircraft count variation between successive intervals cannot be too drastic, since the workload of managing climbing/descending aircraft is much larger than that of managing cruising aircraft. The contrail reduction is formulated as an integer programming problem, and the problem is shown to have the property of total unimodularity. Solving the corresponding relaxed linear program with the simplex method therefore provides an optimal and integral solution to the problem. Simulation results are provided to illustrate the methodology.
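Total unimodularity is what lets the authors solve the relaxed LP with the simplex method and still obtain integers. A generic toy illustration on an assignment polytope (whose constraint matrix is totally unimodular), not the paper's contrail model:

```python
import numpy as np
from scipy.optimize import linprog

# 3x3 assignment problem with made-up costs, variables x[i*3+j] in [0,1].
cost = np.array([4, 1, 3, 2, 0, 5, 3, 2, 2], dtype=float)  # row-major
A_eq = np.zeros((6, 9))
for i in range(3):
    A_eq[i, 3*i:3*i+3] = 1      # each agent assigned exactly once
    A_eq[3 + i, i::3] = 1       # each slot filled exactly once
res = linprog(cost, A_eq=A_eq, b_eq=np.ones(6), bounds=[(0, 1)] * 9)
x = np.round(res.x, 6)
print(f"optimum {res.fun:.1f}, solution {x}")
```

Although no integrality is imposed, the optimal vertex is a 0-1 assignment, exactly the behavior the paper relies on for its contrail-reduction formulation.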
European refining trends to 2030: The advent of multi-area linear programming
International Nuclear Information System (INIS)
Saint-Antonin, V.; Marion, P.
2011-01-01
The current high degree of uncertainty that pervades the global energy landscape is directly impacting the oil industry, which is having to integrate growing mobility requirements in the context of the energy transition, due to the emergence of alternatives to petroleum fuels and restrictions on pollutant emissions. In this context, the study 'Raffinage 2030' (Refining 2030), carried out by IFPEN (the French Institute of Petroleum and New Energy Sources), is a prospective exercise for a better understanding of the balance between global supply and demand of petroleum products, in order to shed light on the type and geographical location of necessary investments in refineries, as well as to assess the impact on these of the introduction of new fuels and of increasingly numerous restrictions, such as environmental regulations. To this end, the refinery model used is a linear programming model, breaking the world down into nine geographical areas. This article introduces the programming model and its basic assumptions, before presenting the main lessons drawn from this study regarding the potential evolution of the refining industry, in particular the European one, in facing the market's long-term trends. (authors)
Drag reduction of a car model by linear genetic programming control
Li, Ruiying; Noack, Bernd R.; Cordier, Laurent; Borée, Jacques; Harambat, Fabien
2017-08-01
We investigate open- and closed-loop active control for aerodynamic drag reduction of a car model. Turbulent flow around a blunt-edged Ahmed body is examined at Re_H ≈ 3 × 10^5 based on body height. The actuation is performed with pulsed jets at all trailing edges (multiple inputs) combined with a Coanda deflection surface. The flow is monitored with 16 pressure sensors distributed on the rear side (multiple outputs). We apply a recently developed model-free control strategy building on genetic programming in Dracopoulos and Kent (Neural Comput Appl 6:214-228, 1997) and Gautier et al. (J Fluid Mech 770:424-441, 2015). The optimized control laws comprise periodic forcing, multi-frequency forcing and sensor-based feedback, including time-history information feedback, and combinations thereof. A key enabler is linear genetic programming (LGP) as a powerful regression technique for optimizing the multiple-input multiple-output control laws. The proposed LGP control can select the best open- or closed-loop control in an unsupervised manner. Approximately 33% base pressure recovery, associated with 22% drag reduction, is achieved in all considered classes of control laws. Intriguingly, the feedback actuation emulates periodic high-frequency forcing. In addition, the control automatically identified the only sensor that listens to high-frequency flow components with a good signal-to-noise ratio. Our control strategy is, in principle, applicable to all experiments with multiple actuators and sensors.
International Nuclear Information System (INIS)
Sadeghi, Mehdi; Mirshojaeian Hosseini, Hossein
2006-01-01
For many years, energy models have been used in developed and developing countries to satisfy different needs in energy planning. One of the major problems facing energy planning, and consequently energy models, is uncertainty, spread across the economic, political and legal dimensions of energy planning. Confronting uncertainty, energy planners have often used two well-known strategies. The first strategy is stochastic programming, in which energy system planners define different scenarios and apply an explicit probability of occurrence to each scenario. The second strategy is the Minimax Regret strategy, which minimizes the regrets of different decisions made in energy planning. Although these strategies have been used extensively, they cannot flexibly and effectively deal with the uncertainties caused by fuzziness. 'Fuzzy Linear Programming (FLP)' is a strategy that can take fuzziness into account. This paper demonstrates the application of FLP to the optimization of the energy supply system in Iran, as a case study. The FLP model used comprises fuzzy coefficients for investment costs. It is found that FLP is an easy and flexible approach that can be a serious competitor to other approaches for confronting uncertainty, i.e., the stochastic and Minimax Regret strategies. (author)
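A standard way to turn fuzzy constraints into a crisp LP is Zimmermann's max-min formulation: each fuzzy constraint gets a tolerance band, a shared satisfaction level lam in [0, 1] is introduced, and lam is maximized. This is a textbook sketch with invented numbers, not the paper's Iranian supply-system model with fuzzy investment-cost coefficients:

```python
from scipy.optimize import linprog

# Fuzzy budget "roughly <= 4, tolerance 2":  3*x1 + 2*x2 <= 4 + 2*(1 - lam)
# Fuzzy profit goal "roughly >= 8, tolerance 3":  2*x1 + 3*x2 >= 8 - 3*(1 - lam)
# Variables: x1, x2 (activity levels) and lam; maximize lam (minimize -lam).
c = [0, 0, -1]
A_ub = [
    [3, 2, 2],     # budget:  3*x1 + 2*x2 + 2*lam <= 6
    [-2, -3, 3],   # profit: -2*x1 - 3*x2 + 3*lam <= -5
]
b_ub = [6, -5]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (0, 1)])
lam_opt = res.x[2]
print(f"satisfaction level lam = {lam_opt:.4f}")
```

A lam_opt strictly between 0 and 1 reports the best achievable compromise between the conflicting fuzzy targets, which is the kind of trade-off information a crisp LP cannot express.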