WorldWideScience

Sample records for solving large systems

  1. A convex optimization approach for solving large scale linear systems

    Directory of Open Access Journals (Sweden)

    Debora Cores

    2017-01-01

Full Text Available The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function for solving large-scale linear systems of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations, regardless of the characteristics of the coefficient matrix. To find the global minimizers of this new convex function, any low-cost iterative optimization technique could be applied. In particular, we propose to use the low-cost, globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach to consistent square and rectangular linear systems, as well as to linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.
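A minimal sketch of the spectral-step idea behind SPG, applied to the convex least-squares function f(x) = ½‖Ax − b‖². This is illustrative only: the paper's specific convex function and projection step are not reproduced here, and the matrix and names are hypothetical.

```python
import numpy as np

def spectral_gradient(A, b, x0, tol=1e-10, max_iter=500):
    """Minimize f(x) = 0.5*||Ax - b||^2 with Barzilai-Borwein (spectral) steps.

    Sketch of the spectral step only; the SPG method of the paper adds a
    projection onto the feasible set and a non-monotone line search.
    """
    x = x0.astype(float)
    g = A.T @ (A @ x - b)          # gradient of f
    alpha = 1.0                    # initial step length
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        denom = s @ y
        alpha = (s @ s) / denom if denom > 0 else 1.0  # BB1 step length
        x, g = x_new, g_new
    return x

# Rectangular, full-column-rank system: converges to the least-squares solution
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = A @ np.array([1.0, -2.0])
x = spectral_gradient(A, b, np.zeros(2))
```

Note that the spectral step length adapts to the local curvature without any matrix factorization, which is what makes this family attractive for large systems.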

  2. Krylov subspace methods for solving large unsymmetric linear systems

    International Nuclear Information System (INIS)

    Saad, Y.

    1981-01-01

Some algorithms based upon a projection process onto the Krylov subspace K_m = span(r0, A r0, ..., A^(m-1) r0) are developed, generalizing the method of conjugate gradients to unsymmetric systems. These methods are extensions of Arnoldi's algorithm for solving eigenvalue problems. The convergence is analyzed in terms of the distance of the solution to the subspace K_m, and some error bounds are established showing, in particular, a similarity with the conjugate gradient method (for symmetric matrices) when the eigenvalues are real. Several numerical experiments are described and discussed.
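The Arnoldi process these methods build on can be sketched as follows; a hypothetical minimal implementation with an illustrative matrix, not Saad's code:

```python
import numpy as np

def arnoldi(A, r0, m):
    """Build an orthonormal basis V of K_m = span(r0, A r0, ..., A^(m-1) r0)
    and the (m+1) x m upper-Hessenberg matrix H with A V_m = V_{m+1} H."""
    n = r0.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # happy breakdown: K_m is invariant
            return V[:, :j + 1], H[:j + 1, :j]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

A = np.array([[2.0, 1.0, 0.0], [0.0, 3.0, 1.0], [1.0, 0.0, 4.0]])
r0 = np.array([1.0, 0.0, 0.0])
V, H = arnoldi(A, r0, 2)
```

Projection methods such as GMRES then solve a small least-squares problem with H in place of the large system with A.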

  3. Solving large linear systems in an implicit thermohaline ocean model

    NARCIS (Netherlands)

    de Niet, Arie Christiaan

    2007-01-01

The climate on earth is largely determined by the global ocean circulation. Hence it is important to predict how the flow will react to perturbations, for example from melting icecaps. To answer questions about the stability of the global ocean flow, a computer model has been developed that is able to

  4. More on Generalizations and Modifications of Iterative Methods for Solving Large Sparse Indefinite Linear Systems

    Directory of Open Access Journals (Sweden)

    Jen-Yuan Chen

    2014-01-01

Full Text Available Continuing from the works of Li et al. (2014), Li (2007), and Kincaid et al. (2000), we present more generalizations and modifications of iterative methods for solving large sparse symmetric and nonsymmetric indefinite systems of linear equations. We discuss a variety of iterative methods, such as GMRES, MGMRES, MINRES, LQ-MINRES, QR MINRES, MMINRES, MGRES, and others.

  5. Solving large-scale sparse eigenvalue problems and linear systems of equations for accelerator modeling

    International Nuclear Information System (INIS)

    Gene Golub; Kwok Ko

    2009-01-01

The solutions of sparse eigenvalue problems and linear systems constitute one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving these sparse eigenvalue problems and linear systems call for continuing research to improve the algorithms, so that the ever-increasing problem sizes required by the physics applications can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties of previous methods, with the goal of enabling accelerator simulations for this class of problems on what was then the world's largest unclassified supercomputer at NERSC. Specifically, a new method, the Hermitian/skew-Hermitian splitting (HSS) method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.
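The HSS iteration alternates two shifted solves, one with the Hermitian part and one with the skew-Hermitian part. A dense toy sketch, assuming a positive definite Hermitian part; the matrix, right-hand side, and shift α are illustrative, and production codes solve the two shifted systems with inner iterations rather than dense factorizations:

```python
import numpy as np

def hss_solve(A, b, alpha=1.0, tol=1e-10, max_iter=1000):
    """Hermitian/skew-Hermitian splitting (HSS) iteration for A x = b,
    with A = H + S, H = (A + A^H)/2 Hermitian, S = (A - A^H)/2 skew-Hermitian."""
    n = len(b)
    H = (A + A.conj().T) / 2
    S = (A - A.conj().T) / 2
    I = np.eye(n)
    x = np.zeros_like(b, dtype=complex)
    for _ in range(max_iter):
        # Half step with the shifted Hermitian part ...
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        # ... then with the shifted skew-Hermitian part
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(A @ x - b) < tol * np.linalg.norm(b):
            break
    return x

# Non-Hermitian matrix with positive definite Hermitian part
A = np.array([[4.0, 1.0], [-1.0, 3.0]], dtype=complex)
b = np.array([5.0, 2.0], dtype=complex)
x = hss_solve(A, b)
```

For positive definite H the iteration converges for any shift α > 0, which is the property that makes the splitting attractive for the non-Hermitian systems mentioned above.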

  6. Solving large scale unit dilemma in electricity system by applying commutative law

    Science.gov (United States)

    Legino, Supriadi; Arianto, Rakhmat

    2018-03-01

The conventional system pools resources, with large centralized power plants interconnected as a network. This provides many advantages compared to isolated plants, including optimized efficiency and reliability. However, such large plants need huge capital, and further problems have emerged to hinder the construction of big power plants and their associated transmission lines. By applying the commutative law of multiplication, ab = ba for all a, b ∈ R, the problems associated with the conventional system depicted above can be reduced. The idea of having many small power plants, namely "Listrik Kerakyatan" (LK), provides both social and environmental benefits that could be capitalized on under proper assumptions. This study compares the cost and benefit of LK to those of the conventional system, using simulation to show that LK offers an alternative solution to many problems associated with the large system. The commutative law of algebra can be used as a simple mathematical model to analyze whether the LK system, as an eco-friendly distributed generation, can be applied to solve various problems associated with a large-scale conventional system. The simulation results show that LK provides more value if its plants operate less than 11 hours a day, as peaker or load-follower plants, to improve the load-curve balance of the power system. The results also indicate that the investment cost of an LK plant should be optimized in order to minimize plant investment cost. This study indicates that the benefit of the economies-of-scale principle does not always apply to every condition, particularly if the portion of intangible costs and benefits is relatively high.

  7. Using a grid platform for solving large sparse linear systems over GF(2)

    OpenAIRE

    Kleinjung , Thorsten; Nussbaum , Lucas; Thomé , Emmanuel

    2010-01-01

In Fall 2009, the final step of the factorization of RSA-768 was carried out on several clusters of the Grid'5000 platform, leading to a new record in integer factorization. This step involves solving a huge sparse linear system defined over the binary field GF(2). This article aims at describing the algorithm used, the difficulties encountered, and the methodology which led to success. In particular, we illustrate how our use of the block Wiedemann algorithm led to a m...
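The core kernel of such GF(2) solvers is a sparse matrix-vector product in which addition is XOR. A toy sketch with a bit-packed vector; the real block Wiedemann computation packs many vectors into machine words, and the matrix and values here are illustrative:

```python
def gf2_matvec(rows, v):
    """Multiply a sparse GF(2) matrix by a bit-packed vector.

    Each row is a list of column indices of its nonzero entries; the vector v
    is a Python int whose bit i is component i. Over GF(2), the dot product of
    a row with v is the parity of the selected bits of v.
    """
    out = 0
    for i, cols in enumerate(rows):
        bit = 0
        for j in cols:
            bit ^= (v >> j) & 1   # XOR = addition in GF(2)
        out |= bit << i
    return out

# 3x3 example: rows of the matrix [[1,1,0],[0,1,1],[1,0,1]]
rows = [[0, 1], [1, 2], [0, 2]]
v = 0b011          # vector (1, 1, 0): bits 0 and 1 set
w = gf2_matvec(rows, v)
```

Block Wiedemann repeatedly applies this kernel to build Krylov sequences, so the matvec dominates the total runtime on the grid.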

  8. Algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations with the use of parallel computations

    Energy Technology Data Exchange (ETDEWEB)

    Moryakov, A. V., E-mail: sailor@orc.ru [National Research Centre Kurchatov Institute (Russian Federation)

    2016-12-15

An algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations is presented. The algorithm for systems of first-order differential equations is implemented in the EDELWEISS code, with the possibility of parallel computation on supercomputers employing the MPI (Message Passing Interface) standard for data exchange between parallel processes. The solution is represented by a series of orthogonal polynomials on the interval [0, 1]. The algorithm is characterized by its simplicity and by the possibility of solving nonlinear problems with a correction of the operator in accordance with the solution obtained in the previous iteration.

  9. An implicit iterative scheme for solving large systems of linear equations

    International Nuclear Information System (INIS)

    Barry, J.M.; Pollard, J.P.

    1986-12-01

An implicit iterative scheme for the solution of large systems of linear equations arising from neutron diffusion studies is presented. The method is applied to three-dimensional reactor studies and its performance is compared with alternative iterative approaches.

  10. Efficient Solving of Large Non-linear Arithmetic Constraint Systems with Complex Boolean Structure

    Czech Academy of Sciences Publication Activity Database

    Fränzle, M.; Herde, C.; Teige, T.; Ratschan, Stefan; Schubert, T.

    2007-01-01

Vol. 1 (2007), pp. 209-236. ISSN 1574-0617. Grant - others: AVACS (DE) SFB/TR 14. Institutional research plan: CEZ:AV0Z10300504. Keywords: interval-based arithmetic constraint solving * SAT modulo theories. Subject RIV: BA - General Mathematics

  11. A Chess-Like Game for Teaching Engineering Students to Solve Large System of Simultaneous Linear Equations

    Science.gov (United States)

    Nguyen, Duc T.; Mohammed, Ahmed Ali; Kadiam, Subhash

    2010-01-01

Solving large (and sparse) systems of simultaneous linear equations has been, and continues to be, a major challenge for many real-world engineering and science applications [1-2]. For many practical large-scale problems, the sparse, Symmetric and Positive Definite (SPD) system of linear equations can be conveniently represented in matrix notation as [A] {x} = {b}, where the square coefficient matrix [A] and the Right-Hand-Side (RHS) vector {b} are known. The unknown solution vector {x} can be efficiently computed by the following step-by-step procedure [1-2]: a reordering phase, a matrix factorization phase, a forward solution phase, and a backward solution phase. In this research work, a Game-Based Learning (GBL) approach has been developed to help engineering students understand crucial details of the matrix reordering and factorization phases. A "chess-like" game has been developed that can be played by either a single player or two players. Through this "chess-like" open-ended game, the players/learners will not only understand the key concepts involved in reordering algorithms (based on existing algorithms), but also have the opportunity to "discover new algorithms" that are better than existing ones. The proposed "chess-like" game for the matrix reordering and factorization phases can be enhanced by FLASH [3] computer environments, where computer simulation with animated human voice, sound effects, visual/graphical/colorful displays of matrix tables, score (or monetary) awards for the best players, etc. can all be exploited. Preliminary demonstrations of the developed GBL approach can be viewed by anyone with access to the internet web-site [4]!
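The payoff of the reordering phase can be illustrated by counting fill-in, the new nonzeros created during factorization. A pure-Python sketch on the adjacency graph of a symmetric sparsity pattern; the graph and orderings are illustrative and this is not the game from [4]:

```python
def fill_in(adj, order):
    """Count fill edges created by eliminating the vertices of an undirected
    graph (the sparsity pattern of a symmetric matrix) in the given order.
    Eliminating a vertex connects all its remaining neighbors pairwise."""
    adj = {u: set(nbrs) for u, nbrs in adj.items()}  # work on a copy
    fill = 0
    for u in order:
        nbrs = adj.pop(u)
        for v in nbrs:
            adj[v].discard(u)
        nbrs_list = sorted(nbrs)
        for i, v in enumerate(nbrs_list):      # clique on remaining neighbors
            for w in nbrs_list[i + 1:]:
                if w not in adj[v]:
                    adj[v].add(w)
                    adj[w].add(v)
                    fill += 1
    return fill

# Star graph: center 0 joined to 1, 2, 3. Eliminating the center first
# creates a clique on {1, 2, 3}; eliminating the leaves first creates no fill.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
bad = fill_in(star, [0, 1, 2, 3])
good = fill_in(star, [1, 2, 3, 0])
```

Finding the minimum-fill ordering is NP-hard in general, which is why heuristic reordering algorithms, and games for discovering them, are of interest.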

  12. Using the Multiplicative Schwarz Alternating Algorithm (MSAA) for Solving the Large Linear System of Equations Related to Global Gravity Field Recovery up to Degree and Order 120

    Science.gov (United States)

    Safari, A.; Sharifi, M. A.; Amjadiparvar, B.

    2010-05-01

The GRACE mission has substantiated the low-low satellite-to-satellite tracking (LL-SST) concept. The LL-SST configuration can be combined with the previously realized high-low SST concept of the CHAMP mission to provide much higher accuracy. The line-of-sight (LOS) acceleration difference between the GRACE satellite pair is the most commonly used observable for mapping the global gravity field of the Earth in terms of spherical harmonic coefficients. In this paper, mathematical formulae for LOS acceleration difference observations have been derived and the corresponding linear system of equations has been set up for spherical harmonics up to degree and order 120. The total number of unknowns is 14641. Such a linear system can be solved with iterative or direct solvers. However, the runtime of direct methods, or of iterative solvers without a suitable preconditioner, increases tremendously, which is why a more sophisticated method is needed to solve linear systems with a large number of unknowns. The multiplicative variant of the Schwarz alternating algorithm is a domain decomposition method which splits the normal matrix of the system into several smaller overlapping submatrices. In each iteration step, the multiplicative variant of the Schwarz alternating algorithm solves the linear systems with the matrices obtained from the splitting successively. It reduces both runtime and memory requirements drastically. In this paper we propose the Multiplicative Schwarz Alternating Algorithm (MSAA) for solving the large linear system of gravity field recovery. The proposed algorithm has been tested on International Association of Geodesy (IAG)-simulated data of the GRACE mission. The achieved results indicate the validity and efficiency of the proposed algorithm, from both accuracy and runtime points of view, in solving the linear system of equations. Keywords: Gravity field recovery, Multiplicative Schwarz Alternating Algorithm, Low
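A multiplicative Schwarz sweep over overlapping index blocks can be sketched as follows. This is a dense toy with an illustrative matrix and blocks, not the MSAA implementation of the paper, which works on the 14641-unknown normal matrix:

```python
import numpy as np

def multiplicative_schwarz(A, b, blocks, sweeps=200):
    """Multiplicative Schwarz iteration: sweep over overlapping index blocks,
    solving the system restricted to each block against the current residual
    and updating the iterate in turn (block Gauss-Seidel with overlap)."""
    x = np.zeros_like(b)
    for _ in range(sweeps):
        for idx in blocks:
            r = b - A @ x                      # residual after previous update
            Ai = A[np.ix_(idx, idx)]           # submatrix for this block
            x[idx] += np.linalg.solve(Ai, r[idx])
    return x

# SPD test matrix with two overlapping blocks sharing index 1
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([6.0, 10.0, 14.0])
blocks = [[0, 1], [1, 2]]
x = multiplicative_schwarz(A, b, blocks)
```

Only the small block submatrices are ever factored, which is what reduces both runtime and memory relative to a direct solve of the full matrix.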

  13. Confluent-Functional solving systems

    Directory of Open Access Journals (Sweden)

    V.N. Koval

    2001-08-01

Full Text Available The paper proposes a statistical knowledge-acquisition approach. Solving systems are considered which are able to find unknown structural dependences between situational and transforming variables on the basis of statistically analyzed input information. Situational variables describe features, states, and relations between environment objects. Transforming variables describe transforming influences exerted by a goal-oriented system on an environment. Unknown environment rules are simulated by a system of structural equations associating situational and transforming variables.

  14. Combining the CORS and BiCORSTAB Iterative Methods with MLFMA and SAI Preconditioning for Solving Large Linear Systems in Electromagnetics

    NARCIS (Netherlands)

    Carpentieri, Bruno; Jing, Yan-Fei; Huang, Ting-Zhu; Pi, Wei-Chao; Sheng, Xin-Qing

    We report on experiments with a novel family of Krylov subspace methods for solving dense, complex, non-Hermitian systems of linear equations arising from the Galerkin discretization of surface integral equation models in Electromagnetics. By some experiments on realistic radar-cross-section

  15. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research]|[Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft

    1992-12-01

For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed intractable due to their size. Recent advances in both solution algorithms and computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.

  16. Some Applications of Algebraic System Solving

    Science.gov (United States)

    Roanes-Lozano, Eugenio

    2011-01-01

    Technology and, in particular, computer algebra systems, allows us to change both the way we teach mathematics and the mathematical curriculum. Curiously enough, unlike what happens with linear system solving, algebraic system solving is not widely known. The aim of this paper is to show that, although the theory lying behind the "exact…

  17. The Daily Operational Brief: Fostering Daily Readiness, Care Coordination, and Problem-Solving Accountability in a Large Pediatric Health Care System.

    Science.gov (United States)

    Donnelly, Lane F; Basta, Kathryne C; Dykes, Anne M; Zhang, Wei; Shook, Joan E

    2018-01-01

    At a pediatric health system, the Daily Operational Brief (DOB) was updated in 2015 after three years of operation. Quality and safety metrics, the patient volume and staffing assessment, and the readiness assessment are all presented. In addition, in the problem-solving accountability system, problematic issues are categorized as Quick Hits or Complex Issues. Walk-the-Wall, a biweekly meeting attended by hospital senior administrative leadership and quality and safety leaders, is conducted to chart current progress on Complex Issues. The DOB provides a daily standardized approach to evaluate readiness to provide care to current patients and improvement in the care to be provided for future patients. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.

  18. Solving Large Clustering Problems with Meta-Heuristic Search

    DEFF Research Database (Denmark)

    Turkensteen, Marcel; Andersen, Kim Allan; Bang-Jensen, Jørgen

    In Clustering Problems, groups of similar subjects are to be retrieved from data sets. In this paper, Clustering Problems with the frequently used Minimum Sum-of-Squares Criterion are solved using meta-heuristic search. Tabu search has proved to be a successful methodology for solving optimization...... problems, but applications to large clustering problems are rare. The simulated annealing heuristic has mainly been applied to relatively small instances. In this paper, we implement tabu search and simulated annealing approaches and compare them to the commonly used k-means approach. We find that the meta-heuristic...

  19. Problem solving using soft systems methodology.

    Science.gov (United States)

    Land, L

    This article outlines a method of problem solving which considers holistic solutions to complex problems. Soft systems methodology allows people involved in the problem situation to have control over the decision-making process.

  20. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    Science.gov (United States)

    Strandén, I; Lidauer, M

    1999-12-01

Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi- and conjugate gradient-based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were performed in three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. The performance of this program was assessed, in comparison to other general solving programs, via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. The good performance was due to fast computing time per iteration and quick convergence to the final solutions. The use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
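A preconditioned conjugate gradient solver with a diagonal (Jacobi) preconditioner, sketching the solver family the abstract benchmarks; the three-step matrix-by-vector technique itself is not reproduced, and the small test matrix is illustrative:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-12, max_iter=200):
    """Preconditioned conjugate gradient for SPD A x = b.

    M_inv_diag holds the inverse of the diagonal preconditioner, so applying
    the preconditioner is an elementwise multiply."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r           # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, 1.0 / np.diag(A))
```

Iteration-on-data variants compute the product A @ p directly from the data records each round instead of storing the mixed model equations, which is what keeps memory use low for large breeding-value problems.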

  1. Solving large scale structure in ten easy steps with COLA

    Energy Technology Data Exchange (ETDEWEB)

    Tassev, Svetlin [Department of Astrophysical Sciences, Princeton University, 4 Ivy Lane, Princeton, NJ 08544 (United States); Zaldarriaga, Matias [School of Natural Sciences, Institute for Advanced Study, Olden Lane, Princeton, NJ 08540 (United States); Eisenstein, Daniel J., E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu, E-mail: deisenstein@cfa.harvard.edu [Center for Astrophysics, Harvard University, 60 Garden Street, Cambridge, MA 02138 (United States)

    2013-06-01

We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.

  2. Discovering Steiner Triple Systems through Problem Solving

    Science.gov (United States)

    Sriraman, Bharath

    2004-01-01

An attempt to implement problem solving as a teacher of ninth-grade algebra is described. The problems selected were not general ones; they involved combinations, represented various situations, and were more complex, which led to the discovery of Steiner triple systems.

  3. Optimization theory for large systems

    CERN Document Server

    Lasdon, Leon S

    2002-01-01

    Important text examines most significant algorithms for optimizing large systems and clarifying relations between optimization procedures. Much data appear as charts and graphs and will be highly valuable to readers in selecting a method and estimating computer time and cost in problem-solving. Initial chapter on linear and nonlinear programming presents all necessary background for subjects covered in rest of book. Second chapter illustrates how large-scale mathematical programs arise from real-world problems. Appendixes. List of Symbols.

  4. Algebraic polynomial system solving and applications

    NARCIS (Netherlands)

    Bleylevens, I.W.M.

    2010-01-01

    The problem of computing the solutions of a system of multivariate polynomial equations can be approached by the Stetter-Möller matrix method which casts the problem into a large eigenvalue problem. This Stetter-Möller matrix method forms the starting point for the development of computational

  5. Using Systemic Problem Solving (SPS) to Assess Student ...

    African Journals Online (AJOL)

    This paper focuses on the uses of systemic problem solving in chemistry at the tertiary level. Traditional problem solving (TPS) is a useful tool to help teachers examine recall of information, comprehension, and application. However, systemic problem solving (SPS) can challenge students and probe higher cognitive skills ...

  6. Solving Large Scale Crew Scheduling Problems in Practice

    NARCIS (Netherlands)

    E.J.W. Abbink (Erwin); L. Albino; T.A.B. Dollevoet (Twan); D. Huisman (Dennis); J. Roussado; R.L. Saldanha

    2010-01-01

    textabstractThis paper deals with large-scale crew scheduling problems arising at the Dutch railway operator, Netherlands Railways (NS). NS operates about 30,000 trains a week. All these trains need a driver and a certain number of guards. Some labor rules restrict the duties of a certain crew base

  7. Solving Large-Scale Computational Problems Using Insights from Statistical Physics

    Energy Technology Data Exchange (ETDEWEB)

    Selman, Bart [Cornell University

    2012-02-29

Many challenging problems in computer science and related fields can be formulated as constraint satisfaction problems. Such problems consist of a set of discrete variables and a set of constraints between those variables, and represent a general class of so-called NP-complete problems. The goal is to find a value assignment to the variables that satisfies all constraints, generally requiring a search through an exponentially large space of variable-value assignments. Models for disordered systems, as studied in statistical physics, can provide important new insights into the nature of constraint satisfaction problems. Recently, work in this area has resulted in the discovery of a new method for solving such problems, called the survey propagation (SP) method. With SP, we can solve problems with millions of variables and constraints, an improvement of two orders of magnitude over previous methods.

  8. System to solve three designs of the fuel management

    International Nuclear Information System (INIS)

    Castillo M, J. A.; Ortiz S, J. J.; Montes T, J. L.; Perusquia del C, R.; Marinez R, R.

    2015-09-01

In this paper preliminary results are presented, obtained with the development of a computer system that solves three stages of nuclear fuel management: the axial and radial fuel designs, as well as the design of nuclear fuel reloads. The novelty of the system is that the solution is obtained by solving the three mentioned stages in coupled form. For this, heuristic techniques are used for each stage; each has an objective function applied to its particular problem, but in all cases the partial results obtained are used as input data for the next stage. The heuristic techniques used to solve the coupled problem are tabu search, neural networks, and a hybrid of scatter search and path relinking. The system applies an iterative process from the design of a fuel cell to the reload design; since these are preliminary results, the reload is designed using a Haling-type operation strategy. In each stage, nuclear parameters inherent to the design are monitored. The results so far show the advantage of solving the problem in a coupled manner, even though a large amount of computer resources is used. (Author)

  9. Solving Large Quadratic|Assignment Problems in Parallel

    DEFF Research Database (Denmark)

    Clausen, Jens; Perregaard, Michael

    1997-01-01

    and recalculation of bounds between branchings when used in a parallel Branch-and-Bound algorithm. The algorithm has been implemented on a 16-processor MEIKO Computing Surface with Intel i860 processors. Computational results from the solution of a number of large QAPs, including the classical Nugent 20...... processors, and have hence not been ideally suited for computations essentially involving non-vectorizable computations on integers.In this paper we investigate the combination of one of the best bound functions for a Branch-and-Bound algorithm (the Gilmore-Lawler bound) and various testing, variable binding...

  10. Multicriteria vehicle routing problem solved by artificial immune system

    Directory of Open Access Journals (Sweden)

    Bogna MRÓWCZYŃSKA

    2015-09-01

Full Text Available Vehicle route planning in large transportation companies, where the drivers are employees, usually takes place on the basis of the experience or intuition of the staff. Because of costs and environmental protection, it is important to save fuel by planning routes in an optimal way. This article presents an example of solving the delivery-van route planning problem, taking into account distance and travel time within the constraints of vehicle capacities, restrictions on drivers' working time, and varying degrees of movement. An artificial immune system was used for the calculations.

  11. Solving-Problems and Hypermedia Systems

    Directory of Open Access Journals (Sweden)

    Ricardo LÓPEZ FERNÁNDEZ

    2009-06-01

Full Text Available Problem solving and transfer constitute two related, essential nuclei in cognitive research and in mathematics education. It is no accident that, from the very first investigations on applying computer science to the teaching of mathematics, cybernetic models were developed that simulated problem-solving processes and transfer contexts (GPS, 1969, and IDEA (Interactive Decision Envisioning Aid), Pea, Bruner-Cohen, Webster & Mellen, 1987. The present article analyzes what the new hypermedia technologies can contribute to this field: applications that serve to implement learning processes for heuristic thinking and for the capacity of "transfer". From our perspective, and from the experience we have developed in this field, analyzing the theories on problem solving requires a prior interpretation of the central aspects of the theories of problem solving and transfer, starting from the classical theories on information processing. In this sense, both the dual-memory theory and the more recent theory of J. Anderson (1993), based on mechanisms of activation of information nodes, allow us to establish a suggestive interpretation of the mental mechanisms that operate in heuristic processes. On this analysis, the present article develops a theoretical interpretation of the function of supports based on hypermedia technology, advancing the definition of a necessary theoretical body, bearing in mind that permanent practical experimentation supports the efficiency and effectiveness of hypermedia support as a communication mechanism in heuristic learning processes.

  12. Solving Conic Systems via Projection and Rescaling

    OpenAIRE

    Pena, Javier; Soheili, Negar

    2015-01-01

We propose a simple projection and rescaling algorithm to solve the feasibility problem: find x ∈ L ∩ Ω, where L and Ω are respectively a linear subspace and the interior of a symmetric cone in a finite-dimensional vector space V. This projection and rescaling algorithm is inspired by previous work on rescaled versions of the perceptron algorithm and by Chubanov's projection-based method for linear feasibility problems. As in these predecessors, each main it...

  13. Improved Monkey-King Genetic Algorithm for Solving Large Winner Determination in Combinatorial Auction

    Science.gov (United States)

    Li, Yuzhong

Using a GA to solve the winner determination problem (WDP) with many bids and items, run under different distributions, is difficult: the search space is large, the constraints are complex, and infeasible solutions are easily produced, which affects the efficiency and quality of the algorithm. This paper presents an improved MKGA, including three operators: preprocessing, bid insertion, and exchange recombination, and uses a monkey-king elite preservation strategy. Experimental results show that the improved MKGA is better than the SGA in population size and computation. Problems that the traditional branch-and-bound algorithm can hardly solve, the improved MKGA can solve with better effect.

  14. The semantic system is involved in mathematical problem solving.

    Science.gov (United States)

    Zhou, Xinlin; Li, Mengyi; Li, Leinian; Zhang, Yiyun; Cui, Jiaxin; Liu, Jie; Chen, Chuansheng

    2018-02-01

    Numerous studies have shown that the brain regions around bilateral intraparietal cortex are critical for number processing and arithmetical computation. However, the neural circuits for more advanced mathematics such as mathematical problem solving (with little routine arithmetical computation) remain unclear. Using functional magnetic resonance imaging (fMRI), this study (N = 24 undergraduate students) compared neural bases of mathematical problem solving (i.e., number series completion, mathematical word problem solving, and geometric problem solving) and arithmetical computation. Direct subject- and item-wise comparisons revealed that mathematical problem solving typically had greater activation than arithmetical computation in all 7 regions of the semantic system (which was based on a meta-analysis of 120 functional neuroimaging studies on semantic processing). Arithmetical computation typically had greater activation in the supplementary motor area and left precentral gyrus. The results suggest that the semantic system in the brain supports mathematical problem solving. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Multistage Spectral Relaxation Method for Solving the Hyperchaotic Complex Systems

    Directory of Open Access Journals (Sweden)

    Hassan Saberi Nik

    2014-01-01

    Full Text Available We present a pseudospectral method application for solving hyperchaotic complex systems. The proposed method, called the multistage spectral relaxation method (MSRM), is based on a technique of extending Gauss-Seidel type relaxation ideas to systems of nonlinear differential equations and using Chebyshev pseudospectral methods to solve the resulting system on a sequence of multiple intervals. In this new application, the MSRM is used to solve famous hyperchaotic complex systems such as the hyperchaotic complex Lorenz system and the complex permanent magnet synchronous motor. We compare this approach to the Runge-Kutta based ode45 solver to show that the MSRM gives accurate results.

  16. Numerical solution of large sparse linear systems

    International Nuclear Information System (INIS)

    Meurant, Gerard; Golub, Gene.

    1982-02-01

    This note is based on one of the lectures given at the 1980 CEA-EDF-INRIA Numerical Analysis Summer School, whose aim was the study of large sparse linear systems. The main topics are solving least squares problems by orthogonal transformation, fast Poisson solvers, and the solution of sparse linear systems by iterative methods with a special emphasis on the preconditioned conjugate gradient method [fr
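
    The preconditioned conjugate gradient method the note emphasizes fits in a few lines. The sketch below is a generic Jacobi-preconditioned variant on a small SPD system (hypothetical data, not code from the lecture):

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, maxit=200):
    """Conjugate gradients with a diagonal (Jacobi) preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r                 # apply the preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system (invented data).
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = pcg(A, b, 1.0 / np.diag(A))
```

    For sparse problems the dense products above would be replaced by sparse matrix-vector products; the structure of the iteration is unchanged.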

  17. Clock Math — a System for Solving SLEs Exactly

    Directory of Open Access Journals (Sweden)

    Jakub Hladík

    2013-01-01

    Full Text Available In this paper, we present a GPU-accelerated hybrid system that solves ill-conditioned systems of linear equations exactly; exactly means without rounding errors, thanks to the use of integer arithmetic. First, we scale floating-point numbers up to integers, then we solve dozens of SLEs within different modular arithmetics, and finally we assemble the sub-solutions back using the Chinese remainder theorem. This approach effectively bypasses current CPU floating-point limitations. The system is capable of solving Hilbert’s matrix without losing a single bit of precision, and with a significant speedup compared to existing CPU solvers.
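
    The modular pipeline described above can be sketched at toy scale: a 2×2 integer system solved modulo several primes and reassembled with the Chinese remainder theorem. The system, primes and helper names below are all illustrative; Clock Math's actual pipeline is GPU-parallel and far larger:

```python
# Toy version of the residue-number-system idea: solve A x = b modulo
# several primes, then recombine the residues via the CRT.
A = [[2, 1], [1, 3]]
b = [10, 15]                    # exact integer solution is x = (3, 4)
primes = [10007, 10009, 10037]

def solve_mod(A, b, p):
    # Cramer's rule modulo p for a 2x2 system (det must be invertible mod p).
    det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % p
    inv = pow(det, -1, p)       # modular inverse (Python 3.8+)
    x0 = (b[0] * A[1][1] - A[0][1] * b[1]) * inv % p
    x1 = (A[0][0] * b[1] - b[0] * A[1][0]) * inv % p
    return [x0, x1]

# Garner-style CRT combination of the per-prime solutions.
M, X = 1, [0, 0]
for p in primes:
    r = solve_mod(A, b, p)
    if M == 1:
        M, X = p, r
        continue
    inv = pow(M, -1, p)
    for i in range(2):
        t = (r[i] - X[i]) * inv % p
        X[i] += M * t
    M *= p

x = [v if v <= M // 2 else v - M for v in X]   # symmetric representatives
```

    As long as the product of the primes exceeds twice the magnitude of the true solution entries, the symmetric representatives recover the exact integers with no rounding at any stage.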

  18. Escript: Open Source Environment For Solving Large-Scale Geophysical Joint Inversion Problems in Python

    Science.gov (United States)

    Gross, Lutz; Altinay, Cihan; Fenwick, Joel; Smith, Troy

    2014-05-01

    The program package escript has been designed for solving mathematical modeling problems using python, see Gross et al. (2013). Its development and maintenance has been funded by the Australian Commonwealth to provide open source software infrastructure for the Australian Earth Science community (recent funding by the Australian Geophysical Observing System EIF (AGOS) and the AuScope Collaborative Research Infrastructure Scheme (CRIS)). The key concepts of escript are based on the terminology of spatial functions and partial differential equations (PDEs) - an approach providing abstraction from the underlying spatial discretization method (i.e. the finite element method (FEM)). This feature presents a programming environment to the user which is easy to use even for complex models. Due to the fact that implementations are independent from data structures, simulations are easily portable across desktop computers and scalable compute clusters without modifications to the program code. escript has been successfully applied in a variety of applications including modeling mantle convection, melting processes, volcanic flow, earthquakes, faulting, multi-phase flow, block caving and mineralization (see Poulet et al. 2013). The recent escript release (see Gross et al. (2013)) provides an open framework for solving joint inversion problems for geophysical data sets (potential field, seismic and electro-magnetic). The strategy is based on the idea of formulating the inversion problem as an optimization problem with PDE constraints, where the cost function is defined by the data defect and the regularization term for the rock properties, see Gross & Kemp (2013). This first-optimize-then-discretize approach avoids the assemblage of the - in general - dense sensitivity matrix used in conventional approaches, where discrete programming techniques are applied to the discretized problem (first-discretize-then-optimize). In this paper we will discuss the mathematical framework for

  19. Solved problems in dynamical systems and control

    CERN Document Server

    Tenreiro-Machado, J; Valério, Duarte; Galhano, Alexandra M

    2016-01-01

    This book presents a collection of exercises on dynamical systems, modelling and control. Each topic covered includes a summary of the theoretical background, problems with solutions, and further exercises.

  20. Study of solving a Toda dynamic system with loop algebra

    International Nuclear Information System (INIS)

    Zhu Qiao; Yang Zhanying; Shi Kangjie; Wen Junqing

    2006-01-01

    The authors construct a Toda system with loop algebra, and prove that the Lax equation dL/dt = [L, M] can be solved by means of solving a regular Riemann-Hilbert problem. In this system, the M in the Lax pair is an antisymmetric matrix, while L = L_+ + M, where L_+ is a quasi-upper triangular matrix of the loop algebra. In order to check the result, the authors exactly solve a Riemann-Hilbert problem under a given initial condition as an example. (authors)
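
    The isospectral character of a Lax equation dL/dt = [L, M] is easy to check numerically. The sketch below uses a made-up symmetric L and a constant antisymmetric M, unrelated to the paper's loop-algebra setting; the eigenvalues (and in particular the trace) of L survive the flow:

```python
import numpy as np

# With M constant antisymmetric and L(0) symmetric, dL/dt = [L, M] keeps L
# symmetric and conjugates it by an orthogonal flow, so its spectrum is
# conserved. Matrices below are invented for illustration.
M = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 2.0], [0.0, -2.0, 0.0]])
L = np.array([[1.0, 0.5, 0.0], [0.5, 2.0, 0.3], [0.0, 0.3, 3.0]])
eig0 = np.sort(np.linalg.eigvalsh(L))

def comm(L):
    return L @ M - M @ L                   # the commutator [L, M]

h = 0.01
for _ in range(100):                       # RK4 on the matrix ODE up to t = 1
    k1 = comm(L)
    k2 = comm(L + 0.5 * h * k1)
    k3 = comm(L + 0.5 * h * k2)
    k4 = comm(L + h * k3)
    L = L + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

eig1 = np.sort(np.linalg.eigvalsh(L))
```

    The trace is conserved to rounding error (every RK4 increment is a combination of traceless commutators), and the eigenvalues drift only by the O(h^4) integrator error.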

  1. Solving modified systems with multiple right-hand sides

    Energy Technology Data Exchange (ETDEWEB)

    Simoncini, V.; Gallopoulos, E. [Univ. of Patras (Greece)

    1996-12-31

    In this talk we discuss the iterative solution of large linear systems of the form (A + USV^H)X = B, where A is an n x n non-Hermitian matrix, USV^H is a rank-r modification of A and B is of rank s with s, r ≪ n. We analyze several approaches that exploit the structure of the coefficient matrix so as to solve the systems more efficiently than if one were to apply a non-Hermitian solver to the original systems. In the development of procedures, we take into account the presence of both the low-rank modification and the several right-hand sides. Interesting issues connected to this problem originate from the quest for techniques that accelerate the underlying iterative solvers: preconditioning (e.g. inner-outer iteration strategies), domain decomposition, and continuation methods. Experiments are provided to analyze the behavior of the methods depending on the structure of the rectangular matrices. Preconditioning strategies are explored for an efficient implementation on the transformed systems.
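
    One classical way to exploit this structure is the Sherman-Morrison-Woodbury identity, which reduces every solve with A + USV^H to solves with A plus one small r×r capacitance system. The talk explores iterative variants of this idea; the dense sketch below uses invented real-valued data, with V^T standing in for V^H:

```python
import numpy as np

# (A + U S V^T)^{-1} = A^{-1} - A^{-1} U (S^{-1} + V^T A^{-1} U)^{-1} V^T A^{-1}
# Toy data (n = 4, r = 2, two right-hand sides), invented for illustration.
A = np.diag([2.0, 3.0, 4.0, 5.0])
U = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
S = np.diag([1.0, 2.0])
V = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [1.0, 1.0]])
B = np.array([[1.0, 2.0], [0.0, 1.0], [1.0, 0.0], [2.0, 1.0]])

AinvB = np.linalg.solve(A, B)              # solves with A only
AinvU = np.linalg.solve(A, U)
cap = np.linalg.inv(S) + V.T @ AinvU       # small r x r capacitance matrix
X = AinvB - AinvU @ np.linalg.solve(cap, V.T @ AinvB)
```

    In the talk's setting A is only available through an iterative solver, so the two solves with A would themselves be Krylov iterations, which is where the multiple right-hand sides and preconditioning questions enter.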

  2. New approach to solve symmetric fully fuzzy linear systems

    Indian Academy of Sciences (India)

    concepts of fuzzy set theory and then define a fully fuzzy linear system of equations. .... To represent the above problem as fully fuzzy linear system, we represent x .... Fully fuzzy linear systems can be solved by Linear programming approach, ...

  3. A note on solving large-scale zero-one programming problems

    NARCIS (Netherlands)

    Adema, Jos J.

    1988-01-01

    A heuristic for solving large-scale zero-one programming problems is provided. The heuristic is based on the modifications made by H. Crowder et al. (1983) to the standard branch-and-bound strategy. First, the initialization is modified. The modification is only useful if the objective function

  4. Engineering management of large scale systems

    Science.gov (United States)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high technology and engineering problem solving has given rise to an emerging concept: reasoning principles for integrating traditional engineering problem solving with systems theory, management science, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long-range perspective. Long-range planning has great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces behind the organization of engineering problems. Aspects of systems engineering that provide an understanding of the management of large scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  5. Large Superconducting Magnet Systems

    CERN Document Server

    Védrine, P.

    2014-07-17

    The increase of energy in accelerators over the past decades has led to the design of superconducting magnets for both accelerators and the associated detectors. The use of Nb−Ti superconducting materials allows an increase in the dipole field by up to 10 T compared with the maximum field of 2 T in a conventional magnet. The field bending of the particles in the detectors and generated by the magnets can also be increased. New materials, such as Nb$_{3}$Sn and high temperature superconductor (HTS) conductors, can open the way to higher fields, in the range 13–20 T. The latest generations of fusion machines producing hot plasma also use large superconducting magnet systems.

  6. Large Superconducting Magnet Systems

    Energy Technology Data Exchange (ETDEWEB)

    Védrine, P [Saclay (France)

    2014-07-01

    The increase of energy in accelerators over the past decades has led to the design of superconducting magnets for both accelerators and the associated detectors. The use of Nb−Ti superconducting materials allows an increase in the dipole field by up to 10 T compared with the maximum field of 2 T in a conventional magnet. The field bending of the particles in the detectors and generated by the magnets can also be increased. New materials, such as Nb3Sn and high temperature superconductor (HTS) conductors, can open the way to higher fields, in the range 13–20 T. The latest generations of fusion machines producing hot plasma also use large superconducting magnet systems.

  7. Solving a large-scale precedence constrained scheduling problem with elastic jobs using tabu search

    DEFF Research Database (Denmark)

    Pedersen, C.R.; Rasmussen, R.V.; Andersen, Kim Allan

    2007-01-01

    This paper presents a solution method for minimizing makespan of a practical large-scale scheduling problem with elastic jobs. The jobs are processed on three servers and restricted by precedence constraints, time windows and capacity limitations. We derive a new method for approximating the server exploitation of the elastic jobs and solve the problem using a tabu search procedure. Finding an initial feasible solution is in general NP-complete, but the tabu search procedure includes a specialized heuristic for solving this problem. The solution method has proven to be very efficient and leads to a significant decrease in makespan compared to the strategy currently implemented.

  8. Solving a large-scale precedence constrained scheduling problem with elastic jobs using tabu search

    DEFF Research Database (Denmark)

    Pedersen, C.R.; Rasmussen, R.V.; Andersen, Kim Allan

    2007-01-01

    This paper presents a solution method for minimizing makespan of a practical large-scale scheduling problem with elastic jobs. The jobs are processed on three servers and restricted by precedence constraints, time windows and capacity limitations. We derive a new method for approximating the server exploitation of the elastic jobs and solve the problem using a tabu search procedure. Finding an initial feasible solution is in general NP-complete, but the tabu search procedure includes a specialized heuristic for solving this problem. The solution method has proven to be very efficient and leads to a significant decrease in makespan compared to the strategy currently implemented.

  9. Decomposition and parallelization strategies for solving large-scale MDO problems

    Energy Technology Data Exchange (ETDEWEB)

    Grauer, M.; Eschenauer, H.A. [Research Center for Multidisciplinary Analyses and Applied Structural Optimization, FOMAAS, Univ. of Siegen (Germany)

    2007-07-01

    During previous years, structural optimization has been recognized as a useful tool within the disciplines of engineering and economics. However, the optimization of large-scale systems or structures is impeded by an immense solution effort. This was the reason to start a joint research and development (R and D) project between the Institute of Mechanics and Control Engineering and the Information and Decision Sciences Institute within the Research Center for Multidisciplinary Analyses and Applied Structural Optimization (FOMAAS) on cluster computing for parallel and distributed solution of multidisciplinary optimization (MDO) problems based on the OpTiX-Workbench. Here the focus of attention will be put on coarse-grained parallelization and its implementation on clusters of workstations. A further point of emphasis was laid on the development of a parallel decomposition strategy called PARDEC, for the solution of very complex optimization problems which cannot be solved efficiently by sequential integrated optimization. The use of the OpTiX-Workbench together with the FEM ground water simulation system FEFLOW is shown for a special water management problem. (orig.)

  10. New approach to solve symmetric fully fuzzy linear systems

    Indian Academy of Sciences (India)

    In this paper, we present a method to solve fully fuzzy linear systems with symmetric coefficient matrix. The symmetric coefficient matrix is decomposed into two systems of equations by using Cholesky method and then a solution can be obtained. Numerical examples are given to illustrate our method.
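
    For the crisp core of the computation, the Cholesky step looks as follows (a generic sketch for a real symmetric positive definite matrix with invented data; the paper's contribution is how the fuzzy system is decomposed around this step):

```python
import numpy as np

# Solve A x = d for symmetric positive definite A via A = L L^T:
# forward-substitute L y = d, then back-substitute L^T x = y.
A = np.array([[4.0, 2.0, 0.0], [2.0, 5.0, 1.0], [0.0, 1.0, 3.0]])
d = np.array([2.0, 4.0, 1.0])

L = np.linalg.cholesky(A)          # lower triangular factor
y = np.linalg.solve(L, d)          # forward substitution
x = np.linalg.solve(L.T, y)        # back substitution
```

    In the fuzzy setting the same factorization is reused across the crisp subsystems that the decomposition produces, which is what makes the symmetric case cheaper than a general solver.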

  11. Galerkin projection methods for solving multiple related linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Chan, T.F.; Ng, M.; Wan, W.L.

    1996-12-31

    We consider using Galerkin projection methods for solving multiple related linear systems A^{(i)}x^{(i)} = b^{(i)} for 1 ≤ i ≤ s, where A^{(i)} and b^{(i)} are different in general. We start with the special case where A^{(i)} = A and A is symmetric positive definite. The method generates a Krylov subspace from a set of direction vectors obtained by solving one of the systems, called the seed system, by the CG method and then projects the residuals of other systems orthogonally onto the generated Krylov subspace to get the approximate solutions. The whole process is repeated with another unsolved system as a seed until all the systems are solved. We observe in practice a super-convergence behaviour of the CG process of the seed system when compared with the usual CG process. We also observe that only a small number of restarts is required to solve all the systems if the right-hand sides are close to each other. These two features together make the method particularly effective. In this talk, we give theoretical proof to justify these observations. Furthermore, we combine the advantages of this method and the block CG method and propose a block extension of this single seed method. The above procedure can actually be modified for solving multiple linear systems A^{(i)}x^{(i)} = b^{(i)}, where A^{(i)} are now different. We can also extend the previous analytical results to this more general case. Applications of this method to multiple related linear systems arising from image restoration and recursive least squares computations are considered as examples.
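
    The seed idea can be sketched directly: solve the seed system by CG, keep the A-conjugate direction vectors, and obtain a Galerkin approximation for a nearby right-hand side by a diagonal solve in those directions (toy SPD data, not the authors' code):

```python
import numpy as np

# Seed-system Galerkin projection: run CG on A x = b1, store the direction
# vectors p_i and the scalars p_i^T A p_i; a second system with a nearby
# right-hand side b2 is then approximated within the same Krylov subspace.
n = 20
A = np.diag(np.arange(1.0, n + 1)) + 0.1 * np.ones((n, n))   # SPD
b1 = np.ones(n)
b2 = b1 + 0.01 * np.sin(np.arange(n))      # "close" second right-hand side

dirs = []                                  # pairs (p_i, p_i^T A p_i)
x, res = np.zeros(n), b1.copy()
p = res.copy()
for _ in range(n):
    Ap = A @ p
    pAp = p @ Ap
    dirs.append((p.copy(), pAp))
    alpha = (res @ res) / pAp
    x += alpha * p
    res_new = res - alpha * Ap
    if np.linalg.norm(res_new) < 1e-13:
        break
    p = res_new + ((res_new @ res_new) / (res @ res)) * p
    res = res_new

# Galerkin projection onto span{p_i}: with A-conjugate directions the Gram
# matrix P^T A P is diagonal, so the projected solution is a simple sum.
x2 = sum((p @ b2) / pAp * p for p, pAp in dirs)
res2 = np.linalg.norm(b2 - A @ x2)
```

    The projection costs only inner products, which is why close right-hand sides need few (or no) further CG restarts.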

  12. Multiparameter extrapolation and deflation methods for solving equation systems

    Directory of Open Access Journals (Sweden)

    A. J. Hughes Hallett

    1984-01-01

    Full Text Available Most models in economics and the applied sciences are solved by first order iterative techniques, usually those based on the Gauss-Seidel algorithm. This paper examines the convergence of multiparameter extrapolations (accelerations) of first order iterations, as an improved approximation to the Newton method for solving arbitrary nonlinear equation systems. It generalises my earlier results on single parameter extrapolations. Richardson's generalised method and the deflation method for detecting successive solutions in nonlinear equation systems are also presented as multiparameter extrapolations of first order iterations. New convergence results are obtained for those methods.

  13. Experimental quantum computing to solve systems of linear equations.

    Science.gov (United States)

    Cai, X-D; Weedbrook, C; Su, Z-E; Chen, M-C; Gu, Mile; Zhu, M-J; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei

    2013-06-07

    Solving linear systems of equations is ubiquitous in all areas of science and engineering. With rapidly growing data sets, such a task can be intractable for classical computers, as the best known classical algorithms require a time proportional to the number of variables N. A recently proposed quantum algorithm shows that quantum computers could solve linear systems in a time scale of order log(N), giving an exponential speedup over classical computers. Here we realize the simplest instance of this algorithm, solving 2×2 linear equations for various input vectors on a quantum computer. We use four quantum bits and four controlled logic gates to implement every subroutine required, demonstrating the working principle of this algorithm.

  14. On Solving the Lorenz System by Differential Transformation Method

    International Nuclear Information System (INIS)

    Al-Sawalha, M. Mossa; Noorani, M. S. M.

    2008-01-01

    The differential transformation method (DTM) is employed to solve a nonlinear differential equation, namely the Lorenz system. Numerical results are compared to those obtained by the Runge–Kutta method to illustrate the preciseness and effectiveness of the proposed method. In particular, we examine the accuracy of the (DTM) as the Lorenz system changes from a non-chaotic system to a chaotic one. It is shown that the (DTM) is robust, accurate and easy to apply
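
    The DTM recurrence for the Lorenz system is short enough to state concretely: differentiating x' = s(y−x), y' = rx − y − xz, z' = xy − bz term by term turns the ODE into a recurrence on Taylor coefficients, with products handled by Cauchy convolution. Parameters, initial condition and truncation order below are illustrative only:

```python
# Differential transformation of the Lorenz system: the k-th Taylor
# coefficients (X_k, Y_k, Z_k) around t = 0 satisfy a simple recurrence.
s, r, b = 10.0, 28.0, 8.0 / 3.0
N = 25                                    # truncation order (illustrative)
X, Y, Z = [1.0], [1.0], [1.0]             # initial condition (1, 1, 1)

def conv(U, V, k):
    # Cauchy product coefficient: sum_{i=0}^{k} U_i V_{k-i}
    return sum(U[i] * V[k - i] for i in range(k + 1))

for k in range(N):
    X.append(s * (Y[k] - X[k]) / (k + 1))
    Y.append((r * X[k] - Y[k] - conv(X, Z, k)) / (k + 1))
    Z.append((conv(X, Y, k) - b * Z[k]) / (k + 1))

def horner(C, t):
    # Evaluate the truncated Taylor series at time t.
    acc = 0.0
    for c in reversed(C):
        acc = acc * t + c
    return acc

t = 0.001                                 # well inside the convergence radius
x_t, y_t, z_t = horner(X, t), horner(Y, t), horner(Z, t)
```

    Because the series has a finite radius of convergence, practical DTM solvers (like the paper's comparison against Runge-Kutta) restart the recurrence on a sequence of short intervals rather than using one long step.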

  15. The multilevel fast multipole algorithm (MLFMA) for solving large-scale computational electromagnetics problems

    CERN Document Server

    Ergul, Ozgur

    2014-01-01

    The Multilevel Fast Multipole Algorithm (MLFMA) for Solving Large-Scale Computational Electromagnetic Problems provides a detailed and instructional overview of implementing MLFMA. The book: Presents a comprehensive treatment of the MLFMA algorithm, including basic linear algebra concepts, recent developments on the parallel computation, and a number of application examplesCovers solutions of electromagnetic problems involving dielectric objects and perfectly-conducting objectsDiscusses applications including scattering from airborne targets, scattering from red

  16. Solving or resolving inadequate and noisy tomographic systems

    NARCIS (Netherlands)

    Nolet, G.

    1985-01-01

    Tomography in seismology often leads to underdetermined and inconsistent systems of linear equations. When solving, care must be taken to keep the propagation of data errors under control. In this paper I test the applicability of three types of damped least-squares algorithms to the kind of

  17. Comments on new iterative methods for solving linear systems

    Directory of Open Access Journals (Sweden)

    Wang Ke

    2017-06-01

    Full Text Available Some new iterative methods were presented by Du, Zheng and Wang for solving linear systems in [3], where it is shown that the new methods, compared to the classical Jacobi or Gauss-Seidel method, can be applied to more systems and have faster convergence. This note shows, through further analysis and numerical examples, that their methods are suitable for more matrices than the positive matrices the authors suggested.
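
    The classical baseline being compared against is a one-line splitting iteration whose convergence is governed by the spectral radius of its iteration matrix. The sketch below is the textbook Jacobi method on an invented diagonally dominant system (not the methods of [3]):

```python
import numpy as np

# Jacobi iteration x_{k+1} = D^{-1}(b - (A - D) x_k); it converges iff the
# spectral radius of G = D^{-1}(D - A) is strictly below one.
A = np.array([[5.0, 1.0, 1.0], [1.0, 4.0, 1.0], [1.0, 1.0, 3.0]])
b = np.array([7.0, 6.0, 5.0])

D = np.diag(np.diag(A))
G = np.linalg.solve(D, D - A)              # iteration matrix
rho = max(abs(np.linalg.eigvals(G)))       # spectral radius

x = np.zeros(3)
for _ in range(200):
    x = np.linalg.solve(D, b - (A - D) @ x)
```

    The matrix classes studied in the note are exactly those for which the analogous spectral radius of the new iterations stays below one even where Jacobi's or Gauss-Seidel's does not.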

  18. A Proposed Method for Solving Fuzzy System of Linear Equations

    Directory of Open Access Journals (Sweden)

    Reza Kargar

    2014-01-01

    Full Text Available This paper proposes a new method for solving fuzzy system of linear equations with crisp coefficients matrix and fuzzy or interval right hand side. Some conditions for the existence of a fuzzy or interval solution of m×n linear system are derived and also a practical algorithm is introduced in detail. The method is based on linear programming problem. Finally the applicability of the proposed method is illustrated by some numerical examples.

  19. Solving Large Scale Nonlinear Eigenvalue Problem in Next-Generation Accelerator Design

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Ben-Shan; Bai, Zhaojun; /UC, Davis; Lee, Lie-Quan; Ko, Kwok; /SLAC

    2006-09-28

    A number of numerical methods, including inverse iteration, the method of successive linear problems and the nonlinear Arnoldi algorithm, are studied in this paper to solve a large scale nonlinear eigenvalue problem arising from finite element analysis of resonant frequencies and external Q_e values of a waveguide loaded cavity in the next-generation accelerator design. The authors present a nonlinear Rayleigh-Ritz iterative projection algorithm, NRRIT for short, and demonstrate that it is the most promising approach for a model scale cavity design. The NRRIT algorithm is an extension of the nonlinear Arnoldi algorithm due to Voss. Computational challenges of solving such a nonlinear eigenvalue problem for a full scale cavity design are outlined.

  20. A novel artificial fish swarm algorithm for solving large-scale reliability-redundancy application problem.

    Science.gov (United States)

    He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi

    2015-11-01

    A novel artificial fish swarm algorithm (NAFSA) is proposed for solving large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of fish swarm are classified in three ways: foraging behavior, reproductive behavior, and random behavior. The foraging behavior designs two position-updating strategies. And, the selection and crossover operators are applied to define the reproductive ability of an artificial fish. For the random behavior, which is essentially a mutation strategy, the basic cloud generator is used as the mutation operator. Finally, numerical results of four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for large scale RAP. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Exp-function method for solving Maccari's system

    International Nuclear Information System (INIS)

    Zhang Sheng

    2007-01-01

    In this Letter, the Exp-function method is used to seek exact solutions of Maccari's system. As a result, single and combined generalized solitonary solutions are obtained, from which some known solutions obtained by extended sine-Gordon equation method and improved hyperbolic function method are recovered as special cases. It is shown that the Exp-function method provides a very effective and powerful mathematical tool for solving nonlinear evolution equations in mathematical physics

  2. CHEBYSHEV ACCELERATION TECHNIQUE FOR SOLVING FUZZY LINEAR SYSTEM

    Directory of Open Access Journals (Sweden)

    S.H. Nasseri

    2011-07-01

    Full Text Available In this paper, the Chebyshev acceleration technique is used to solve the fuzzy linear system (FLS. This method is discussed in detail and followed by a summary of some other acceleration techniques. Moreover, we show that in some situations where methods such as Jacobi, Gauss-Seidel, SOR and conjugate gradient are divergent, our proposed method is applicable, and the acquired results are illustrated by some numerical examples.
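
    For the crisp analogue, the classical Chebyshev semi-iterative acceleration of the Jacobi iteration runs as follows. This is a generic textbook sketch: it assumes a bound rho on the (real) spectrum of the Jacobi iteration matrix, and the data is invented; the paper applies the idea to the fuzzy system:

```python
import numpy as np

# Chebyshev acceleration of the Jacobi iteration x_{k+1} = G x_k + g:
#   y_{k+1} = w_{k+1} (G y_k + g - y_{k-1}) + y_{k-1},
#   w_1 = 2,  w_{k+1} = 1 / (1 - rho^2 w_k / 4),
# where rho bounds the spectral radius of G.
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

Dinv = 1.0 / np.diag(A)
G = np.eye(3) - Dinv[:, None] * A          # Jacobi iteration matrix
g = Dinv * b
rho = max(abs(np.linalg.eigvals(G)))       # in practice an estimate suffices

y_prev = np.zeros(3)                       # y_0
y = G @ y_prev + g                         # y_1: one plain Jacobi step
w = 2.0
for _ in range(60):
    w = 1.0 / (1.0 - 0.25 * rho ** 2 * w)
    y, y_prev = w * (G @ y + g - y_prev) + y_prev, y
```

    The three-term recurrence replaces the error polynomial G^k of plain Jacobi by a scaled Chebyshev polynomial, which shrinks much faster over the same spectral interval.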

  3. CHEBYSHEV ACCELERATION TECHNIQUE FOR SOLVING FUZZY LINEAR SYSTEM

    Directory of Open Access Journals (Sweden)

    S.H. Nasseri

    2009-10-01

    Full Text Available In this paper, the Chebyshev acceleration technique is used to solve the fuzzy linear system (FLS. This method is discussed in detail and followed by a summary of some other acceleration techniques. Moreover, we show that in some situations where methods such as Jacobi, Gauss-Seidel, SOR and conjugate gradient are divergent, our proposed method is applicable, and the acquired results are illustrated by some numerical examples.

  4. AZTEC: A parallel iterative package for solving linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Hutchinson, S.A.; Shadid, J.N.; Tuminaro, R.S. [Sandia National Labs., Albuquerque, NM (United States)

    1996-12-31

    We describe a parallel linear system package, AZTEC. The package incorporates a number of parallel iterative methods (e.g. GMRES, biCGSTAB, CGS, TFQMR) and preconditioners (e.g. Jacobi, Gauss-Seidel, polynomial, domain decomposition with LU or ILU within subdomains). Additionally, AZTEC allows for the reuse of previous preconditioning factorizations within Newton schemes for nonlinear methods. Currently, a number of different users are using this package to solve a variety of PDE applications.
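
    The Krylov methods such a package bundles share one skeleton: build an Arnoldi basis, then solve a small least-squares problem in it. The sketch below is a compact, unpreconditioned, dense GMRES for illustration only (not AZTEC's implementation):

```python
import numpy as np

def gmres_dense(A, b, m=None, tol=1e-10):
    """Plain GMRES with zero initial guess: Arnoldi + small least squares."""
    n = len(b)
    m = m or n
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < tol:                  # happy breakdown
            m = j + 1
            break
        Q[:, j + 1] = v / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[: m + 1, :m], e1, rcond=None)
    return Q[:, :m] @ y

A = np.array([[3.0, 1.0, 0.0], [0.5, 4.0, 1.0], [0.0, 0.5, 5.0]])  # nonsymmetric
b = np.array([1.0, 2.0, 3.0])
x = gmres_dense(A, b)
```

    A production package adds restarting, preconditioner application inside the Arnoldi loop, and distributed sparse matrix-vector products; the mathematical core is unchanged.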

  5. The mathematical statement for the solving of the problem of N-version software system design

    Science.gov (United States)

    Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.

    2015-10-01

    N-version programming, as a methodology for designing fault-tolerant software systems, allows successful solving of the tasks mentioned. The approach turns out to be effective, since the system is constructed out of several parallel executed versions of some software module, written to meet the same specification but by different programmers. The problem of developing an optimal structure of an N-version software system is a very complex optimization problem, which makes deterministic optimization methods inappropriate for solving it; in this view, exploiting heuristic strategies looks more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems with a large dimensionality.

  6. Solving Fully Fuzzy Linear System of Equations in General Form

    Directory of Open Access Journals (Sweden)

    A. Yousefzadeh

    2012-06-01

    Full Text Available In this work, we propose an approach for computing the positive solution of a fully fuzzy linear system where the coefficient matrix is a fuzzy $n\times n$ matrix. To do this, we use arithmetic operations on fuzzy numbers introduced by Kaufmann and convert the fully fuzzy linear system into two $n\times n$ and $2n\times 2n$ crisp linear systems. If the solutions of these linear systems do not satisfy the positive fuzzy solution condition, we introduce a constrained least squares problem to obtain the optimal fuzzy vector solution by applying the ranking function to the given fully fuzzy linear system. Using our proposed method, the fully fuzzy linear system of equations always has a solution. Finally, we illustrate the efficiency of the proposed method by solving some numerical examples.

  7. Solving Large-Scale TSP Using a Fast Wedging Insertion Partitioning Approach

    Directory of Open Access Journals (Sweden)

    Zuoyong Xiang

    2015-01-01

    Full Text Available A new partitioning method, called Wedging Insertion, is proposed for solving the large-scale symmetric Traveling Salesman Problem (TSP). The idea of the proposed algorithm is to cut a TSP tour into four segments by nodes' coordinates (not by rectangle, as in Strip, FRP, and Karp). Each node is located in one of the segments, except for four particular nodes, and no segment twists with the other segments. After the partitioning process, the algorithm utilizes a traditional construction method, namely the insertion method, on each segment to improve the quality of the tour, and then connects the starting node and the ending node of each segment to obtain the complete tour. In order to test the performance of the proposed algorithm, we conduct experiments on various TSPLIB instances. The experimental results show that the proposed algorithm is more efficient for solving large-scale TSPs. Specifically, the approach obviously reduces the time complexity of running the algorithm, while losing only about 10% of the algorithm's performance.
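
    The insertion step used within each segment can be sketched on its own: repeatedly insert the city whose cheapest placement between two consecutive tour cities increases the length least. Coordinates and the starting sub-tour below are invented; the paper's contribution is the wedge partitioning wrapped around this step:

```python
import math

# Cheapest-insertion tour construction on a tiny made-up instance.
pts = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1), (1, 2)]

def d(i, j):
    return math.dist(pts[i], pts[j])       # Euclidean distance (Python 3.8+)

tour = [0, 1, 2]                           # small starting sub-tour
rest = set(range(len(pts))) - set(tour)
while rest:
    best = None
    for c in rest:                         # candidate city
        for k in range(len(tour)):         # candidate edge (a, b)
            a, b = tour[k], tour[(k + 1) % len(tour)]
            cost = d(a, c) + d(c, b) - d(a, b)
            if best is None or cost < best[0]:
                best = (cost, c, k + 1)
    _, c, k = best
    tour.insert(k, c)
    rest.remove(c)

length = sum(d(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))
```

    Running insertion independently on four non-twisting segments, as the paper does, keeps each inner loop small, which is where the reported speedup on large instances comes from.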

  8. Side effects of problem-solving strategies in large-scale nutrition science: towards a diversification of health.

    Science.gov (United States)

    Penders, Bart; Vos, Rein; Horstman, Klasien

    2009-11-01

    Solving complex problems in large-scale research programmes requires cooperation and division of labour. Simultaneously, large-scale problem solving also gives rise to unintended side effects. Based upon 5 years of researching two large-scale nutrigenomic research programmes, we argue that problems are fragmented in order to be solved. These sub-problems are given priority for practical reasons and in the process of solving them, various changes are introduced in each sub-problem. Combined with additional diversity as a result of interdisciplinarity, this makes reassembling the original and overall goal of the research programme less likely. In the case of nutrigenomics and health, this produces a diversification of health. As a result, the public health goal of contemporary nutrition science is not reached in the large-scale research programmes we studied. Large-scale research programmes are very successful in producing scientific publications and new knowledge; however, in reaching their political goals they often are less successful.

  9. Solving the stability-accuracy-diversity dilemma of recommender systems

    Science.gov (United States)

    Hou, Lei; Liu, Kecheng; Liu, Jianguo; Zhang, Runtong

    2017-02-01

    Recommender systems are of great significance in predicting the potential interesting items based on the target user's historical selections. However, the recommendation list for a specific user has been found to change vastly when the system changes, due to the unstable quantification of item similarities, which is defined as the recommendation stability problem. Improving the similarity stability and recommendation stability is crucial for enhancing user experience and for a better understanding of user interests. While the stability as well as accuracy of recommendation could be guaranteed by recommending only popular items, studies have been addressing the necessity of diversity, which requires the system to recommend unpopular items. By ranking the similarities in terms of stability and considering only the most stable ones, we present a top-n-stability method based on the Heat Conduction algorithm (denoted as TNS-HC henceforth) for solving the stability-accuracy-diversity dilemma. Experiments on four benchmark data sets indicate that the TNS-HC algorithm could significantly improve the recommendation stability and accuracy simultaneously and still retain the high-diversity nature of the Heat Conduction algorithm. Furthermore, we compare the performance of the TNS-HC algorithm with a number of benchmark recommendation algorithms. The result suggests that the TNS-HC algorithm is more efficient in solving the stability-accuracy-diversity triple dilemma of recommender systems.

  10. Efficient Method to Approximately Solve Retrial Systems with Impatience

    Directory of Open Access Journals (Sweden)

    Jose Manuel Gimenez-Guzman

    2012-01-01

Full Text Available We present a novel technique to solve multiserver retrial systems with impatience. Unfortunately, these systems do not admit an exact analytic solution, so it is mandatory to resort to approximate techniques. This novel technique does not rely on the numerical solution of the steady-state Kolmogorov equations of the Continuous Time Markov Chain, as is common for this kind of system; instead, it considers the system in its Markov Decision Process setting. The technique, known as value extrapolation, truncates the infinite state space and uses a polynomial extrapolation method to approximate the values of states outside the truncated state space. A numerical evaluation is carried out to assess this technique and to compare its performance with previous techniques. The results show that value extrapolation greatly outperforms the previous approaches in the literature, not only in terms of accuracy but also in terms of computational cost.

  11. Application of ANNs approach for solving fully fuzzy polynomials system

    Directory of Open Access Journals (Sweden)

    R. Novin

    2017-11-01

Full Text Available In processing indecisive or unclear information, the advantages of the fuzzy logic and neurocomputing disciplines should be taken into account and combined in fuzzy neural networks. The current research presents a fuzzy modeling method using multi-layer fuzzy neural networks for solving a fully fuzzy polynomial system; a supervised gradient descent-based learning law is employed. The feasibility of the method is examined using computer simulations on a numerical example. The experimental results obtained with the proposed method are valid and deliver very good approximations.

  12. Solving linear systems in FLICA-4, thermohydraulic code for 3-D transient computations

    International Nuclear Information System (INIS)

    Allaire, G.

    1995-01-01

FLICA-4 is a computer code, developed at the CEA (France), devoted to steady-state and transient thermal-hydraulic analysis of nuclear reactor cores, for small problems (around 100 mesh cells) as well as large ones (more than 100,000), on either standard workstations or vector supercomputers. As in other time-implicit codes, the most time- and memory-consuming part of FLICA-4 is the routine dedicated to solving the linear system (whose size is of the order of the number of cells). Therefore, the efficiency of the code is crucially influenced by the optimization of the algorithms used in assembling and solving linear systems: direct methods such as the Gauss (or LU) decomposition for moderate-size problems, and iterative methods such as the preconditioned conjugate gradient for large problems. 6 figs., 13 refs
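
    For the direct Gauss (LU) solve mentioned above, a minimal Doolittle factorization without pivoting can be sketched as follows; this is a generic textbook illustration, not FLICA-4's actual routine:

```python
# Doolittle LU factorization (no pivoting) followed by forward and backward
# substitution: the direct-solve strategy used for moderate-size systems.
# Assumes the matrix needs no row exchanges (nonzero pivots).

def lu_solve(A, b):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):          # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):      # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    y = [0.0] * n                      # forward substitution: L y = b
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n                      # backward substitution: U x = y
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x
```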

  13. Web-Based Problem-Solving Assignment and Grading System

    Science.gov (United States)

    Brereton, Giles; Rosenberg, Ronald

    2014-11-01

    In engineering courses with very specific learning objectives, such as fluid mechanics and thermodynamics, it is conventional to reinforce concepts and principles with problem-solving assignments and to measure success in problem solving as an indicator of student achievement. While the modern-day ease of copying and searching for online solutions can undermine the value of traditional assignments, web-based technologies also provide opportunities to generate individualized well-posed problems with an infinite number of different combinations of initial/final/boundary conditions, so that the probability of any two students being assigned identical problems in a course is vanishingly small. Such problems can be designed and programmed to be: single or multiple-step, self-grading, allow students single or multiple attempts; provide feedback when incorrect; selectable according to difficulty; incorporated within gaming packages; etc. In this talk, we discuss the use of a homework/exam generating program of this kind in a single-semester course, within a web-based client-server system that ensures secure operation.

  14. A matrix formalism to solve interface condition equations in a reactor system

    Energy Technology Data Exchange (ETDEWEB)

    Matausek, M V [Boris Kidric Institute of Nuclear Sciences Vinca, Beograd (Yugoslavia)

    1970-05-15

When a nuclear reactor or a reactor lattice cell is treated by an approximate procedure for solving the neutron transport equation, the last computational step is often a problem of solving systems of algebraic equations stating the interface and boundary conditions for the neutron flux moments. These systems usually have coefficient matrices of block-bidiagonal type, and thus contain a large number of zero elements. The present report shows how such a system can be solved efficiently by accounting for all the zero elements, both in the coefficient matrix and in the free term vector. The procedure is presented for the case of a multigroup P{sub 3} calculation of the neutron flux distribution in a cylindrical reactor lattice cell. Compared with the standard Gaussian elimination method, this procedure is advantageous both in the number of operations needed to solve a given problem and in the computer memory storage requirements. A similar formalism can also be applied to other approximate methods, for instance multigroup diffusion treatment of a multizone reactor. (author)
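
    The payoff of exploiting zero structure can be illustrated on the simplest relative of a block-bidiagonal matrix, a scalar lower-bidiagonal one, where forward substitution solves the system in O(n) operations instead of the O(n^3) of dense Gaussian elimination; the sketch below is illustrative, not the report's P3 formalism:

```python
# Solve L x = b for a lower-bidiagonal matrix L stored as two vectors:
# the main diagonal `diag` and the subdiagonal `sub`. Only the nonzero
# entries are stored and touched, so the solve costs O(n) operations.

def solve_lower_bidiagonal(diag, sub, b):
    n = len(diag)
    x = [0.0] * n
    x[0] = b[0] / diag[0]
    for i in range(1, n):
        x[i] = (b[i] - sub[i - 1] * x[i - 1]) / diag[i]
    return x
```

    In the block case each scalar division becomes a small dense solve with the diagonal block, but the O(number of blocks) sweep structure is the same.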

  15. Modern architectures for intelligent systems: reusable ontologies and problem-solving methods.

    Science.gov (United States)

    Musen, M A

    1998-01-01

When interest in intelligent systems for clinical medicine soared in the 1970s, workers in medical informatics became particularly attracted to rule-based systems. Although many successful rule-based applications were constructed, development and maintenance of large rule bases remained quite problematic. In the 1980s, an entire industry dedicated to the marketing of tools for creating rule-based systems rose and fell, as workers in medical informatics began to appreciate deeply why knowledge acquisition and maintenance for such systems are difficult problems. During this time period, investigators began to explore alternative programming abstractions that could be used to develop intelligent systems. The notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) domain-independent problem-solving methods, standard algorithms for automating stereotypical tasks; and (2) domain ontologies that capture the essential concepts (and relationships among those concepts) in particular application areas. This paper will highlight how intelligent systems for diverse tasks can be efficiently automated using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention from our scientific community.

  16. Solving very large scattering problems using a parallel PWTD-enhanced surface integral equation solver

    KAUST Repository

    Liu, Yang

    2013-07-01

The computational complexity and memory requirements of multilevel plane wave time domain (PWTD)-accelerated marching-on-in-time (MOT)-based surface integral equation (SIE) solvers scale as O(N_t N_s log^2 N_s) and O(N_s^1.5); here N_t and N_s denote the numbers of temporal and spatial basis functions discretizing the current [Shanker et al., IEEE Trans. Antennas Propag., 51, 628-641, 2003]. In the past, serial versions of these solvers have been successfully applied to the analysis of scattering from perfect electrically conducting as well as homogeneous penetrable targets involving up to N_s ≈ 0.5 × 10^6 and N_t ≈ 10^3. To solve larger problems, parallel PWTD-enhanced MOT solvers are called for. Even though a simple parallelization strategy was demonstrated in the context of electromagnetic compatibility analysis [M. Lu et al., in Proc. IEEE Int. Symp. AP-S, 4, 4212-4215, 2004], by and large, progress in this area has been slow. The lack of progress can be attributed wholesale to difficulties associated with the construction of a scalable PWTD kernel. © 2013 IEEE.

  17. Adams Predictor-Corrector Systems for Solving Fuzzy Differential Equations

    Directory of Open Access Journals (Sweden)

    Dequan Shang

    2013-01-01

Full Text Available A predictor-corrector algorithm and an improved predictor-corrector (IPC) algorithm based on the Adams method are proposed to solve first-order differential equations with fuzzy initial conditions. These algorithms are generated by updating the Adams predictor-corrector method, and their convergence is also analyzed. Finally, the proposed methods are illustrated by solving an example.

  18. New approach to solve fully fuzzy system of linear equations using ...

    Indian Academy of Sciences (India)

    Known example problems are solved to illustrate the efficacy and ... The concept of fuzzy set and fuzzy number were first introduced by Zadeh .... (iii) Fully fuzzy linear systems can be solved by linear programming approach, Gauss elim-.

  19. The integration of marketing problem-solving modes and marketing management support systems

    NARCIS (Netherlands)

    B. Wierenga (Berend); G.H. van Bruggen (Gerrit)

    1997-01-01

Focuses on the issue of problem solving in marketing and develops a classification of marketing problem-solving modes (MPSMs). Typology of MPSMs; Relationship among MPSMs; Marketing management support systems.

  20. Improved Quasi-Newton method via PSB update for solving systems of nonlinear equations

    Science.gov (United States)

    Mamat, Mustafa; Dauda, M. K.; Waziri, M. Y.; Ahmad, Fadhilah; Mohamad, Fatma Susilawati

    2016-10-01

The Newton method has some shortcomings, which include computation of the Jacobian matrix, which may be difficult or even impossible to obtain, and the need to solve the Newton system at every iteration. A common setback with some quasi-Newton methods is that they need to compute and store an n × n matrix at each iteration, which is computationally costly for large-scale problems. To overcome such drawbacks, an improved method for solving systems of nonlinear equations via the PSB (Powell-Symmetric-Broyden) update is proposed. In the proposed method, the approximate Jacobian inverse H_k of PSB is updated; the resulting gain in efficiency and the low memory storage requirement are the main aims of this paper. Preliminary numerical results show that the proposed method is practically efficient when applied to some benchmark problems.
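
    A minimal sketch of the PSB (Powell-Symmetric-Broyden) update itself, which enforces the secant condition B_new s = y while preserving symmetry; this is the textbook update formula, not the authors' full method:

```python
# PSB update of a Jacobian approximation B given a step s and the yield
# y = F(x + s) - F(x). The update satisfies the secant condition
# B_new s = y and keeps B_new symmetric whenever B is symmetric.

def psb_update(B, s, y):
    n = len(s)
    ss = sum(si * si for si in s)                                   # s^T s
    r = [y[i] - sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]  # y - B s
    rs = sum(r[i] * s[i] for i in range(n))                          # (y - B s)^T s
    Bn = [row[:] for row in B]
    for i in range(n):
        for j in range(n):
            # rank-two symmetric correction plus a rank-one symmetric correction
            Bn[i][j] += (r[i] * s[j] + s[i] * r[j]) / ss - rs * s[i] * s[j] / ss ** 2
    return Bn
```

    By construction B_new s = y exactly, which is what makes the update usable inside a quasi-Newton iteration without ever forming the true Jacobian.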

  1. Solving Man-Induced Large-Scale Conservation Problems: The Spanish Imperial Eagle and Power Lines

    Science.gov (United States)

    López-López, Pascual; Ferrer, Miguel; Madero, Agustín; Casado, Eva; McGrady, Michael

    2011-01-01

Background Man-induced mortality of birds caused by electrocution on poorly-designed pylons and power lines has been reported to be an important mortality factor that could become a major cause of population decline of one of the world's rarest raptors, the Spanish imperial eagle (Aquila adalberti). Consequently, it has resulted in increasing awareness of this problem amongst land managers and the public at large, as well as increased research into the distribution of electrocution events and likely mitigation measures. Methodology/Principal Findings We provide information on how mitigation measures implemented at a regional level under the conservation program for the Spanish imperial eagle have resulted in a positive shift of demographic trends in Spain. A 35-year data set (1974–2009) on mortality of the Spanish imperial eagle was recorded, including population censuses and data on electrocution and non-electrocution of birds. Additional information was obtained from 32 radio-tracked young eagles and specific field surveys. Data were divided into two periods, before and after the approval in 1990 of a regional regulation of power line design which established mandatory rules aimed at minimizing or eliminating the negative impacts of power line facilities on avian populations. Our results show how population size and the average annual percentage of population change have increased between the two periods, whereas the number of electrocuted birds has been reduced in spite of the continuous growth of the wiring network. Conclusions Our results demonstrate that solving bird electrocution is an affordable problem if political interest is shown and financial investment is made.
The combination of an adequate spatial planning with a sustainable development of human infrastructures will contribute positively to the conservation of the Spanish imperial eagle and may underpin population growth and range expansion, with positive side effects on other endangered

  2. Interacting star clusters in the Large Magellanic Cloud. Overmerging problem solved by cluster group formation

    Science.gov (United States)

    Leon, Stéphane; Bergond, Gilles; Vallenari, Antonella

    1999-04-01

We present the tidal tail distributions of a sample of candidate binary clusters located in the bar of the Large Magellanic Cloud (LMC). One isolated cluster, SL 268, is presented in order to study the effect of the LMC tidal field. All the candidate binary clusters show tidal tails, confirming that the pairs are formed by physically linked objects. The stellar mass in the tails covers a large range, from 1.8 × 10^3 to 3 × 10^4 M_sun. We derive a total mass estimate for SL 268 and SL 356. At large radii, the projected density profiles of SL 268 and SL 356 fall off as r^(-gamma), with gamma = 2.27 and gamma = 3.44, respectively. Out of 4 pairs or multiple systems, 2 are older than the theoretical survival time of binary clusters (from a few 10^6 years to 10^8 years). One pair shows too large an age difference between the components to be consistent with classical theoretical models of binary cluster formation (Fujimoto & Kumai 1997). We refer to this as the ``overmerging'' problem. A different scenario is proposed: formation proceeds in large molecular complexes giving birth to groups of clusters over a few 10^7 years. In these groups the expected cluster encounter rate is larger, and tidal capture has a higher probability. Cluster pairs are not born together through the splitting of the parent cloud, but are formed later by tidal capture. For 3 pairs, we tentatively identify the star cluster group (SCG) memberships. SCG formation, through the recent cluster starburst triggered by the LMC-SMC encounter, in contrast with the quiescent open cluster formation in the Milky Way, can be an explanation for the paucity of binary clusters observed in our Galaxy. Based on observations collected at the European Southern Observatory, La Silla, Chile.

  3. Krylov subspace methods for the solution of large systems of ODE's

    DEFF Research Database (Denmark)

    Thomsen, Per Grove; Bjurstrøm, Nils Henrik

    1998-01-01

In Air Pollution Modelling large systems of ODE's arise. Solving such systems may be done efficiently by Semi-Implicit Runge-Kutta methods. The internal stages may be solved using Krylov subspace methods. The efficiency of this approach is investigated and verified.
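
    The core building block of such Krylov subspace methods is the Arnoldi iteration, which constructs an orthonormal basis of the Krylov subspace span{r, Ar, ..., A^(m-1) r}. A plain-Python sketch (assuming no breakdown, i.e. nonzero subdiagonal entries of the Hessenberg matrix):

```python
# Arnoldi iteration: build an orthonormal basis Q of the Krylov subspace
# K_m(A, r) together with the upper Hessenberg matrix H satisfying
# A Q_m = Q_{m+1} H. Generic sketch; assumes no breakdown occurs.

def arnoldi(A, r, m):
    n = len(r)
    norm = lambda v: sum(x * x for x in v) ** 0.5
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    Q = [[x / norm(r) for x in r]]            # q_0 = r / ||r||
    H = [[0.0] * (m - 1) for _ in range(m)]
    for j in range(m - 1):
        w = [dot(A[i], Q[j]) for i in range(n)]   # w = A q_j
        for i in range(j + 1):                    # modified Gram-Schmidt
            H[i][j] = dot(w, Q[i])
            w = [w[k] - H[i][j] * Q[i][k] for k in range(n)]
        H[j + 1][j] = norm(w)                     # assumed nonzero (no breakdown)
        Q.append([x / H[j + 1][j] for x in w])
    return Q, H
```

    Methods such as GMRES then solve a small least-squares problem in this basis instead of working with the full matrix.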

  4. Systems engineering for very large systems

    Science.gov (United States)

    Lewkowicz, Paul E.

Very large integrated systems have always posed special problems for engineers. Whether they are power generation systems, computer networks or space vehicles, whenever there are multiple interfaces, complex technologies or just demanding customers, the challenges are unique. 'Systems engineering' has evolved as a discipline in order to meet these challenges by providing a structured, top-down design and development methodology for the engineer. This paper attempts to define the general class of problems requiring the complete systems engineering treatment and to show how systems engineering can be utilized to improve customer satisfaction and profitability. Specifically, this work will focus on a design methodology for the largest of systems, not necessarily in terms of physical size, but in terms of complexity and interconnectivity.

  5. Solving large sets of coupled equations iteratively by vector processing on the CYBER 205 computer

    International Nuclear Information System (INIS)

    Tolsma, L.D.

    1985-01-01

The set of coupled linear second-order differential equations which has to be solved for the quantum-mechanical description of inelastic scattering of atomic and nuclear particles can be rewritten as an equivalent set of coupled integral equations. When a suitable class of functions is used as piecewise analytic reference solutions, the integrals that arise in this set can be evaluated analytically. The set of integral equations can be solved iteratively; for the results mentioned, an inward-outward iteration scheme has been applied. A concept for vectorization of coupled-channel Fortran programs, based on this integral method, is presented for use on the Cyber 205 computer. It turns out that, for two heavy-ion nuclear scattering test cases, this vector algorithm gives an overall speed-up of about a factor of 2 to 3 compared to a highly optimized scalar algorithm on a single-vector-pipeline computer

  6. Solving and Interpreting Large-scale Harvest Scheduling Problems by Duality and Decomposition

    OpenAIRE

    Berck, Peter; Bible, Thomas

    1982-01-01

    This paper presents a solution to the forest planning problem that takes advantage of both the duality of linear programming formulations currently being used for harvest scheduling and the characteristics of decomposition inherent in the forest land class-relationship. The subproblems of decomposition, defined as the dual, can be solved in a simple, recursive fashion. In effect, such a technique reduces the computational burden in terms of time and computer storage as compared to the traditi...

  7. Solving large test-day models by iteration on data and preconditioned conjugate gradient.

    Science.gov (United States)

    Lidauer, M; Strandén, I; Mäntysaari, E A; Pösö, J; Kettunen, A

    1999-12-01

A preconditioned conjugate gradient method was implemented in an iteration-on-data program for the estimation of breeding values, and its convergence characteristics were studied. An algorithm was used as a reference in which one fixed effect was solved by the Gauss-Seidel method and other effects were solved by a second-order Jacobi method. Implementation of the preconditioned conjugate gradient required storing four vectors (of size equal to the number of unknowns in the mixed model equations) in random access memory and reading the data at each round of iteration. The preconditioner comprised diagonal blocks of the coefficient matrix. Comparison of the algorithms was based on solutions of mixed model equations obtained by a single-trait animal model and a single-trait random regression test-day model. Data sets for both models used milk yield records of primiparous Finnish dairy cows. The animal model data comprised 665,629 lactation milk yields, and the random regression test-day model data 6,732,765 test-day milk yields. Both models included pedigree information on 1,099,622 animals. The animal model [random regression test-day model] required 122 [305] rounds of iteration to converge with the reference algorithm, but only 88 [149] with the preconditioned conjugate gradient. Solving the random regression test-day model with the preconditioned conjugate gradient required 237 megabytes of random access memory and took 14% of the computation time needed by the reference algorithm.
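
    The structure of a preconditioned conjugate gradient iteration can be sketched as follows, with the block-diagonal preconditioner reduced to simple diagonal (Jacobi) scaling on a small SPD system; this is a generic illustration, not the breeding-value implementation described above:

```python
# Preconditioned conjugate gradient for an SPD system A x = b, using a
# diagonal (Jacobi) preconditioner as the simplest stand-in for the
# block-diagonal preconditioner described in the abstract.

def pcg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    matvec = lambda M, v: [dot(row, v) for row in M]
    x = [0.0] * n
    r = b[:]                                    # residual b - A x (x = 0 initially)
    z = [r[i] / A[i][i] for i in range(n)]      # apply the diagonal preconditioner
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = dot(r, z)
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

    Note that, as the abstract stresses, only a handful of vectors of the size of the unknowns need to be kept in memory; the matrix itself can be applied by streaming over the data.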

  8. Integrator Performance Analysis In Solving Stiff Differential Equation System

    International Nuclear Information System (INIS)

    B, Alhadi; Basaruddin, T.

    2001-01-01

In this paper we discuss the four-stage index-2 singly diagonally implicit Runge-Kutta (SDIRK) method, which is used to solve stiff ordinary differential equations (SODE). Stiff problems require a method whose step size is not restricted by stability. We desire SDIRK to be A-stable, i.e. to have no stability restrictions when solving y' = λy with Re λ < 0 and h > 0; by choosing a suitable stability function we can determine an appropriate constant γ to formulate the SDIRK integrator for solving SODE. We select the second internal stage as an embedded method to provide a low-order error estimate for the predictor. The step size strategy is adopted from that proposed by Hall (1996:6). The algorithm developed in this paper is implemented using MATLAB 5.3, running in a Windows 95 environment. Performance, measured by local truncation error, accuracy, and efficiency, was evaluated by statistics of the number of steps, the number of function calls, the average number of Newton iterations, and elapsed times. Our numerical experiments show that SDIRK is unconditionally stable. With Hall's step size strategy, the method can be implemented efficiently, provided that suitable parameters are used
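
    The A-stability being exploited here can be demonstrated on the linear test problem y' = λy, where each implicit stage of an SDIRK scheme has a closed-form solution. The sketch below uses the classic two-stage, second-order, A-stable SDIRK with γ = 1 - √2/2, a simpler relative of the paper's four-stage index-2 scheme:

```python
# Two-stage, second-order, A-stable SDIRK scheme (gamma = 1 - sqrt(2)/2)
# applied to the linear test problem y' = lam * y. For a linear right-hand
# side each implicit stage equation k_i = lam*(y + h*sum_j a_ij*k_j)
# can be solved in closed form, so no Newton iteration is needed.
import math

GAMMA = 1.0 - math.sqrt(2.0) / 2.0

def sdirk2_linear(lam, y0, h, steps):
    y = y0
    for _ in range(steps):
        k1 = lam * y / (1.0 - h * GAMMA * lam)
        k2 = lam * (y + h * (1.0 - GAMMA) * k1) / (1.0 - h * GAMMA * lam)
        y = y + h * ((1.0 - GAMMA) * k1 + GAMMA * k2)
    return y
```

    For a stiff coefficient (e.g. λ = -1000 with h = 0.1) the numerical solution decays instead of blowing up, which is exactly what A-stability guarantees; an explicit method at this step size would diverge.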

  9. A new modified conjugate gradient coefficient for solving system of linear equations

    Science.gov (United States)

    Hajar, N.; ‘Aini, N.; Shapiee, N.; Abidin, Z. Z.; Khadijah, W.; Rivaie, M.; Mamat, M.

    2017-09-01

The conjugate gradient (CG) method is an evolution of computational methods for solving unconstrained optimization problems. The approach is easy to implement due to its simplicity and has been proven effective in solving real-life applications. Although this field has received a copious amount of attention in recent years, some of the new approaches to the CG algorithm cannot surpass the efficiency of the previous versions. Therefore, in this paper, a new CG coefficient which retains the sufficient descent and global convergence properties of the original CG methods is proposed. The new CG method is tested on a set of test functions under exact line search. Its performance is then compared to that of some well-known previous CG methods, based on the number of iterations and CPU time. The results show that the new CG algorithm has the best efficiency amongst all the methods tested. This paper also includes an application of the new CG algorithm to solving large systems of linear equations.
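
    For context, a CG iteration with the classic Fletcher-Reeves coefficient β = ||g_new||² / ||g_old||² under exact line search on a quadratic can be sketched as follows; the paper's new coefficient is a modification of coefficients of this kind, not the one shown here:

```python
# Conjugate gradient minimization of the quadratic f(x) = 0.5 x^T A x - b^T x
# (whose minimizer solves A x = b), using the classic Fletcher-Reeves
# coefficient beta = ||g_new||^2 / ||g_old||^2 and exact line search.

def cg_fr(A, b, x0, iters=10):
    dot = lambda u, v: sum(p * q for p, q in zip(u, v))
    matvec = lambda v: [dot(row, v) for row in A]
    x = x0[:]
    g = [ai - bi for ai, bi in zip(matvec(x), b)]   # gradient: A x - b
    d = [-gi for gi in g]                           # initial search direction
    for _ in range(iters):
        Ad = matvec(d)
        denom = dot(d, Ad)
        if denom == 0.0:
            break                                   # zero direction: converged
        alpha = -dot(g, d) / denom                  # exact line search step
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = [ai - bi for ai, bi in zip(matvec(x), b)]
        beta = dot(g_new, g_new) / dot(g, g)        # Fletcher-Reeves coefficient
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x
```

    On an SPD quadratic with exact line search this recovers linear CG and converges in at most n iterations.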

  10. A Decomposition-Based Pricing Method for Solving a Large-Scale MILP Model for an Integrated Fishery

    Directory of Open Access Journals (Sweden)

    M. Babul Hasan

    2007-01-01

The integrated fishery problem (IFP) can be decomposed into a trawler-scheduling subproblem and a fish-processing subproblem in two different ways by relaxing different sets of constraints. We tried conventional decomposition techniques, including subgradient optimization and Dantzig-Wolfe decomposition, both of which were unacceptably slow. We then developed a decomposition-based pricing method for solving the large fishery model, which gives excellent computation times. Numerical results for several planning-horizon models are presented.

  11. Application of Homotopy Analysis Method to Solve Relativistic Toda Lattice System

    International Nuclear Information System (INIS)

    Wang Qi

    2010-01-01

In this letter, the homotopy analysis method is successfully applied to solve the relativistic Toda lattice system. Comparisons are made between the results of the proposed method and exact solutions. The analysis shows that the homotopy analysis method is a powerful and easy-to-use analytic tool for solving systems of differential-difference equations. (general)

  12. Solving nonlinear evolution equation system using two different methods

    Science.gov (United States)

    Kaplan, Melike; Bekir, Ahmet; Ozer, Mehmet N.

    2015-12-01

This paper deals with constructing more general exact solutions of the coupled Higgs equation by using the (G'/G, 1/G)-expansion and (1/G')-expansion methods. The obtained solutions are expressed by three types of functions: hyperbolic, trigonometric and rational functions with free parameters. It has been shown that the suggested methods are productive and can be used to solve nonlinear partial differential equations in applied mathematics and engineering. Throughout the paper, all the calculations are made with the aid of the Maple software.

  13. SLAP, Large Sparse Linear System Solution Package

    International Nuclear Information System (INIS)

    Greenbaum, A.

    1987-01-01

1 - Description of program or function: SLAP is a set of routines for solving large sparse systems of linear equations. One need not store the entire matrix - only the nonzero elements and their row and column numbers. Any nonzero structure is acceptable, so the linear system solver need not be modified when the structure of the matrix changes. Auxiliary storage space is acquired and released within the routines themselves by use of the LRLTRAN POINTER statement. 2 - Method of solution: SLAP contains one direct solver, a band matrix factorization and solution routine, BAND, and several iterative solvers. The iterative routines are as follows: JACOBI, Jacobi iteration; GS, Gauss-Seidel iteration; ILUIR, incomplete LU decomposition with iterative refinement; DSCG and ICCG, diagonal scaling and incomplete Cholesky decomposition with conjugate gradient iteration (for symmetric positive definite matrices only); DSCGN and ILUCGN, diagonal scaling and incomplete LU decomposition with conjugate gradient iteration on the normal equations; DSBCG and ILUBCG, diagonal scaling and incomplete LU decomposition with bi-conjugate gradient iteration; and DSOMN and ILUOMN, diagonal scaling and incomplete LU decomposition with ORTHOMIN iteration
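
    The storage idea described under point 1 can be sketched with coordinate (COO) storage, keeping only the nonzero entries and their row/column indices, on which an iterative solver such as Jacobi operates directly; this is illustrative and not SLAP's actual data layout:

```python
# Jacobi iteration operating directly on sparse coordinate (COO) storage:
# parallel lists of row indices, column indices and nonzero values. Only
# the nonzeros are ever stored or touched, as in SLAP-style packages.
# Convergence assumes a diagonally dominant matrix.

def jacobi_coo(rows, cols, vals, b, iters=100):
    n = len(b)
    diag = [0.0] * n
    for r, c, v in zip(rows, cols, vals):
        if r == c:
            diag[r] = v
    x = [0.0] * n
    for _ in range(iters):
        sigma = [0.0] * n
        for r, c, v in zip(rows, cols, vals):   # accumulate off-diagonal products
            if r != c:
                sigma[r] += v * x[c]
        x = [(b[i] - sigma[i]) / diag[i] for i in range(n)]
    return x
```

    Because the sweep only iterates over the stored triplets, any nonzero structure is acceptable and the solver needs no change when the sparsity pattern changes.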

  14. Solving large nonlinear generalized eigenvalue problems from Density Functional Theory calculations in parallel

    DEFF Research Database (Denmark)

    Bendtsen, Claus; Nielsen, Ole Holm; Hansen, Lars Bruno

    2001-01-01

    The quantum mechanical ground state of electrons is described by Density Functional Theory, which leads to large minimization problems. An efficient minimization method uses a self-consistent field (SCF) solution of large eigenvalue problems. The iterative Davidson algorithm is often used, and we...

  15. Solving large-scale PDE-constrained Bayesian inverse problems with Riemann manifold Hamiltonian Monte Carlo

    Science.gov (United States)

    Bui-Thanh, T.; Girolami, M.

    2014-11-01

    We consider the Riemann manifold Hamiltonian Monte Carlo (RMHMC) method for solving statistical inverse problems governed by partial differential equations (PDEs). The Bayesian framework is employed to cast the inverse problem into the task of statistical inference whose solution is the posterior distribution in infinite dimensional parameter space conditional upon observation data and Gaussian prior measure. We discretize both the likelihood and the prior using the H1-conforming finite element method together with a matrix transfer technique. The power of the RMHMC method is that it exploits the geometric structure induced by the PDE constraints of the underlying inverse problem. Consequently, each RMHMC posterior sample is almost uncorrelated/independent from the others providing statistically efficient Markov chain simulation. However this statistical efficiency comes at a computational cost. This motivates us to consider computationally more efficient strategies for RMHMC. At the heart of our construction is the fact that for Gaussian error structures the Fisher information matrix coincides with the Gauss-Newton Hessian. We exploit this fact in considering a computationally simplified RMHMC method combining state-of-the-art adjoint techniques and the superiority of the RMHMC method. Specifically, we first form the Gauss-Newton Hessian at the maximum a posteriori point and then use it as a fixed constant metric tensor throughout RMHMC simulation. This eliminates the need for the computationally costly differential geometric Christoffel symbols, which in turn greatly reduces computational effort at a corresponding loss of sampling efficiency. We further reduce the cost of forming the Fisher information matrix by using a low rank approximation via a randomized singular value decomposition technique. This is efficient since a small number of Hessian-vector products are required. 
The Hessian-vector product in turn requires only two extra PDE solves using the adjoint

  16. Modeling and solving a large-scale generation expansion planning problem under uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Shan; Ryan, Sarah M. [Iowa State University, Department of Industrial and Manufacturing Systems Engineering, Ames (United States); Watson, Jean-Paul [Sandia National Laboratories, Discrete Math and Complex Systems Department, Albuquerque (United States); Woodruff, David L. [University of California Davis, Graduate School of Management, Davis (United States)

    2011-11-15

    We formulate a generation expansion planning problem to determine the type and quantity of power plants to be constructed over each year of an extended planning horizon, considering uncertainty regarding future demand and fuel prices. Our model is expressed as a two-stage stochastic mixed-integer program, which we use to compute solutions independently minimizing the expected cost and the Conditional Value-at-Risk; i.e., the risk of significantly larger-than-expected operational costs. We introduce stochastic process models to capture demand and fuel price uncertainty, which are in turn used to generate trees that accurately represent the uncertainty space. Using a realistic problem instance based on the Midwest US, we explore two fundamental, unexplored issues that arise when solving any stochastic generation expansion model. First, we introduce and discuss the use of an algorithm for computing confidence intervals on obtained solution costs, to account for the fact that a finite sample of scenarios was used to obtain a particular solution. Second, we analyze the nature of solutions obtained under different parameterizations of this method, to assess whether the recommended solutions themselves are invariant to changes in costs. The issues are critical for decision makers who seek truly robust recommendations for generation expansion planning. (orig.)

  17. Using a Recommendation System to Support Problem Solving and Case-Based Reasoning Retrieval

    Science.gov (United States)

    Tawfik, Andrew A.; Alhoori, Hamed; Keene, Charles Wayne; Bailey, Christian; Hogan, Maureen

    2018-01-01

    In case library learning environments, learners are presented with an array of narratives that can be used to guide their problem solving. However, according to theorists, learners struggle to identify and retrieve the optimal case to solve a new problem. Given the challenges novice face during case retrieval, recommender systems can be embedded…

  18. Workflow Agents vs. Expert Systems: Problem Solving Methods in Work Systems Design

    Science.gov (United States)

    Clancey, William J.; Sierhuis, Maarten; Seah, Chin

    2009-01-01

    During the 1980s, a community of artificial intelligence researchers became interested in formalizing problem solving methods as part of an effort called "second generation expert systems" (2nd GES). How do the motivations and results of this research relate to building tools for the workplace today? We provide an historical review of how the theory of expertise has developed, a progress report on a tool for designing and implementing model-based automation (Brahms), and a concrete example how we apply 2nd GES concepts today in an agent-based system for space flight operations (OCAMS). Brahms incorporates an ontology for modeling work practices, what people are doing in the course of a day, characterized as "activities." OCAMS was developed using a simulation-to-implementation methodology, in which a prototype tool was embedded in a simulation of future work practices. OCAMS uses model-based methods to interactively plan its actions and keep track of the work to be done. The problem solving methods of practice are interactive, employing reasoning for and through action in the real world. Analogously, it is as if a medical expert system were charged not just with interpreting culture results, but actually interacting with a patient. Our perspective shifts from building a "problem solving" (expert) system to building an actor in the world. The reusable components in work system designs include entire "problem solvers" (e.g., a planning subsystem), interoperability frameworks, and workflow agents that use and revise models dynamically in a network of people and tools. Consequently, the research focus shifts so "problem solving methods" include ways of knowing that models do not fit the world, and ways of interacting with other agents and people to gain or verify information and (ultimately) adapt rules and procedures to resolve problematic situations.

  19. A new efficient analytical method for a system of vibration. Structural analysis using a new technique of partially solving method

    International Nuclear Information System (INIS)

    Gunyasu, Kenzo; Hiramoto, Tsuneyuki; Tanimoto, Mitsumori; Osano, Minetada

    2002-01-01

    We describe a new method for solving the large-scale systems of linear equations that result from directly discretizing ordinary and partial differential equations. This new method effectively reduces the memory capacity requirements and computing time for analyses using the finite difference method and the finite element method. In this paper we solve one million linear equations directly, for the case in which the initial displacement and boundary displacement are known, using the second-order finite difference scheme of an inhomogeneous differential equation for the vibration of a 10-story structure. Excellent results were obtained. (author)
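
    The report's own code is not included in the record. As a minimal illustration of how a second-order finite-difference scheme with known boundary displacements reduces to a linear system, the following Python sketch discretizes the toy problem u''(x) = -1 on (0, 1) with zero boundary values (our own model choice, not the paper's ten-story structure) and solves it by dense Gaussian elimination:

    ```python
    def gauss_solve(A, b):
        """Dense Gaussian elimination with partial pivoting (educational sketch)."""
        n = len(b)
        A = [row[:] for row in A]
        b = b[:]
        for k in range(n):
            p = max(range(k, n), key=lambda i: abs(A[i][k]))
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
            for i in range(k + 1, n):
                f = A[i][k] / A[k][k]
                for j in range(k, n):
                    A[i][j] -= f * A[k][j]
                b[i] -= f * b[k]
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            s = sum(A[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (b[i] - s) / A[i][i]
        return x

    # Discretize u''(x) = -1 on (0, 1) with u(0) = u(1) = 0:
    # (u[i-1] - 2*u[i] + u[i+1]) / h**2 = -1 at each interior node.
    n = 9                        # interior grid points
    h = 1.0 / (n + 1)
    A = [[-2.0 if i == j else (1.0 if abs(i - j) == 1 else 0.0)
          for j in range(n)] for i in range(n)]
    rhs = [-h * h] * n           # boundary values are zero, so no correction terms

    u = gauss_solve(A, rhs)      # u[i] approximates x*(1-x)/2 at x = (i+1)*h
    ```

    For this scheme the discrete solution coincides with the exact parabola u(x) = x(1 - x)/2 at the grid points. A production code would exploit the banded structure instead of storing the full matrix, which is precisely the memory problem the paper addresses.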

  20. Knowledge acquisition from natural language for expert systems based on classification problem-solving methods

    Science.gov (United States)

    Gomez, Fernando

    1989-01-01

    It is shown how certain kinds of domain independent expert systems based on classification problem-solving methods can be constructed directly from natural language descriptions by a human expert. The expert knowledge is not translated into production rules. Rather, it is mapped into conceptual structures which are integrated into long-term memory (LTM). The resulting system is one in which problem-solving, retrieval and memory organization are integrated processes. In other words, the same algorithm and knowledge representation structures are shared by these processes. As a result of this, the system can answer questions, solve problems or reorganize LTM.

  1. Solving sparse linear least squares problems on some supercomputers by using large dense blocks

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Ostromsky, T; Sameh, A

    1997-01-01

    Efficient subroutines for dense matrix computations have recently been developed and are available on many high-speed computers. On some computers the speed of many dense matrix operations is near to the peak-performance. For sparse matrices storage and operations can be saved by operating only ... and storing only nonzero elements. However, the price is a great degradation of the speed of computations on supercomputers (due to the use of indirect addresses, to the need to insert new nonzeros in the sparse storage scheme, to the lack of data locality, etc.). On many high-speed computers a dense matrix technique is preferable to sparse matrix technique when the matrices are not large, because the high computational speed compensates fully the disadvantages of using more arithmetic operations and more storage. For very large matrices the computations must be organized as a sequence of tasks in each ...

  2. Impact of problem-based learning in a large classroom setting: student perception and problem-solving skills.

    Science.gov (United States)

    Klegeris, Andis; Hurren, Heather

    2011-12-01

    Problem-based learning (PBL) can be described as a learning environment where the problem drives the learning. This technique usually involves learning in small groups, which are supervised by tutors. It is becoming evident that PBL in a small-group setting has a robust positive effect on student learning and skills, including better problem-solving skills and an increase in overall motivation. However, very little research has been done on the educational benefits of PBL in a large classroom setting. Here, we describe a PBL approach (using tutorless groups) that was introduced as a supplement to standard didactic lectures in University of British Columbia Okanagan undergraduate biochemistry classes consisting of 45-85 students. PBL was chosen as an effective method to assist students in learning biochemical and physiological processes. By monitoring student attendance and using informal and formal surveys, we demonstrated that PBL has a significant positive impact on student motivation to attend and participate in the course work. Student responses indicated that PBL is superior to traditional lecture format with regard to the understanding of course content and retention of information. We also demonstrated that student problem-solving skills are significantly improved, but additional controlled studies are needed to determine how much PBL exercises contribute to this improvement. These preliminary data indicated several positive outcomes of using PBL in a large classroom setting, although further studies aimed at assessing student learning are needed to further justify implementation of this technique in courses delivered to large undergraduate classes.

  3. EDDYMULT: a computing system for solving eddy current problems in a multi-torus system

    International Nuclear Information System (INIS)

    Nakamura, Yukiharu; Ozeki, Takahisa

    1989-03-01

    A new computing system EDDYMULT based on the finite element circuit method has been developed to solve actual eddy current problems in a multi-torus system, which consists of many torus-conductors and various kinds of axisymmetric poloidal field coils. The EDDYMULT computing system can deal three-dimensionally with the modal decomposition of eddy current in a multi-torus system, the transient phenomena of eddy current distributions and the resultant magnetic field. Therefore, users can apply the computing system to the solution of the eddy current problems in a tokamak fusion device, such as the design of poloidal field coil power supplies, the mechanical stress design of the intensive electromagnetic loading on device components and the control analysis of plasma position. The present report gives a detailed description of the EDDYMULT system as a user's manual: 1) theory, 2) structure of the code system, 3) input description, 4) problem restrictions, 5) description of the subroutines, etc. (author)

  4. Solving system of DAEs by homotopy analysis method

    International Nuclear Information System (INIS)

    Awawdeh, Fadi; Jaradat, H.M.; Alsayyed, O.

    2009-01-01

    Homotopy analysis method (HAM) is applied to systems of differential-algebraic equations (DAEs). The HAM is proved to be very effective, simple and convenient to give approximate analytical solutions to DAEs.

  5. Solving polynomial systems using no-root elimination blending schemes

    KAUST Repository

    Barton, Michael

    2011-01-01

    Searching for the roots of (piecewise) polynomial systems of equations is a crucial problem in computer-aided design (CAD), and an efficient solution is in strong demand. Subdivision solvers are frequently used to achieve this goal; however

  6. Comparative Study on a Solving Model and Algorithm for a Flush Air Data Sensing System

    Directory of Open Access Journals (Sweden)

    Yanbin Liu

    2014-05-01

    Full Text Available With the development of high-performance aircraft, precise air data are necessary to complete challenging tasks such as flight maneuvering with large angles of attack and high speed. As a result, the flush air data sensing system (FADS) was developed to satisfy the stricter control demands. In this paper, comparative studies on the solving model and algorithm for FADS are conducted. First, the basic principles of FADS are given to elucidate the nonlinear relations between the inputs and the outputs. Then, several different solving models and algorithms for FADS are provided to compute the air data, including the angle of attack, sideslip angle, dynamic pressure and static pressure. Afterwards, the evaluation criteria for the resulting models and algorithms are discussed with respect to the real design demands. Furthermore, a simulation using these algorithms is performed to identify the properties of the distinct models and algorithms, such as measuring precision and real-time features. The advantages of these models and algorithms under different flight conditions are also analyzed, and some suggestions for their engineering applications are proposed to help future research.

  7. Analytical-Algebraic Approach to Solving Chaotic System

    Czech Academy of Sciences Publication Activity Database

    Beran, Zdeněk; Čelikovský, Sergej

    2016-01-01

    Roč. 26, č. 3 (2016), č. článku 1650051. ISSN 0218-1274 R&D Projects: GA ČR GA13-20433S Institutional support: RVO:67985556 Keywords : Laplace transform * Laplace-Adomian decomposition * Adomian polynomials * nonlinear systems * chaos Subject RIV: BC - Control Systems Theory Impact factor: 1.329, year: 2016 http://library.utia.cas.cz/separaty/2016/TR/beran-0458430.pdf

  8. Measurement system for large motions

    International Nuclear Information System (INIS)

    Noyes, R.; Davies, L.; Kalinowski, J.; Stubbs, T.

    1979-05-01

    The system used to measure the response of geologic media to stress waves generated during and after underground tests performed by the Lawrence Livermore Laboratory (LLL) at the Department of Energy's Nevada Test Site (NTS) is described. Included are descriptions of the system transducers and accelerometers, the procedures used in calibrating and packaging the system at the North Las Vegas Facility of EG and G, Inc., the positioning of equipment during fielding activities at NTS, and the procedures used at LLL's facilities in California to reduce and analyze the data recorded on magnetic tape at NTS during an underground nuclear explosion. In summarizing, the authors give the system high marks, attributing its success to good basic design, careful installation, and rigorous calibration and data analysis techniques applied with good judgement on the part of the instrumentation engineers and data analysts. 10 figures

  9. Australian health system restructuring - what problem is being solved?

    Science.gov (United States)

    Dwyer, Judith M

    2004-11-19

    BACKGROUND: In recent years, Australian state and territory governments have reviewed and restructured the health systems they lead and regulate. This paper examines the outcomes of the most recent official published reviews of systems and structures; identifies the common themes; and addresses two questions: what problems are being addressed? And how would we know if the changes were successful? RESULTS: In all the broad, systemic reviews, the main health system problems identified were money, hospital utilisation and a weak primary health care system. The solutions are various, but there is a common trend towards centralisation of governance, often at state health authority level, and stronger accountability measures. Other common themes are hospital substitution (services to avoid the need for admission); calls for cooperation across the Commonwealth:state divide, or for its abolition; and the expected range of current efficiency and effectiveness measures (eg amalgamate pathology and support services) and ideas in good currency (eg call centres). The top-down nature of the public review process is noted, along with the political nature of the immediate catalysts for calling on a review. CONCLUSION: The long-standing tension between the pull to centralisation of authority and the need for innovation in care models is heightened by recent changes, which may be counterproductive in an era dominated by the burden of chronic disease. I argue that the current reforms will not succeed in achieving the stated goals unless they make a difference for people with chronic illness. And if this is correct, the most useful focus for evaluation of the success of the reforms may be their impact on the system's ability to develop and deliver better models of care for this growing group of patients.

  10. Testing reactive systems with data: enumerative methods and constraint solving

    NARCIS (Netherlands)

    Calamé, Jens Rüdiger

    2008-01-01

    Software faults are a well-known phenomenon. In most cases, they are just annoying – if the computer game does not work as expected – or expensive – if once again a space project fails due to some faulty data conversion. In critical systems, however, faults can have life-threatening consequences. It

  11. Parallel computation for solving the tridiagonal linear system of equations

    International Nuclear Information System (INIS)

    Ishiguro, Misako; Harada, Hiroo; Fujii, Minoru; Fujimura, Toichiro; Nakamura, Yasuhiro; Nanba, Katsumi.

    1981-09-01

    Recently, applications of parallel computation to scientific calculations have increased, owing to the need for high-speed calculation of large-scale programs. At the JAERI computing center, an array processor FACOM 230-75 APU has been installed to study the applicability of parallel computation to nuclear codes. We performed numerical experiments using the APU on methods for solving tridiagonal linear equations, an important problem in scientific calculations. Referring to recent papers on parallel methods, we investigated eight of them: the Gauss elimination method, Parallel Gauss method, Accelerated parallel Gauss method, Jacobi method, Recursive doubling method, Cyclic reduction method, Chebyshev iteration method, and Conjugate gradient method. The computing time and accuracy were compared among the methods on the basis of the numerical experiments. As a result, it is found that the Cyclic reduction method is best in both computing time and accuracy, with the Gauss elimination method second. (author)
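
    The JAERI subroutines themselves are not reproduced in the record. As a serial point of reference, the tridiagonal Gauss elimination (the Thomas algorithm), the method ranked second in the study, can be sketched in Python as follows (the array conventions are our own):

    ```python
    def thomas_solve(a, b, c, d):
        """Solve a tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i].

        a: sub-diagonal (length n, a[0] unused)
        b: main diagonal (length n)
        c: super-diagonal (length n, c[n-1] unused)
        d: right-hand side (length n)
        """
        n = len(b)
        cp = [0.0] * n   # modified super-diagonal
        dp = [0.0] * n   # modified right-hand side
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):                      # forward elimination
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):             # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x
    ```

    Unlike cyclic reduction, this recurrence is inherently sequential, which is exactly why the parallel variants compared in the paper were of interest on an array processor.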

  12. Solving the Turbine Positioning Problem for Large Offshore Wind Farms by Simulated Annealing

    DEFF Research Database (Denmark)

    Rivas, Rajai Aghabi; Clausen, Jens; Hansen, Kurt Schaldemose

    2009-01-01

    search operations are performed recursively until the system converges. The effectiveness of the proposed algorithm is demonstrated on a suite of real life test cases, including Horns Rev offshore wind farm. The results are verified using a commercial wind resource software indicating that this method...... is negligible while, as the wind farm's size reduces, the differences start becoming significant. A sensitivity analysis is also performed showing that greater density of turbines in the perimeter of the optimized wind farm reduces the wake losses even if the wind climate changes....

  13. Derivative free Davidon-Fletcher-Powell (DFP) for solving symmetric systems of nonlinear equations

    Science.gov (United States)

    Mamat, M.; Dauda, M. K.; Mohamed, M. A. bin; Waziri, M. Y.; Mohamad, F. S.; Abdullah, H.

    2018-03-01

    Problems arising from the work of engineers, economists, modelling, industry, computing, and science are mostly nonlinear equations in nature. Numerical solution of such systems is widely applied in those areas of mathematics. Over the years, there has been significant theoretical study to develop methods for solving such systems; despite these efforts, the methods developed still have deficiencies. As a contribution to solving systems of the form F(x) = 0, x ∈ R^n, a derivative-free method via the classical Davidon-Fletcher-Powell (DFP) update is presented. This is achieved by simply approximating the inverse Hessian matrix Q_{k+1}^{-1} by θ_k I. The modified method satisfies the descent condition and possesses local superlinear convergence properties. Interestingly, without computing any derivative, the proposed method never failed to converge throughout the numerical experiments. The output is based on the number of iterations and CPU time; different initial starting points were used to solve 40 benchmark test problems. With the aid of the squared-norm merit function and a derivative-free line search technique, the approach yields a method for solving symmetric systems of nonlinear equations that is capable of significantly reducing the CPU time and number of iterations, as compared to its counterparts. A comparison between the proposed method and the classical DFP update was made, and the proposed method was found to be the top performer, outperforming the existing method in almost all the cases. In terms of number of iterations, out of the 40 problems solved, the proposed method solved 38 successfully (95%) while the classical DFP solved 2 problems (5%). In terms of CPU time, the proposed method solved 29 out of the 40 problems given (i.e. 72.5%) successfully, whereas the classical DFP solved 11 (27.5%). The method is valid in terms of derivation, reliable in terms of number of iterations and accurate in terms of CPU time. Thus, it is suitable and achieves the objective.
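
    The abstract omits the algorithmic details, so the following Python sketch shows only the core idea: replace the inverse Hessian by the scalar matrix θ_k I, so that each step needs function values only. The Barzilai-Borwein-style choice of θ_k and the test system are our own assumptions; the paper's actual method additionally employs a squared-norm merit function and a derivative-free line search:

    ```python
    import math

    def F(x):
        # Hypothetical symmetric test system with its root at the origin.
        return [math.exp(v) - 1.0 for v in x]

    def scalar_quasi_newton(F, x0, tol=1e-10, max_iter=200):
        """Iterate x_{k+1} = x_k - theta_k * F(x_k), i.e. approximate the
        inverse Hessian by theta_k * I, with no derivatives computed."""
        x = list(x0)
        theta = 1.0
        Fx = F(x)
        for _ in range(max_iter):
            if max(abs(v) for v in Fx) < tol:
                break
            x_new = [xi - theta * fi for xi, fi in zip(x, Fx)]
            F_new = F(x_new)
            s = [a - b for a, b in zip(x_new, x)]
            y = [a - b for a, b in zip(F_new, Fx)]
            sty = sum(a * b for a, b in zip(s, y))
            if abs(sty) > 0.0:
                # Spectral (Barzilai-Borwein-style) scalar: s^T s / s^T y.
                theta = sum(a * a for a in s) / sty
            x, Fx = x_new, F_new
        return x

    root = scalar_quasi_newton(F, [0.5, -0.3])
    ```

    The appeal of the scalar approximation is that storage and per-step cost are O(n), instead of the O(n^2) needed to carry a full quasi-Newton matrix.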

  14. Operation of large cryogenic systems

    International Nuclear Information System (INIS)

    Rode, C.H.; Ferry, B.; Fowler, W.B.; Makara, J.; Peterson, T.; Theilacker, J.; Walker, R.

    1985-06-01

    This report is based on the past 12 years of experience with R and D and operation of the 27 kW Fermilab Tevatron Cryogenic System. In general the comments are applicable to all helium plants larger than 1000 W (400 l/hr) and non-mass-produced nitrogen plants larger than 50 tons per day. 14 refs., 3 figs., 1 tab

  15. Solving Complex Logistics Problems with Multi-Artificial Intelligent System

    Directory of Open Access Journals (Sweden)

    Y.K. Tse

    2009-10-01

    Full Text Available The economy, which has become more information intensive, more global and more technologically dependent, is undergoing dramatic changes. The role of logistics is also becoming more and more important. In logistics, the objective of service providers is to fulfill all customers' demands while adapting to the dynamic changes of logistics networks so as to achieve a higher degree of customer satisfaction and therefore a higher return on investment. In order to provide high quality service, knowledge and information sharing among departments becomes a must in this fast changing market environment. In particular, artificial intelligence (AI) technologies have attracted significant attention for enhancing the agility of supply chain management, as well as logistics operations. In this research, a multi-artificial intelligence system, named Integrated Intelligent Logistics System (IILS), is proposed. The objective of IILS is to provide quality logistics solutions to achieve high levels of service performance in the logistics industry. The new feature of this agile intelligence system is characterized by the incorporation of intelligence modules through the capabilities of case-based reasoning, multi-agent, fuzzy logic and artificial neural networks, achieving the optimization of the performance of organizations.

  16. Solving polynomial systems using no-root elimination blending schemes

    KAUST Repository

    Barton, Michael

    2011-12-01

    Searching for the roots of (piecewise) polynomial systems of equations is a crucial problem in computer-aided design (CAD), and an efficient solution is in strong demand. Subdivision solvers are frequently used to achieve this goal; however, the subdivision process is expensive, and a vast number of subdivisions is to be expected, especially for higher-dimensional systems. Two blending schemes that efficiently reveal domains that cannot contribute by any root, and therefore significantly reduce the number of subdivisions, are proposed. Using a simple linear blend of functions of the given polynomial system, a function is sought after to be no-root contributing, with all control points of its Bernstein-Bézier representation of the same sign. If such a function exists, the domain is purged away from the subdivision process. The applicability is demonstrated on several CAD benchmark problems, namely surface-surface-surface intersection (SSSI) and surface-curve intersection (SCI) problems, computation of the Hausdorff distance of two planar curves, or some kinematic-inspired tasks. © 2011 Elsevier Ltd. All rights reserved.
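
    The purging test rests on the convex-hull property of the Bernstein-Bézier form: if all control points share one strict sign, the function cannot vanish anywhere on the domain. A univariate Python sketch of this no-root test follows; the power-to-Bernstein conversion on [0, 1] is standard, while the helper names are ours:

    ```python
    from math import comb

    def bernstein_coeffs(power_coeffs):
        """Convert p(t) = sum_j a_j t^j on [0, 1] to Bernstein-Bezier
        control points b_i = sum_{j<=i} C(i,j)/C(n,j) * a_j."""
        n = len(power_coeffs) - 1
        return [sum(comb(i, j) / comb(n, j) * power_coeffs[j]
                    for j in range(i + 1))
                for i in range(n + 1)]

    def no_root_in_unit_interval(power_coeffs):
        """Convex-hull property: if all Bernstein control points have one
        strict sign, p has no root in [0, 1] and the domain can be purged."""
        b = bernstein_coeffs(power_coeffs)
        return all(v > 0 for v in b) or all(v < 0 for v in b)
    ```

    The test is one-sided by design: mixed signs do not prove a root exists, they only mean the domain must be kept and subdivided further, which is exactly how the blending schemes in the paper cut down the subdivision count.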

  17. Solved and unsolved problems in boiler systems. Learning from accidents

    International Nuclear Information System (INIS)

    Ozawa, Mamoru

    2000-01-01

    This paper begins with a brief review of the similarity law of conventional fossil-fuel-fired boilers. The concept is based on the fact that the heat release due to combustion in the furnace is restricted by the furnace volume but the heat absorption is restricted by the heat transfer surface area. This means that a small-capacity boiler has a relatively high specific furnace heat release rate, about 10 MW/m³, and on the contrary a large-capacity boiler has a lower value. The surface-heat-flux limit is mainly dominated by the CHF inside the water-wall tubes of the boiler furnace, about 350 kW/m². This heat-flux limit is of almost the same order independently of the capacity of boilers. For the safety of water-walls, it is essential to retain suitable water circulation, i.e. circulation ratio and velocity of water. This principle is common knowledge among boiler designers, but in practice this is not always the case. Newly designed boilers often suffer from similar accidents, especially burnout due to circulation problems. This paper demonstrates recent accidents encountered in practical boilers, and raises problems of rather classical but important two-phase flow and heat transfer. (author)

  18. Engineering large-scale agent-based systems with consensus

    Science.gov (United States)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge based agents (KBA) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  19. Nonlinear evolution equations and solving algebraic systems: the importance of computer algebra

    International Nuclear Information System (INIS)

    Gerdt, V.P.; Kostov, N.A.

    1989-01-01

    In the present paper we study the application of computer algebra to solving the nonlinear polynomial systems which arise in the investigation of nonlinear evolution equations. We consider several systems which are obtained in the classification of integrable nonlinear evolution equations with uniform rank. Other polynomial systems are related to finding the algebraic curves for finite-gap elliptic potentials of Lame type and generalizations. All systems under consideration are solved using the method based on the construction of the Groebner basis for the corresponding polynomial ideals. The computations have been carried out using computer algebra systems. 20 refs

  20. Computer programs for solving systems of nonlinear equations

    International Nuclear Information System (INIS)

    Asaoka, Takumi

    1978-03-01

    Computer programs to find a solution, usually the one closest to some guess, of a system of simultaneous nonlinear equations are provided for real functions of real arguments. These are based on quasi-Newton methods or projection methods, which are briefly reviewed in the present report. Benchmark tests were performed on these subroutines to grasp their characteristics. Among the programs not requiring analytical forms of the derivatives of the Jacobian matrix, we have dealt with NS01A of Powell, NS03A of Reid for systems with a sparse Jacobian, and NONLIN of Brown. Of these three subroutines based on quasi-Newton methods, NONLIN is shown to be the most useful because of its stable algorithm and short computation time. On the other hand, among the subroutines for which the derivatives of the Jacobian are to be supplied analytically, we have tested INTECH, a quasi-Newton method based on Boggs' algorithm; PROJA of Georg and Keller, based on the projection method; and an option of NS03A. The results have shown that INTECH, treating variables which appear only linearly in the functions separately, takes the shortest computation time on the whole, while the projection method requires further research to find an optimal algorithm. (auth.)
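
    The subroutines above are library codes not reproduced in the report. For readers unfamiliar with the quasi-Newton family they belong to, here is a compact Python sketch of Broyden's rank-one update, a representative quasi-Newton method rather than the specific algorithm of any subroutine named above, applied to a small hypothetical test system:

    ```python
    def gauss_solve(A, b):
        # Tiny dense Gaussian elimination with partial pivoting.
        n = len(b)
        A = [row[:] for row in A]
        b = b[:]
        for k in range(n):
            p = max(range(k, n), key=lambda i: abs(A[i][k]))
            A[k], A[p], b[k], b[p] = A[p], A[k], b[p], b[k]
            for i in range(k + 1, n):
                f = A[i][k] / A[k][k]
                A[i] = [aij - f * akj for aij, akj in zip(A[i], A[k])]
                b[i] -= f * b[k]
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
        return x

    def broyden_solve(F, x0, tol=1e-10, max_iter=100):
        """Broyden's 'good' method: maintain a Jacobian approximation B and
        correct it with a rank-one update from each step pair (s, y)."""
        n = len(x0)
        B = [[float(i == j) for j in range(n)] for i in range(n)]  # B0 = I
        x, Fx = list(x0), F(x0)
        for _ in range(max_iter):
            if max(abs(v) for v in Fx) < tol:
                break
            s = gauss_solve(B, [-v for v in Fx])       # solve B s = -F(x)
            x_new = [a + d for a, d in zip(x, s)]
            F_new = F(x_new)
            y = [a - c for a, c in zip(F_new, Fx)]
            Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
            sts = sum(v * v for v in s)
            if sts == 0.0:
                break
            for i in range(n):                          # B += (y - Bs) s^T / s^T s
                for j in range(n):
                    B[i][j] += (y[i] - Bs[i]) * s[j] / sts
            x, Fx = x_new, F_new
        return x

    # Hypothetical mildly nonlinear test system with root (1, 1).
    def F(x):
        return [x[0] + 0.25 * x[1] ** 2 - 1.25,
                x[1] + 0.25 * x[0] ** 2 - 1.25]

    root = broyden_solve(F, [0.0, 0.0])
    ```

    The point of the update is the same one the benchmarked subroutines exploit: each iteration costs only function evaluations plus linear algebra, with no analytical Jacobian required.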

  1. Solving the apparent diversity-accuracy dilemma of recommender systems.

    Science.gov (United States)

    Zhou, Tao; Kuscsik, Zoltán; Liu, Jian-Guo; Medo, Matús; Wakeling, Joseph Rushton; Zhang, Yi-Cheng

    2010-03-09

    Recommender systems use data on past user preferences to predict possible future likes and interests. A key challenge is that while the most useful individual recommendations are to be found among diverse niche objects, the most reliably accurate results are obtained by methods that recommend objects based on user or object similarity. In this paper we introduce a new algorithm specifically to address the challenge of diversity and show how it can be used to resolve this apparent dilemma when combined in an elegant hybrid with an accuracy-focused algorithm. By tuning the hybrid appropriately we are able to obtain, without relying on any semantic or context-specific information, simultaneous gains in both accuracy and diversity of recommendations.
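
    The paper's hybrid combines two specific diffusion-based algorithms; purely as an illustration of the tunable-hybrid idea, not the authors' exact construction, one can blend an accuracy-focused and a diversity-focused score list with a single parameter:

    ```python
    def hybrid_scores(acc, div, lam):
        """Blend two per-item score dicts: lam = 0 is pure accuracy-focused,
        lam = 1 is pure diversity-focused (illustrative sketch only)."""
        items = set(acc) | set(div)
        return {i: (1.0 - lam) * acc.get(i, 0.0) + lam * div.get(i, 0.0)
                for i in items}

    # Hypothetical scores from two component recommenders.
    acc = {"a": 1.0, "b": 0.5}
    div = {"b": 1.0, "c": 0.8}
    ranked = sorted(hybrid_scores(acc, div, 0.5).items(), key=lambda kv: -kv[1])
    ```

    Sweeping lam traces out an accuracy-diversity curve; the paper's finding is that its particular diffusion hybrid improves both metrics simultaneously rather than trading one for the other.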

  2. Large inflated-antenna system

    Science.gov (United States)

    Hinson, W. F.; Keafer, L. S.

    1984-01-01

    It is proposed that for inflatable antenna systems, technology feasibility can be demonstrated and parametric design and scalability (scale factor 10 to 20) can be validated with an experiment using a 16-m-diameter antenna attached to the Shuttle. The antenna configuration consists of a thin film cone and paraboloid held to proper shape by internal pressure and a self-rigidizing torus. The cone and paraboloid would be made using pie-shaped gores with the paraboloid being coated with aluminum to provide reflectivity. The torus would be constructed using an aluminum polyester composite that when inflated would erect to a smooth shell that can withstand loads without internal pressure.

  3. Jump phenomena. [large amplitude responses of nonlinear systems

    Science.gov (United States)

    Reiss, E. L.

    1980-01-01

    The paper considers jump phenomena composed of large amplitude responses of nonlinear systems caused by small amplitude disturbances. Physical problems where large jumps in the solution amplitude are important features of the response are described, including snap buckling of elastic shells, chemical reactions leading to combustion and explosion, and long-term climatic changes of the earth's atmosphere. A new method of rational functions was then developed which consists of representing the solutions of the jump problems as rational functions of the small disturbance parameter; this method can solve jump problems explicitly.

  4. Multiobjective CVaR Optimization Model and Solving Method for Hydrothermal System Considering Uncertain Load Demand

    Directory of Open Access Journals (Sweden)

    Zhongfu Tan

    2015-01-01

    Full Text Available In order to address the influence of load uncertainty on hydrothermal power system operation and achieve the optimal objectives of system power generation consumption, pollutant emissions, and first-stage hydropower station storage capacity, this paper introduces the CVaR method and builds a multiobjective optimization model together with its solving method. In the optimization model, the actual values and deviation values of load demand are regarded as random variables, the scheduling objective is redefined to meet the confidence level requirement, and system operation constraints and loss function constraints are taken into consideration. To solve the proposed model, this paper linearizes the nonlinear constraints and applies fuzzy satisfaction, fuzzy entropy, and weighted multiobjective function theories to build a fuzzy entropy multiobjective CVaR model. The model is a mixed integer linear programming problem. Then, six thermal power plants and three cascade hydropower stations are taken as the hydrothermal system for numerical simulation. The results verified that the multiobjective CVaR method is applicable to hydrothermal scheduling problems and can better reflect the risk level of the scheduling result. The fuzzy entropy satisfaction degree solving algorithm simplifies the solving difficulty and obtains the optimal operation scheduling scheme.
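
    For readers unfamiliar with the risk measure, the conditional value-at-risk used as the scheduling criterion can be sketched empirically in Python. The discretization of CVaR_alpha as the mean of the worst (1 - alpha) fraction of equally likely scenario losses is standard; the function and variable names are ours:

    ```python
    def empirical_cvar(losses, alpha):
        """Mean of the worst (1 - alpha) fraction of scenario losses."""
        worst_first = sorted(losses, reverse=True)
        k = max(1, round((1.0 - alpha) * len(losses)))
        return sum(worst_first[:k]) / k

    # Ten equally likely loss scenarios; CVaR_0.8 averages the worst two.
    cvar = empirical_cvar([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 0.8)
    ```

    Unlike plain value-at-risk, this tail average is representable with linear constraints (the Rockafellar-Uryasev formulation), which is why the paper's model stays a mixed integer linear program.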

  5. The Improvement of Communication and Inference Skills in Colloid System Material by Problem Solving Learning Model

    OpenAIRE

    maisarera, yunita; diawati, chansyanah; fadiawati, noor

    2012-01-01

    The aim of this research is to describe the effectiveness of problem solving learning in improving communication and inference skills in colloid system material. Subjects in this research were students of the XI IPA1 and XI IPA2 classrooms in Persada Junior High School in Bandar Lampung in academic year 2011-2012, where students of both classrooms had the same characteristics. This research used a quasi-experimental method and a pretest-posttest control group design. Effectiveness of problem solving le...

  6. New approach to solve fully fuzzy system of linear equations using ...

    Indian Academy of Sciences (India)

    This paper proposes two new methods to solve fully fuzzy system of linear equations. The fuzzy system has been converted to a crisp system of linear equations by using single and double parametric form of fuzzy numbers to obtain the non-negative solution. Double parametric form of fuzzy numbers is defined and applied ...

  7. Large thermal protection system panel

    Science.gov (United States)

    Weinberg, David J. (Inventor); Myers, Franklin K. (Inventor); Tran, Tu T. (Inventor)

    2003-01-01

    A protective panel for a reusable launch vehicle provides enhanced moisture protection, simplified maintenance, and increased temperature resistance. The protective panel includes an outer ceramic matrix composite (CMC) panel, and an insulative bag assembly coupled to the outer CMC panel for isolating the launch vehicle from elevated temperatures and moisture. A standoff attachment system attaches the outer CMC panel and the bag assembly to the primary structure of the launch vehicle. The insulative bag assembly includes a foil bag having a first opening shrink fitted to the outer CMC panel such that the first opening and the outer CMC panel form a water tight seal at temperatures below a desired temperature threshold. Fibrous insulation is contained within the foil bag for protecting the launch vehicle from elevated temperatures. The insulative bag assembly further includes a back panel coupled to a second opening of the foil bag such that the fibrous insulation is encapsulated by the back panel, the foil bag, and the outer CMC panel. The use of a CMC material for the outer panel in conjunction with the insulative bag assembly eliminates the need for waterproofing processes, and ultimately allows for more efficient reentry profiles.

  8. Chosen interval methods for solving linear interval systems with special type of matrix

    Science.gov (United States)

    Szyszka, Barbara

    2013-10-01

    The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix. This kind of matrix, a band matrix with a parameter, is obtained from a finite difference problem. Such linear systems occur while solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) by the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; therefore the presented linear interval systems contain elements that determine the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they introduce no method error. All calculations were performed in floating-point interval arithmetic.
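
    To make the idea of a direct interval method concrete, here is a minimal Python sketch of endpoint interval arithmetic and interval Gaussian elimination for a 2x2 system. This is our own simplification: a real implementation rounds endpoints outward so that floating-point error is also enclosed, and it would exploit the band structure the paper relies on:

    ```python
    class Interval:
        """Closed interval [lo, hi] with naive endpoint arithmetic."""
        def __init__(self, lo, hi=None):
            self.lo, self.hi = lo, (hi if hi is not None else lo)
        def __add__(self, o):
            return Interval(self.lo + o.lo, self.hi + o.hi)
        def __sub__(self, o):
            return Interval(self.lo - o.hi, self.hi - o.lo)
        def __mul__(self, o):
            ps = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
            return Interval(min(ps), max(ps))
        def __truediv__(self, o):
            assert o.lo > 0 or o.hi < 0, "divisor interval must not contain 0"
            ps = (self.lo / o.lo, self.lo / o.hi, self.hi / o.lo, self.hi / o.hi)
            return Interval(min(ps), max(ps))

    def interval_gauss_2x2(A, b):
        # Direct interval Gaussian elimination (2x2 case, no pivoting):
        # the computed intervals enclose every point solution of the
        # interval data.
        m = A[1][0] / A[0][0]
        x1 = (b[1] - m * b[0]) / (A[1][1] - m * A[0][1])
        x0 = (b[0] - A[0][1] * x1) / A[0][0]
        return x0, x1

    A = [[Interval(2.0), Interval(-1.0)], [Interval(-1.0), Interval(2.0)]]
    b = [Interval(1.0), Interval(1.0)]
    x0, x1 = interval_gauss_2x2(A, b)   # encloses the point solution (1, 1)
    ```

    Because the elimination is direct, the enclosure reflects only the data intervals and rounding, with no iteration-truncation error, which is the property the paper exploits.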

  9. Models of resource allocation optimization when solving the control problems in organizational systems

    Science.gov (United States)

    Menshikh, V.; Samorokovskiy, A.; Avsentev, O.

    2018-03-01

A mathematical model for optimizing the allocation of resources to reduce the time needed for management decisions is presented, together with algorithms for solving the general resource allocation problem. The optimization problem of choosing resources in organizational systems so as to reduce the total execution time of a job is solved. This is a complex three-level combinatorial problem whose solution requires solving several specific subproblems: estimating the duration of each action depending on the number of performers within the group that performs it; estimating the total execution time of all actions depending on the quantitative composition of the groups of performers; and finding a distribution of the available pool of performers among groups that minimizes the total execution time of all actions. In addition, algorithms for solving the general resource allocation problem are proposed.

  10. A New Numerical Technique for Solving Systems Of Nonlinear Fractional Partial Differential Equations

    Directory of Open Access Journals (Sweden)

    Mountassir Hamdi Cherif

    2017-11-01

Full Text Available In this paper, we apply an efficient method, called the Aboodh decomposition method, to solve systems of nonlinear fractional partial differential equations. This method combines the Aboodh transform with the Adomian decomposition method. The solution of the investigated systems of nonlinear fractional partial differential equations is obtained in the explicit form of a power series with easily computable terms. Some examples are given to show that the method is very efficient and accurate, and that it can be applied to other nonlinear systems of problems.
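The Adomian-decomposition half of such combined methods can be sketched on a toy (non-fractional) scalar problem; this is an illustrative sketch of plain Adomian decomposition, not the Aboodh-transform method of the paper. For the quadratic nonlinearity y' = y², y(0) = 1 (exact solution 1/(1-x)), the Adomian polynomials reduce to Cauchy products.

```python
from fractions import Fraction

# Polynomials are stored as coefficient lists [c0, c1, ...] in exact rational arithmetic.
def pmul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def padd(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def pint(p):
    """Antiderivative with zero constant term: ∫_0^x p(t) dt."""
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(p)]

# Adomian iteration for y' = y^2, y(0) = 1; A_n = sum_{i=0}^n y_i * y_{n-i}.
terms = [[Fraction(1)]]                    # y_0 = 1 from the initial condition
for n in range(5):
    A_n = [Fraction(0)]
    for i in range(n + 1):
        A_n = padd(A_n, pmul(terms[i], terms[n - i]))
    terms.append(pint(A_n))                # y_{n+1} = ∫_0^x A_n dt
# terms[n] equals x**n, so the partial sums reproduce the geometric series of 1/(1-x)
```

Each decomposition term here comes out as a single monomial, so the series is recognisable in closed form, which is exactly the "power series with easily computable terms" behaviour the abstract refers to.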

  11. Effective methods of solving of model equations of certain class of thermal systems

    International Nuclear Information System (INIS)

    Lach, J.

    1985-01-01

A number of topics connected with solving the model equations of a certain class of thermal systems by the method of successive approximations are discussed. A system of first-order partial differential equations, appearing most frequently in practical applications of heat and mass transfer theory, is reduced to an equivalent system of Volterra integral equations of the second kind. Among the sample applications, the thermal processes in the fuel channel of a nuclear reactor are solved. The theoretical analysis is illustrated by the results of numerical calculations given in tables and diagrams. 111 refs., 17 figs., 16 tabs. (author)
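The method of successive approximations (Picard iteration) for a Volterra equation of the second kind can be sketched numerically; the scalar test problem below is illustrative, not the thermal-system model of the paper. The equation is y(x) = f(x) + ∫₀ˣ K(x,t) y(t) dt, with the integral evaluated by the composite trapezoidal rule.

```python
import numpy as np

def trap(g, t):
    """Composite trapezoidal rule for samples g on the grid t."""
    return float(np.sum((g[1:] + g[:-1]) * np.diff(t)) / 2.0)

def picard(f, K, x, iters=40):
    """Successive approximations y_{n+1}(x) = f(x) + ∫_0^x K(x,t) y_n(t) dt."""
    y = f(x)
    for _ in range(iters):
        y_new = f(x)
        for i in range(1, len(x)):
            y_new[i] += trap(K(x[i], x[:i + 1]) * y[:i + 1], x[:i + 1])
        y = y_new
    return y

# Test problem: K ≡ 1, f ≡ 1 is equivalent to y' = y, y(0) = 1, so y(x) = e^x.
x = np.linspace(0.0, 1.0, 201)
y = picard(lambda x: np.ones_like(x), lambda xi, t: np.ones_like(t), x)
```

For Volterra equations the iteration always converges (the integral operator is "triangular"), so the accuracy is limited only by the quadrature step.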

  12. Reliability of large and complex systems

    CERN Document Server

    Kolowrocki, Krzysztof

    2014-01-01

    Reliability of Large and Complex Systems, previously titled Reliability of Large Systems, is an innovative guide to the current state and reliability of large and complex systems. In addition to revised and updated content on the complexity and safety of large and complex mechanisms, this new edition looks at the reliability of nanosystems, a key research topic in nanotechnology science. The author discusses the importance of safety investigation of critical infrastructures that have aged or have been exposed to varying operational conditions. This reference provides an asympt

  13. MACEDONIAN ADMINISTRATIVE JUDICIAL SYSTEM FOR SOLVING ADMINISTRATIVE DISPUTES COMPARABLE TO EUROPEAN SYSTEMS

    Directory of Open Access Journals (Sweden)

    Sladjana Eftimova

    2015-07-01

Full Text Available The structure of a legal system throughout history and today depends largely on the law and policy of the country concerned. In European countries, the position of the administrative judiciary reflects historical and cultural conceptions, with differences and similarities that leave their mark on how administrative disputes are resolved. The obligation to establish internal judicial reform within the legal system was undertaken by each of the countries formed after the break-up of the Socialist Federal Republic of Yugoslavia (SFRY), in order to follow European law on the legitimacy and constitutionality of acts and to implement an independent administrative judiciary. An analysis of the current state of the independence and objectivity of the judiciary in our country is necessary, and it is important to examine how the working conditions of the administrative judiciary can be improved.

  14. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  15. A Flowchart-Based Intelligent Tutoring System for Improving Problem-Solving Skills of Novice Programmers

    Science.gov (United States)

    Hooshyar, D.; Ahmad, R. B.; Yousefi, M.; Yusop, F. D.; Horng, S.-J.

    2015-01-01

    Intelligent tutoring and personalization are considered as the two most important factors in the research of learning systems and environments. An effective tool that can be used to improve problem-solving ability is an Intelligent Tutoring System which is capable of mimicking a human tutor's actions in implementing a one-to-one personalized and…

  16. He's homotopy perturbation method for solving systems of Volterra integral equations of the second kind

    International Nuclear Information System (INIS)

    Biazar, J.; Ghazvini, H.

    2009-01-01

In this paper, He's homotopy perturbation method is applied to solve systems of Volterra integral equations of the second kind. Some examples are presented to illustrate the ability of the method for both linear and nonlinear systems. The results reveal that the method is very effective and simple.

  17. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continues to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  18. On a new iterative method for solving linear systems and comparison results

    Science.gov (United States)

    Jing, Yan-Fei; Huang, Ting-Zhu

    2008-10-01

In Ujevic [A new iterative method for solving linear systems, Appl. Math. Comput. 179 (2006) 725-730], the author obtained a new iterative method for solving linear systems, which can be considered a modification of the Gauss-Seidel method. In this paper, we show that this method is a special case from the point of view of projection techniques, and we establish a different approach that is proven, both theoretically and numerically, to be better than (or at least as good as) Ujevic's. As the presented numerical examples show, in most cases the convergence rate is more than one and a half times that of Ujevic's method.
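For reference, the baseline Gauss-Seidel iteration that both Ujevic's scheme and the projection-based variant modify can be sketched as follows (this is the classical method, not the paper's improved one):

```python
import numpy as np

def gauss_seidel(A, b, iters=200):
    """Classical Gauss-Seidel sweeps: each component is updated in place
    using the newest available values of the other components."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        for i in range(len(b)):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Symmetric positive definite test system, for which Gauss-Seidel converges.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = gauss_seidel(A, b)
```

Viewing each component update as a one-dimensional projection step is precisely the perspective the paper uses to generalize the method.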

  19. Photolithography diagnostic expert systems: a systematic approach to problem solving in a wafer fabrication facility

    Science.gov (United States)

    Weatherwax Scott, Caroline; Tsareff, Christopher R.

    1990-06-01

One of the main goals of process engineering in the semiconductor industry is to improve wafer fabrication productivity and throughput. Engineers must work continuously toward this goal in addition to performing sustaining and development tasks. To accomplish these objectives, managers must make efficient use of engineering resources. One of the tools being used to improve efficiency is the diagnostic expert system. Expert systems are knowledge-based computer programs designed to lead the user through the analysis and solution of a problem. Several photolithography diagnostic expert systems have been implemented at the Hughes Technology Center to provide a systematic approach to process problem solving. This systematic approach was achieved by documenting cause-and-effect analyses for a wide variety of processing problems. This knowledge was organized in the form of IF-THEN rules, a common structure for knowledge representation in expert system technology. These rules form the knowledge base of the expert system, which is stored in the computer. The systems also include the problem-solving methodology used by the expert when addressing a problem in his area of expertise. Operators now use the expert systems to solve many process problems without engineering assistance. The systems also facilitate the collection of appropriate data to assist engineering in solving unanticipated problems. Currently, several expert systems have been implemented to cover all aspects of the photolithography process. The systems, which have been in use for over a year, include wafer surface preparation (HMDS), photoresist coat and softbake, align and expose on a wafer stepper, and develop inspection. These systems are part of a plan to implement an expert system diagnostic environment throughout the wafer fabrication facility. In this paper, the systems' construction is described, including knowledge acquisition, rule construction, knowledge refinement, testing, and evaluation.
The roles
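The IF-THEN rule organization this record describes can be sketched as a minimal forward-chaining matcher; the symptom and action names below are invented for illustration and are not taken from the Hughes photolithography systems.

```python
# Knowledge base: each rule is (IF-part: set of observed facts, THEN-part: action).
RULES = [
    ({"resist_too_thin"}, "check spin speed and resist viscosity"),
    ({"pattern_bridging", "dose_nominal"}, "inspect develop time and developer concentration"),
    ({"poor_adhesion"}, "verify HMDS prime step"),
]

def diagnose(observed_facts):
    """Return the action of every rule whose IF-part is satisfied by the facts."""
    facts = set(observed_facts)
    return [action for conditions, action in RULES if conditions <= facts]
```

A real diagnostic expert system would add certainty factors, follow-up questions, and chaining of intermediate conclusions; the subset test above is only the core rule-matching step.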

  20. Distributed simulation of large computer systems

    International Nuclear Information System (INIS)

    Marzolla, M.

    2001-01-01

Sequential simulation of large complex physical systems is often regarded as a computationally expensive task. In order to speed up complex discrete-event simulations, the paradigm of Parallel and Distributed Discrete Event Simulation (PDES) was introduced in the late 1970s. The authors analyze the applicability of PDES to the modeling and analysis of large computer systems; such systems are increasingly common in the area of High Energy and Nuclear Physics, because many modern experiments make use of large 'compute farms'. Some feasibility tests have been performed on a prototype distributed simulator

  1. Large-scale Complex IT Systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challen...

  2. Large-scale complex IT systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2012-01-01

    12 pages, 2 figures This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that ident...

  3. Stability of large scale interconnected dynamical systems

    International Nuclear Information System (INIS)

    Akpan, E.P.

    1993-07-01

    Large scale systems modelled by a system of ordinary differential equations are considered and necessary and sufficient conditions are obtained for the uniform asymptotic connective stability of the systems using the method of cone-valued Lyapunov functions. It is shown that this model significantly improves the existing models. (author). 9 refs

  4. Solving differential–algebraic equation systems by means of index reduction methodology

    DEFF Research Database (Denmark)

    Sørensen, Kim; Houbak, Niels; Condra, Thomas

    2006-01-01

of a number of differential equations and algebraic equations — a so-called DAE system. Two of the DAE systems are of index 1 and they can be solved by means of standard DAE-solvers. For the actual application, the equation systems are integrated by means of MATLAB’s solver ode23t, which solves moderately...... stiff ODEs and index 1 DAEs by means of the trapezoidal rule. The last sub-model, which models the boiler's steam drum, consists of two differential and three algebraic equations. The index of this model is greater than 1, which means that ode23t cannot integrate this equation system. In this paper......, it is shown how the equation system, by means of an index reduction methodology, can be reduced to a system of ordinary differential equations — ODEs....
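The trapezoidal-rule treatment that ode23t applies to index-1 DAEs can be sketched on a toy semi-explicit problem; this example is illustrative and is not the boiler model of the paper. Because the toy problem is linear, each implicit step reduces to a 2x2 linear solve.

```python
import math
import numpy as np

# Toy semi-explicit index-1 DAE:
#   y' = z,   0 = y + z,   y(0) = 1     (exact solution y = exp(-t))
def trapezoidal_dae(y0, h, steps):
    y, z = y0, -y0                        # consistent initial values: 0 = y + z
    for _ in range(steps):
        # Unknowns (y_new, z_new):
        #   y_new - (h/2) z_new = y + (h/2) z    (trapezoidal rule on y' = z)
        #   y_new +       z_new = 0              (algebraic constraint at t_new)
        A = np.array([[1.0, -h / 2.0], [1.0, 1.0]])
        rhs = np.array([y + h / 2.0 * z, 0.0])
        y, z = np.linalg.solve(A, rhs)
    return y

y1 = trapezoidal_dae(1.0, 0.01, 100)      # integrate to t = 1
```

The key index-1 property is that the algebraic constraint can be solved for z at every step; for a higher-index model this coupled system becomes singular, which is why an index reduction to ODE form is needed first.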

  5. Solving differential-algebraic equation systems by means of index reduction methodology

    DEFF Research Database (Denmark)

    Sørensen, Kim; Houbak, Niels; Condra, Thomas Joseph

    2006-01-01

of a number of differential equations and algebraic equations - a so-called DAE system. Two of the DAE systems are of index 1 and they can be solved by means of standard DAE-solvers. For the actual application, the equation systems are integrated by means of MATLAB’s solver ode23t, which solves moderately...... stiff ODEs and index 1 DAEs by means of the trapezoidal rule. The last sub-model, which models the boiler's steam drum, consists of two differential and three algebraic equations. The index of this model is greater than 1, which means that ode23t cannot integrate this equation system. In this paper......, it is shown how the equation system, by means of an index reduction methodology, can be reduced to a system of Ordinary Differential Equations - ODEs....

  6. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  7. ENVIRONMENTAL PROBLEM SOLVING WITH GEOGRAPHIC INFORMATION SYSTEMS: 1994 AND 1999 CONFERENCE PROCEEDINGS

    Science.gov (United States)

    These two national conferences, held in Cincinnati, Ohio in 1994 and 1999, addressed the area of environmental problem solving with Geographic Information Systems. This CD-ROM is a compilation of the proceedings in PDF format. The emphasis of the conference presentations were on ...

  8. The Interactions between Problem Solving and Conceptual Change: System Dynamic Modelling as a Platform for Learning

    Science.gov (United States)

    Lee, Chwee Beng

    2010-01-01

    This study examines the interactions between problem solving and conceptual change in an elementary science class where students build system dynamic models as a form of problem representations. Through mostly qualitative findings, we illustrate the interplay of three emerging intervening conditions (epistemological belief, structural knowledge…

  9. Optimal Homotopy Asymptotic Method for Solving System of Fredholm Integral Equations

    Directory of Open Access Journals (Sweden)

    Bahman Ghazanfari

    2013-08-01

Full Text Available In this paper, the optimal homotopy asymptotic method (OHAM) is applied to solve a system of Fredholm integral equations, and its effectiveness is presented. This method provides easy tools to control the convergence region of the approximating solution series wherever necessary. The results of OHAM are compared with the homotopy perturbation method (HPM) and the Taylor series expansion method (TSEM).

  10. Asymptotic behavior of a diffusive scheme solving the inviscid one-dimensional pressureless gases system

    OpenAIRE

    Boudin , Laurent; Mathiaud , Julien

    2012-01-01

    In this work, we discuss some numerical properties of the viscous numerical scheme introduced in [Boudin, Mathiaud, NMPDE 2012] to solve the one-dimensional pressureless gases system, and study in particular, from a computational viewpoint, its asymptotic behavior when the viscosity parameter used in the scheme becomes smaller.

  11. The Prehistory of Discovery: Precursors of Representational Change in Solving Gear System Problems.

    Science.gov (United States)

    Dixon, James A.; Bangert, Ashley S.

    2002-01-01

    This study investigated whether the process of representational change undergoes developmental change or different processes occupy different niches in the course of knowledge acquisition. Subjects--college, third-, and sixth-grade students--solved gear system problems over two sessions. Findings indicated that for all grades, discovery of the…

  12. System to solve three designs of the fuel management; Sistema para resolver tres disenos de la administracion de combustible

    Energy Technology Data Exchange (ETDEWEB)

    Castillo M, J. A.; Ortiz S, J. J.; Montes T, J. L.; Perusquia del C, R. [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Marinez R, R., E-mail: alejandro.castillo@inin.gob.mx [Universidad Autonoma de Campeche, Av. Agustin Melgar s/n, Col. Buenavista, 24039 San Francisco de Campeche, Campeche (Mexico)

    2015-09-15

In this paper, preliminary results are presented that were obtained with the development of a computer system that solves three stages of nuclear fuel management: the axial and radial fuel designs, as well as the design of nuclear fuel reloads. The novelty of the system is that the solution is obtained by solving the three mentioned stages in coupled form. For this, heuristic techniques are used for each stage; each of them has an objective function applied to its particular problem, but in all cases the partial results obtained are used as input data for the next stage. The heuristic techniques used to solve the coupled problem are: tabu search, neural networks, and a hybrid between scatter search and path relinking. The system applies an iterative process from the design of a fuel cell to the reload design; since these are preliminary results, the reload is designed using a Haling-type operation strategy. In each of the stages, nuclear parameters inherent to the design are monitored. The results so far show the advantage of solving the problem in a coupled manner, even when a large amount of computer resources is used. (Author)

  13. Solution methods for large systems of linear equations in BACCHUS

    International Nuclear Information System (INIS)

    Homann, C.; Dorr, B.

    1993-05-01

The computer programme BACCHUS is used to describe the steady state and transient thermal-hydraulic behaviour of the coolant in a fuel element with intact geometry in a fast breeder reactor. In such computer programmes, large systems of linear equations with sparse coefficient matrices, resulting from the discretization of the coolant conservation equations, must generally be solved thousands of times, giving rise to large demands on main storage and CPU time. The direct and iterative methods available in BACCHUS for solving these systems of linear equations are described, giving theoretical details and experience with their use in the programme. In addition, the use of a method of lines with a Runge-Kutta method for the solution of the partial differential equations is outlined. (orig.) [de
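The two solver families the report compares can be sketched with SciPy on a small sparse system typical of discretised conservation equations (a 1-D Laplacian); this is an illustrative sketch, not the BACCHUS solvers themselves.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve, cg

# Sparse tridiagonal test matrix (1-D Laplacian), symmetric positive definite.
n = 200
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x_direct = spsolve(A, b)                 # direct: sparse factorisation
x_iter, info = cg(A, b, maxiter=5000)    # iterative: conjugate gradients
# info == 0 signals that CG converged to the default tolerance
```

The trade-off mirrors the report: the direct solve gives a residual at rounding level but needs the factorisation's storage, while the iterative solve keeps only a few vectors and stops at a prescribed residual tolerance.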

  14. Theory and algorithms for solving large-scale numerical problems. Application to the management of electricity production

    International Nuclear Information System (INIS)

    Chiche, A.

    2012-01-01

This manuscript deals with large-scale optimization problems, and more specifically with solving the electricity unit commitment problem arising at EDF. First, we focused on the augmented Lagrangian algorithm. The behavior of that algorithm on an infeasible convex quadratic optimization problem is analyzed. It is shown that the algorithm finds a point that satisfies the shifted constraints with the smallest possible shift in the sense of the Euclidean norm and that it minimizes the objective on the corresponding shifted constrained set. The convergence to such a point is realized at a global linear rate, which depends explicitly on the augmentation parameter. This suggests a rule for determining the augmentation parameter to control the speed of convergence of the shifted constraint norm to zero. This rule has the advantage of generating bounded augmentation parameters even when the problem is infeasible. As a by-product, the algorithm computes the smallest translation in the Euclidean norm that makes the constraints feasible. Furthermore, this work provides solution methods for industrial stochastic optimization problems decomposed on a scenario tree, based on the progressive hedging algorithm introduced by [Rockafellar et Wets, 1991]. We also focus on the convergence of that algorithm. On the one hand, we offer a counter-example showing that the algorithm could diverge if its augmentation parameter is iteratively updated. On the other hand, we show how to recover the multipliers associated with the non-dualized constraints defined on the scenario tree from those associated with the corresponding constraints of the scenario subproblems. Their convergence is also analyzed for convex problems. The practical interest of these solution techniques is corroborated by numerical experiments performed on the electric production management problem. We apply the progressive hedging algorithm to a realistic industrial problem.
More precisely, we solve the French medium

  15. Trust dynamics in a large system implementation

    DEFF Research Database (Denmark)

    Schlichter, Bjarne Rerup; Rose, Jeremy

    2013-01-01

    outcomes, but largely ignored the dynamics of trust relations. Giddens, as part of his study of modernity, theorises trust dynamics in relation to abstract social systems, though without focusing on information systems. We use Giddens’ concepts to investigate evolving trust relationships in a longitudinal......A large information systems implementation (such as Enterprise Resource Planning systems) relies on the trust of its stakeholders to succeed. Such projects impact diverse groups of stakeholders, each with their legitimate interests and expectations. Levels of stakeholder trust can be expected...... case analysis of a large Integrated Hospital System implementation for the Faroe Islands. Trust relationships suffered a serious breakdown, but the project was able to recover and meet its goals. We develop six theoretical propositions theorising the relationship between trust and project outcomes...

  16. Modeling and simulation of large HVDC systems

    Energy Technology Data Exchange (ETDEWEB)

    Jin, H.; Sood, V.K.

    1993-01-01

This paper addresses the complexity and the amount of work involved in preparing simulation data and implementing various converter control schemes, as well as the excessive simulation time involved in the modelling and simulation of large HVDC systems. The Power Electronic Circuit Analysis program (PECAN) is used to address these problems, and a large HVDC system with two dc links is simulated using PECAN. A benchmark HVDC system is studied to compare the simulation results with those from other packages. The simulation time and results are provided in the paper.

  17. A semi-analytical approach for solving of nonlinear systems of functional differential equations with delay

    Science.gov (United States)

    Rebenda, Josef; Šmarda, Zdeněk

    2017-07-01

In the paper, we propose a correct and efficient semi-analytical approach to solve the initial value problem for systems of functional differential equations with delay. The idea is to combine the method of steps with the differential transformation method (DTM). In the latter, formulas for proportional arguments and nonlinear terms are used. An example of using this technique for a system with constant and proportional delays is presented.
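The method-of-steps idea can be sketched numerically on a scalar constant-delay equation; this is an illustrative example using simple quadrature, not the DTM machinery of the paper. For y'(t) = -y(t - 1) with history y(t) = 1 for t ≤ 0, the delayed argument on each interval [k, k+1] falls where y is already known, so each step reduces to an ordinary integration.

```python
import numpy as np

def solve_dde(intervals=2, h=1e-3):
    tau = 1.0
    t_all = np.array([-tau, 0.0])
    y_all = np.array([1.0, 1.0])                   # constant initial history
    for k in range(intervals):
        t = np.linspace(k, k + 1, int(round(1.0 / h)) + 1)
        rhs = -np.interp(t - tau, t_all, y_all)    # y(t - tau) is already known here
        increments = np.cumsum(h / 2.0 * (rhs[1:] + rhs[:-1]))   # trapezoidal rule
        y = y_all[-1] + np.concatenate(([0.0], increments))
        t_all = np.concatenate((t_all, t[1:]))     # extend the known solution
        y_all = np.concatenate((y_all, y[1:]))
    return t_all, y_all

t_all, y_all = solve_dde()
# analytically y(1) = 0 and y(2) = -1/2 for this example
```

DTM replaces the numerical quadrature on each interval by a power-series (differential transform) computation, but the stepping structure over successive delay intervals is the same.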

  18. Workflow management in large distributed systems

    International Nuclear Information System (INIS)

    Legrand, I; Newman, H; Voicu, R; Dobre, C; Grigoras, C

    2011-01-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.

  19. Workflow management in large distributed systems

    Science.gov (United States)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.

  20. A toolbox to solve coupled systems of differential and difference equations

    International Nuclear Information System (INIS)

    Ablinger, Jakob; Schneider, Carsten; Bluemlein, Johannes; Freitas, Abilio de

    2016-01-01

We present algorithms to solve coupled systems of linear differential equations, arising in the calculation of massive Feynman diagrams with local operator insertions at 3-loop order, which do not require special choices of bases. Here we assume that the desired solution has a power series representation and we seek the coefficients in closed form. In particular, if the coefficients depend on a small parameter ε (the dimensional parameter), we assume that the coefficients themselves can be expanded in formal Laurent series w.r.t. ε and we try to compute the first terms in closed form. More precisely, we have a decision algorithm which solves the following problem: if the terms can be represented by an indefinite nested hypergeometric sum expression (covering as special cases the harmonic sums, cyclotomic sums, generalized harmonic sums or nested binomial sums), then we can calculate them. If the algorithm fails, we obtain a proof that the terms cannot be represented by the class of indefinite nested hypergeometric sum expressions. Internally, this problem is reduced by holonomic closure properties to solving a coupled system of linear difference equations. The underlying method in this setting relies on decoupling algorithms, difference ring algorithms and recurrence solving. We demonstrate by a concrete example how this algorithm can be applied with the new Mathematica package SolveCoupledSystem which is based on the packages Sigma, HarmonicSums and OreSys. In all applications the representation in x-space is obtained as an iterated integral representation over general alphabets, generalizing Poincaré iterated integrals.
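The underlying "power series ansatz turned into a coefficient recurrence" step can be illustrated on a toy scalar equation (far simpler than the coupled Feynman-diagram systems of the paper). Inserting y = Σ aₙ xⁿ into y'' = y and matching coefficients gives the recurrence (n+2)(n+1) aₙ₊₂ = aₙ; with a₀ = 1, a₁ = 0 the closed form is a₂ₖ = 1/(2k)!, i.e. y = cosh(x).

```python
from fractions import Fraction

# Coefficient recurrence from the series ansatz for y'' = y:
#   (n+2)(n+1) a_{n+2} = a_n,   a_0 = 1, a_1 = 0
a = [Fraction(1), Fraction(0)]
for n in range(18):
    a.append(a[n] / ((n + 2) * (n + 1)))
# even coefficients are 1/(2k)!, odd coefficients vanish
```

The paper's algorithms do the analogous step for coupled systems: reduce the differential system to difference equations for the series coefficients and then try to solve those recurrences in closed form.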

  1. A toolbox to solve coupled systems of differential and difference equations

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, Jakob; Schneider, Carsten [Linz Univ. (Austria). Research Inst. for Symbolic Computation (RISC); Bluemlein, Johannes; Freitas, Abilio de [DESY Zeuthen (Germany)

    2016-01-15

We present algorithms to solve coupled systems of linear differential equations, arising in the calculation of massive Feynman diagrams with local operator insertions at 3-loop order, which do not require special choices of bases. Here we assume that the desired solution has a power series representation and we seek the coefficients in closed form. In particular, if the coefficients depend on a small parameter ε (the dimensional parameter), we assume that the coefficients themselves can be expanded in formal Laurent series w.r.t. ε and we try to compute the first terms in closed form. More precisely, we have a decision algorithm which solves the following problem: if the terms can be represented by an indefinite nested hypergeometric sum expression (covering as special cases the harmonic sums, cyclotomic sums, generalized harmonic sums or nested binomial sums), then we can calculate them. If the algorithm fails, we obtain a proof that the terms cannot be represented by the class of indefinite nested hypergeometric sum expressions. Internally, this problem is reduced by holonomic closure properties to solving a coupled system of linear difference equations. The underlying method in this setting relies on decoupling algorithms, difference ring algorithms and recurrence solving. We demonstrate by a concrete example how this algorithm can be applied with the new Mathematica package SolveCoupledSystem which is based on the packages Sigma, HarmonicSums and OreSys. In all applications the representation in x-space is obtained as an iterated integral representation over general alphabets, generalizing Poincaré iterated integrals.

  2. Volume reduction system by soil classifying. Soil-washing system and problems to be solved

    International Nuclear Information System (INIS)

    Oshino, Yoshio

    2016-01-01

Radioactive contamination of ground surfaces, buildings and forests over a broad area was caused by the accident at the Fukushima Daiichi Nuclear Power Plant in Japan. The national government and municipalities are still carrying out decontamination works in living areas five years after the accident. The government estimated that the amount of soil and waste removed by the decontamination works would be at most about 28,000,000 cubic meters, including up to 20,000,000 cubic meters of soil. The removed soil will be stored in an “Interim Storage Facility” and will then be finally disposed of outside Fukushima prefecture within 30 years. On the other hand, a shortage of soil as a material needed for revitalization in Fukushima prefecture is expected. The Technical Advisory Council on Remediation and Waste Management, which consists of about 90 companies, started a working group to investigate solutions to these problems. The working group focused on investigating the possibility of recycling the soil by washing and classification, so as to use it as a material for civil engineering works and to reduce the volume of the interim storage. In the first part of this report, we evaluated the applicability of various technologies for the purification and volume reduction of the removed soil, and researched the usages, required quantities and specifications of the recycled soil. In the second part, we made trial calculations of the volume reduction effects and costs of the washing and classification system. The calculated results showed the possibility of reducing the storage capacity of the interim storage facility, as well as the construction and operation costs, by recycling the removed soil with the washing and classification system inside the interim storage facility. At the end of this report, we propose problems to be solved in order to adopt the washing and classification system. (author)

  3. Large autonomous spacecraft electrical power system (LASEPS)

    Science.gov (United States)

    Dugal-Whitehead, Norma R.; Johnson, Yvette B.

    1992-01-01

    NASA - Marshall Space Flight Center is creating a large high-voltage electrical power system testbed called LASEPS. The testbed is being developed to simulate an end-to-end power system, from power generation and sources to loads. When completed, the system will support several power configurations, including several battery configurations: two 120 V batteries, one or two 150 V batteries, and one 250 to 270 V battery. The breadboard encompasses varying levels of autonomy, from remote power converters to conventional software control to expert-system control of the power system elements. In this paper, the construction and provisions of this breadboard are discussed.

  4. The application of an artificial immune system for solving the identification problem

    Directory of Open Access Journals (Sweden)

    Astachova Irina

    2017-01-01

    Ecological forecasting poses the identification problem of determining the output of pollution sources from the available experimental data. This is an inverse problem, and the method of symbolic regression is considered for its solution. A distributed artificial immune system is used as the solution algorithm. The artificial immune system (AIS) is a model, borrowed from biology, that can solve various identification problems. The solution is sought using a distributed version of the AIS implemented over a network; this distributed network can operate in any heterogeneous environment, which is achieved through the use of the cross-platform Python programming language. The AIS demonstrates the ability to restore the original function in the identification problem. The solution obtained for the test data is presented as a graph.

  5. On designing geometric motion planners to solve regulating and trajectory tracking problems for robotic locomotion systems

    International Nuclear Information System (INIS)

    Asnafi, Alireza; Mahzoon, Mojtaba

    2011-01-01

    Based on a geometric fiber bundle structure, a generalized method to solve both regulation and trajectory tracking problems for locomotion systems is presented. The method is especially applied to two case studies of robotic locomotion systems; a three link articulated fish-like robot as a prototype of locomotion systems with symmetry, and the snakeboard as a prototype of mixed locomotion systems. Our results show that although these motion planners have an open loop structure, due to their generalities, they can steer case studies with negligible errors for almost any complicated path.

  6. On designing geometric motion planners to solve regulating and trajectory tracking problems for robotic locomotion systems

    Energy Technology Data Exchange (ETDEWEB)

    Asnafi, Alireza [Hydro-Aeronautical Research Center, Shiraz University, Shiraz, 71348-13668 (Iran, Islamic Republic of); Mahzoon, Mojtaba [Department of Mechanical Engineering, School of Engineering, Shiraz University, Shiraz, 71348-13668 (Iran, Islamic Republic of)

    2011-09-15

    Based on a geometric fiber bundle structure, a generalized method to solve both regulation and trajectory tracking problems for locomotion systems is presented. The method is especially applied to two case studies of robotic locomotion systems; a three link articulated fish-like robot as a prototype of locomotion systems with symmetry, and the snakeboard as a prototype of mixed locomotion systems. Our results show that although these motion planners have an open loop structure, due to their generalities, they can steer case studies with negligible errors for almost any complicated path.

  7. Solution approach for a large scale personnel transport system for a large company in Latin America

    Energy Technology Data Exchange (ETDEWEB)

    Garzón-Garnica, Eduardo-Arturo; Caballero-Morales, Santiago-Omar; Martínez-Flores, José-Luis

    2017-07-01

    The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico, where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: The system was modelled as a VRP, using real-world transit times and the fact that routes start at the farthest point from the destination center. Experiments were performed on sets of service points of different sizes. As the instance size increased, the performance of the heuristic method was assessed against the results of an exact algorithm, and the two remained very close. When the instance was full-scale and the exact algorithm took too long to solve the problem, the heuristic algorithm still provided a feasible solution. Validation on smaller-scale instances, where the difference between the two solutions was close to 6%, supports the conclusion that the full-scale solution obtained with the heuristic algorithm lies within that same range. Findings: The proposed modelling and solution method provided a routing plan that would produce significant savings in the daily operation of the routes. Originality/value: The urban layout of Latin-American cities is unlike that of other regions in the world. Large cities in this region typically have a small, often antique, town center surrounded by a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. Combining a heuristic VRP with the results of an exact VRP made it possible to obtain an improved routing plan specific to the requirements of the region.

  8. Solution approach for a large scale personnel transport system for a large company in Latin America

    International Nuclear Information System (INIS)

    Garzón-Garnica, Eduardo-Arturo; Caballero-Morales, Santiago-Omar; Martínez-Flores, José-Luis

    2017-01-01

    The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico, where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: The system was modelled as a VRP, using real-world transit times and the fact that routes start at the farthest point from the destination center. Experiments were performed on sets of service points of different sizes. As the instance size increased, the performance of the heuristic method was assessed against the results of an exact algorithm, and the two remained very close. When the instance was full-scale and the exact algorithm took too long to solve the problem, the heuristic algorithm still provided a feasible solution. Validation on smaller-scale instances, where the difference between the two solutions was close to 6%, supports the conclusion that the full-scale solution obtained with the heuristic algorithm lies within that same range. Findings: The proposed modelling and solution method provided a routing plan that would produce significant savings in the daily operation of the routes. Originality/value: The urban layout of Latin-American cities is unlike that of other regions in the world. Large cities in this region typically have a small, often antique, town center surrounded by a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. Combining a heuristic VRP with the results of an exact VRP made it possible to obtain an improved routing plan specific to the requirements of the region.

  9. Solution approach for a large scale personnel transport system for a large company in Latin America

    Directory of Open Access Journals (Sweden)

    Eduardo-Arturo Garzón-Garnica

    2017-10-01

    Purpose: The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico, where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: The system was modelled as a VRP, using real-world transit times and the fact that routes start at the farthest point from the destination center. Experiments were performed on sets of service points of different sizes. As the instance size increased, the performance of the heuristic method was assessed against the results of an exact algorithm, and the two remained very close. When the instance was full-scale and the exact algorithm took too long to solve the problem, the heuristic algorithm still provided a feasible solution. Validation on smaller-scale instances, where the difference between the two solutions was close to 6%, supports the conclusion that the full-scale solution obtained with the heuristic algorithm lies within that same range. Findings: The proposed modelling and solution method provided a routing plan that would produce significant savings in the daily operation of the routes. Originality/value: The urban layout of Latin-American cities is unlike that of other regions in the world. Large cities in this region typically have a small, often antique, town center surrounded by a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. Combining a heuristic VRP with the results of an exact VRP made it possible to obtain an improved routing plan specific to the requirements of the region.
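The records above do not name the heuristic used; purely as an illustration, a nearest-neighbour construction, a common baseline for building an initial route from a depot in VRP work, can be sketched in a few lines (hypothetical code, not the authors' implementation):

```python
import math

def nearest_neighbor_route(depot, points):
    """Greedy route construction: from the current stop, always visit the
    nearest unvisited service point, then return to the depot."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    route = [depot]
    remaining = list(points)
    current = depot
    while remaining:
        nxt = min(remaining, key=lambda p: dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(depot)
    return route

def route_length(route):
    """Total Euclidean length of a closed route."""
    return sum(math.hypot(a[0] - b[0], a[1] - b[1])
               for a, b in zip(route, route[1:]))
```

Such a construction gives a feasible tour quickly; in practice it would be refined by a local search or compared against an exact solver on small instances, as the paper does.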

  10. Models for large superconducting toroidal magnet systems

    International Nuclear Information System (INIS)

    Arendt, F.; Brechna, H.; Erb, J.; Komarek, P.; Krauth, H.; Maurer, W.

    1976-01-01

    Prior to the design of large GJ toroidal magnet systems it is appropriate to build small-scale models that can simulate the pertinent properties of such systems and allow their relevant phenomena to be investigated. The important feature of a model is to show under which circumstances the system performance can be extrapolated to large magnets. Based on parameters such as the maximum magnetic field, the current density, and the maximum tolerable magneto-mechanical stresses, a simple method of designing model magnets is presented. It is shown how pertinent design parameters change when the toroidal dimensions are altered. In addition, some conductor cost estimates are given, based on reactor power output and wall loading.

  11. Data acquisition system issues for large experiments

    International Nuclear Information System (INIS)

    Siskind, E.J.

    2007-01-01

    This talk consists of personal observations on two classes of data acquisition ('DAQ') systems for Silicon trackers in large experiments with which the author has been concerned over the last three or more years. The first half is a classic 'lessons learned' recital based on experience with the high-level debug and configuration of the DAQ system for the GLAST LAT detector. The second half is concerned with a discussion of the promises and pitfalls of using modern (and future) generations of 'system-on-a-chip' ('SOC') or 'platform' field-programmable gate arrays ('FPGAs') in future large DAQ systems. The DAQ system pipeline for the 864k channels of Si tracker in the GLAST LAT consists of five tiers of hardware buffers which ultimately feed into the main memory of the (two-active-node) level-3 trigger processor farm. The data formats and buffer volumes of these tiers are briefly described, as well as the flow control employed between successive tiers. Lessons learned regarding data formats, buffer volumes, and flow control/data discard policy are discussed. The continued development of platform FPGAs containing large amounts of configurable logic fabric, embedded PowerPC hard processor cores, digital signal processing components, large volumes of on-chip buffer memory, and multi-gigabit serial I/O capability permits DAQ system designers to vastly increase the amount of data preprocessing that can be performed in parallel within the DAQ pipeline for detector systems in large experiments. The capabilities of some currently available FPGA families are reviewed, along with the prospects for next-generation families of announced, but not yet available, platform FPGAs. Some experience with an actual implementation is presented, and reconciliation between advertised and achievable specifications is attempted. The prospects for applying these components to space-borne Si tracker detectors are briefly discussed

  12. Large-scale theoretical calculations in molecular science - design of a large computer system for molecular science and necessary conditions for future computers

    Energy Technology Data Exchange (ETDEWEB)

    Kashiwagi, H [Institute for Molecular Science, Okazaki, Aichi (Japan)

    1982-06-01

    A large computer system was designed and established for molecular science under the leadership of molecular scientists. Features of the computer system are an automated operation system and an open self-service system. Large-scale theoretical calculations have been performed to solve many problems in molecular science, using the computer system. Necessary conditions for future computers are discussed on the basis of this experience.

  13. Large-scale theoretical calculations in molecular science - design of a large computer system for molecular science and necessary conditions for future computers

    International Nuclear Information System (INIS)

    Kashiwagi, H.

    1982-01-01

    A large computer system was designed and established for molecular science under the leadership of molecular scientists. Features of the computer system are an automated operation system and an open self-service system. Large-scale theoretical calculations have been performed to solve many problems in molecular science, using the computer system. Necessary conditions for future computers are discussed on the basis of this experience. (orig.)

  14. Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables

    Science.gov (United States)

    Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.

    2018-02-01

    In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solutions set is sought. The dual of this problem—the problem of unconstrained maximization of a piecewise-quadratic function—is solved by Newton's method. The problem of unconstrained optimization dual of the regularized problem of finding the projection onto the solution set of the system is considered. A connection of duality theory and Newton's method with some known algorithms of projecting onto a standard simplex is shown. On the example of taking into account the specifics of the constraints of the transport linear programming problem, the possibility to increase the efficiency of calculating the generalized Hessian matrix is demonstrated. Some examples of numerical calculations using MATLAB are presented.
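The known simplex-projection algorithms that the abstract relates to duality theory include the classic sort-and-threshold rule for the Euclidean projection onto the standard simplex. A minimal sketch of that textbook rule (a generic construction, not the paper's method) is:

```python
def project_onto_simplex(y):
    """Euclidean projection of y onto the standard simplex
    {x : x_i >= 0, sum_i x_i = 1}, via the classic sort-and-threshold rule:
    sort y in decreasing order, find the largest index rho such that
    u_rho - (sum_{j<=rho} u_j - 1)/rho > 0, and shift-and-clip by theta."""
    u = sorted(y, reverse=True)
    cumulative = 0.0
    theta = 0.0
    for i, ui in enumerate(u, start=1):
        cumulative += ui
        t = (cumulative - 1.0) / i
        if ui - t > 0:
            theta = t  # last index where the condition holds is rho
    return [max(yi - theta, 0.0) for yi in y]
```

Points already on the simplex are returned unchanged; points outside are shifted by the threshold theta and clipped at zero.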

  15. Siemens: Smart Technologies for Large Control Systems

    CERN Multimedia

    CERN. Geneva; BAKANY, Elisabeth

    2015-01-01

    The CERN Large Hadron Collider (LHC) is known to be one of the most complex scientific machines ever built by mankind. Its correct functioning relies on the integration of a multitude of interdependent industrial control systems, which provide different and essential services to run and protect the accelerators and experiments. These systems have to deal with several millions of data points (e.g. sensors, actuators, configuration parameters, etc.) which need to be acquired, processed, archived and analysed. For more than 20 years, CERN and Siemens have maintained a strong collaboration to deal with the challenges of these large systems. The presentation will cover the current work on the SCADA (Supervisory Control and Data Acquisition) systems and Data Analytics Frameworks.

  16. Geothermal ORC Systems Using Large Screw Expanders

    OpenAIRE

    Biederman, Tim R.; Brasz, Joost J.

    2014-01-01

    This paper describes a low-temperature Organic Rankine Cycle power recovery system that uses a screw expander, derived from Kaishan's line of screw compressors, as its power unit. The screw expander design is a modified version of the existing refrigeration compressor used on water-cooled chillers. Starting the ORC development program with existing refrigeration screw compre...

  17. Quality Function Deployment for Large Systems

    Science.gov (United States)

    Dean, Edwin B.

    1992-01-01

    Quality Function Deployment (QFD) is typically applied to small subsystems. This paper describes efforts to extend QFD to large scale systems. It links QFD to the system engineering process, the concurrent engineering process, the robust design process, and the costing process. The effect is to generate a tightly linked project management process of high dimensionality which flushes out issues early to provide a high quality, low cost, and, hence, competitive product. A pre-QFD matrix linking customers to customer desires is described.

  18. The Effect of Student Collaboration in Solving Physics Problems Using an Online Interactive Response System

    OpenAIRE

    Balta, Nuri; Awedh, Mohammad Hamza

    2016-01-01

    Advanced technology helps educational institutes to improve student learning performance and outcomes. In this study, our aim is to measure and assess student engagement and collaborative learning in engineering classes when online technology is used in solving physics problems. The interactive response system used in this study is a collaborative learning tool that allows teachers to monitor their students’ responses and progress in real time. Our results indicated that students have highly pos...

  19. Development and adjustment of programs for solving systems of linear equations

    International Nuclear Information System (INIS)

    Fujimura, Toichiro

    1978-03-01

    Programs for solving systems of linear equations have been adjusted and developed as part of expanding the scientific subroutine library SSL. The principal programs adjusted are based on the congruent method, the method of the product form of the inverse, the orthogonal method, Crout's method for sparse systems, and the acceleration of iterative methods. The programs developed are based on the escalator method, the direct parallel residue method, and the block tridiagonal method for band systems. Usage of the developed programs and their planned improvements are described. FORTRAN listings, with simple examples from tests of the programs, are also given. (auth.)
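As an illustration of the band-system solvers mentioned, the scalar tridiagonal case is classically handled by the Thomas algorithm (forward elimination followed by back-substitution). The sketch below is a generic implementation, assuming a diagonally dominant system of size at least 2; the report's FORTRAN routines may differ:

```python
def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm for a tridiagonal system Ax = d.
    a: sub-diagonal (length n-1), b: main diagonal (length n),
    c: super-diagonal (length n-1), d: right-hand side (length n).
    Assumes diagonal dominance so no pivoting is needed, and n >= 2."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The algorithm runs in O(n) time and storage, which is why banded formulations are preferred over general elimination for such systems.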

  20. Row Reduced Echelon Form for Solving Fully Fuzzy System with Unknown Coefficients

    Directory of Open Access Journals (Sweden)

    Ghassan Malkawi

    2014-08-01

    This study proposes a new method for finding a feasible fuzzy solution to a positive Fully Fuzzy Linear System (FFLS) in which the coefficients are unknown. The fully fuzzy system is transformed into a linear system whose solution is obtained using row reduced echelon form; thereafter, the crisp solution is restricted so as to obtain a positive fuzzy solution. The fuzzy solution of the FFLS includes crisp intervals that assign alternative values to the unknown entries of the fuzzy numbers. To illustrate the proposed method, numerical examples in which the coefficient entries are unknown on the right or left hand side are solved, demonstrating the contributions of this study.
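Row reduced echelon form itself is standard Gaussian elimination with the back-substitution folded in. A minimal exact-arithmetic sketch of RREF on an augmented matrix (a generic textbook routine, not the paper's treatment of the fuzzy case) is:

```python
from fractions import Fraction

def rref(matrix):
    """Reduce an augmented matrix to reduced row echelon form using
    exact rational arithmetic. Each pivot is scaled to 1 and used to
    eliminate every other entry in its column."""
    m = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # find a row at or below pivot_row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        pv = m[pivot_row][col]
        m[pivot_row] = [x / pv for x in m[pivot_row]]
        for r in range(rows):
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [x - factor * y for x, y in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m
```

For a consistent square system the final augmented column reads off the solution directly.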

  1. Advanced manipulator system for large hot cells

    International Nuclear Information System (INIS)

    Vertut, J.; Moreau, C.; Brossard, J.P.

    1981-01-01

    Large hot cells can be approached as extrapolations of smaller ones, wider, higher or longer in size, with the same concept of mechanical master-slave manipulators and high-density windows. This concept leads to a large number of working places and corresponding equipment, with many penetrations through the biological protection. When the large cell does not need permanent operation of a number of work places, as is the case in particular for serving PIE machines and maintaining the facility, the use of servo manipulators with a large supporting unit and extensive use of television appears optimal. The advances in the MA 23 and its supports will be described, including the extra facilities related to manipulator introduction and maintenance. The possibility of combining a powered manipulator and an MA 23 (single or pair) on the same boom crane system will be described. An advanced control system that minimizes the dead time in controlling support movement, in association with master-slave arm operation, is under development. The general television system includes overview cameras, associated with the limited number of windows, and manipulator cameras. A special new system that provides automatic control of the manipulator cameras, saving operator load and dead time, will be described. Full-scale tests with the MA 23 and its support will be discussed. (author)

  2. Sequential decoders for large MIMO systems

    KAUST Repository

    Ali, Konpal S.; Abediseid, Walid; Alouini, Mohamed-Slim

    2014-01-01

    the Sequential Decoder using the Fano Algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity

  3. Beyond Multiplexing Gain in Large MIMO Systems

    DEFF Research Database (Denmark)

    Cakmak, Burak; Müller, Ralf R.; Fleury, Bernard Henri

    growth (multiplexing gain). Even when the channel entries are i.i.d. the deviation from the linear growth is significant. We also find an additive property of the deviation for a concatenated MIMO system. Finally, we quantify the deviation of the large SNR capacity from the exact capacity and find...

  4. Solving the Coupled System Improves Computational Efficiency of the Bidomain Equations

    KAUST Repository

    Southern, J.A.; Plank, G.; Vigmond, E.J.; Whiteley, J.P.

    2009-01-01

    The bidomain equations are frequently used to model the propagation of cardiac action potentials across cardiac tissue. At the whole organ level, the size of the computational mesh required makes their solution a significant computational challenge. As the accuracy of the numerical solution cannot be compromised, efficiency of the solution technique is important to ensure that the results of the simulation can be obtained in a reasonable time while still encapsulating the complexities of the system. In an attempt to increase efficiency of the solver, the bidomain equations are often decoupled into one parabolic equation that is computationally very cheap to solve and an elliptic equation that is much more expensive to solve. In this study, the performance of this uncoupled solution method is compared with an alternative strategy in which the bidomain equations are solved as a coupled system. This seems counterintuitive as the alternative method requires the solution of a much larger linear system at each time step. However, in tests on two 3-D rabbit ventricle benchmarks, it is shown that the coupled method is up to 80% faster than the conventional uncoupled method, and that parallel performance is better for the larger coupled problem.

  5. An effortless hybrid method to solve economic load dispatch problem in power systems

    International Nuclear Information System (INIS)

    Pourakbari-Kasmaei, M.; Rashidi-Nejad, M.

    2011-01-01

    Highlights: → We propose a fast method that obtains feasible solutions and avoids futile search. → The method dramatically improves search efficiency and solution quality. → It is applied to solve constrained ED problems of power systems with 6 and 15 units. → Its superiority in terms of both cost and CPU time is remarkable. - Abstract: This paper proposes a new approach and coding scheme for solving economic dispatch (ED) problems in power systems through an effortless hybrid method (EHM). This novel coding scheme effectively prevents futile searching and the generation of infeasible solutions by stochastic search methods, and consequently dramatically improves search efficiency and solution quality. The dominant constraint of an economic dispatch problem is power balance. Operational constraints such as generation limits, ramp rate limits, prohibited operating zones (POZ), and network losses are considered for practical operation. In the EHM procedure, the output of each generator is first obtained with a lambda-iteration method without considering POZ; this constraint is later satisfied by a genetic-based algorithm. To demonstrate its efficiency, feasibility and speed, the EHM algorithm was applied to solve constrained ED problems of power systems with 6 and 15 units. The simulation results obtained from the EHM were compared with those reported in previous literature in terms of solution quality and computational efficiency. The results reveal the superiority of this method in terms of both cost and CPU time.
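The lambda-iteration step described above equalizes the incremental costs of all units subject to their generation limits. A minimal sketch for quadratic cost curves, using bisection on lambda and ignoring POZ, ramp rates and losses (which the EHM handles separately), might look like:

```python
def lambda_iteration(units, demand, tol=1e-6):
    """Classic lambda-iteration for economic dispatch with quadratic costs.
    Each unit is (b, c, pmin, pmax), with cost C(P) = a + b*P + c*P**2 and
    incremental cost dC/dP = b + 2*c*P. At the common marginal price lam,
    each unit generates P = (lam - b)/(2c), clipped to its limits.
    Assumes the optimal lambda lies in [0, 1000] (sketch-level bracket)."""
    def dispatch(lam):
        return [min(max((lam - b) / (2 * c), pmin), pmax)
                for (b, c, pmin, pmax) in units]
    lo, hi = 0.0, 1000.0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if sum(dispatch(lam)) < demand:
            lo = lam  # total output too low: raise the marginal price
        else:
            hi = lam
    return dispatch(0.5 * (lo + hi))
```

Because total output is monotone in lambda, bisection converges quickly; adding POZ makes the feasible set non-convex, which is why the paper hands that constraint to a genetic algorithm.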

  6. Solving the Coupled System Improves Computational Efficiency of the Bidomain Equations

    KAUST Repository

    Southern, J.A.

    2009-10-01

    The bidomain equations are frequently used to model the propagation of cardiac action potentials across cardiac tissue. At the whole organ level, the size of the computational mesh required makes their solution a significant computational challenge. As the accuracy of the numerical solution cannot be compromised, efficiency of the solution technique is important to ensure that the results of the simulation can be obtained in a reasonable time while still encapsulating the complexities of the system. In an attempt to increase efficiency of the solver, the bidomain equations are often decoupled into one parabolic equation that is computationally very cheap to solve and an elliptic equation that is much more expensive to solve. In this study, the performance of this uncoupled solution method is compared with an alternative strategy in which the bidomain equations are solved as a coupled system. This seems counterintuitive as the alternative method requires the solution of a much larger linear system at each time step. However, in tests on two 3-D rabbit ventricle benchmarks, it is shown that the coupled method is up to 80% faster than the conventional uncoupled method-and that parallel performance is better for the larger coupled problem.

  7. Ant colony system (ACS) with hybrid local search to solve vehicle routing problems

    Directory of Open Access Journals (Sweden)

    Suphan Sodsoon

    2016-02-01

    This research applied an Ant Colony System (ACS) algorithm with a hybrid local search to solve Vehicle Routing Problems (VRP) from a single depot when the customers’ requirements are known. The VRP is an NP-hard optimization problem that has usually been solved near-optimally by heuristics. A fleet of vehicles of a specific capacity is used to serve a number of customers at minimum cost, without violating the constraints of vehicle capacity. Meta-heuristic approaches to solving these problems include Simulated Annealing, Genetic Algorithms, Tabu Search and the Ant Colony System algorithm. In this case a hybrid local search (the Cross-Exchange, Or-Opt and 2-Opt algorithms) was used with an Ant Colony System algorithm. The experimental design was tested on 7 problems from the data set available online in the OR-Library. In five of the problems customers are randomly distributed with the depot in an approximately central location; the customers were grouped into clusters. The results are evaluated in terms of optimal routes using optimal distances. The experimental results are compared with those obtained from meta-heuristics and show that the proposed method outperforms six meta-heuristics in the literature.
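Of the local-search moves named above, 2-Opt is the simplest: it reverses a segment of the route whenever doing so shortens the tour. A generic sketch (not the authors' implementation) is:

```python
def two_opt(route, dist):
    """2-Opt local search on a closed route (first and last stop equal).
    dist is a symmetric distance matrix indexed by stop. Repeatedly
    reverses a segment while that strictly shortens the total length."""
    def length(r):
        return sum(dist[r[i]][r[i + 1]] for i in range(len(r) - 1))
    best = route[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 2):
            for j in range(i + 1, len(best) - 1):
                candidate = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if length(candidate) < length(best) - 1e-12:
                    best = candidate
                    improved = True
    return best
```

In an ACS framework such a move would typically be applied to each ant's constructed tour before the pheromone update.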

  8. Fires in large scale ventilation systems

    International Nuclear Information System (INIS)

    Gregory, W.S.; Martin, R.A.; White, B.W.; Nichols, B.D.; Smith, P.R.; Leslie, I.H.; Fenton, D.L.; Gunaji, M.V.; Blythe, J.P.

    1991-01-01

    This paper summarizes the experience gained simulating fires in large-scale ventilation systems patterned after the ventilation systems found in nuclear fuel cycle facilities. The series of experiments discussed included: (1) combustion aerosol loading of 0.61x0.61 m HEPA filters with the combustion products of two organic fuels, polystyrene and polymethylmethacrylate; (2) gas dynamics and heat transport through a large-scale ventilation system consisting of a 0.61x0.61 m duct 90 m in length, with dampers, HEPA filters, blowers, etc.; (3) gas dynamics and the simultaneous transport of heat and solid particulate (glass beads with a mean aerodynamic diameter of 10 μm) through the large-scale ventilation system; and (4) the transport of heat and soot, generated by kerosene pool fires, through the large-scale ventilation system. The FIRAC computer code, designed to predict fire-induced transients in nuclear fuel cycle facility ventilation systems, was used to predict the results of experiments (2) through (4). In general, the results of the predictions were satisfactory. The code predictions for the gas dynamics, heat transport, and particulate transport and deposition were within 10% of the experimentally measured values. However, the code was less successful in predicting the amount of soot generated by kerosene pool fires, probably because the fire module of the code is a one-dimensional zone model. The experiments revealed a complicated three-dimensional combustion pattern within the fire room of the ventilation system. Further refinement of the fire module within FIRAC is needed. (orig.)

  9. Entry control system for large populations

    International Nuclear Information System (INIS)

    Merillat, P.D.

    1982-01-01

    An Entry Control System has been developed which is appropriate for use at an installation with a large population requiring access over a large area. This is accomplished by centralizing the data base management and enrollment functions and decentralizing the guard-assisted, positive personnel identification and access functions. Current information pertaining to all enrollees is maintained through user-friendly enrollment stations. These stations may be used to enroll individuals, alter their area access authorizations, change expiration dates, and perform other similar functions. An audit trail of data base alterations is provided to the System Manager. Decentralized systems exist at each area to which access is controlled. The central system provides these systems with the necessary entry control information to allow them to operate microprocessor-driven entry control devices. The system is comprised of commercially available entry control components and is structured such that it will be able to incorporate improved devices as technology progresses. Currently, access is granted to individuals who possess a valid credential, have current access authorization, can supply a memorized personal identification number, and whose physical hand dimensions match the profile obtained during enrollment. The entry control devices report misuse as security violations to a Guard Alarm Display and Assessment System

  10. Large-Scale Studies on the Transferability of General Problem-Solving Skills and the Pedagogic Potential of Physics

    Science.gov (United States)

    Mashood, K. K.; Singh, Vijay A.

    2013-01-01

    Research suggests that problem-solving skills are transferable across domains. This claim, however, needs further empirical substantiation. We suggest correlation studies as a methodology for making preliminary inferences about transfer. The correlation of the physics performance of students with their performance in chemistry and mathematics in…

  11. Spatial problem-solving strategies of middle school students: Wayfinding with geographic information systems

    Science.gov (United States)

    Wigglesworth, John C.

    2000-06-01

    Geographic Information Systems (GIS) is a powerful computer software package that emphasizes the use of maps and the management of spatially referenced environmental data archived in a systems data base. Professional applications of GIS have been in place since the 1980's, but only recently has GIS gained significant attention in the K--12 classroom. Students using GIS are able to manipulate and query data in order to solve all manner of spatial problems. Very few studies have examined how this technological innovation can support classroom learning. In particular, there has been little research on how experience in using the software correlates with a child's spatial cognition and his/her ability to understand spatial relationships. This study investigates the strategies used by middle school students to solve a wayfinding (route-finding) problem using the ArcView GIS software. The research design combined an individual background questionnaire, results from the Group Assessment of Logical Thinking (GALT) test, and analysis of reflective think-aloud sessions to define the characteristics of the strategies students used to solve this particular class of spatial problem. Three uniquely different spatial problem solving strategies were identified. Visual/Concrete Wayfinders used a highly visual strategy; Logical/Abstract Wayfinders used GIS software tools to apply a more analytical and systematic approach; Transitional Wayfinders used an approach that showed evidence of shifting from a visual strategy to one that was more analytical. The triangulation of data sources indicates that this progression of wayfinding strategy can be correlated both to Piagetian stages of logical thought and to experience with the use of maps. These findings suggest that GIS teachers must be aware that their students' performance will lie on a continuum that is based on cognitive development, spatial ability, and prior experience with maps. To be most effective, GIS teaching…

  12. Application of a Modal Approach in Solving the Static Stability Problem for Electric Power Systems

    Science.gov (United States)

    Sharov, J. V.

    2017-12-01

    Application of a modal approach in solving the static stability problem for power systems is examined. It is proposed to use the matrix exponent norm as a generalized transition function of the disturbed motion of the power system. Based on the concept of a stability radius and the pseudospectrum of the Jacobian matrix, necessary and sufficient conditions for the existence of static stability margins were determined. The capabilities and advantages of the modal approach in designing centralized or distributed control, as well as the prospects for analyzing nonlinear oscillations and ensuring dynamic stability, are demonstrated.
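
    The small-signal test underlying this kind of analysis can be sketched in a few lines. The sketch below is illustrative only: the 2x2 Jacobian matrices are invented, and the record's matrix-exponent-norm and pseudospectrum machinery goes well beyond this plain eigenvalue check of a linearized system x' = Jx.

```python
# Small-signal stability sketch: eigenvalues of a 2x2 Jacobian [[a, b], [c, d]].
# The matrices below are illustrative, not from the paper.

def eig2(a, b, c, d):
    """Both eigenvalues of [[a, b], [c, d]], as complex numbers."""
    tr, det = a + d, a * d - b * c
    s = complex(tr * tr - 4.0 * det) ** 0.5
    return (tr + s) / 2.0, (tr - s) / 2.0

def spectral_abscissa(a, b, c, d):
    """Max real part of the eigenvalues; negative => asymptotically stable."""
    l1, l2 = eig2(a, b, c, d)
    return max(l1.real, l2.real)

# Damped-oscillator-like Jacobian: stable (abscissa is about -0.5)
print(spectral_abscissa(-0.5, 1.0, -1.0, -0.5) < 0.0)  # True
# Positive-feedback Jacobian: unstable
print(spectral_abscissa(0.2, 1.0, 0.0, 0.1) > 0.0)  # True
```

A stability radius, by contrast, asks how large a perturbation of J can be before the abscissa crosses zero, which is why the pseudospectrum enters the abstract's necessary and sufficient conditions.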

  13. An Application of Computer Vision Systems to Solve the Problem of Unmanned Aerial Vehicle Control

    Directory of Open Access Journals (Sweden)

    Aksenov Alexey Y.

    2014-09-01

    The paper considers an approach to applying computer vision systems to the problem of unmanned aerial vehicle control. Processing of images obtained through the onboard camera is required for absolute positioning of the aerial platform (automatic landing and take-off, hovering, etc.). The proposed method combines the advantages of existing systems and provides the ability to hover over a given point and to perform precise take-off and landing. The limitations of the implemented methods are determined, and an algorithm is proposed to combine them in order to improve efficiency.

  14. Large-scale Intelligent Transportation Systems simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large scale problems. A novel feature of our design is that vehicles will be represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

  15. Model-based verification method for solving the parameter uncertainty in the train control system

    International Nuclear Information System (INIS)

    Cheng, Ruijun; Zhou, Jin; Chen, Dewang; Song, Yongduan

    2016-01-01

    This paper presents a parameter analysis method to solve the parameter uncertainty problem for hybrid systems and to explore the correlation of key parameters in a distributed control system. To improve the reusability of the control model, the proposed approach supports obtaining the constraint sets of all uncertain parameters in the abstract linear hybrid automata (LHA) model when satisfying the safety requirements of the train control system. Then, in order to solve the state space explosion problem, an online verification method is proposed to monitor the operating status of high-speed trains, motivated by the real-time property of the train control system. Furthermore, we construct LHA formal models of the train tracking model and the movement authority (MA) generation process as cases to illustrate the effectiveness and efficiency of the proposed method. In the first case, we obtain the constraint sets of uncertain parameters to avoid collision between trains. In the second case, the correlation of the position report cycle and the MA generation cycle is analyzed under both normal and abnormal conditions influenced by the packet-loss factor. Finally, considering the stochastic characterization of time distributions and the real-time nature of the moving block control system, the transient probabilities of the wireless communication process are obtained by stochastic time Petri nets. - Highlights: • We solve the parameter uncertainty problem using a model-based method. • We acquire the parameter constraint sets by verifying linear hybrid automata models. • Online verification algorithms are designed to monitor the high-speed trains. • We analyze the correlation of key parameters and uncritical parameters. • The transient probabilities are obtained by using reliability analysis.

  16. Database management system for large container inspection system

    International Nuclear Information System (INIS)

    Gao Wenhuan; Li Zheng; Kang Kejun; Song Binshan; Liu Fang

    1998-01-01

    Large Container Inspection System (LCIS) based on radiation imaging technology is a powerful tool for the Customs to check the contents inside a large container without opening it. The author discusses a database application system, as a part of the Signal and Image System (SIS), for the LCIS. The basic requirements analysis was done first. Then the selections of computer hardware, operating system, and database management system were made according to the prevailing technology and market conditions. Based on these considerations, a database application system with central management and distributed operation features has been implemented.

  17. Detector correction in large container inspection systems

    CERN Document Server

    Kang Ke Jun; Chen Zhi Qiang

    2002-01-01

    In large container inspection systems, the image is constructed by parallel scanning with a one-dimensional detector array, with a linac used as the X-ray source. The linear nonuniformity and nonlinearity of the multiple detectors and the nonuniform intensity distribution of the X-ray sector beam result in horizontal striations in the scan image. This greatly impairs the image quality, so the image needs to be corrected. The correction parameters are determined experimentally by scaling the detector responses at multiple points, with logarithmic interpolation of the results. The horizontal striations are eliminated by modifying the original image data with the correction parameters. This method has proven to be effective and applicable in large container inspection systems.
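
    As a rough illustration of the correction scheme, calibration at multiple points followed by logarithmic interpolation, here is a minimal per-detector corrector. The calibration numbers and the log-log interpolation form are assumptions for the demo, not the paper's exact procedure.

```python
# Sketch of per-detector striation correction via logarithmic interpolation.
# Calibration pairs (measured vs. reference response) are invented for the demo.
import math
from bisect import bisect_left

def make_corrector(measured, reference):
    """Return f(raw) -> corrected, interpolating linearly in log-log space."""
    lm = [math.log(m) for m in measured]    # log of this detector's responses
    lr = [math.log(r) for r in reference]   # log of the ideal responses
    def correct(x):
        lx = math.log(x)
        # clamp to the nearest calibration segment
        i = min(max(bisect_left(lm, lx), 1), len(lm) - 1)
        t = (lx - lm[i - 1]) / (lm[i] - lm[i - 1])
        return math.exp(lr[i - 1] + t * (lr[i] - lr[i - 1]))
    return correct

# A detector that reads 10% high at every calibration point:
corr = make_corrector(measured=[110.0, 550.0, 1100.0],
                      reference=[100.0, 500.0, 1000.0])
print(round(corr(220.0)))  # 200
```

In a real system one such mapping would be built for each detector channel, flattening the horizontal striations before image reconstruction.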

  18. [Large vessels vasculopathy in systemic sclerosis].

    Science.gov (United States)

    Tejera Segura, Beatriz; Ferraz-Amaro, Iván

    2015-12-07

    Vasculopathy in systemic sclerosis is a severe, in many cases irreversible, manifestation that can lead to amputation. While the classical clinical manifestations of the disease have to do with the involvement of microcirculation, proximal vessels of upper and lower limbs can also be affected. This involvement of large vessels may be related to systemic sclerosis, vasculitis or atherosclerotic, and the differential diagnosis is not easy. To conduct a proper and early diagnosis, it is essential to start prompt appropriate treatment. In this review, we examine the involvement of large vessels in scleroderma, an understudied manifestation with important prognostic and therapeutic implications. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.

  19. Energy cascading in large district heating systems

    International Nuclear Information System (INIS)

    Mayer, F.W.

    1978-01-01

    District heat transfer is the most economical utilization of the waste heat of power plants. Optimum utilization and heat transfer over large distances are possible because of a new energy distribution system, the "energy cascading system," in which heat is transferred to several consumer regions at different temperature ranges. It is made more profitable by the use of heat pumps. The optimum flow-line temperature is 368 K, and the optimum return-line temperature is 288 K, resulting in an approximately 50% reduction of electric power loss at the power plant.

  20. Large Efficient Intelligent Heating Relay Station System

    Science.gov (United States)

    Wu, C. Z.; Wei, X. G.; Wu, M. Q.

    2017-12-01

    The design of a large efficient intelligent heating relay station system aims to improve the existing heating systems in our country, which suffer from low heating efficiency, energy waste, serious pollution, and dependence on manual control. In this design, we first improve the existing plate heat exchanger. Secondly, an ATM89C51 microcontroller is used to control the whole system and realize intelligent control. The detection part uses a PT100 temperature sensor, a pressure sensor, and a turbine flowmeter to measure the heating temperature, user-end liquid flow, and hydraulic pressure, with real-time feedback of the signals to the microcontroller, which adjusts the heating for users, making the whole system more efficient, intelligent, and energy-saving.

  1. Large Coil Program magnetic system design study

    International Nuclear Information System (INIS)

    Moses, S.D.; Johnson, N.E.

    1977-01-01

    The primary objective of the Large Coil Program (LCP) is to demonstrate the reliable operation of large superconducting coils to provide a basis for the design principles, materials, and fabrication techniques proposed for the toroidal magnets for THE NEXT STEP (TNS) and other future tokamak devices. This paper documents a design study of the Large Coil Test Facility (LCTF) in which the structural response of the Toroidal Field (TF) Coils and the supporting structure was evaluated under simulated reactor conditions. The LCP test facility structural system consists of six TF Coils, twelve coil-to-coil torsional restraining beams (torque rings), a central bucking post with base, and a Pulse Coil system. The NASTRAN Finite Element Structural Analysis computer code was utilized to determine the distribution of deflections, forces, and stresses for each of the TF Coils, torque rings, and the central bucking post. Eleven load conditions were selected to represent probable test operations. Pulse Coils suspended in the bore of the test coil were energized to simulate the pulsed field environment characteristic of the TNS reactor system. The TORMAC computer code was utilized to compute the magnetic forces in the TF Coils for each of the eleven loading conditions examined, with or without the Pulse Coils energized. The TORMAC output forces were used directly as input load conditions for the NASTRAN analyses. Results are presented which demonstrate the reliability of the LCTF under simulated reactor operating conditions.

  2. Status: Large-scale subatmospheric cryogenic systems

    International Nuclear Information System (INIS)

    Peterson, T.

    1989-01-01

    In the late 1960's and early 1970's an interest in testing and operating RF cavities at 1.8K motivated the development and construction of four large (300 Watt) 1.8K refrigeration systems. In the past decade, development of successful superconducting RF cavities and interest in obtaining higher magnetic fields with the improved Niobium-Titanium superconductors has once again created interest in large-scale 1.8K refrigeration systems. The L'Air Liquide plant for Tore Supra is a recently commissioned 300 Watt 1.8K system which incorporates new technology, cold compressors, to obtain the low vapor pressure for low temperature cooling. CEBAF proposes to use cold compressors to obtain 5 kW at 2.0 K. Magnetic refrigerators of 10 Watt capacity or higher at 1.8K are now being developed. The state of the art of large-scale refrigeration in the range under 4K will be reviewed. 28 refs., 4 figs., 7 tabs

  3. Results of numerically solving an integral equation for a two-fermion system

    International Nuclear Information System (INIS)

    Skachkov, N.B.; Solov'eva, T.M.

    2003-01-01

    A two-particle system is described by integral equations whose kernels are dependent on the total energy of the system. Such equations can be reduced to an eigenvalue problem featuring an eigenvalue-dependent operator. This nonlinear eigenvalue problem is solved by means of an iterative procedure developed by the present authors. The energy spectra of a two-fermion system formed by particles of identical masses are obtained for two cases, that where the total spin of the system is equal to zero and that where the total spin of the system is equal to unity. The splitting of the ground-state levels of positronium and dimuonium, the frequency of the transition from the ground state of orthopositronium to its first excited state, and the probabilities of parapositronium and paradimuonium decays are computed. The results obtained in this way are found to be in good agreement with experimental data
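
    The iterative idea can be shown in miniature: because the operator depends on the energy, each step solves a linear eigenvalue problem at the current energy and feeds the resulting eigenvalue back until it reaches a fixed point. The 2x2 symmetric "kernel" below is an invented stand-in, not the authors' integral-equation kernel.

```python
# Toy fixed-point iteration for an energy-dependent eigenvalue problem
# A(E) v = lambda v: solve at the current E, feed the eigenvalue back, repeat.

def lowest_eig_sym2(a, b, d):
    """Smaller eigenvalue of the symmetric matrix [[a, b], [b, d]]."""
    tr, det = a + d, a * d - b * b
    return (tr - (tr * tr - 4.0 * det) ** 0.5) / 2.0

def solve_energy_dependent(build, e0=0.0, tol=1e-12, max_iter=200):
    e = e0
    for _ in range(max_iter):
        a, b, d = build(e)                 # operator evaluated at current energy
        e_next = lowest_eig_sym2(a, b, d)  # linear eigenvalue problem at fixed E
        if abs(e_next - e) < tol:
            return e_next
        e = e_next
    raise RuntimeError("iteration did not converge")

# Hypothetical kernel whose diagonal depends weakly on the energy:
E = solve_energy_dependent(lambda e: (1.0 + 0.1 * e, 0.2, 2.0 + 0.1 * e))
print(round(E, 4))  # 1.0683
```

Because the energy dependence here is a mild contraction, the iteration converges quickly; the paper's procedure addresses the same nonlinearity for the full integral-equation kernels.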

  4. Algorithms to solve coupled systems of differential equations in terms of power series

    International Nuclear Information System (INIS)

    Ablinger, Jakob; Schneider, Carsten

    2016-08-01

    Using integration by parts relations, Feynman integrals can be represented in terms of coupled systems of differential equations. In the following we suppose that the unknown Feynman integrals can be given in power series representations, and that sufficiently many initial values of the integrals are given. Then there exist algorithms that decide constructively if the coefficients of their power series representations can be given within the class of nested sums over hypergeometric products. In this article we work out the calculation steps that solve this problem. First, we present a successful tactic that has been applied recently to challenging problems coming from massive 3-loop Feynman integrals. Here our main tool is to solve scalar linear recurrences within the class of nested sums over hypergeometric products. Second, we will present a new variation of this tactic which relies on more involved summation technologies but succeeds in reducing the problem to solve scalar recurrences with lower recurrence orders. The article works out the different challenges of this new tactic and demonstrates how they can be treated efficiently with our existing summation technologies.
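
    The power-series approach can be seen in miniature on a single linear ODE (this toy recurrence is purely illustrative; the article's algorithms treat coupled systems and decide representability within nested sums over hypergeometric products): inserting y = sum a_n x^n into y' = y and matching coefficients yields a first-order recurrence for the a_n.

```python
# Power-series coefficients from an ODE recurrence: for y' = y with y(0) = 1,
# matching the coefficient of x^n gives (n+1) * a_{n+1} = a_n, so a_n = 1/n!.
from math import factorial

def series_coeffs_exp(n_terms):
    a = [1.0]                      # a_0 = y(0) = 1
    for n in range(n_terms - 1):
        a.append(a[n] / (n + 1))   # (n+1) a_{n+1} = a_n
    return a

coeffs = series_coeffs_exp(6)
print([round(c, 4) for c in coeffs])  # [1.0, 1.0, 0.5, 0.1667, 0.0417, 0.0083]
# sanity check against the closed form a_n = 1/n!
assert all(abs(c - 1.0 / factorial(n)) < 1e-12 for n, c in enumerate(coeffs))
```

For coupled systems the same move produces coupled recurrences in the coefficients, which is where the scalar-recurrence solving described in the abstract takes over.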

  5. Mathematical Modelling to Solve Tasks of Profiled Cross of Robot Systems with a Wheel-Legged Propulsion

    Directory of Open Access Journals (Sweden)

    A. S. Diakov

    2014-01-01

    One of the main trends in the development of promising military equipment is the creation of transport robot systems (TRS). To conduct a theoretical study of the potential mobility properties of TRS, the software package for invariant simulation of multibody system dynamics "Euler" was used, which allows solving problems involving the "large displacements" typical for TRS. Modelling results are obtained for TRS motion dynamics when overcoming single and double steps higher than the roller diameter of the propulsion unit. Analysis of the modelling results for TRS motion dynamics in overcoming obstacles commensurate with its dimensions allows us to conclude that the use of a wheel-legged three-roller propulsion unit can provide the required level of cross-country ability and, as a result, increased TRS mobility.

  6. Curriculum providing cognitive knowledge and problem-solving skills for anesthesia systems-based practice.

    Science.gov (United States)

    Wachtel, Ruth E; Dexter, Franklin

    2010-12-01

    Residency programs accredited by the ACGME are required to teach core competencies, including systems-based practice (SBP). Projects are important for satisfying this competency, but the level of knowledge and problem-solving skills required presupposes a basic understanding of the field. The responsibilities of anesthesiologists include the coordination of patient flow in the surgical suite. Familiarity with this topic is crucial for many improvement projects. A course in operations research for surgical services was originally developed for hospital administration students. It satisfies 2 of the Institute of Medicine's core competencies for health professionals: evidence-based practice and work in interdisciplinary teams. The course lasts 3.5 days (e.g., 2 weekends) and consists of 45 cognitive objectives taught using 7 published articles, 10 lectures, and 156 computer-assisted problem-solving exercises based on 17 case studies. We tested the hypothesis that the cognitive objectives of the curriculum provide the knowledge and problem-solving skills necessary to perform projects that satisfy the SBP competency. Standardized terminology was used to define each component of the SBP competency for the minimum level of knowledge needed. The 8 components of the competency were examined independently. Most cognitive objectives contributed to at least 4 of the 8 core components of the SBP competency. Each component of SBP is addressed at the minimum requirement level of "exemplify" by at least 6 objectives. There is at least 1 cognitive objective at the level of "summarize" for each SBP component. A curriculum in operating room management can provide the knowledge and problem-solving skills anesthesiologists need for participation in projects that satisfy the SBP competency.

  7. Construction of a large laser fusion system

    International Nuclear Information System (INIS)

    Hurley, C.A.

    1977-01-01

    Construction of a large laser fusion machine is nearing completion at the Lawrence Livermore Laboratory (LLL). Shiva, a 20-terawatt neodymium doped glass system, will be complete in early 1978. This system will have the high power needed to demonstrate significant thermonuclear burn. Shiva will irradiate a microscopic D-T pellet with 20 separate laser beams arriving simultaneously at the target. This requires precise alignment, and stability to maintain alignment. Hardware for the 20 laser chains is composed of 140 amplifiers, 100 spatial filters, 80 isolation stages, 40 large turning mirrors, and a front-end splitter system of over 100 parts. These are mounted on a high stability, three dimensional spaceframe which serves as an optical bench. The mechanical design effort, spanning approximately 3 years, followed a classic engineering evolution. The conceptual design phase led directly to system optimization through cost and technical tradeoffs. Additional manpower was then required for detailed design and specification of hardware and fabrication. Design of long-lead items was started early in order to initiate fabrication and assembly while the rest of the design was completed. All components were ready for assembly and construction as fiscal priorities and schedules permitted

  8. Relationship between Systems Coaching and Problem-Solving Implementation Fidelity in a Response-to-Intervention Model

    Science.gov (United States)

    March, Amanda L.; Castillo, Jose M.; Batsche, George M.; Kincaid, Donald

    2016-01-01

    The literature on RTI has indicated that professional development and coaching are critical to facilitating problem-solving implementation with fidelity. This study examined the extent to which systems coaching related to the fidelity of problem-solving implementation in 31 schools from six districts. Schools participated in three years of a…

  9. A heuristic for solving the redundancy allocation problem for multi-state series-parallel systems

    International Nuclear Information System (INIS)

    Ramirez-Marquez, Jose E.; Coit, David W.

    2004-01-01

    The redundancy allocation problem is formulated with the objective of minimizing design cost, when the system exhibits a multi-state reliability behavior, given system-level performance constraints. When the multi-state nature of the system is considered, traditional solution methodologies are no longer valid. This study considers a multi-state series-parallel system (MSPS) with capacitated binary components that can provide different multi-state system performance levels. The different demand levels, which must be supplied during the system-operating period, result in the multi-state nature of the system. The new solution methodology offers several distinct benefits compared to traditional formulations of the MSPS redundancy allocation problem. For some systems, recognizing that different component versions yield different system performance is critical so that the overall system reliability estimation and associated design models the true system reliability behavior more realistically. The MSPS design problem, solved in this study, has been previously analyzed using genetic algorithms (GAs) and the universal generating function. The specific problem being addressed is one where there are multiple component choices, but once a component selection is made, only the same component type can be used to provide redundancy. This is the first time that the MSPS design problem has been addressed without using GAs. The heuristic offers more efficient and straightforward analyses. Solutions to three different problem types are obtained illustrating the simplicity and ease of application of the heuristic without compromising the intended optimization needs
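
    The universal generating function mentioned in the abstract can be illustrated for a single parallel group of capacitated binary components. The reliability and capacity numbers below are invented; a real MSPS model composes such distributions across the series stages and minimizes cost over the component counts.

```python
# UGF sketch: a capacitated binary component works at capacity c with
# probability r, else contributes 0. A parallel group's capacity distribution
# is built by enumerating all up/down combinations (fine for small groups).
from itertools import product

def parallel_capacity_dist(components):
    """components: [(reliability, capacity), ...] -> {capacity: probability}"""
    dist = {}
    for states in product((0, 1), repeat=len(components)):
        p, cap = 1.0, 0.0
        for up, (r, c) in zip(states, components):
            p *= r if up else 1.0 - r
            cap += c if up else 0.0
        dist[cap] = dist.get(cap, 0.0) + p
    return dist

def prob_meets_demand(dist, demand):
    """Probability that the group's capacity covers the demand level."""
    return sum(p for cap, p in dist.items() if cap >= demand)

# Two identical 0.9-reliable, 50-unit components; demand level of 50 units:
dist = parallel_capacity_dist([(0.9, 50.0), (0.9, 50.0)])
print(round(prob_meets_demand(dist, 50.0), 6))  # 0.99
```

A heuristic like the one in the abstract would evaluate such demand-coverage probabilities while searching over how many copies of which component version to place in each subsystem.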

  10. A New Method to Solve Numeric Solution of Nonlinear Dynamic System

    Directory of Open Access Journals (Sweden)

    Min Hu

    2016-01-01

    It is well known that the cubic spline function has the advantages of simple form, good convergence, approximation capability, and second-order smoothness. A particular class of cubic spline function is constructed, and an effective method to solve the numerical solution of a nonlinear dynamic system is proposed based on the cubic spline function. Compared with existing methods, this method not only has high approximation precision, but also avoids the Runge phenomenon. Error analysis of several methods is given via two numerical examples, which show that the proposed method is a much more feasible tool for engineering practice.

  11. Pulsed rf systems for large storage rings

    International Nuclear Information System (INIS)

    Wilson, P.B.

    1979-03-01

    The possibility is considered that by using a pulsed rf system a substantial reduction can be made in the rf power requirement for the next generation of large storage rings. For a ring with a sufficiently large circumference, the time between bunch passages, T/sub b/, can exceed the cavity filling time, T/sub f/. As the ratio T/sub b//T/sub f/ increases, it is clear that at some point the average power requirement can be reduced by pulsing the rf to the cavities. In this mode of operation, the rf power is turned on a filling time or so before the arrival of a bunch and is switched off again at the time of bunch passage. There is no rf energy in the accelerating structure, and hence no power dissipation, for most of the period between bunches
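
    The claimed saving follows from a simple duty-factor argument. The formula below is a back-of-envelope reading of the abstract, not the paper's detailed cavity model: if the rf is on for roughly one filling time T_f per bunch period T_b, the average structure dissipation scales as T_f/T_b.

```python
# Crude duty-factor model of pulsed rf operation: power is dissipated only
# while the cavity is filled, roughly T_f out of every T_b seconds.

def pulsed_power_fraction(t_fill, t_bunch):
    """Average dissipated power, as a fraction of continuous-wave operation."""
    if t_bunch <= t_fill:
        return 1.0  # bunches arrive before the cavity empties: no saving
    return t_fill / t_bunch

# Filling time one tenth of the bunch spacing -> roughly 90% average power saving:
print(pulsed_power_fraction(t_fill=1.0, t_bunch=10.0))  # 0.1
```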

  12. Optical interconnect for large-scale systems

    Science.gov (United States)

    Dress, William

    2013-02-01

    This paper presents a switchless, optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.

  13. Scalable Molecular Dynamics for Large Biomolecular Systems

    Directory of Open Access Journals (Sweden)

    Robert K. Brunner

    2000-01-01

    Full Text Available We present an optimized parallelization scheme for molecular dynamics simulations of large biomolecular systems, implemented in the production-quality molecular dynamics program NAMD. With an object-based hybrid force and spatial decomposition scheme, and an aggressive measurement-based predictive load balancing framework, we have attained speeds and speedups that are much higher than any reported in literature so far. The paper first summarizes the broad methodology we are pursuing, and the basic parallelization scheme we used. It then describes the optimizations that were instrumental in increasing performance, and presents performance results on benchmark simulations.

  14. A frequency-domain method for solving linear time delay systems with constant coefficients

    Science.gov (United States)

    Jin, Mengshi; Chen, Wei; Song, Hanwen; Xu, Jian

    2018-03-01

    In an active control system, time delay will occur due to processes such as signal acquisition and transmission, calculation, and actuation. Time delay systems are usually described by delay differential equations (DDEs). Since it is hard to obtain an analytical solution to a DDE, a numerical solution is necessary. This paper presents a frequency-domain method that uses a truncated transfer function to solve a class of DDEs. The theoretical transfer function is a sum of infinitely many terms expressed in terms of poles and residues. The basic idea is to select the dominant poles and residues to truncate the transfer function, thus ensuring the validity of the solution while improving the efficiency of calculation. Meanwhile, a guideline for selecting these poles and residues is provided. Numerical simulations of both stable and unstable delayed systems are given to verify the proposed method, and the results are presented and analysed in detail.
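
    The truncation idea can be sketched numerically. The pole/residue pairs below are invented, and the selection rule (keep the poles with the largest real part, i.e. the slowest decay) is only one plausible reading of "dominant"; the paper gives its own guideline.

```python
# Dominant-pole truncation sketch: the response is a series sum_k r_k*exp(p_k*t);
# keeping only the slowest-decaying poles approximates the full sum.
import cmath

def truncated_response(series, t, keep):
    """Real part of sum r*exp(p*t) over the `keep` most dominant poles."""
    dominant = sorted(series, key=lambda pr: pr[0].real, reverse=True)[:keep]
    return sum((r * cmath.exp(p * t)).real for p, r in dominant)

series = [(-0.1 + 3j, 1.0 + 0j), (-0.1 - 3j, 1.0 - 0j),      # dominant pair
          (-2.0 + 10j, 0.05 + 0j), (-2.0 - 10j, 0.05 - 0j)]  # fast-decaying pair
full = truncated_response(series, 1.0, keep=len(series))
approx = truncated_response(series, 1.0, keep=2)
print(abs(full - approx) < 0.02)  # True
```

At t = 1 the fast pair has already decayed by e^-2, so two of the four terms reproduce the response to within a couple of percent, which is the economy the truncation buys.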

  15. Solving Disparities Through Payment And Delivery System Reform: A Program To Achieve Health Equity.

    Science.gov (United States)

    DeMeester, Rachel H; Xu, Lucy J; Nocon, Robert S; Cook, Scott C; Ducas, Andrea M; Chin, Marshall H

    2017-06-01

    Payment systems generally do not directly encourage or support the reduction of health disparities. In 2013 the Finding Answers: Solving Disparities through Payment and Delivery System Reform program of the Robert Wood Johnson Foundation sought to understand how alternative payment models might intentionally incorporate a disparities-reduction component to promote health equity. A qualitative analysis of forty proposals to the program revealed that applicants generally did not link payment reform tightly to disparities reduction. Most proposed general pay-for-performance, global payment, or shared savings plans, combined with multicomponent system interventions. None of the applicants proposed making any financial payments contingent on having successfully reduced disparities. Most applicants did not address how they would optimize providers' intrinsic and extrinsic motivation to reduce disparities. A better understanding of how payment and care delivery models might be designed and implemented to reduce health disparities is essential. Project HOPE—The People-to-People Health Foundation, Inc.

  16. Modular pump limiter systems for large tokamaks

    International Nuclear Information System (INIS)

    Uckan, T.; Klepper, C.C.; Mioduszewski, P.K.; McGrath, R.T.

    1987-09-01

    Long-pulse (>10-s) operation of large tokamaks with high-power (>10-MW) heating and extensive external fueling will require correspondingly efficient particle exhaust for density control. A pump limiter can provide the needed exhaust capability by removing a small percentage of the particles, which would otherwise be recycled. Single pump limiter modules have been operated successfully on ISX-B, PDX, TEXTOR, and PLT. An axisymmetric pump limiter is now being installed and will be studied in TEXTOR. A third type of pump limiter is a system that consists of several modules and exhibits performance different from that of a single module. To take advantage of the flexibility of a modular pump limiter system in a high-power, long-pulse device, the power load must be distributed among a number of modules. Because each added module changes the performance of all the others, a set of design criteria must be defined for the overall limiter system. The design parameters for the modules are then determined from the system requirements for particle and power removal. Design criteria and parameters are presented, and the impact on module design of the state of the art in engineering technology is discussed. The relationships between modules are considered from the standpoint of flux coverage and shadowing effects. The results are applied to the Tore Supra tokamak. A preliminary conceptual design for the Tore Supra pump limiter system is discussed, and the design parameters of the limiter modules are presented. 21 refs., 12 figs

  17. Artificial Immune Systems as a Modern Tool for Solving Multi-Purpose Optimization Tasks in the Field of Logistics

    Directory of Open Access Journals (Sweden)

    Skitsko Volodymyr I.

    2017-03-01

    Full Text Available The article investigates various aspects of the functioning of artificial immune systems and their use in solving different tasks. The analysis of the literature showed that artificial immune systems are now combined with genetic algorithms, the particle swarm optimization method, artificial neural networks, and other techniques to solve a variety of tasks; however, little attention has been paid to solving economic tasks. The article presents the basic terminology of artificial immune systems; describes the steps of the clonal selection algorithm; gives a brief description of the negative selection algorithm, the immune network algorithm, and the dendritic cell algorithm; formulates conceptual aspects of using an artificial immune system for solving multi-purpose optimization problems; and describes an example of solving a problem in the field of logistics. Artificial immune systems are a promising tool for solving various weakly structured, multi-criteria and multi-purpose economic tasks, in particular in the sphere of logistics, and require further research. It is therefore advisable to focus future work on applying the various existing immune algorithms to economic problems.
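
    The clonal selection algorithm whose steps the article describes can be sketched in a few lines. The following is a minimal, illustrative CLONALG-style loop on a toy objective; the population sizes, mutation schedule, and the "logistics" cost function are our assumptions, not taken from the article:

```python
import random

def clonal_selection(fitness, dim, pop_size=20, n_gen=50, seed=1):
    """Minimal CLONALG-style loop (illustrative, not the article's exact
    algorithm): rank antibodies by affinity, clone the best ones, mutate
    clones more strongly the worse their rank, keep the fittest, and
    inject fresh random antibodies for diversity."""
    rng = random.Random(seed)
    rand_ab = lambda: [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    pop = [rand_ab() for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness)                          # lower fitness = better
        clones = []
        for rank, ab in enumerate(pop[:pop_size // 2]):
            for _ in range(max(1, 5 // (rank + 1))):   # more clones for the best
                sigma = 0.1 * (rank + 1)               # hypermutation grows with rank
                clones.append([g + rng.gauss(0.0, sigma) for g in ab])
        survivors = sorted(pop + clones, key=fitness)[:pop_size - 2]
        pop = survivors + [rand_ab(), rand_ab()]       # receptor editing
    return min(pop, key=fitness)

# toy "logistics" objective: squared distance of a 3-parameter plan to its optimum
best = clonal_selection(lambda v: sum(g * g for g in v), dim=3)
```

    Because the best antibody always survives the truncation step, the best fitness is non-increasing across generations.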

  18. Improvement in Generic Problem-Solving Abilities of Students by Use of Tutor-less Problem-Based Learning in a Large Classroom Setting

    Science.gov (United States)

    Klegeris, Andis; Bahniwal, Manpreet; Hurren, Heather

    2013-01-01

    Problem-based learning (PBL) was originally introduced in medical education programs as a form of small-group learning, but its use has now spread to large undergraduate classrooms in various other disciplines. Introduction of new teaching techniques, including PBL-based methods, needs to be justified by demonstrating the benefits of such techniques over classical teaching styles. Previously, we demonstrated that introduction of tutor-less PBL in a large third-year biochemistry undergraduate class increased student satisfaction and attendance. The current study assessed the generic problem-solving abilities of students from the same class at the beginning and end of the term, and compared student scores with similar data obtained in three classes not using PBL. Two generic problem-solving tests of equal difficulty were administered such that students took different tests at the beginning and the end of the term. Blinded marking showed a statistically significant 13% increase in the test scores of the biochemistry students exposed to PBL, while no trend toward significant change in scores was observed in any of the control groups not using PBL. Our study is among the first to demonstrate that use of tutor-less PBL in a large classroom leads to statistically significant improvement in generic problem-solving skills of students. PMID:23463230

  19. Integrated fringe projection 3D scanning system for large-scale metrology based on laser tracker

    Science.gov (United States)

    Du, Hui; Chen, Xiaobo; Zhou, Dan; Guo, Gen; Xi, Juntong

    2017-10-01

    Large-scale components are widespread in the advanced manufacturing industry, and 3D profilometry plays a pivotal role in their quality control. This paper proposes a flexible, robust large-scale 3D scanning system that integrates a robot with a binocular structured-light scanner and a laser tracker. The measurement principle and construction of the integrated system are introduced, and a mathematical model is established for global data fusion. A flexible and robust method is then introduced for establishing the end coordinate system; based on this method, a virtual robot model is constructed for hand-eye calibration, and the transformation matrix between the end coordinate system and the world coordinate system is solved. Validation experiments verify the proposed algorithms: first, the hand-eye transformation matrix is solved; then a car-body rear is measured 16 times to verify the global data fusion algorithm, and its 3D shape is reconstructed successfully.

  20. Solving LFC problem in an interconnected power system using superconducting magnetic energy storage

    Energy Technology Data Exchange (ETDEWEB)

    Farahani, Mohsen, E-mail: mhs.farahani@gmail.com [Sama Technical and Vocational Training College, Islamic Azad University, Karaj Branch, Karaj (Iran, Islamic Republic of); Ganjefar, Soheil [Department of Electrical Engineering, Bu-Ali Sina University, Hamedan (Iran, Islamic Republic of)

    2013-04-15

    Highlights: ► Load frequency control of PID type is combined with an SMES. ► The damping speed of frequency and tie-line power flow deviations is considerably increased. ► Optimal parameters of the PID and SMES control loops are obtained by PS optimization. -- Abstract: This paper proposes combining load frequency control (LFC) with superconducting magnetic energy storage (SMES) to solve the LFC problem in interconnected power systems. This combination considerably increases the damping speed of frequency and tie-line power flow deviations. A new control strategy for the SMES is proposed. Determining the optimal parameters of the PID and the SMES control loop is cast as an optimization problem, and a pattern search (PS) algorithm is employed to solve it. The simulation results show that if an SMES unit is installed in an interconnected power system, it not only eliminates oscillations and deviations but also considerably reduces the settling time of the frequency and tie-line power flow responses.

  1. Solving LFC problem in an interconnected power system using superconducting magnetic energy storage

    International Nuclear Information System (INIS)

    Farahani, Mohsen; Ganjefar, Soheil

    2013-01-01

    Highlights: ► Load frequency control of PID type is combined with an SMES. ► The damping speed of frequency and tie-line power flow deviations is considerably increased. ► Optimal parameters of the PID and SMES control loops are obtained by PS optimization. -- Abstract: This paper proposes combining load frequency control (LFC) with superconducting magnetic energy storage (SMES) to solve the LFC problem in interconnected power systems. This combination considerably increases the damping speed of frequency and tie-line power flow deviations. A new control strategy for the SMES is proposed. Determining the optimal parameters of the PID and the SMES control loop is cast as an optimization problem, and a pattern search (PS) algorithm is employed to solve it. The simulation results show that if an SMES unit is installed in an interconnected power system, it not only eliminates oscillations and deviations but also considerably reduces the settling time of the frequency and tie-line power flow responses.
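
    The pattern search (PS) optimization used to tune the PID and SMES parameters can be illustrated with a generic coordinate pattern search. The quadratic stand-in cost and the target gains below are hypothetical, not the paper's LFC objective:

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=500):
    """Coordinate pattern search: poll +/- step along each axis, accept any
    improving move, and halve the step when a full poll fails."""
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= 0.5
    return x

# hypothetical stand-in for the LFC cost: a quadratic with a known optimum,
# standing in for a simulation-based settling-time/overshoot criterion
cost = lambda g: (g[0] - 2.0) ** 2 + (g[1] - 0.5) ** 2 + (g[2] - 0.1) ** 2
gains = pattern_search(cost, [0.0, 0.0, 0.0])
```

    Pattern search needs no gradients, which is why it suits objectives evaluated by running a power-system simulation.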

  2. Solving problems in social-ecological systems: definition, practice and barriers of transdisciplinary research.

    Science.gov (United States)

    Angelstam, Per; Andersson, Kjell; Annerstedt, Matilda; Axelsson, Robert; Elbakidze, Marine; Garrido, Pablo; Grahn, Patrik; Jönsson, K Ingemar; Pedersen, Simen; Schlyter, Peter; Skärbäck, Erik; Smith, Mike; Stjernquist, Ingrid

    2013-03-01

    Translating policies about sustainable development as a social process and sustainability outcomes into the real world of social-ecological systems involves several challenges. Hence, research policies advocate improved innovative problem-solving capacity. One approach is transdisciplinary research that integrates research disciplines, as well as researchers and practitioners. Drawing upon 14 experiences of problem-solving, we used group modeling to map perceived barriers and bridges for researchers' and practitioners' joint knowledge production and learning towards transdisciplinary research. The analysis indicated that the transdisciplinary research process is influenced by (1) the amount of traditional disciplinary formal and informal control, (2) adaptation of project applications to fill the transdisciplinary research agenda, (3) stakeholder participation, and (4) functional team building/development based on self-reflection and experienced leadership. Focusing on implementation of green infrastructure policy as a common denominator for the delivery of ecosystem services and human well-being, we discuss how to diagnose social-ecological systems, and use knowledge production and collaborative learning as treatments.

  3. Large-scale modelling of neuronal systems

    International Nuclear Information System (INIS)

    Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.

    2009-01-01

    The brain is, without any doubt, the most complex system of the human body. Its complexity is also due to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives for the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is formulated as a differential equation based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters describing the interaction between incoming and outgoing links of each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can support network generation.
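
    The BCM differential-equation model referred to above can be illustrated for a single synapse. The parameter choices below (Euler step size, a fast sliding threshold with tau_theta < 1) are ours, made so the toy sketch converges; they are not taken from the paper:

```python
def bcm_step(w, theta, x, dt=0.01, tau_theta=0.5):
    """One Euler step of the scalar BCM plasticity rule
         dw/dt     = y * (y - theta) * x,   with postsynaptic activity y = w * x
         dtheta/dt = (y**2 - theta) / tau_theta   (sliding modification threshold)
    tau_theta < 1 is our choice: with a fast threshold the selective fixed
    point y = theta = 1 is stable in this single-synapse setting."""
    y = w * x
    w_new = w + dt * y * (y - theta) * x
    theta_new = theta + dt * (y * y - theta) / tau_theta
    return w_new, theta_new

# a single synapse driven by constant unit input spirals into (w, theta) = (1, 1)
w, theta = 0.5, 0.1
for _ in range(2000):
    w, theta = bcm_step(w, theta, x=1.0)
```

    At the fixed point, theta equals y squared and y equals theta, so the activity settles at y = 1, illustrating the bistability (y = 0 or y = 1) mentioned in the abstract.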

  4. Recent symbolic summation methods to solve coupled systems of differential and difference equations

    International Nuclear Information System (INIS)

    Schneider, Carsten; Bluemlein, Johannes; Freitas, Abilio de

    2014-07-01

    We outline a new algorithm to solve coupled systems of differential equations in one continuous variable x (resp. coupled difference equations in one discrete variable N) depending on a small parameter ε: given such a system and given sufficiently many initial values, we can determine the first coefficients of the Laurent-series solutions in ε if they are expressible in terms of indefinite nested sums and products. This systematic approach is based on symbolic summation algorithms in the context of difference rings/fields and uncoupling algorithms. The proposed method gives rise to new interesting applications in connection with integration by parts (IBP) methods. As an illustrative example, we will demonstrate how one can calculate the ε-expansion of a ladder graph with 6 massive fermion lines.

  5. The potential for expert system support in solving the pressurized water reactor fuel shuffling problem

    International Nuclear Information System (INIS)

    Rothleder, B.M.; Poetschat, G.R.; Faught, W.S.; Eich, V.J.

    1988-01-01

    The fuel shuffling problem is posed by the need to reposition partially burned assemblies to achieve minimum X-Y pin power peaks in reload cycles of pressurized water reactors. This problem is a classic artificial intelligence (AI) problem and is highly suitable for AI expert system solution assistance, in contrast to the conventional solution, which ultimately depends solely on trial and error. Such a fuel shuffling assistant would significantly reduce engineering and computer execution time for conventional loading patterns and, much more importantly, for low-leakage loading patterns. A successful hardware/software demonstrator has been introduced, paving the way for development of a broader expert system program. Such a program, upon incorporating the recently developed technique of perverse depletion, would provide a directed path for solving the low-leakage problem.

  6. Recent symbolic summation methods to solve coupled systems of differential and difference equations

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation (RISC); Bluemlein, Johannes; Freitas, Abilio de [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2014-07-15

    We outline a new algorithm to solve coupled systems of differential equations in one continuous variable x (resp. coupled difference equations in one discrete variable N) depending on a small parameter ε: given such a system and given sufficiently many initial values, we can determine the first coefficients of the Laurent-series solutions in ε if they are expressible in terms of indefinite nested sums and products. This systematic approach is based on symbolic summation algorithms in the context of difference rings/fields and uncoupling algorithms. The proposed method gives rise to new interesting applications in connection with integration by parts (IBP) methods. As an illustrative example, we will demonstrate how one can calculate the ε-expansion of a ladder graph with 6 massive fermion lines.

  7. Problem Solving of Newton's Second Law through a System of Total Mass Motion

    Science.gov (United States)

    Abdullah, Helmi

    2014-01-01

    Nowadays, many researchers discovered various effective strategies in teaching physics, from traditional to modern strategy. However, research on physics problem solving is still inadequate. Physics problem is an integral part of physics learning and requires strategy to solve it. Besides that, problem solving is the best way to convey principle,…

  8. Pragmatic evaluation of the Toyota Production System (TPS analysis procedure for problem solving with entry-level nurses

    Directory of Open Access Journals (Sweden)

    Lukasz Maciej Mazur

    2008-12-01

    Full Text Available Medication errors occurring in hospitals are a growing national concern. These medication errors and their related costs (or wastes are seen as major factors leading to increased patient safety risks and increased waste in the hospital setting.  This article presents a study in which sixteen entry-level nurses utilized a Toyota Production System (TPS analysis procedure to solve medication delivery problems at one community hospital. The objective of this research was to study and evaluate the TPS analysis procedure for problem solving with entry-level nurses. Personal journals, focus group discussions, and a survey study were used to collect data about entry-level nurses’ perceptions of using the TPS problem solving approach to study medication delivery. A regression analysis was used to identify characteristics that enhance problem solving efforts. In addition, propositions for effective problem solving by entry-level nurses to aid in the reduction of medication errors in healthcare delivery settings are offered.

  9. Sequential decoders for large MIMO systems

    KAUST Repository

    Ali, Konpal S.

    2014-05-01

    Due to their ability to provide high data rates, multiple-input multiple-output (MIMO) systems have become increasingly popular. Decoding these systems with acceptable error performance is computationally very demanding. In this paper, we employ the Sequential Decoder using the Fano Algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs: low values of the bias result in excellent performance at the expense of high complexity, and vice versa for higher bias values. Numerical results show that moderate bias values yield a good performance-complexity trade-off. We also attempt to bound the error by bounding the bias, using the minimum distance of a lattice. The variation of complexity with SNR follows an interesting trend that shows room for considerable improvement. Our work is compared against linear decoders (LDs) aided with Element-based Lattice Reduction (ELR) and Complex Lenstra-Lenstra-Lovasz (CLLL) reduction. © 2014 IFIP.

  10. New Ideas on the Design of the Web-Based Learning System Oriented to Problem Solving from the Perspective of Question Chain and Learning Community

    Science.gov (United States)

    Zhang, Yin; Chu, Samuel K. W.

    2016-01-01

    In recent years, a number of models concerning problem solving systems have been put forward. However, many of them stress on technology and neglect the research of problem solving itself, especially the learning mechanism related to problem solving. In this paper, we analyze the learning mechanism of problem solving, and propose that when…

  11. Empirical Mining of Large Data Sets Already Helps to Solve Practical Ecological Problems; A Panoply of Working Examples (Invited)

    Science.gov (United States)

    Hargrove, W. W.; Hoffman, F. M.; Kumar, J.; Spruce, J.; Norman, S. P.

    2013-12-01

    Here we present diverse examples where empirical mining and statistical analysis of large data sets have already been shown to be useful for a wide variety of practical decision-making problems within the realm of large-scale ecology. Because a full understanding and appreciation of particular ecological phenomena are possible only after hypothesis-directed research regarding the existence and nature of that process, some ecologists may feel that purely empirical data harvesting may represent a less-than-satisfactory approach. Restricting ourselves exclusively to process-driven approaches, however, may actually slow progress, particularly for more complex or subtle ecological processes. We may not be able to afford the delays caused by such directed approaches. Rather than attempting to formulate and ask every relevant question correctly, empirical methods allow trends, relationships and associations to emerge freely from the data themselves, unencumbered by a priori theories, ideas and prejudices that have been imposed upon them. Although they cannot directly demonstrate causality, empirical methods can be extremely efficient at uncovering strong correlations with intermediate "linking" variables. In practice, these correlative structures and linking variables, once identified, may provide sufficient predictive power to be useful themselves. Such correlation "shadows" of causation can be harnessed by, e.g., Bayesian Belief Nets, which bias ecological management decisions, made with incomplete information, toward favorable outcomes. Empirical data-harvesting also generates a myriad of testable hypotheses regarding processes, some of which may even be correct. Quantitative statistical regionalizations based on quantitative multivariate similarity have lent insights into carbon eddy-flux direction and magnitude, wildfire biophysical conditions, phenological ecoregions useful for vegetation type mapping and monitoring, forest disease risk maps (e.g., sudden oak

  12. Large-scale digitizer system, analog converters

    International Nuclear Information System (INIS)

    Althaus, R.F.; Lee, K.L.; Kirsten, F.A.; Wagner, L.J.

    1976-10-01

    Analog to digital converter circuits that are based on the sharing of common resources, including those which are critical to the linearity and stability of the individual channels, are described. Simplicity of circuit composition is valued over other more costly approaches. These are intended to be applied in a large-scale processing and digitizing system for use with high-energy physics detectors such as drift-chambers or phototube-scintillator arrays. Signal distribution techniques are of paramount importance in maintaining adequate signal-to-noise ratio. Noise in both amplitude and time-jitter senses is held sufficiently low so that conversions with 10-bit charge resolution and 12-bit time resolution are achieved

  13. A nearly orthogonal 2D grid system in solving the shallow water equations in the head bay of Bengal

    International Nuclear Information System (INIS)

    Roy, G.D. (e-mail: roy_gd@hotmail.com); Hussain, Farzana (e-mail: farzana@sust.edu)

    2001-11-01

    A typical nearly orthogonal grid system is considered to solve the shallow water equations along the head bay of Bengal. A pencil of straight lines at uniform angular distance through a suitable origin O at the mean sea level (MSL) is considered as one system of grid lines. A system of concentric, uniformly distributed ellipses with center at O is considered as the other system of grid lines. In order to solve the shallow water equations numerically, a system of transformations is applied so that the grid system in the transformed domain becomes rectangular. The shallow water equations are solved using appropriate initial and boundary conditions to estimate the water level due to tide and surge. The typical grid system is found to be suitable for incorporating the bending of the coastline and the island boundaries accurately in the numerical scheme along the coast of Bangladesh. (author)
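
    The ray-and-ellipse grid described above maps to a rectangular computational domain: the ellipse index and the ray angle become the two rectangular coordinates. A minimal sketch of the forward and inverse transformation, with illustrative semi-axis ratios (the constants A_AX and B_AX are assumptions, not the paper's values):

```python
import math

# illustrative ellipse semi-axis ratios; the paper's actual grid parameters differ
A_AX, B_AX = 2.0, 1.0

def to_physical(s, t):
    """Grid point on ellipse index s along the ray at angle t:
       x = a*s*cos(t), y = b*s*sin(t)."""
    return A_AX * s * math.cos(t), B_AX * s * math.sin(t)

def to_computational(x, y):
    """Inverse map: recover the rectangular computational coordinates (s, t)
    in which the transformed shallow water equations are discretized."""
    s = math.hypot(x / A_AX, y / B_AX)
    t = math.atan2(y / B_AX, x / A_AX)
    return s, t

# round trip: physical <-> computational
xp, yp = to_physical(1.5, 0.7)
s, t = to_computational(xp, yp)
```

    Lines of constant s are the ellipses and lines of constant t are the rays, so the (s, t) domain is a rectangle on which standard finite-difference stencils apply.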

  14. Iterative solution of large sparse systems of equations

    CERN Document Server

    Hackbusch, Wolfgang

    2016-01-01

    In the second edition of this classic monograph, complete with four new chapters and updated references, readers will now have access to content describing and analysing classical and modern methods with emphasis on the algebraic structure of linear iteration, which is usually ignored in other literature. The necessary amount of work increases dramatically with the size of systems, so one has to search for algorithms that most efficiently and accurately solve systems of, e.g., several million equations. The choice of algorithms depends on the special properties the matrices in practice have. An important class of large systems arises from the discretization of partial differential equations. In this case, the matrices are sparse (i.e., they contain mostly zeroes) and well-suited to iterative algorithms. The first edition of this book grew out of a series of lectures given by the author at the Christian-Albrecht University of Kiel to students of mathematics. The second edition includes quite novel approaches.
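
    As a minimal example of the linear iterations the book analyses, here is the classical Jacobi method applied to a small diagonally dominant (Poisson-like) system; the matrix is illustrative only:

```python
def jacobi(A, b, iters=100):
    """Jacobi iteration x_{k+1} = D^{-1} (b - (L + U) x_k), where A = D + L + U
    splits A into its diagonal and strict triangles; converges whenever the
    spectral radius of D^{-1}(L + U) is below 1 (e.g. strict diagonal dominance)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# small diagonally dominant tridiagonal system; the exact solution is (1, 2, 3)
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
x = jacobi(A, b)
```

    For sparse matrices each sweep touches only the nonzero entries, which is why such iterations scale to the discretized partial differential equations mentioned in the abstract.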

  15. Solving Classification Problems for Large Sets of Protein Sequences with the Example of Hox and ParaHox Proteins

    Directory of Open Access Journals (Sweden)

    Stefanie D. Hueber

    2016-02-01

    Full Text Available Phylogenetic methods are key to providing models for how a given protein family evolved. However, these methods run into difficulties when sequence divergence is either too low or too high. Here, we provide a case study of Hox and ParaHox proteins so that additional insights can be gained using a new computational approach to help solve old classification problems. For two (Gsx and Cdx) out of three ParaHox proteins, the assignments differ between the currently most established view and four alternative scenarios. We use a non-phylogenetic, pairwise-sequence-similarity-based method to assess which of the previous predictions, if any, are best supported by the sequence-similarity relationships between Hox and ParaHox proteins. The overall sequence similarities show Gsx to be most similar to Hox2–3, and Cdx to be most similar to Hox4–8. The results indicate that a purely pairwise-sequence-similarity-based approach can provide additional information not only when phylogenetic inference methods have insufficient information to provide reliable classifications (as was shown previously for central Hox proteins), but also when the sequence variation is so high that the resulting phylogenetic reconstructions are likely plagued by long-branch-attraction artifacts.
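
    A pairwise-similarity classification of the kind described can be sketched with a naive identity score; a real analysis would use alignment-based similarity, and the toy sequences below are invented, not actual Hox/ParaHox data:

```python
def identity(a, b):
    """Fraction of identical positions: a crude stand-in for a proper
    alignment-based pairwise similarity score."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def classify(query, references):
    """Assign the query to the reference whose sequence it is most similar
    to: nearest neighbour by overall pairwise similarity, no tree inference."""
    return max(references, key=lambda name: identity(query, references[name]))

# invented toy sequences standing in for protein classes
refs = {"Hox2-like": "MSSYFVNS", "Cdx-like": "MYVGYLLD"}
label = classify("MSSYFANS", refs)
```

    Since no tree is built, the approach sidesteps long-branch attraction entirely; the trade-off is that it yields similarity relationships rather than an explicit evolutionary history.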

  16. Large-scale stochasticity in Hamiltonian systems

    International Nuclear Information System (INIS)

    Escande, D.F.

    1982-01-01

    Large scale stochasticity (L.S.S.) in Hamiltonian systems is defined on the paradigm Hamiltonian H(v,x,t) = v^2/2 - M cos x - P cos k(x-t), which describes the motion of one particle in two electrostatic waves. A renormalization transformation Tsub(r) is described which acts as a microscope that focuses on a given KAM (Kolmogorov-Arnold-Moser) torus in phase space. Though approximate, Tsub(r) yields the threshold of L.S.S. in H with an error of 5-10%. The universal behaviour of KAM tori is predicted: for instance, the scale invariance of KAM tori and the critical exponent of the Lyapunov exponent of Cantori. The Fourier expansion of KAM tori is computed and several conjectures by L. Kadanoff and S. Shenker are proved. Chirikov's standard mapping for stochastic layers is derived in a simpler way and the width of the layers is computed. A simpler renormalization scheme for these layers is defined. A Mathieu equation describing the stability of a discrete family of cycles is derived. When combined with Tsub(r), it allows one to prove the link between KAM tori and nearby cycles, conjectured by J. Greene, and, in particular, to compute the mean residue of a torus. The fractal diagrams defined by G. Schmidt are computed. A sketch of a methodology for computing the L.S.S. threshold in any two-degree-of-freedom Hamiltonian system is given. (Auth.)
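
    Chirikov's standard mapping, mentioned above as the discrete model for the stochastic layers, is easy to iterate directly:

```python
import math

def standard_map(p, x, K, n):
    """Iterate the Chirikov standard map n times:
         p' = p + K * sin(x)
         x' = (x + p') mod 2*pi
    the discrete analogue of motion in the two-wave Hamiltonian
    H = v^2/2 - M cos x - P cos k(x - t) discussed in the abstract."""
    for _ in range(n):
        p += K * math.sin(x)
        x = (x + p) % (2.0 * math.pi)
    return p, x

# below the critical parameter K ~ 0.9716, surviving KAM tori bound the momentum
p, x = standard_map(0.0, 1.0, 0.5, 1000)
```

    Below the critical K the momentum excursion of an orbit stays confined between surviving KAM tori; above it, large-scale stochasticity permits unbounded diffusion in p.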

  17. A coordination model for ultra-large scale systems of systems

    Directory of Open Access Journals (Sweden)

    Manuela L. Bujorianu

    2013-11-01

    Full Text Available Ultra-large multi-agent systems are becoming increasingly popular due to the rapid decline of individual production costs and their potential to speed up the solving of complex problems. Examples include nano-robots, systems of nano-satellites for dangerous meteorite detection, and cultures of stem cells for organ regeneration or nerve repair. The topics associated with these systems are usually dealt with in the theories of intelligent swarms or biologically inspired computation systems. Stochastic models play an important role and are based on various formulations of statistical mechanics. In these cases, the main assumption is that the swarm elements have a simple behaviour and that some average properties can be deduced for the entire swarm. In contrast, complex systems in areas like aeronautics are formed by elements with sophisticated, even autonomous, behaviour. In situations like this, a new approach to swarm coordination is necessary. We present a stochastic model where the swarm elements are communicating autonomous systems, the coordination is separated from the components' autonomous activity, and the entire swarm can be abstracted as a piecewise deterministic Markov process, which constitutes one of the most popular models in stochastic control. Keywords: ultra-large multi-agent systems, system of systems, autonomous systems, stochastic hybrid systems.

  18. A composite step conjugate gradients squared algorithm for solving nonsymmetric linear systems

    Science.gov (United States)

    Chan, Tony; Szeto, Tedd

    1994-03-01

    We propose a new and more stable variant of the CGS method [27] for solving nonsymmetric linear systems. The method is based on squaring the Composite Step BCG method, introduced recently by Bank and Chan [1,2], which itself is a stabilized variant of BCG in that it skips over steps for which the BCG iterate is not defined and causes one kind of breakdown in BCG. By doing this, we obtain a method (Composite Step CGS or CSCGS) which not only handles the breakdowns described above, but does so with the advantages of CGS, namely, no multiplications by the transpose matrix and a faster convergence rate than BCG. Our strategy for deciding whether to skip a step does not involve any machine dependent parameters and is designed to skip near breakdowns as well as produce smoother iterates. Numerical experiments show that the new method does produce improved performance over CGS on practical problems.
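
    For reference, the classical CGS iteration that the composite-step variant stabilizes can be sketched as follows. This is plain CGS in its standard formulation, without the breakdown-skipping step of the paper; on a (near-)breakdown it simply stops:

```python
def cgs(A, b, iters=25, tol=1e-12):
    """Classical CGS (Sonneveld): squares the BCG residual polynomial, so no
    transpose matvec is needed and convergence is roughly twice as fast as BCG.
    This sketch omits the composite-step breakdown handling."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    matvec = lambda M, v: [dot(row, v) for row in M]
    axpy = lambda a, u, v: [a * ui + vi for ui, vi in zip(u, v)]  # a*u + v

    n = len(b)
    x = [0.0] * n
    r = list(b)                   # residual for the zero initial guess
    rt = list(r)                  # fixed shadow residual
    u = p = q = [0.0] * n
    rho_prev = 1.0
    for k in range(iters):
        rho = dot(rt, r)
        if abs(rho) < 1e-30:
            break                                  # (near-)breakdown: stop
        beta = rho / rho_prev if k else 0.0
        u = axpy(beta, q, r)                       # u = r + beta*q
        p = axpy(beta, axpy(beta, p, q), u)        # p = u + beta*(q + beta*p)
        v = matvec(A, p)
        alpha = rho / dot(rt, v)
        q = axpy(-alpha, v, u)                     # q = u - alpha*v
        uq = axpy(1.0, u, q)                       # u + q
        x = axpy(alpha, uq, x)
        r = axpy(-alpha, matvec(A, uq), r)
        rho_prev = rho
        if dot(r, r) ** 0.5 < tol:
            break
    return x

# small nonsymmetric system with exact solution (1, 1, 1)
A = [[4.0, 1.0, 0.0], [2.0, 5.0, 1.0], [0.0, 1.0, 3.0]]
b = [5.0, 8.0, 4.0]
x = cgs(A, b)
```

    The `break` on a vanishing rho is exactly the step the composite-step strategy replaces: instead of stopping, it skips over the undefined iterate and continues.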

  19. Solving Graph Laplacian Systems Through Recursive Bisections and Two-Grid Preconditioning

    Energy Technology Data Exchange (ETDEWEB)

    Ponce, Colin [Cornell Univ., Ithaca, NY (United States); Vassilevski, Panayot S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-02-18

    We present a parallelizable direct method for computing the solution to graph Laplacian-based linear systems derived from graphs that can be hierarchically bipartitioned with small edge cuts. For a graph of size n with constant-size edge cuts, our method decomposes a graph Laplacian in time O(n log n), and then uses that decomposition to perform a linear solve in time O(n log n). We then use the developed technique to design a preconditioner for graph Laplacians that do not have this property. Finally, we augment this preconditioner with a two-grid method that accounts for much of the preconditioner's weaknesses. We present an analysis of this method, as well as a general theorem for the condition number of a general class of two-grid support graph-based preconditioners. Numerical experiments illustrate the performance of the studied methods.
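
    A small self-contained example of solving a graph Laplacian system (without the paper's recursive-bisection decomposition): build L = D - A and ground one vertex to remove the constant-vector null space:

```python
def laplacian(n, edges):
    """Dense graph Laplacian L = D - A for an unweighted graph on n vertices."""
    L = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1.0
        L[j][j] += 1.0
        L[i][j] -= 1.0
        L[j][i] -= 1.0
    return L

def solve_grounded(L, b):
    """Solve L x = b (requires sum(b) == 0) by grounding vertex 0 (x[0] = 0),
    which removes the constant-vector null space, then applying Gaussian
    elimination with partial pivoting to the reduced system."""
    n = len(b)
    m = n - 1
    M = [[L[i][j] for j in range(1, n)] for i in range(1, n)]
    rhs = [b[i] for i in range(1, n)]
    for col in range(m):                       # forward elimination
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, m):
            f = M[r][col] / M[col][col]
            for c in range(col, m):
                M[r][c] -= f * M[col][c]
            rhs[r] -= f * rhs[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):             # back substitution
        s = sum(M[r][c] * x[c] for c in range(r + 1, m))
        x[r] = (rhs[r] - s) / M[r][r]
    return [0.0] + x

# unit current injected at vertex 3 and extracted at vertex 0 of a path graph:
# the potentials grow linearly, x = [0, 1, 2, 3]
L = laplacian(4, [(0, 1), (1, 2), (2, 3)])
x = solve_grounded(L, [-1.0, 0.0, 0.0, 1.0])
```

    In the electrical-network reading, b is a current injection pattern and x the resulting vertex potentials; the paper's contribution is doing the factorization step hierarchically along small edge cuts in O(n log n).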

  20. SPECIFIC REGULATIONS REGARDING THE SOLVING OF LABOR DISPUTES IN ROMANIAN LEGAL SYSTEM

    Directory of Open Access Journals (Sweden)

    Onica -Chipea Lavinia

    2012-01-01

    Full Text Available The paper aims to briefly review the specific provisions of labor legislation for the solving of labor disputes. Such rules are found in matters of discrimination, in payment settlements, for public sector staff, and in certain personnel or disciplinary statutes (such as the Statute of Teachers), and they establish derogations from the common law (the Labor Code and Law no. 62/2011 on Social Dialogue) in resolving individual labor conflicts (former conflicts of rights). The role and importance of these regulations is that they give the parties to the employment relationship, particularly employees, more ways to assert the rights enshrined in law. Appeals, complaints or expressions of individual grievances may be settled by organs outside the judicial system (the courts), such as authorizing officers and administrative organs, with the aim of restoring the order that was violated.

  1. Solving block linear systems with low-rank off-diagonal blocks is easily parallelizable

    Energy Technology Data Exchange (ETDEWEB)

    Menkov, V. [Indiana Univ., Bloomington, IN (United States)

    1996-12-31

    An easily and efficiently parallelizable direct method is given for solving a block linear system Bx = y, where B = D + Q is the sum of a non-singular block diagonal matrix D and a matrix Q with low-rank blocks. This implicitly defines a new preconditioning method with an operation count close to the cost of calculating a matrix-vector product Qw for some w, plus at most twice the cost of calculating Qw for some w. When implemented on a parallel machine the processor utilization can be as good as that of those operations. Order estimates are given for the general case, and an implementation is compared to block SSOR preconditioning.
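
    The structure B = D + Q with low-rank Q admits a Woodbury-style solve. Below is a rank-1, diagonal-D sketch (a deliberate simplification of the paper's block setting) using the Sherman-Morrison formula:

```python
def solve_diag_plus_rank1(d, u, v, y):
    """Sherman-Morrison solve of (D + u v^T) x = y with D = diag(d):
         x = D^{-1} y - D^{-1} u * (v^T D^{-1} y) / (1 + v^T D^{-1} u)
    Each application of D^{-1} is independent per entry (per block in the
    paper's setting), which is what makes the scheme easy to parallelize."""
    dinv_y = [yi / di for yi, di in zip(y, d)]
    dinv_u = [ui / di for ui, di in zip(u, d)]
    denom = 1.0 + sum(vi * w for vi, w in zip(v, dinv_u))
    scale = sum(vi * w for vi, w in zip(v, dinv_y)) / denom
    return [a - scale * b for a, b in zip(dinv_y, dinv_u)]

# (D + u v^T) x = y with x = (1, 1, 1):  D x = (2, 3, 4), u * (v . x) = (3, 3, 3)
x = solve_diag_plus_rank1([2.0, 3.0, 4.0], [1.0, 1.0, 1.0], [1.0, 2.0, 0.0],
                          [5.0, 6.0, 7.0])
```

    The higher-rank generalization (the Woodbury identity) replaces the scalar denominator by a small capacitance matrix, keeping the cost close to a few products with Q, as the abstract's operation count suggests.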

  2. Viewing Systems for Large Underground Storage Tanks

    International Nuclear Information System (INIS)

    Heckendorn, F.M.; Robinson, C.W.; Anderson, E.K.; Pardini, A.F.

    1996-01-01

    Specialized remote video systems have been successfully developed and deployed in a number of large radiological underground storage tanks (USTs); they tolerate the hostile tank interior while providing high-resolution video to a remotely located operator. Deployment is through 100 mm (4 in) tank openings, while incorporating full video functions of the camera, lights, and zoom lens. The use of remote video minimizes the potential for personnel exposure to radiological and hazardous conditions, and maximizes the quality of the visual data used to assess the interior conditions of both tank and contents. The robustness of this type of remote system has a direct effect on the potential radiological exposure that personnel may encounter. The USTs typical of the Savannah River and Hanford Department of Energy (DOE) sites are typically 4.5 million liter (1.2 million gal) units under earth or concrete overburden with limited openings to the surface. The interior is both highly contaminated and radioactive, with a wide variety of nuclear processing waste material. Some of the tanks are flammable-rated to Class 1, Division 1, and personnel presence at or near the openings should be minimized. The interior of these USTs must be assessed periodically as part of the ongoing management of the tanks and as a step towards tank remediation. The systems are unique in their deployment technology, which virtually eliminates the potential for entrapment in a tank, and in their ability to withstand flammable environments. A multiplicity of components used within a common packaging allows for cost-effective and appropriate levels of technology, with radiation-hardened components on some units and lesser requirements on other units. All units are completely self-contained for video, zoom lens, lighting, and deployment, as well as being self-purging and modular in construction.

  3. Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems

    Science.gov (United States)

    Razzak, M. A.; Alam, M. Z.; Sharif, M. N.

    2018-03-01

In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulation and the determination of the solution procedure are easy and straightforward. The classical multiple time scale (MS) method and the multiple scales Lindstedt-Poincare (MSLP) method do not give the desired results for strongly nonlinear forced vibration systems with strong damping effects. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solutions (considered to be exact) and improve on other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error of the first-order approximate external frequency in this paper is only 0.07% at amplitude A = 1.5, while the relative error given by the MSLP method is surprisingly 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping effects.

  4. A trial of patient-oriented problem-solving system for immunology teaching in China: a comparison with dialectic lectures

    OpenAIRE

    Zhang Zhiren; Liu Wei; Han Junfeng; Guo Sheng; Wu Yuzhang

    2013-01-01

Abstract Background The most common teaching method used in China is lecturing, but recently, efforts have been widely undertaken to promote the transition from teacher-centered to student-centered education. The patient-oriented problem-solving (POPS) system is an innovative teaching-learning method that permits students to work in small groups to solve clinical problems, promotes self-learning, encourages clinical reasoning and develops long-lasting memory. To the best of our knowledge, however, P...

  5. Research on unit commitment with large-scale wind power connected power system

    Science.gov (United States)

    Jiao, Ran; Zhang, Baoqun; Chi, Zhongjun; Gong, Cheng; Ma, Longfei; Yang, Bing

    2017-01-01

Large-scale integration of wind power generators into the power grid brings severe challenges to power system economic dispatch due to its stochastic volatility. Unit commitment including wind farms is analyzed in terms of both modeling and solution methods. The structures and characteristics of existing approaches are summarized after classifying them according to their objective functions and constraints. Finally, the issues still to be solved and possible directions of future research and development are discussed, which can adapt to the requirements of the electricity market, energy-saving generation dispatching and the smart grid, and provide a reference for researchers and practitioners in this field.

  6. The influence of mechatronic learning systems on creative problem solving of pupils participating in technology class A pilot study

    Directory of Open Access Journals (Sweden)

    Kai-Christian Tönnsen

    2017-12-01

Full Text Available Without being creative and finding solutions to the various problems of life, mankind wouldn't be what it is today. Problem solving has always been a key ability for development, in the past and the present, and it will also be a key for the future. Creative problem solving is one of the most important modes of technical thinking and acting. Therefore, the ability to find solutions to problems and realize them is a primary goal of technological education, especially if it is part of a comprehensive school education. It can be assumed that the available resources affect the possibilities and the results of problem-solving processes. In technology classes there are numerous resources that aim to develop pupils' creative problem-solving skills, such as mechatronic educational environments (MEEs). Unfortunately, there is currently no test instrument for rating the influence of these MEEs on the outcome of creative technical problem-solving processes. We therefore designed a trial for this purpose and tested it in a pilot study: 33 students (9th grade, average age 15.24 years) from comprehensive schools were given a problem which had to be solved using three different MEEs. The solutions found by the students were documented and analyzed to identify system characteristics which enhance or inhibit the creative outcome. Key words: Creative problem solving, technology education, mechatronic educational environments, Festo MecLab, Fischertechnik RoboTX, Lego Mindstorms EV3

  7. A New Method Based On Modified Shuffled Frog Leaping Algorithm In Order To Solve Nonlinear Large Scale Problem

    Directory of Open Access Journals (Sweden)

    Aliasghar Baziar

    2015-03-01

Full Text Available Abstract In order to handle large-scale problems, this study uses the shuffled frog leaping algorithm, an optimization method based on natural memetics, with a new two-phase modification that gives a better search of the problem space. The suggested algorithm is evaluated by comparison with several well-known algorithms on a set of benchmark optimization problems. The simulation results clearly show the superiority of this algorithm over the other well-known methods in the area.

  8. On problems to be solved for utilizing shock isolation systems to NPP

    International Nuclear Information System (INIS)

    Shibata, H.; Shigeta, T.; Komine, H.

    1989-01-01

This paper discusses the development difficulties of large fast breeder reactors (LFBRs). The authors focus their discussion on thin-wall reactor vessels, thin-wall sodium loops, and large sodium pools with free surfaces. The conclusions considered are to lower the center of gravity and to use a shock isolation system. Since the success of Super-Phenix, interest in developing a large fast reactor, the so-called LFBR, has become more realistic in Japan. However, the anti-earthquake design of a pool-type large fast reactor is more difficult than that of light water reactors for high-seismicity areas like Japan. The difficulties stem from the different structural requirements of the LFBR. The three major points are: thin-wall reactor vessels, thin-wall sodium loops, and large sodium pools with free surfaces.

  9. Developing a Creativity and Problem Solving Course in Support of the Information Systems Curriculum

    Science.gov (United States)

    Martz, Ben; Hughes, Jim; Braun, Frank

    2016-01-01

This paper looks at and assesses the development and implementation of a problem-solving and creativity class for the purpose of providing a basis for a Business Informatics curriculum. The development was fueled by the desire to create a broad-based class that 1. Familiarized students with the underlying concepts of problem solving; 2. Introduced…

  10. Problem Solving Method Based on E-Learning System for Engineering Education

    Science.gov (United States)

    Khazaal, Hasan F.

    2015-01-01

Encouraging engineering students to handle advanced technology with multimedia, as well as motivating them to develop problem-solving skills, is among the missions of the teacher in preparing students for a modern professional career. This research proposes a scenario of problem solving in basic electrical circuits based on an e-learning system…

  11. Environmental effects and large space systems

    Science.gov (United States)

    Garrett, H. B.

    1981-01-01

When planning large-scale operations in space, environmental impact must be considered in addition to radiation, spacecraft charging, contamination, high power and size. Pollution of the atmosphere and space is caused by rocket effluents and by photoelectrons generated by sunlight falling on satellite surfaces; even light pollution may result (the SPS may reflect so much light as to be a nuisance to astronomers). Large (100 km²) structures will also absorb the high-energy particles that impinge on them. Altogether, these effects may drastically alter the Earth's magnetosphere. It is not clear whether these alterations will in any way affect the Earth's surface climate. Large structures will also generate large plasma wakes and waves which may cause interference with communications to the vehicle. A high-energy microwave beam from the SPS will cause ionospheric turbulence, affecting UHF and VHF communications. Although none of these effects may ultimately prove critical, they must be considered in the design of large structures.

  12. Using interactive problem-solving techniques to enhance control systems education for non English-speakers

    Science.gov (United States)

    Lamont, L. A.; Chaar, L.; Toms, C.

    2010-03-01

    Interactive learning is beneficial to students in that it allows the continual development and testing of many skills. An interactive approach enables students to improve their technical capabilities, as well as developing both verbal and written communicative ability. Problem solving and communication skills are vital for engineering students; in the workplace they will be required to communicate with people of varying technical abilities and from different linguistic and engineering backgrounds. In this paper, a case study is presented that discusses how the traditional method of teaching control systems can be improved. 'Control systems' is a complex engineering topic requiring students to process an extended amount of mathematical formulae. MATLAB software, which enables students to interactively compare a range of possible combinations and analyse the optimal solution, is used to this end. It was found that students became more enthusiastic and interested when given ownership of their learning objectives. As well as improving the students' technical knowledge, other important engineering skills are also improved by introducing an interactive method of teaching.

  13. Large solar energy systems within IEA task 14

    NARCIS (Netherlands)

    Geus, A.C. de; Isakson, P.; Bokhoven, T.P.; Vanoli, K.; Tepe, R.

    1996-01-01

    Within IEA Task 14 (Advanced Solar Systems) a working group was established dealing with large advanced solar energy systems (the Large Systems Working group). The goal of this working group was to generate a common base of experiences for the design and construction of advanced large solar systems.

  14. Solving problems by interrogating sets of knowledge systems: Toward a theory of multiple knowledge systems

    Science.gov (United States)

    Dekorvin, Andre

    1989-01-01

    The main purpose is to develop a theory for multiple knowledge systems. A knowledge system could be a sensor or an expert system, but it must specialize in one feature. The problem is that we have an exhaustive list of possible answers to some query (such as what object is it). By collecting different feature values, in principle, it should be possible to give an answer to the query, or at least narrow down the list. Since a sensor, or for that matter an expert system, does not in most cases yield a precise value for the feature, uncertainty must be built into the model. Also, researchers must have a formal mechanism to be able to put the information together. Researchers chose to use the Dempster-Shafer approach to handle the problems mentioned above. Researchers introduce the concept of a state of recognition and point out that there is a relation between receiving updates and defining a set valued Markov Chain. Also, deciding what the value of the next set valued variable is can be phrased in terms of classical decision making theory such as minimizing the maximum regret. Other related problems are examined.
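
The Dempster-Shafer fusion step described above can be sketched in a few lines. The frame of discernment, the two sensor mass assignments, and the function name below are purely illustrative assumptions, not taken from the source:

```python
# A minimal sketch of Dempster's rule of combination, the mechanism the
# abstract uses to fuse uncertain evidence from multiple knowledge systems.

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to incompatible answers
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    norm = 1.0 - conflict
    return {s: m / norm for s, m in combined.items()}

# Two hypothetical "sensors" reporting on the query "what object is it?"
frame = frozenset({"plane", "bird", "drone"})
m1 = {frozenset({"plane"}): 0.6, frame: 0.4}           # sensor 1: probably a plane
m2 = {frozenset({"plane", "drone"}): 0.7, frame: 0.3}  # sensor 2: something man-made

fused = dempster_combine(m1, m2)
for s, m in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(s), round(m, 3))
```

Collecting more feature reports narrows the surviving answer sets, exactly the "narrowing down the list" behavior the abstract describes.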

  15. Student Learning of Complex Earth Systems: A Model to Guide Development of Student Expertise in Problem-Solving

    Science.gov (United States)

    Holder, Lauren N.; Scherer, Hannah H.; Herbert, Bruce E.

    2017-01-01

    Engaging students in problem-solving concerning environmental issues in near-surface complex Earth systems involves developing student conceptualization of the Earth as a system and applying that scientific knowledge to the problems using practices that model those used by professionals. In this article, we review geoscience education research…

  16. Solving the challenges of data preprocessing, uploading, archiving, retrieval, analysis and visualization for large heterogeneous paleo- and rock magnetic datasets

    Science.gov (United States)

    Minnett, R.; Koppers, A. A.; Tauxe, L.; Constable, C.; Jarboe, N. A.

    2011-12-01

The Magnetics Information Consortium (MagIC) provides an archive for the wealth of rock- and paleomagnetic data and interpretations from studies on natural and synthetic samples. As with many fields, most peer-reviewed paleo- and rock magnetic publications only include high-level results. However, access to the raw data from which these results were derived is critical for compilation studies and when updating results based on new interpretation and analysis methods. MagIC provides a detailed metadata model with places for everything from raw measurements to their interpretations. Prior to MagIC, these raw data were extremely cumbersome to collect because they mostly existed in a lab's proprietary format on investigators' personal computers or undigitized in field notebooks. MagIC has developed a suite of offline and online tools to enable the paleomagnetic, rock magnetic, and affiliated scientific communities to easily contribute both their previously published data and data supporting an article undergoing peer review, to retrieve well-annotated published interpretations and raw data, and to analyze and visualize large collections of published data online. Here we present the technology we chose (including VBA in Excel spreadsheets, Python libraries, FastCGI JSON webservices, Oracle procedures, and jQuery user interfaces) and how we implemented it in order to serve the scientific community as seamlessly as possible. These tools are now in use in labs worldwide, have helped archive many valuable legacy studies and datasets, and routinely enable new contributions to the MagIC Database (http://earthref.org/MAGIC/).

  17. Island Megalopolises: Tunnel Systems as a Critical Alternative in Solving Transport Problems

    Directory of Open Access Journals (Sweden)

    Vladimir V. Makarov

    2018-02-01

    Full Text Available A principal difficulty with island megalopolises is the transport problem, which results from limited surface land on an already developed island, on which roads and car parking can be placed. This limitation leads to traffic jams on the small number of roads and to intrusive car parking in any available surface location, resulting in safety issues. The city of Vladivostok is located on the Muravyov-Amursky Peninsula in the Russia Far East region (the Primorsky Krai. This city is essentially the third capital of Russia because of its important geopolitical location. To address the car traffic problems in Vladivostok, and because of the absence of places to build new roads, the city administration has proposed the usage of the beaches and waterfronts along the sea coast in this regard. This decision is in sharp conflict with Vladivostok’s ecological and social aspirations to be recognized as a world-class city. It also neglects the lessons that have been learned in many other waterfront cities around the world, as such cities have first built aboveground waterfront highways and later decided to remove them at great expense, in order to allow their citizens to properly enjoy the environmental and historical assets of their waterfronts. A key alternative would be to create an independent tunneled transport system along with added underground parking so that the transport problems can be addressed in a manner that enhances the ecology and livability of the city. A comparison of the two alternatives for solving the transport problem, that is, underground versus aboveground, shows the significant advantages of the independent tunnel system. Complex efficiency criteria have been developed in order to quantify the estimation of the alternative variants of the Vladivostok transport system. It was determined that the underground project is almost 1.8 times more advantageous than the aboveground alternative. Keywords: Megalopolises, Transport, Tunnels

  18. Development of distortion measurement system for large deployable antenna via photogrammetry in vacuum and cryogenic environment

    Science.gov (United States)

    Zhang, Pengsong; Jiang, Shanping; Yang, Linhua; Zhang, Bolun

    2018-01-01

In order to meet the requirement of high-precision thermal distortion measurement for a Φ4.2 m deployable mesh antenna of a satellite in a vacuum and cryogenic environment, a large-scale antenna distortion measurement system for vacuum and cryogenic environments is developed in this paper, based on digital close-range photogrammetry and spacecraft space environment test technology. The Antenna Distortion Measurement System (ADMS) is the first domestically independently developed thermal distortion measurement system for large antennas, and it solves the problem of non-contact, high-precision distortion measurement of large spacecraft structures in vacuum and cryogenic environments. The measurement accuracy of the ADMS is better than 50 μm/5 m, which reaches the international advanced level. The experimental results show that the measurement system has great advantages in structural measurement of large spacecraft, and also has broad application prospects in space and other related fields.

  19. Solving the Bateman equations in CASMO5 using implicit ode numerical methods for stiff systems

    International Nuclear Information System (INIS)

    Hykes, J. M.; Ferrer, R. M.

    2013-01-01

The Bateman equations, which describe the transmutation of nuclides over time as a result of radioactive decay, absorption, and fission, are often numerically stiff. This is especially true if short-lived nuclides are included in the system. This paper describes the use of implicit numerical methods for ODEs applied to the stiff Bateman equations, specifically employing the Backward Differentiation Formulas (BDF) form of the linear multistep method. As is true in other domains, using an implicit method removes or lessens the (sometimes severe) step-length constraints by which explicit methods must abide. To gauge its accuracy and speed, the BDF method is compared to a variety of other solution methods, including explicit Runge-Kutta methods and matrix exponential methods such as the Chebyshev Rational Approximation Method (CRAM). A preliminary test case was chosen as representative of a PWR lattice depletion step and was solved with numerical libraries called from a Python front-end. The Figure of Merit (a combined measure of accuracy and efficiency) for the BDF method was nearly identical to that for CRAM, while explicit methods and other matrix exponential approximations trailed behind. The test case includes 319 nuclides, of which the shortest-lived is 98Nb with a half-life of 2.86 seconds. Finally, the BDF and CRAM methods were compared within CASMO5, where CRAM had a FOM about four times better than BDF, although the BDF implementation was not fully optimized. (authors)
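
As a small illustration of the approach (not the CASMO5 implementation), a stiff Bateman decay chain can be integrated with an off-the-shelf BDF solver and checked against the matrix-exponential solution. The three-nuclide chain and its rate constants below are invented for the sketch; only the 2.86 s half-life echoes the abstract:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

# Hypothetical decay chain A -> B -> C with widely separated half-lives
# (2.86 s vs 1 h), which makes the linear system dn/dt = M n stiff.
lam = np.array([np.log(2) / 2.86,    # nuclide A: 2.86 s half-life
                np.log(2) / 3600.0,  # nuclide B: 1 h half-life
                0.0])                # nuclide C: stable

M = np.array([[-lam[0],  0.0,     0.0],
              [ lam[0], -lam[1],  0.0],
              [ 0.0,     lam[1],  0.0]])

n0 = np.array([1.0, 0.0, 0.0])  # start with pure nuclide A
t_end = 7200.0                  # integrate over 2 h

# The implicit BDF integrator handles the fast transient without the
# tiny step sizes an explicit method would be forced to take.
sol = solve_ivp(lambda t, n: M @ n, (0.0, t_end), n0,
                method="BDF", rtol=1e-10, atol=1e-12)

# Matrix-exponential reference solution: n(t) = expm(M t) n0
ref = expm(M * t_end) @ n0
print(np.max(np.abs(sol.y[:, -1] - ref)))
```

The same comparison scaled to hundreds of nuclides is essentially what the paper's BDF-versus-CRAM benchmark measures.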

  20. Integral transform method for solving time fractional systems and fractional heat equation

    Directory of Open Access Journals (Sweden)

    Arman Aghili

    2014-01-01

Full Text Available In the present paper, a time fractional partial differential equation is considered, where the fractional derivative is defined in the Caputo sense. The Laplace transform method has been applied to obtain an exact solution. The authors solved certain homogeneous and nonhomogeneous time fractional heat equations using the integral transform. The transform method is a powerful tool for solving fractional singular integro-differential equations and PDEs. The results reveal that the transform method is very convenient and effective.
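
The approach rests on the standard Laplace-transform identity for the Caputo derivative, which turns a time-fractional equation into an algebraic one in the transform variable s:

```latex
\mathcal{L}\bigl\{\,{}^{C}\!D_t^{\alpha} f(t)\bigr\}(s)
  \;=\; s^{\alpha} F(s) \;-\; \sum_{k=0}^{n-1} s^{\alpha-k-1}\, f^{(k)}(0^{+}),
  \qquad n-1 < \alpha \le n .
```

For 0 < α ≤ 1 only the initial value f(0⁺) survives, so the transformed heat equation can be solved for F(s) and inverted.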

  1. Optimization of Large-Scale Structural Systems

    DEFF Research Database (Denmark)

    Jensen, F. M.

solutions to small problems with one or two variables to the optimization of large structures such as bridges, ships and offshore structures. The methods used for solving these problems have evolved from classical differential calculus and calculus of variations to very advanced numerical techniques...

  2. 75 FR 21455 - Large Trader Reporting System

    Science.gov (United States)

    2010-04-23

    ... essence, a ``large trader'' would be defined as a person whose transactions in NMS securities equal or... directly or indirectly effect securities transactions.\\14\\ \\12\\ Section 13(h) of the Exchange Act defines a... term ``identifying activity level'' is defined in Section 13(h) as ``transactions in publicly traded...

  3. An improved safeguards system and its application to large back-end facilities

    International Nuclear Information System (INIS)

    1978-12-01

The paper suggests that the safeguards problems of large plants could be solved by a combination of containment and surveillance, and material accountancy. The paper places considerable importance on the installation of 'anomaly detectors' at various points. These would detect movements through the containment and unusual changes in the process itself. The importance of quantifying the contributions of accountancy and of the containment and surveillance system is emphasised, and a mathematical approach is outlined.

  4. Iterative algorithms for large sparse linear systems on parallel computers

    Science.gov (United States)

    Adams, L. M.

    1982-01-01

Algorithms are developed for assembling in parallel the sparse systems of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed, and results of this model for the algorithms are given.
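
A serial sketch of one of the solvers named above, conjugate gradient with a Jacobi (diagonal) preconditioner, applied to a 1D finite-difference Poisson system. The parallel assembly and array-architecture aspects of the paper are not modeled here; the model problem and tolerances are assumptions of the sketch:

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner, for SPD A."""
    d_inv = 1.0 / np.diag(A)   # preconditioner M^{-1} = diag(A)^{-1}
    x = np.zeros_like(b)
    r = b - A @ x              # residual
    z = d_inv * r              # preconditioned residual
    p = z.copy()               # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = d_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1D Poisson finite-difference matrix, a standard elliptic model problem
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = jacobi_pcg(A, b)
print(np.linalg.norm(A @ x - b))  # small residual
```

In a parallel setting, the matrix-vector product `A @ p` and the dot products are the pieces distributed across processors, which is what the paper's array-architecture model compares.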

  5. Soft Systems Methodology and Problem Framing: Development of an Environmental Problem Solving Model Respecting a New Emergent Reflexive Paradigm.

    Science.gov (United States)

    Gauthier, Benoit; And Others

    1997-01-01

    Identifies the more representative problem-solving models in environmental education. Suggests the addition of a strategy for defining a problem situation using Soft Systems Methodology to environmental education activities explicitly designed for the development of critical thinking. Contains 45 references. (JRH)

  6. Instructional Supports for Representational Fluency in Solving Linear Equations with Computer Algebra Systems and Paper-and-Pencil

    Science.gov (United States)

    Fonger, Nicole L.; Davis, Jon D.; Rohwer, Mary Lou

    2018-01-01

    This research addresses the issue of how to support students' representational fluency--the ability to create, move within, translate across, and derive meaning from external representations of mathematical ideas. The context of solving linear equations in a combined computer algebra system (CAS) and paper-and-pencil classroom environment is…

  7. Strategy for solving semi-analytically three-dimensional transient flow in a coupled N-layer aquifer system

    NARCIS (Netherlands)

    Veling, E.J.M.; Maas, C.

    2008-01-01

    Efficient strategies for solving semi-analytically the transient groundwater head in a coupled N-layer aquifer system phi(i)(r, z, t), i = 1, ..., N, with radial symmetry, with full z-dependency, and partially penetrating wells are presented. Aquitards are treated as aquifers with their own

  8. Fire extinguishing system in large underground garages

    Directory of Open Access Journals (Sweden)

    Ivan Antonov

    2017-04-01

Full Text Available This work considers a constructive scheme, acceptable from a practical point of view, for fire extinguishing in underground garages. The garage space is divided into quadrants, each covering, for example, two cars. In case of ignition in one of them, a sprinkler nozzle system is triggered by the effect of the vertical convective jet. A protective curtain preventing the spread of fire to adjacent vehicles is realized. The solution is based on an integrated method which allows a hydrodynamic calculation of the extinguishing time of the fire extinguishing system.

  9. The Design of Large Technological Systems

    DEFF Research Database (Denmark)

    Pineda, Andres Felipe Valderrama

    implies a reconfiguration of the designing team, the supporting actors and the diverse user groups. By tracing material scripts, the author accounts for the unfolding of visions, politics and materialities that constitute the system. The analysis contributes to understanding the complex sociotechnical...

  10. Automatic management software for large-scale cluster system

    International Nuclear Information System (INIS)

    Weng Yunjian; Chinese Academy of Sciences, Beijing; Sun Gongxing

    2007-01-01

At present, large-scale cluster systems are difficult to manage: the manager's workload is heavy, and much time must be spent on management and maintenance. The nodes in a large-scale cluster system easily fall into disorder; with thousands of nodes placed in big rooms, managers can easily confuse machines. How can accurate management of a large-scale cluster system be carried out effectively? This article introduces ELFms for the large-scale cluster system and proposes how to realize automatic management of such a system. (authors)

  11. A Trigonometrically Fitted Block Method for Solving Oscillatory Second-Order Initial Value Problems and Hamiltonian Systems

    OpenAIRE

    Ngwane, F. F.; Jator, S. N.

    2017-01-01

    In this paper, we present a block hybrid trigonometrically fitted Runge-Kutta-Nyström method (BHTRKNM), whose coefficients are functions of the frequency and the step-size for directly solving general second-order initial value problems (IVPs), including Hamiltonian systems such as the energy conserving equations and systems arising from the semidiscretization of partial differential equations (PDEs). Four discrete hybrid formulas used to formulate the BHTRKNM are provided by a continuous one...

  12. Problem-Solving Test: Conditional Gene Targeting Using the Cre/loxP Recombination System

    Science.gov (United States)

    Szeberényi, József

    2013-01-01

    Terms to be familiar with before you start to solve the test: gene targeting, knock-out mutation, bacteriophage, complementary base-pairing, homologous recombination, deletion, transgenic organisms, promoter, polyadenylation element, transgene, DNA replication, RNA polymerase, Shine-Dalgarno sequence, restriction endonuclease, polymerase chain…

  13. Iterative solution of large linear systems

    CERN Document Server

    Young, David Matheson

    1971-01-01

    This self-contained treatment offers a systematic development of the theory of iterative methods. Its focal point resides in an analysis of the convergence properties of the successive overrelaxation (SOR) method, as applied to a linear system with a consistently ordered matrix. The text explores the convergence properties of the SOR method and related techniques in terms of the spectral radii of the associated matrices as well as in terms of certain matrix norms. Contents include a review of matrix theory and general properties of iterative methods; SOR method and stationary modified SOR meth
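
The SOR iteration the book analyzes can be sketched on a consistently ordered model problem (the 1D Laplacian). The relaxation factor ω = 1.9 is chosen near the optimum for this particular matrix and, like the tolerances, is an assumption of the sketch:

```python
import numpy as np

def sor(A, b, omega, tol=1e-10, max_iter=20000):
    """Successive over-relaxation for A x = b (A must have a nonzero diagonal)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel sweep, over-relaxed by the factor omega
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1.0 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

# Consistently ordered model problem: 1D Laplacian (tridiagonal, SPD)
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = sor(A, b, omega=1.9)
print(np.linalg.norm(A @ x - b))  # small residual
```

Setting ω = 1 recovers plain Gauss-Seidel; the book's central result is how the convergence rate depends on ω through the spectral radius of the associated iteration matrix.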

  14. Large computer systems and new architectures

    International Nuclear Information System (INIS)

    Bloch, T.

    1978-01-01

    The super-computers of today are becoming quite specialized and one can no longer expect to get all the state-of-the-art software and hardware facilities in one package. In order to achieve faster and faster computing it is necessary to experiment with new architectures, and the cost of developing each experimental architecture into a general-purpose computer system is too high when one considers the relatively small market for these computers. The result is that such computers are becoming 'back-ends' either to special systems (BSP, DAP) or to anything (CRAY-1). Architecturally the CRAY-1 is the most attractive today since it guarantees a speed gain of a factor of two over a CDC 7600 thus allowing us to regard any speed up resulting from vectorization as a bonus. It looks, however, as if it will be very difficult to make substantially faster computers using only pipe-lining techniques and that it will be necessary to explore multiple processors working on the same problem. The experience which will be gained with the BSP and the DAP over the next few years will certainly be most valuable in this respect. (Auth.)

  15. Performance regression manager for large scale systems

    Science.gov (United States)

    Faraj, Daniel A.

    2017-08-01

    System and computer program product to perform an operation comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.

  16. A synergetic combination of small and large neighborhood schemes in developing an effective procedure for solving the job shop scheduling problem.

    Science.gov (United States)

    Amirghasemi, Mehrdad; Zamani, Reza

    2014-01-01

    This paper presents an effective procedure for solving the job shop problem. Synergistically combining small and large neighborhood schemes, the procedure consists of four components, namely (i) a construction method for generating semi-active schedules by a forward-backward mechanism, (ii) a local search for manipulating a small neighborhood structure guided by a tabu list, (iii) a feedback-based mechanism for perturbing the solutions generated, and (iv) a very large-neighborhood local search guided by a forward-backward shifting bottleneck method. The combination of shifting bottleneck mechanism and tabu list is used as a means of the manipulation of neighborhood structures, and the perturbation mechanism employed diversifies the search. A feedback mechanism, called repeat-check, detects consequent repeats and ignites a perturbation when the total number of consecutive repeats for two identical makespan values reaches a given threshold. The results of extensive computational experiments on the benchmark instances indicate that the combination of these four components is synergetic, in the sense that they collectively make the procedure fast and robust.

  17. Magmatic systems of large continental igneous provinces

    Directory of Open Access Journals (Sweden)

    E. Sharkov

    2017-07-01

    Full Text Available Large igneous provinces (LIPs) formed by mantle superplume events have irreversibly changed their composition in the geological evolution of the Earth from high-Mg melts (during the Archean and early Paleoproterozoic) to Phanerozoic-type geochemically enriched Fe-Ti basalts and picrites at 2.3 Ga. We propose that this upheaval could be related to a change in the source and nature of the mantle superplumes of different generations. The first-generation plumes were derived from the depleted mantle, whereas the second generation (thermochemical) originated from the core-mantle boundary (CMB). This study mainly focuses on the second (Phanerozoic) type of LIPs, as exemplified by the mid-Paleoproterozoic Jatulian–Ludicovian LIP in the Fennoscandian Shield, the Permian–Triassic Siberian LIP, and the late Cenozoic flood basalts of Syria. The latter LIP contains mantle xenoliths represented by green and black series. These xenoliths are fragments of the cooled upper margins of the mantle plume heads, above zones of adiabatic melting, and provide information about the composition of the plume material and the processes in the plume head. Based on previous studies of the composition of mantle xenoliths in within-plate basalts around the world, it is inferred that the heads of the mantle (thermochemical) plumes are made up of moderately depleted spinel peridotites (mainly lherzolites) and geochemically-enriched intergranular fluid/melt. Further, it is presumed that the plume heads intrude the mafic lower crust and reach up to the bottom of the upper crust at depths of ∼20 km. The generation of the two major types of mantle-derived magmas (alkali and tholeiitic basalts) was previously attributed to processes related to different PT-parameters in the adiabatic melting zone, whereas this study relates it to the fluid regime in the plume heads. It is also suggested that a newly-formed melt can occur on different sides of a critical plane of silica undersaturation and can

  18. Balancing modern Power System with large scale of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact on the control of power system balance and thus deviations in the power system...... frequency in small or islanded power systems or tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the secure and stable grid operation. To ensure the stable power system operation, the evolving power system has...... to be analysed with improved analytical tools and techniques. This paper proposes techniques for the active power balance control in future power systems with the large scale wind power integration, where the power balancing model provides the hour-ahead dispatch plan with reduced planning horizon and the real time...

  19. An integrated information management system based DSS for problem solving and decision making in open & distance learning institutions of India

    Directory of Open Access Journals (Sweden)

    Pankaj Khanna

    2014-04-01

    Full Text Available An integrated information system based DSS is developed for Open and Distance Learning (ODL) institutions in India. The system has been web-structured with the most suitable newly developed modules. A DSS model has been developed for solving semi-structured and unstructured problems, including decision making with regard to the various programmes and activities operating in the ODL institutions. The DSS model designed for problem solving is generally based on quantitative formulas, whereas for problems involving imprecision and uncertainty, a fuzzy-theory-based DSS is employed. The computer-operated system thus developed would help ODL institution management to quickly identify programmes and activities that require immediate attention, and would provide guidance for obtaining the most appropriate managerial decisions without any loss of time. As a result, the various subsystems operating in the ODL institution are able to administer their activities more efficiently and effectively, enhancing the overall performance of the concerned ODL institution to a new level.

  20. Algorithms for large scale singular value analysis of spatially variant tomography systems

    International Nuclear Information System (INIS)

    Cao-Huu, Tuan; Brownell, G.; Lachiver, G.

    1996-01-01

    The problem of determining the eigenvalues of large matrices occurs often in the design and analysis of modern tomography systems. As there is an interest in solving systems containing an ever-increasing number of variables, current research effort is being made to create more robust solvers which do not depend on some special feature of the matrix for convergence (e.g. block circulant), and to improve the speed of already known and understood solvers so that solving even larger systems in a reasonable time becomes viable. Our standard techniques for singular value analysis are based on sparse matrix factorization and are not applicable when the input matrices are large because the algorithms cause too much fill. Fill refers to the increase of non-zero elements in the LU decomposition of the original matrix A (the system matrix). So we have developed iterative solutions that are based on sparse direct methods. Data motion and preconditioning techniques are critical for performance. This conference paper describes our algorithmic approaches for large scale singular value analysis of spatially variant imaging systems, and in particular of PCR2, a cylindrical three-dimensional PET imager built at the Massachusetts General Hospital (MGH) in Boston. We recommend the desirable features and challenges for the next generation of parallel machines for optimal performance of our solver.
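
For intuition, the dominant singular value of a system matrix can be estimated from matrix-vector products alone, which is exactly the access pattern a fill-free iterative solver relies on. A minimal power-iteration sketch (dense NumPy stands in for sparse storage; names and sizes are illustrative):

```python
import numpy as np

def largest_singular_value(A, iters=200, seed=0):
    """Estimate sigma_max(A) by power iteration on A^T A, using only
    products with A and A^T (never forming A^T A explicitly)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A.T @ (A @ v)          # one application of A^T A
        v = w / np.linalg.norm(w)
    return np.linalg.norm(A @ v)   # sigma_max = ||A v|| at convergence

# Toy matrix with known singular values 3 and 1
A = np.array([[3.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
sigma = largest_singular_value(A)
assert abs(sigma - 3.0) < 1e-6
```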

  1. A Note on Using Partitioning Techniques for Solving Unconstrained Optimization Problems on Parallel Systems

    Directory of Open Access Journals (Sweden)

    Mehiddin Al-Baali

    2015-12-01

    Full Text Available We deal with the design of parallel algorithms by using variable partitioning techniques to solve nonlinear optimization problems. We propose an iterative solution method that is very efficient for separable functions, our scope being to discuss its performance for general functions. Experimental results on an illustrative example have suggested some useful modifications that, even though they improve the efficiency of our parallel method, leave some questions open for further investigation.
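
As a toy illustration of the partitioning idea (not the authors' algorithm), the sketch below performs gradient steps block by block on a separable objective; because each block update touches only its own variables, the blocks could be dispatched to parallel workers. All names are assumptions:

```python
import numpy as np

def block_coordinate_descent(grad_blocks, x0, lr=0.2, sweeps=100):
    """Partition the variables into blocks and take a gradient step on each
    block in turn; for separable objectives the updates are independent."""
    x = np.asarray(x0, dtype=float)
    for _ in range(sweeps):
        for idx, g in grad_blocks:
            x[idx] -= lr * g(x[idx])   # slice views update x in place
    return x

# Separable objective f(x) = (x0-1)^2 + (x1-2)^2 + (x2+3)^2, two blocks
blocks = [
    (slice(0, 2), lambda v: 2.0 * (v - np.array([1.0, 2.0]))),
    (slice(2, 3), lambda v: 2.0 * (v + 3.0)),
]
x = block_coordinate_descent(blocks, np.zeros(3))
assert np.allclose(x, [1.0, 2.0, -3.0], atol=1e-6)
```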

  2. Constructing Frozen Jacobian Iterative Methods for Solving Systems of Nonlinear Equations, Associated with ODEs and PDEs Using the Homotopy Method

    Directory of Open Access Journals (Sweden)

    Uswah Qasim

    2016-03-01

    Full Text Available A homotopy method is presented for the construction of frozen Jacobian iterative methods. The frozen Jacobian iterative methods are attractive because the inversion of the Jacobian is performed in terms of an LU factorization only once, for a single instance of the iterative method. We embedded parameters in the iterative methods with the help of the homotopy method: the values of the parameters are determined in such a way that a better convergence rate is achieved. The proposed homotopy technique is general and has the ability to construct different families of iterative methods, for solving weakly nonlinear systems of equations. Further iterative methods are also proposed for solving general systems of nonlinear equations.
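
A minimal sketch of the frozen-Jacobian idea (not one of the paper's homotopy-derived families): the Jacobian is frozen at the start of each outer cycle and reused for many cheap inner updates. Here an explicit inverse stands in for the one-time LU factorization, and the two-equation test system is hypothetical:

```python
import numpy as np

def frozen_jacobian_iterate(F, J, x0, outer=3, inner=25):
    """Freeze J once per outer cycle and reuse it for `inner` Newton-like
    updates; production code would store LU factors instead of an inverse."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        J_frozen_inv = np.linalg.inv(J(x))   # one "factorization" per cycle
        for _ in range(inner):
            x = x - J_frozen_inv @ F(x)      # many cheap updates
    return x

# Hypothetical test system: x^2 + y^2 = 2 and x - y = 0, root at (1, 1)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
root = frozen_jacobian_iterate(F, J, [2.0, 0.5])
assert np.allclose(root, [1.0, 1.0])
```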

  3. Application of Decomposition Methodology to Solve Integrated Process Design and Controller Design Problems for Reactor-Separator-Recycle System

    DEFF Research Database (Denmark)

    Abd.Hamid, Mohd-Kamaruddin; Sin, Gürkan; Gani, Rafiqul

    2010-01-01

    This paper presents the integrated process design and controller design (IPDC) for a reactor-separator-recycle (RSR) system and evaluates a decomposition methodology to solve the IPDC problem. Accordingly, the IPDC problem is solved by decomposing it into four hierarchical stages: (i) pre...... the design of a RSR system involving consecutive reactions, A -> B -> C, and shown to provide effective solutions that satisfy design, control and cost criteria. The advantage of the proposed methodology is that it is systematic, makes use of thermodynamic-process knowledge and provides valuable insights......-analysis, (ii) design analysis, (iii) controller design analysis, and (iv) final selection and verification. The methodology makes use of thermodynamic-process insights and the reverse design approach to arrive at the final process-controller design decisions. The developed methodology is illustrated through...

  4. Applications of systems thinking and soft operations research in managing complexity from problem framing to problem solving

    CERN Document Server

    2016-01-01

    This book captures current trends and developments in the field of systems thinking and soft operations research which can be applied to solve today's problems of dynamic complexity and interdependency. Such ‘wicked problems’ and messes are seemingly intractable problems characterized as value-laden, ambiguous, and unstable, that resist being tamed by classical problem solving. Actions and interventions associated with this complex problem space can have highly unpredictable and unintended consequences. Examples of such complex problems include health care reform, global climate change, transnational serious and organized crime, terrorism, homeland security, human security, disaster management, and humanitarian aid. Moving towards the development of solutions to these complex problem spaces depends on the lens we use to examine them and how we frame the problem. It will be shown that systems thinking and soft operations research have had great success in contributing to the management of complexity.

  5. An advanced dispatching technology for large container inspection system

    International Nuclear Information System (INIS)

    Chen Zhiqiang; Zhang Li; Kang Kejun; Gao Wenhuan

    2001-01-01

    The author describes the transmitting and dispatching technology of a large container inspection system. The structure of the double-buffered, graded pipelining used in the system is introduced, and the queueing mechanism and waiting-dispatch policy are illustrated.

  6. Estimating the state of large spatio-temporally chaotic systems

    International Nuclear Information System (INIS)

    Ott, E.; Hunt, B.R.; Szunyogh, I.; Zimin, A.V.; Kostelich, E.J.; Corazza, M.; Kalnay, E.; Patil, D.J.; Yorke, J.A.

    2004-01-01

    We consider the estimation of the state of a large spatio-temporally chaotic system from noisy observations and knowledge of a system model. Standard state estimation techniques using the Kalman filter approach are not computationally feasible for systems with very many effective degrees of freedom. We present and test a new technique (called a Local Ensemble Kalman Filter), generally applicable to large spatio-temporally chaotic systems for which correlations between system variables evaluated at different points become small at large separation between the points
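
The ensemble idea behind such filters fits in a few lines: a sample covariance computed from an ensemble of model states replaces the full Kalman covariance. The sketch below is a plain stochastic EnKF analysis step, not the Local Ensemble Kalman Filter itself (which applies this kind of update independently in local patches); all names and sizes are illustrative:

```python
import numpy as np

def enkf_analysis(ensemble, H, R, y, rng):
    """One stochastic EnKF update. ensemble: (state_dim, n_members)."""
    n_members = ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = X @ X.T / (n_members - 1)                  # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # perturbed observations, one independent draw per ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_members).T
    return ensemble + K @ (Y - H @ ensemble)

rng = np.random.default_rng(1)
prior = rng.normal(0.0, 1.0, size=(2, 500))        # 2-dim state, 500 members
H = np.array([[1.0, 0.0]])                         # observe first component only
R = np.array([[0.01]])
posterior = enkf_analysis(prior, H, R, np.array([1.0]), rng)
assert abs(posterior[0].mean() - 1.0) < 0.2        # pulled toward the observation
```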

  7. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26--27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  8. Developing a File System Structure to Solve Healthy Big Data Storage and Archiving Problems Using a Distributed File System

    Directory of Open Access Journals (Sweden)

    Atilla Ergüzen

    2018-06-01

    Full Text Available Recently, internet use has become widespread, increasing the use of mobile phones, tablets, computers, Internet of Things (IoT) devices and other digital sources. In the health sector, with the help of new-generation digital medical equipment, this digital world has also tended to grow in an unpredictable way: the sector holds nearly 10% of global data and continues to grow beyond what the other sectors have. This progress has greatly enlarged the amount of produced data, which cannot be handled with conventional methods. In this work, an efficient model for the storage of medical images using a distributed file system structure has been developed. With this work, a robust, available, scalable, and serverless solution structure has been produced, especially for storing large amounts of data in the medical field. Furthermore, a high security level is provided by the use of static Internet protocol (IP) addresses, user credentials, and synchronously encrypted file contents. One of the most important key features of the system is high performance and easy scalability. In this way, the system can work with fewer hardware elements and be more robust than others that use name node architecture. According to the test results, the performance of the designed system is better by 97% than a Not Only Structured Query Language (NoSQL) system, by 80% than a relational database management system (RDBMS), and by 74% than an operating system (OS).

  9. Power quality load management for large spacecraft electrical power systems

    Science.gov (United States)

    Lollar, Louis F.

    1988-01-01

    In December, 1986, a Center Director's Discretionary Fund (CDDF) proposal was granted to study power system control techniques in large space electrical power systems. Presented are the accomplishments in the area of power system control by power quality load management. In addition, information concerning the distortion problems in a 20 kHz ac power system is presented.

  10. Finding a Roadmap to achieve Large Neuromorphic Hardware Systems

    Directory of Open Access Journals (Sweden)

    Jennifer Hasler

    2013-09-01

    Full Text Available Neuromorphic systems are gaining increasing importance in an era where CMOS digital computing techniques are meeting hard physical limits. These silicon systems mimic extremely energy efficient neural computing structures, potentially both for solving engineering applications as well as understanding neural computation. Towards this end, the authors provide a glimpse at what the technology evolution roadmap looks like for these systems so that Neuromorphic engineers may gain the same benefit of anticipation and foresight that IC designers gained from Moore's law many years ago. Scaling of energy efficiency, performance, and size will be discussed as well as how the implementation and application space of Neuromorphic systems are expected to evolve over time.

  11. Virtual microscopy system at Chinese medical university: an assisted teaching platform for promoting active learning and problem-solving skills.

    Science.gov (United States)

    Tian, Yanping; Xiao, Wengang; Li, Chengren; Liu, Yunlai; Qin, Maolin; Wu, Yi; Xiao, Lan; Li, Hongli

    2014-04-09

    Chinese medical universities typically have a high number of students, a shortage of teachers and limited equipment, and as such histology courses have been taught using traditional lecture-based formats, with textbooks and conventional microscopy. This method, however, has reduced creativity and problem-solving skills training in the curriculum. The virtual microscope (VM) system has been shown to be an effective and efficient educational strategy. The present study aims to describe a VM system for undergraduates and to evaluate its effects on promoting active learning and problem-solving skills. Two hundred and twenty-nine second-year undergraduate students in the Third Military Medical University were divided into two groups. The VM group contained 115 students and was taught using the VM system. The light microscope (LM) group consisted of 114 students and was taught using the LM system. Post-teaching performances were assessed by multiple-choice questions, short essay questions, case analysis questions and the identification of tissue structures. Students' teaching preferences and satisfaction were assessed using questionnaires. Test scores in the VM group showed a significant improvement compared with those in the LM group (p < 0.05), with notable differences in the mean score rate of case analysis questions and identification of tissue structures (p < 0.05). Questionnaire responses affirmed the positive effects of the VM system in terms of additional learning resources, critical thinking, ease of communication and confidence. The VM system is an effective tool at Chinese medical universities to promote undergraduates' active learning and problem-solving skills as an assisted teaching platform.

  12. A finite element based substructuring procedure for design analysis of large smart structural systems

    International Nuclear Information System (INIS)

    Ashwin, U; Raja, S; Dwarakanathan, D

    2009-01-01

    A substructuring-based design analysis procedure is presented for large smart structural systems using the Craig–Bampton method. The smart structural system is distinctively characterized as an active substructure, modelled as a design problem, and a passive substructure, idealized as an analysis problem. Furthermore, a novel element is introduced by incorporating the electro–elastic coupling into the reduction scheme, so that the global structural control problem can be solved in a local domain. As an illustration, a smart composite box beam with surface-bonded actuators/sensors is considered, and results of the local-to-global control analysis are presented to show the potential use of the developed procedure. The present numerical scheme is useful for optimally designing the active substructures to study their locations and coupled structure–actuator interaction, and it provides a solution to the global design of large smart structural systems.
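
The static half of a Craig-Bampton reduction is Guyan condensation, which eliminates interior degrees of freedom through K_ii^{-1} K_ib and is exact for static response when no loads act on the interior (the full method adds fixed-interface modes). A small sketch on a spring chain, with illustrative names, not the paper's smart-structure model:

```python
import numpy as np

def guyan_condense(K, boundary):
    """Statically condense interior DOFs out of stiffness matrix K,
    keeping only the listed boundary DOFs."""
    n = K.shape[0]
    b = np.asarray(boundary)
    i = np.setdiff1d(np.arange(n), b)
    Kbb = K[np.ix_(b, b)]; Kbi = K[np.ix_(b, i)]
    Kii = K[np.ix_(i, i)]; Kib = K[np.ix_(i, b)]
    return Kbb - Kbi @ np.linalg.solve(Kii, Kib)

# 3-DOF spring chain; condense out the middle DOF. With zero interior load,
# the reduced model reproduces the exact boundary response.
k = 1.0
K = np.array([[2*k, -k, 0.0], [-k, 2*k, -k], [0.0, -k, 2*k]])
Kr = guyan_condense(K, boundary=[0, 2])
f = np.array([1.0, 0.0, 0.5])          # no load on the interior DOF
u_full = np.linalg.solve(K, f)
u_red = np.linalg.solve(Kr, f[[0, 2]])
assert np.allclose(u_red, u_full[[0, 2]])
```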

  13. Performance prediction of gas turbines by solving a system of non-linear equations

    Energy Technology Data Exchange (ETDEWEB)

    Kaikko, J

    1998-09-01

    This study presents a novel method for implementing the performance prediction of gas turbines from the component models. It is based on solving the non-linear set of equations that corresponds to the process equations, and the mass and energy balances for the engine. General models have been presented for determining the steady state operation of single components. Single and multiple shaft arrangements have been examined with consideration also being given to heat regeneration and intercooling. Emphasis has been placed upon axial gas turbines of an industrial scale. Applying the models requires no information on the structural dimensions of the gas turbines. In comparison with the commonly applied component matching procedures, this method incorporates several advantages. The application of the models for providing results is facilitated as less attention needs to be paid to calculation sequences and routines. Solving the set of equations is based on zeroing co-ordinate functions that are directly derived from the modelling equations. Therefore, controlling the accuracy of the results is easy. This method gives more freedom for the selection of the modelling parameters since, unlike for the matching procedures, exchanging these criteria does not itself affect the algorithms. Implicit relationships between the variables are of no significance, thus increasing the freedom for the modelling equations as well. The mathematical models developed in this thesis will provide facilities to optimise the operation of any major gas turbine configuration with respect to the desired process parameters. The computational methods used in this study may also be adapted to any other modelling problems arising in industry. (orig.) 36 refs.
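
Zeroing a set of coordinate functions without structural information can be sketched with a Newton iteration whose Jacobian is built by finite differences, so only residual evaluations are needed. The two-equation balance below is a hypothetical stand-in for the engine's process equations:

```python
import numpy as np

def newton_fd(F, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Newton's method with a finite-difference Jacobian: only the residual
    ('coordinate') functions are required, no analytic derivatives."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        J = np.empty((len(f), len(x)))
        for j in range(len(x)):
            e = np.zeros_like(x)
            e[j] = h
            J[:, j] = (F(x + e) - f) / h   # forward-difference column
        x = x - np.linalg.solve(J, f)
    return x

# Hypothetical two-component balance: x*y = 6 and x + y = 5
F = lambda v: np.array([v[0] * v[1] - 6.0, v[0] + v[1] - 5.0])
sol = newton_fd(F, [1.0, 4.0])
assert np.linalg.norm(F(sol)) < 1e-8
```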

  14. Design techniques for large scale linear measurement systems

    International Nuclear Information System (INIS)

    Candy, J.V.

    1979-03-01

    Techniques to design measurement schemes for systems modeled by large scale linear time invariant systems, i.e., physical systems modeled by a large number (> 5) of ordinary differential equations, are described. The techniques are based on transforming the physical system model to a coordinate system facilitating the design and then transforming back to the original coordinates. An example of a three-stage, four-species, extraction column used in the reprocessing of spent nuclear fuel elements is presented. The basic ideas are briefly discussed in the case of noisy measurements. An example using a plutonium nitrate storage vessel (reprocessing) with measurement uncertainty is also presented

  15. Reduction of Large Dynamical Systems by Minimization of Evolution Rate

    Science.gov (United States)

    Girimaji, Sharath S.

    1999-01-01

    Reduction of a large system of equations to a lower-dimensional system of similar dynamics is investigated. For dynamical systems with disparate timescales, a criterion for determining redundant dimensions and a general reduction method based on the minimization of evolution rate are proposed.

  16. Comparing direct and iterative equation solvers in a large structural analysis software system

    Science.gov (United States)

    Poole, E. L.

    1991-01-01

    Two direct Choleski equation solvers and two iterative preconditioned conjugate gradient (PCG) equation solvers used in a large structural analysis software system are described. The two direct solvers are implementations of the Choleski method for variable-band matrix storage and sparse matrix storage. The two iterative PCG solvers include the Jacobi conjugate gradient method and an incomplete Choleski conjugate gradient method. The performance of the direct and iterative solvers is compared by solving several representative structural analysis problems. Some key factors affecting the performance of the iterative solvers relative to the direct solvers are identified.
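
The Jacobi variant mentioned above is the simplest preconditioned conjugate gradient: the preconditioner is the diagonal of A, so applying it is an element-wise division. A sketch in NumPy (matrix and sizes are illustrative; a production solver would use sparse storage):

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=500):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner."""
    M_inv = 1.0 / np.diag(A)          # preconditioner "solve" is a division
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test matrix (illustrative)
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi_pcg(A, b)
assert np.allclose(A @ x, b)
```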

  17. Large deviations for noninteracting infinite-particle systems

    International Nuclear Information System (INIS)

    Donsker, M.D.; Varadhan, S.R.S.

    1987-01-01

    A large deviation property is established for noninteracting infinite particle systems. Previous large deviation results obtained by the authors involved a single I-function because the cases treated always involved a unique invariant measure for the process. In the context of this paper there is an infinite family of invariant measures and a corresponding infinite family of I-functions governing the large deviations

  18. Optimization of MIMO Systems Capacity Using Large Random Matrix Methods

    Directory of Open Access Journals (Sweden)

    Philippe Loubaton

    2012-11-01

    Full Text Available This paper provides a comprehensive introduction of large random matrix methods for input covariance matrix optimization of mutual information of MIMO systems. It is first recalled informally how large system approximations of mutual information can be derived. Then, the optimization of the approximations is discussed, and important methodological points that are not necessarily covered by the existing literature are addressed, including the strict concavity of the approximation, the structure of the argument of its maximum, the accuracy of the large system approach with regard to the number of antennas, or the justification of iterative water-filling optimization algorithms. While the existing papers have developed methods adapted to a specific model, this contribution tries to provide a unified view of the large system approximation approach.
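
The water-filling step referred to above can be illustrated in the single-user case: powers p_i = max(0, mu - 1/g_i), with the water level mu set by bisection so the power budget is met exactly. Gains and budget below are made up:

```python
import numpy as np

def water_filling(gains, total_power):
    """Classic water-filling: allocate p_i = max(0, mu - 1/g_i) with the
    water level mu found by bisection on the (monotone) total power."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = 0.0, inv.max() + total_power
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - inv, 0.0)

p = water_filling([1.0, 0.5, 0.1], total_power=2.0)
assert abs(p.sum() - 2.0) < 1e-8   # budget met
assert p[0] >= p[1] >= p[2]        # stronger channels get at least as much power
```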

  19. Improved control system power unit for large parachutes

    Science.gov (United States)

    Chandler, J. A.; Grubbs, T. M.

    1968-01-01

    Improved control system power unit drives the control surfaces of very large controllable parachutes. The design features subassemblies for determining control surface position and cable loading, and protection of the load sensor against the possibility of damage during manipulation.

  20. Studying the effects of operators' problem solving behaviour when using a diagnostic expert system developed for the nuclear industry

    International Nuclear Information System (INIS)

    Holmstroem, C.B.O.; Volden, F.S.; Endestad, T.

    1992-01-01

    This paper describes an experiment whose purpose is also to illustrate and discuss some of the methodological problems encountered when empirically studying problem solving. The experiment, the second in a series conducted at the OECD Halden Reactor Project, aimed to assess the effect of a rule-based diagnostic expert system on nuclear power plant operators' diagnostic behaviour. The rule-based expert system used in the experiment is called DISKET (Diagnosis System Using Knowledge Engineering Technique) and was originally developed by the Japan Atomic Energy Research Institute (JAERI). The experiment was performed in the Halden man-machine laboratory using a full-scope pressurized water reactor simulator. Existing data collection methods and experimental design principles offer possibilities but also have limitations; this is discussed and experiences are presented. Operator performance, in terms of quality of diagnosis, is improved by the use of DISKET. The use of the DISKET system also influences operators' problem-solving behaviour. The main difference between the two experimental conditions can be characterized as follows: while the DISKET users follow a direct and narrowed strategy during the diagnosis process, the non-DISKET users employ a much broader and less focused search when trying to diagnose a disturbance. (author)

  1. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    Science.gov (United States)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent-Based Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper employ computational algorithms and procedures implemented in Matlab to simulate agent-based models, using clusters that provide the high computational performance needed to run the programs in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  2. General beam position controlling method for 3D optical systems based on the method of solving ray matrix equations

    Science.gov (United States)

    Chen, Meixiong; Yuan, Jie; Long, Xingwu; Kang, Zhenglong; Wang, Zhiguo; Li, Yingying

    2013-12-01

    A general beam position controlling method for 3D optical systems based on the method of solving ray matrix equations has been proposed in this paper. As a typical 3D optical system, nonplanar ring resonator of Zero-Lock Laser Gyroscopes has been chosen as an example to show its application. The total mismatching error induced by Faraday-wedge in nonplanar ring resonator has been defined and eliminated quite accurately with the error less than 1 μm. Compared with the method proposed in Ref. [14], the precision of the beam position controlling has been improved by two orders of magnitude. The novel method can be used to implement automatic beam position controlling in 3D optical systems with servo circuit. All those results have been confirmed by related alignment experiments. The results in this paper are important for beam controlling, ray tracing, cavity design and alignment in 3D optical systems.
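
The ray-matrix formalism the method builds on, shown here in its planar paraxial 2x2 (ABCD) form rather than the paper's full 3D generalization: a ray is a (height, angle) vector, and each optical element is a matrix composed right-to-left along the path. Names are illustrative:

```python
import numpy as np

def free_space(d):
    """Propagation over distance d: height changes by d * angle."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Thin lens of focal length f: angle changes by -height / f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# A ray parallel to the axis through a lens of focal length f crosses the
# axis after propagating a distance f (the focal plane).
ray = np.array([1.0, 0.0])                 # height 1, angle 0
out = free_space(2.0) @ thin_lens(2.0) @ ray
assert abs(out[0]) < 1e-12                 # height ~ 0 at the focal plane
assert abs(out[1] + 0.5) < 1e-12           # angle -h/f = -0.5
```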

  3. Diomres (k,m): An efficient method based on Krylov subspaces to solve big, dispersed, unsymmetrical linear systems

    Energy Technology Data Exchange (ETDEWEB)

    de la Torre Vega, E. [Instituto de Investigaciones Electricas, Cuernavaca (Mexico); Cesar Suarez Arriaga, M. [Universidad Michoacana SNH, Michoacan (Mexico)

    1995-03-01

    In geothermal simulation processes, MULKOM uses integrated finite differences to solve the corresponding partial differential equations. This method requires efficiently solving large, sparse, unsymmetric linear systems at each time iteration. The order of the system is usually greater than one thousand, and its solution can represent around 80% of the total CPU time. If the time spent solving this class of linear systems is reduced, the duration of the numerical simulation decreases notably. When the matrix is large (N ≥ 500) and sparse, it is inefficient to handle all of the system's elements, because the matrix is completely determined by its non-zero elements, whose number is far smaller than N². Here, iterative methods have advantages over gaussian elimination methods, because the latter fill in matrices that have no special distribution of their non-zero elements, and because they make no use of available solution estimates. Iterative methods of the conjugate gradient family, based on Krylov subspaces, have the additional advantage that the convergence speed can be improved by means of preconditioning techniques. The DIOMRES(k,m) method guarantees the continuous descent of the residual norm without incurring division by zero. This technique converges in at most N iterations if the system's matrix is symmetric, does not require much memory to converge, and updates the approximation immediately by using incomplete orthogonalization and adequate restarting. A preconditioned version of DIOMRES was applied to problems involving unsymmetric systems with 1000 unknowns and fewer than five terms per equation. We found that this technique could notably reduce the time needed to find the solution without increasing memory requirements. The coupling of this method to geothermal versions of MULKOM is in progress.
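
DIOMRES itself adds incomplete orthogonalization and a guaranteed descent of the residual norm; as a baseline for the Krylov machinery it builds on, here is a minimal restarted GMRES(m) in NumPy (illustrative names; real codes add preconditioning and sparse storage):

```python
import numpy as np

def gmres_restarted(A, b, m=20, cycles=50, tol=1e-10):
    """Minimal restarted GMRES(m): build an m-step Arnoldi basis, solve the
    small Hessenberg least-squares problem, restart from the new iterate."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(cycles):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        Q = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        Q[:, 0] = r / beta
        k_used = m
        for k in range(m):
            w = A @ Q[:, k]
            for j in range(k + 1):              # modified Gram-Schmidt
                H[j, k] = Q[:, j] @ w
                w = w - H[j, k] * Q[:, j]
            H[k + 1, k] = np.linalg.norm(w)
            if H[k + 1, k] < 1e-12:             # happy breakdown
                k_used = k + 1
                break
            Q[:, k + 1] = w / H[k + 1, k]
        e1 = np.zeros(k_used + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k_used + 1, :k_used], e1, rcond=None)
        x = x + Q[:, :k_used] @ y
    return x

# Small unsymmetric test system (illustrative)
A = np.array([[2.0, 1.0, 0.0], [0.0, 3.0, 1.0], [1.0, 0.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = gmres_restarted(A, b)
assert np.allclose(A @ x, b, atol=1e-8)
```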

  4. Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore

    2008-01-01

    Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example for sharing cabs by grouping "closeby" cab requests and thus minimizing transportation cost...... and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous...
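
A hedged sketch of the core grouping idea (much simpler than the paper's scalable algorithms): greedily attach each request to the first group whose seed pickup point lies within a radius, measured here with Manhattan distance. Names and thresholds are illustrative:

```python
def group_trips(requests, radius):
    """Greedy trip grouping: assign each request (an (x, y) pickup point)
    to the first group whose seed is within `radius`, else open a new group."""
    groups = []  # list of (seed_point, members)
    for pt in requests:
        for seed, members in groups:
            if abs(pt[0] - seed[0]) + abs(pt[1] - seed[1]) <= radius:
                members.append(pt)
                break
        else:
            groups.append((pt, [pt]))
    return groups

reqs = [(0, 0), (1, 0), (10, 10), (0, 1), (11, 10)]
g = group_trips(reqs, radius=2)
assert len(g) == 2                      # two pickup clusters
assert len(g[0][1]) == 3 and len(g[1][1]) == 2
```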

  5. Using LEGO Kits to Teach Higher Level Problem Solving Skills in System Dynamics: A Case Study

    Science.gov (United States)

    Wu, Yi; de Vries, Charlotte; Dunsworth, Qi

    2018-01-01

    System Dynamics is a required course offered to junior Mechanical Engineering students at Penn State Erie, the Behrend College. It addresses the intercoupling dynamics of a wide range of dynamic systems: including mechanical, electrical, fluid, hydraulic, electromechanical, and biomedical systems. This course is challenging for students due to the…

  6. A Data Analysis Expert System For Large Established Distributed Databases

    Science.gov (United States)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-05-01

    The purpose of this work is to analyze the applicability of artificial intelligence techniques for developing a user-friendly, parallel interface to large isolated, incompatible NASA databases for the purpose of assisting the management decision process. To carry out this work, a survey was conducted to establish the data access requirements of several key NASA user groups. In addition, current NASA database access methods were evaluated. The results of this work are presented in the form of a design for a natural language database interface system, called the Deductively Augmented NASA Management Decision Support System (DANMDS). This design is feasible principally because of recently announced commercial hardware and software product developments which allow cross-vendor compatibility. The goal of the DANMDS system is commensurate with the central dilemma confronting most large companies and institutions in America, the retrieval of information from large, established, incompatible database systems. The DANMDS system implementation would represent a significant first step toward this problem's resolution.

  7. Tools for the Automation of Large Distributed Control Systems

    CERN Document Server

    Gaspar, Clara

    2005-01-01

    The new LHC experiments at CERN will have very large numbers of channels to operate. In order to be able to configure and monitor such large systems, a high degree of parallelism is necessary. The control system is built as a hierarchy of sub-systems distributed over several computers. A toolkit - SMI++, combining two approaches: finite state machines and rule-based programming, allows for the description of the various sub-systems as decentralized deciding entities, reacting in real-time to changes in the system, thus providing for the automation of standard procedures and for the automatic recovery from error conditions in a hierarchical fashion. In this paper we will describe the principles and features of SMI++ as well as its integration with an industrial SCADA tool for use by the LHC experiments, and we will try to show that such tools can provide a very convenient mechanism for the automation of large scale, high complexity applications.

  8. Tools for the automation of large control systems

    CERN Document Server

    Gaspar, Clara

    2005-01-01

    The new LHC experiments at CERN will have very large numbers of channels to operate. In order to be able to configure and monitor such large systems, a high degree of parallelism is necessary. The control system is built as a hierarchy of sub-systems distributed over several computers. A toolkit – SMI++, combining two approaches: finite state machines and rule-based programming, allows for the description of the various sub-systems as decentralized deciding entities, reacting in real-time to changes in the system, thus providing for the automation of standard procedures and for the automatic recovery from error conditions in a hierarchical fashion. In this paper we will describe the principles and features of SMI++ as well as its integration with an industrial SCADA tool for use by the LHC experiments, and we will try to show that such tools can provide a very convenient mechanism for the automation of large scale, high complexity applications.

  9. Dynamic state estimation techniques for large-scale electric power systems

    International Nuclear Information System (INIS)

    Rousseaux, P.; Pavella, M.

    1991-01-01

    This paper presents the use of dynamic type state estimators for energy management in electric power systems. Various dynamic type estimators have been developed, but have never been implemented. This is primarily because of dimensionality problems posed by the conjunction of an extended Kalman filter with a large scale power system. This paper precisely focuses on how to circumvent the high dimensionality, especially prohibitive in the filtering step, by using a decomposition-aggregation hierarchical scheme; to appropriately model the power system dynamics, the authors introduce new state variables in the prediction step and rely on a load forecasting method. The combination of these two techniques succeeds in solving the overall dynamic state estimation problem not only in a tractable and realistic way, but also in compliance with real-time computational requirements. Further improvements are also suggested, bound to the specifics of the high voltage electric transmission systems
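
    As a much-reduced illustration of the prediction and filtering steps discussed above, the following is a toy scalar Kalman filter. The model and noise parameters are invented for illustration, and the sketch omits the paper's decomposition-aggregation hierarchy and load forecasting entirely.

```python
import random

def kalman_step(x_est, P, z, A=1.0, Q=0.01, H=1.0, R=0.5):
    """One predict/update cycle of a scalar Kalman filter."""
    # Prediction step: propagate the state estimate and its variance.
    x_pred = A * x_est
    P_pred = A * P * A + Q
    # Filtering (update) step: correct the prediction with measurement z.
    K = P_pred * H / (H * P_pred * H + R)      # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

random.seed(0)
true_state = 5.0                               # constant state to be tracked
x_est, P = 0.0, 1.0                            # deliberately poor initial guess
for _ in range(200):
    z = true_state + random.gauss(0.0, 0.5)    # noisy measurement
    x_est, P = kalman_step(x_est, P, z)
```

    The extended Kalman filter used in the paper follows the same predict/update cycle, but linearizes a nonlinear measurement model at each step, and it is that update step whose cost becomes prohibitive for a full-size power network.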

  10. Social Problem Solving and Depressive Symptoms Over Time: A Randomized Clinical Trial of Cognitive Behavioral Analysis System of Psychotherapy, Brief Supportive Psychotherapy, and Pharmacotherapy

    Science.gov (United States)

    Klein, Daniel N.; Leon, Andrew C.; Li, Chunshan; D’Zurilla, Thomas J.; Black, Sarah R.; Vivian, Dina; Dowling, Frank; Arnow, Bruce A.; Manber, Rachel; Markowitz, John C.; Kocsis, James H.

    2011-01-01

    Objective Depression is associated with poor social problem-solving, and psychotherapies that focus on problem-solving skills are efficacious in treating depression. We examined the associations between treatment, social problem solving, and depression in a randomized clinical trial testing the efficacy of psychotherapy augmentation for chronically depressed patients who failed to fully respond to an initial trial of pharmacotherapy (Kocsis et al., 2009). Method Participants with chronic depression (n = 491) received Cognitive Behavioral Analysis System of Psychotherapy (CBASP), which emphasizes interpersonal problem-solving, plus medication; Brief Supportive Psychotherapy (BSP) plus medication; or medication alone for 12 weeks. Results CBASP plus pharmacotherapy was associated with significantly greater improvement in social problem solving than BSP plus pharmacotherapy, and a trend for greater improvement in problem solving than pharmacotherapy alone. In addition, change in social problem solving predicted subsequent change in depressive symptoms over time. However, the magnitude of the associations between changes in social problem solving and subsequent depressive symptoms did not differ across treatment conditions. Conclusions It does not appear that improved social problem solving is a mechanism that uniquely distinguishes CBASP from other treatment approaches. PMID:21500885

  11. DIFFUSION - WRS system module number 7539 for solving a set of multigroup diffusion equations in one dimension

    International Nuclear Information System (INIS)

    Grimstone, M.J.

    1978-06-01

    The WRS Modular Programming System has been developed as a means by which programmes may be more efficiently constructed, maintained and modified. In this system a module is a self-contained unit typically composed of one or more Fortran routines, and a programme is constructed from a number of such modules. This report describes one WRS module, the function of which is to solve a set of multigroup diffusion equations for a system represented in one-dimensional plane, cylindrical or spherical geometry. The information given in this manual is of use both to the programmer wishing to incorporate the module in a programme, and to the user of such a programme. (author)
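
    As a rough sketch of the kind of calculation such a module performs, the following reduces the problem to one energy group and plane geometry: a finite-difference discretization of the diffusion equation on a bare slab, solved by power iteration for the multiplication eigenvalue. All cross-section values are invented, and this is not the WRS DIFFUSION module's actual algorithm.

```python
def thomas(sub, main, sup, rhs):
    """Solve a tridiagonal linear system by the Thomas algorithm."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / main[0], rhs[0] / main[0]
    for i in range(1, n):
        m = main[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def slab_keff(n=200, L=100.0, D=1.0, siga=0.02, nusigf=0.022, iters=300):
    """One-group k-eigenvalue of a bare slab with zero-flux boundaries,
    discretized by finite differences and solved by power iteration."""
    h = L / (n + 1)
    sub = [-D / h ** 2] * n
    main = [2.0 * D / h ** 2 + siga] * n
    sup = [-D / h ** 2] * n
    phi, k = [1.0] * n, 1.0
    for _ in range(iters):
        src = [nusigf * p / k for p in phi]       # fission source / k
        phi_new = thomas(sub, main, sup, src)
        k *= sum(phi_new) / sum(phi)              # update eigenvalue estimate
        phi = phi_new
    return k

k_eff = slab_keff()
# One-group analytic check: k = nusigf / (siga + D * (pi / L)**2)
```

    A production module generalizes this in the ways the abstract lists: multiple coupled groups, cylindrical and spherical geometry options, and careful convergence control.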

  12. Experimental applications of an expert system to operator problem solving in process control

    International Nuclear Information System (INIS)

    Nelson, W.R.; Jenkins, J.P.

    1985-01-01

    The United States Nuclear Regulatory Commission (USNRC) has sponsored a program to assess the effectiveness of expert systems for nuclear reactor operators. The project has included two human factors experimental evaluations of the Response Tree expert system, a prototype expert system for helping nuclear reactor operators respond to emergency conditions. This paper discusses the Response Tree expert system, the experiments which have been performed to test its effectiveness, and the results of the experiments. Reference is made to the accident at TMI. 12 refs

  13. Large-Scale Systems Control Design via LMI Optimization

    Czech Academy of Sciences Publication Activity Database

    Rehák, Branislav

    2015-01-01

    Roč. 44, č. 3 (2015), s. 247-253 ISSN 1392-124X Institutional support: RVO:67985556 Keywords : Combinatorial linear matrix inequalities * large-scale system * decentralized control Subject RIV: BC - Control Systems Theory Impact factor: 0.633, year: 2015

  14. Large superconducting magnet systems for plasma and fusion applications

    International Nuclear Information System (INIS)

    Heinz, W.

    1976-05-01

    Work on superconducting magnet systems and state of the art of superconducting magnet technology are described. Conceptual design consideration and problems of large magnet systems (stability, magnetic forces, cooling modes, safety) are discussed. Recent results of experimental work at Karlsruhe are reported. An outline of American and European programs is given. (orig.) [de

  15. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena

  16. Disruptions in large value payment systems: an experimental approach

    NARCIS (Netherlands)

    Abbink, K.; Bosman, R.; Heijmans, R.; van Winden, F.

    2010-01-01

    This experimental study investigates the behaviour of banks in a large value payment system. More specifically,we look at 1) the reactions of banks to disruptions in the payment system, 2) the way in which the history of disruptions affects the behaviour of banks (path dependency) and 3) the effect

  17. Disruptions in large value payment systems: An experimental approach

    NARCIS (Netherlands)

    Abbink, K.; Bosman, R.; Heijmans, R.; van Winden, F.; Hellqvist, M.; Laine, T.

    2012-01-01

    This experimental study investigates the behaviour of banks in a large value payment system. More specifically, we look at 1) the reactions of banks to disruptions in the payment system, 2) the way in which the history of disruptions affects the behaviour of banks (path dependency) and 3) the effect

  18. Large amplitude forced vibration analysis of cross-beam system ...

    African Journals Online (AJOL)

    Large amplitude forced vibration behaviour of cross-beam system under harmonic excitation is studied, incorporating the effect of geometric non-linearity. The forced vibration analysis is carried out in an indirect way, in which the dynamic system is assumed to satisfy the force equilibrium condition at peak load value, thus ...

  19. Self-* and Adaptive Mechanisms for Large Scale Distributed Systems

    Science.gov (United States)

    Fragopoulou, P.; Mastroianni, C.; Montero, R.; Andrjezak, A.; Kondo, D.

    Large-scale distributed computing systems and infrastructure, such as Grids, P2P systems and desktop Grid platforms, are decentralized, pervasive, and composed of a large number of autonomous entities. The complexity of these systems is such that human administration is nearly impossible and centralized or hierarchical control is highly inefficient. These systems need to run on highly dynamic environments, where content, network topologies and workloads are continuously changing. Moreover, they are characterized by the high degree of volatility of their components and the need to provide efficient service management and to handle efficiently large amounts of data. This paper describes some of the areas for which adaptation emerges as a key feature, namely, the management of computational Grids, the self-management of desktop Grid platforms and the monitoring and healing of complex applications. It also elaborates on the use of bio-inspired algorithms to achieve self-management. Related future trends and challenges are described.

  20. A knowledge-based diagnosis system for welding machine problem solving

    International Nuclear Information System (INIS)

    Bonnieres, P. de; Boutes, J.L.; Calas, M.A.; Para, S.

    1986-06-01

    This paper presents a knowledge-based diagnosis system which can be a valuable aid in resolving malfunctions and failures encountered using the automatic hot-wire TIG weld cladding process. This knowledge-based system is currently under evaluation by welding operators at the Framatome heavy fabricating facility. Extension to other welding processes is being considered

  1. Software Reliability Issues Concerning Large and Safety Critical Software Systems

    Science.gov (United States)

    Kamel, Khaled; Brown, Barbara

    1996-01-01

    This research was undertaken to provide NASA with a survey of state-of-the-art techniques used in industry and academia to provide safe, reliable, and maintainable software to drive large systems. Such systems must match the complexity and strict safety requirements of NASA's shuttle system. In particular, the Launch Processing System (LPS) is being considered for replacement. The LPS is responsible for monitoring and commanding the shuttle during test, repair, and launch phases. NASA built this system in the 1970's using mostly hardware techniques to provide for increased reliability, but it did so often using custom-built equipment, which has not been able to keep up with current technologies. This report surveys the major techniques used in industry and academia to ensure reliability in large and critical computer systems.

  2. Accuracy of the photogrametric measuring system for large size elements

    Directory of Open Access Journals (Sweden)

    M. Grzelka

    2011-04-01

    Full Text Available The aim of this paper is to present methods of estimating, and guidelines for verifying, the accuracy of optical photogrammetric measuring systems used for measurement of large size elements. Measuring systems applied to workpieces of large size, which often exceed 10000 mm, require the use of appropriate standards. The standards provided by the manufacturer of photogrammetric systems are certified and are inspected annually. To make sure that these systems work properly, a special standard was developed, VDI/VDE 2634, "Optical 3D measuring systems. Imaging systems with point-by-point probing." According to the recommendations described in this standard, research on the accuracy of a photogrammetric measuring system was conducted using grade K gauge blocks dedicated to calibrating and testing the accuracy of classic CMMs. The paper presents the results of estimating the actual error of indication for size measurement MPEE for the photogrammetric coordinate measuring system TRITOP.

  3. Large scale gas chromatographic demonstration system for hydrogen isotope separation

    International Nuclear Information System (INIS)

    Cheh, C.H.

    1988-01-01

    A large scale demonstration system was designed for a throughput of 3 mol/day of an equimolar mixture of H, D, and T. The demonstration system was assembled and an experimental program carried out. This project was funded by Kernforschungszentrum Karlsruhe, Canadian Fusion Fuel Technology Projects and Ontario Hydro Research Division. Several major design innovations were successfully implemented in the demonstration system and are discussed in detail. Many experiments were carried out in the demonstration system to study its performance in separating hydrogen isotopes at high throughput. Various temperature programming schemes were tested, heart-cutting operation was evaluated, and very large (up to 138 NL/injection) samples were separated in the system. The results of the experiments showed that the specially designed column performed well as a chromatographic column and good separation could be achieved even when a 138 NL sample was injected

  4. DOE's nation-wide system for access control can solve problems for the federal government

    International Nuclear Information System (INIS)

    Callahan, S.; Tomes, D.; Davis, G.; Johnson, D.; Strait, S.

    1996-07-01

    The U.S. Department of Energy's (DOE's) ongoing efforts to improve its physical and personnel security systems while reducing its costs, provide a model for federal government visitor processing. Through the careful use of standardized badges, computer databases, and networks of automated access control systems, the DOE is increasing the security associated with travel throughout the DOE complex, and at the same time, eliminating paperwork, special badging, and visitor delays. The DOE is also improving badge accountability, personnel identification assurance, and access authorization timeliness and accuracy. Like the federal government, the DOE has dozens of geographically dispersed locations run by many different contractors operating a wide range of security systems. The DOE has overcome these obstacles by providing data format standards, a complex-wide virtual network for security, the adoption of a standard high security system, and an open-systems-compatible link for any automated access control system. If the location's level of security requires it, positive visitor identification is accomplished by personal identification number (PIN) and/or by biometrics. At sites with automated access control systems, this positive identification is integrated into the portals

  5. FEMSYN - a code system to solve multigroup diffusion theory equations using a variety of solution techniques. Part 4 : SYNTHD - The synthesis module

    International Nuclear Information System (INIS)

    Jagannathan, V.

    1985-01-01

    For solving the multigroup diffusion theory equations in 3-D problems in which the material properties are uniform over large segments of the axial direction, the synthesis method is known to give fairly accurate results at very low computational cost. In the code system FEMSYN, the single channel continuous flux synthesis option has been incorporated. One can generate the radial trial functions by either the finite difference method (FDM) or the finite element method (FEM). The axial mixing functions can also be found by either FDM or FEM. Use of FEM for both radial and axial directions is found to reduce the calculation time considerably. One can determine the eigenvalue, 3-D flux and power distributions with FEMSYN. In this report, a detailed description of the synthesis module SYNTHD is given. (author)

  6. Development of automated analytical systems for large throughput

    International Nuclear Information System (INIS)

    Ernst, P.C.; Hoffman, E.L.

    1982-01-01

    The need to be able to handle a large throughput of samples for neutron activation analysis has led to the development of automated counting and sample handling systems. These are coupled with available computer-assisted INAA techniques to perform a wide range of analytical services on a commercial basis. A fully automated delayed neutron counting system and a computer controlled pneumatic transfer for INAA use are described, as is a multi-detector gamma-spectroscopy system. (author)

  7. Proposal of a framework for solving human factors of artificial systems and its application to maintenance work

    International Nuclear Information System (INIS)

    Nagamatsu, Takashi; Otsuji, Tomoo; Yoshikawa, Hidekazu; Shiba, Shigenari

    2004-01-01

    A framework for solving human factors of artificial systems is proposed in this study, where a whole system of machines and the human organization involved in the operation and management of the machines is defined as an 'artificial system'. Five aspects of human factors in the artificial system are first discussed, and the types of artificial system with respect to the human factors are divided into three levels from the viewpoint of complexity. A framework that can treat artificial systems by a unified methodology has been proposed for treating both the complexity level and the different kinds of human factors. As a concrete example of this framework's application, a prototype system has been developed for advanced plant maintenance support by using an ES-HMD (Eye Sensing-Head Mounted Display). This is a remote communication system for cooperative maintenance work between an expert in a remote support center and a maintenance worker at a machine in the plant site, intended to allow a complicated task to be conducted without committing human error. It was confirmed by laboratory experiment that the expert could instruct the worker so that he or she performed the task successfully, by observing the worker's eye gazing point and by pointing out the right place of action on the transferred display of the worker's eyesight through the ES-HMD. (author)

  8. Methods of solving of the optimal stabilization problem for stationary smooth control systems. Part I

    Directory of Open Access Journals (Sweden)

    G. Kondrat'ev

    1999-10-01

    Full Text Available In this article some ideas of Hamiltonian mechanics and differential-algebraic geometry are used for the exact definition of the potential function (Bellman-Lyapunov function) in the optimal stabilization problem of smooth finite-dimensional systems.

  9. MODERNIZATION OF RUSSIAN HEALTH PROTECTION SYSTEM SOLVES THE PROBLEM OF RENDERING QUALITY MEDICAL AID TO POPULATION

    Directory of Open Access Journals (Sweden)

    V. A. Alexeev

    2011-01-01

    Full Text Available Over the last two decades, a permanent growth of the death rate and a decrease of the birth rate have taken place in Russia. To a great extent, this is owing to an inadequate health protection system. The practice of personal payments for medical services, in amounts unattainable for the overwhelming majority of the population, grows. The RF President's Administration sets an example: a cardiologist's advice in their clinics costs 40 times more than according to obligatory medical insurance rates. 84% of the country's citizens do not approve of this approach and consider it socially unjust. An important national project aimed at modernization of the health protection system started in 2011. Non-private medical establishments are to be equipped with up-to-date equipment, modern information systems and medical service standards based on the CRM system adopted in developed countries; salaries of health personnel are to be increased.

  10. Learning classifier systems with memory condition to solve non-Markov problems

    OpenAIRE

    Zang, Zhaoxiang; Li, Dehua; Wang, Junying

    2012-01-01

    In the family of Learning Classifier Systems, the classifier system XCS has been successfully used for many applications. However, the standard XCS has no memory mechanism and can only learn optimal policy in Markov environments, where the optimal action is determined solely by the state of current sensory input. In practice, most environments are partially observable environments on agent's sensation, which are also known as non-Markov environments. Within these environments, XCS either fail...

  11. A fuzzy logic approach toward solving the analytic enigma of health system financing.

    Science.gov (United States)

    Chernichovsky, Dov; Bolotin, Arkady; de Leeuw, David

    2003-09-01

    Improved health, equity, macroeconomic efficiency, efficient provision of care, and client satisfaction are the common goals of any health system. The relative significance of these goals varies, however, across nations, communities and with time. As for health care finance, the attainment of these goals under varying circumstances involves alternative policy options for each of the following elements: sources of finance, allocation of finance, payment to providers, and public-private mix. The intricate set of multiple goals, elements and policy options defies human reasoning, and, hence, hinders effective policymaking. Indeed, "health system finance" is not amenable to a clear set of structural relationships. Neither is there a universe that can be subject to statistical scrutiny: each health system is unique. "Fuzzy logic" models human reasoning by managing "expert knowledge" close to the way it is handled by human language. It is used here for guiding policy making by a systematic analysis of health system finance. Assuming equal welfare weights for alternative goals and mutually exclusive policy options under each health-financing element, the exploratory model we present here suggests that a German-type health system is best. Other solutions depend on the welfare weights for system goals and mixes of policy options.
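
    A minimal sketch of the kind of fuzzy reasoning described (triangular memberships, min/max rule combination, weighted-average defuzzification) might look as follows. The two rules, the input names, and all membership breakpoints are invented for illustration and are not the authors' model.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_score(equity, efficiency):
    """Toy Mamdani-style inference: two inputs in [0, 1], one output score.

    Rules (invented for illustration, not from the paper):
      R1: IF equity is high AND efficiency is high THEN score is high
      R2: IF equity is low  OR  efficiency is low  THEN score is low
    """
    eq_hi = tri(equity, 0.4, 1.0, 1.6)       # "high" membership on [0, 1]
    ef_hi = tri(efficiency, 0.4, 1.0, 1.6)
    eq_lo = tri(equity, -0.6, 0.0, 0.6)      # "low" membership on [0, 1]
    ef_lo = tri(efficiency, -0.6, 0.0, 0.6)
    w_hi = min(eq_hi, ef_hi)                 # fuzzy AND -> min
    w_lo = max(eq_lo, ef_lo)                 # fuzzy OR  -> max
    # Weighted-average defuzzification with output centroids 1.0 and 0.0.
    if w_hi + w_lo == 0.0:
        return 0.5
    return (w_hi * 1.0 + w_lo * 0.0) / (w_hi + w_lo)
```

    A full model in the paper's spirit would carry one such rule base per financing element (sources, allocation, provider payment, public-private mix) and weight the goals per country, but the min/max/defuzzify machinery is the same.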

  12. Adaptive Fuzzy Output-Constrained Fault-Tolerant Control of Nonlinear Stochastic Large-Scale Systems With Actuator Faults.

    Science.gov (United States)

    Li, Yongming; Ma, Zhiyao; Tong, Shaocheng

    2017-09-01

    The problem of adaptive fuzzy output-constrained tracking fault-tolerant control (FTC) is investigated for the large-scale stochastic nonlinear systems of pure-feedback form. The nonlinear systems considered in this paper possess the unstructured uncertainties, unknown interconnected terms and unknown nonaffine nonlinear faults. The fuzzy logic systems are employed to identify the unknown lumped nonlinear functions so that the problems of structured uncertainties can be solved. An adaptive fuzzy state observer is designed to solve the nonmeasurable state problem. By combining the barrier Lyapunov function theory, adaptive decentralized and stochastic control principles, a novel fuzzy adaptive output-constrained FTC approach is constructed. All the signals in the closed-loop system are proved to be bounded in probability and the system outputs are constrained in a given compact set. Finally, the applicability of the proposed controller is well carried out by a simulation example.

  13. Challenges in parameter identification of large structural dynamic systems

    International Nuclear Information System (INIS)

    Koh, C.G.

    2001-01-01

    In theory, it is possible to determine the parameters of a structural or mechanical system by subjecting it to some dynamic excitation and measuring the response. Considerable research has been carried out in this subject area, known as system identification, over the past two decades. Nevertheless, the challenges associated with numerical convergence are still formidable when the system is large in terms of the number of degrees of freedom and number of unknowns. While many methods work for small systems, convergence becomes difficult, if not impossible, for large systems. In this keynote lecture, both classical and non-classical system identification methods for dynamic testing and vibration-based inspection are discussed. For classical methods, the extended Kalman filter (EKF) approach is used. On this basis, a substructural identification method has been developed as a strategy to deal with large structural systems. This is achieved by reducing the problem size, thereby significantly improving the numerical convergence and efficiency. Two versions of this method are presented, each with its own merits. A numerical example of a frame structure with 20 unknown parameters is illustrated. For non-classical methods, the Genetic Algorithm (GA) is shown to be applicable with relative ease due to its 'forward analysis' nature. The computational time is, however, still enormous for large structural systems due to the combinatorial explosion problem. A model GA method has been developed to address this problem and tested with considerable success on a relatively large system of 50 degrees of freedom, accounting for input and output noise effects. An advantage of this GA-based identification method is that the objective function can be defined in terms of the measured response. Numerical studies show that the method is relatively robust, as it does not require a good initial guess and the
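
    The 'forward analysis' idea above — evaluate candidate parameters by simulating the response and comparing with measurements — can be sketched with a toy genetic algorithm. The single-mode response model, the operators, and all tuning values below are invented for illustration and are far simpler than the lecture's substructural or 50-DOF examples.

```python
import math
import random

random.seed(1)

# "Measured" response of a system with unknown parameters.  The true values
# below are what the GA must recover; the single-mode model is an assumption.
TRUE_A, TRUE_B = 2.0, 1.5
ts = [0.1 * i for i in range(50)]
measured = [TRUE_A * math.sin(TRUE_B * t) for t in ts]

def fitness(ind):
    """Negative sum-of-squared output error: higher is better."""
    a, b = ind
    return -sum((a * math.sin(b * t) - y) ** 2 for t, y in zip(ts, measured))

def identify(pop_size=60, generations=80):
    """Recover (a, b) by elitist selection, blend crossover, Gaussian mutation."""
    pop = [(random.uniform(0, 5), random.uniform(0, 3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]                    # selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            w = random.random()                          # blend crossover
            child = tuple(w * u + (1 - w) * v for u, v in zip(p1, p2))
            if random.random() < 0.3:                    # mutation
                child = tuple(g + random.gauss(0, 0.1) for g in child)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

a_hat, b_hat = identify()
```

    Note that no gradient or initial guess is required, which is exactly the robustness property the abstract claims for GA-based identification; the price is many forward simulations per generation.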

  14. An enhanced artificial bee colony algorithm (EABC) for solving dispatching of hydro-thermal system (DHTS) problem.

    Science.gov (United States)

    Yu, Yi; Wu, Yonggang; Hu, Binqi; Liu, Xinglong

    2018-01-01

    The dispatching of hydro-thermal system is a nonlinear programming problem with multiple constraints and high dimensions, and the solution techniques for the model have been a research hotspot. Based on the advantage that the artificial bee colony algorithm (ABC) can efficiently solve high-dimensional problems, an improved artificial bee colony algorithm is proposed in this paper to solve the DHTS problem. The improvements of the proposed algorithm include two aspects. On one hand, local search is guided efficiently by the information of the global optimal solution and its gradient in each generation. The global optimal solution improves the search efficiency of the algorithm but loses diversity, while the gradient can weaken the loss of diversity caused by the global optimal solution. On the other hand, inspired by the genetic algorithm, a nectar source which has not been updated within a limit of generations is transformed into a new one by using selection, crossover and mutation, which ensures individual diversity and makes full use of prior information to improve the global search ability of the algorithm. The two improvements of the ABC algorithm are shown to be effective via a classical numerical example, in which the genetic operator contributes significantly to the promotion of the ABC algorithm's performance. The results are also compared with those of other state-of-the-art algorithms; the enhanced ABC algorithm has general advantages in minimum cost, average cost and maximum cost, which shows its usability and effectiveness. The achievements in this paper provide a new method for solving DHTS problems, and also offer a novel reference for the improvement of mechanisms and the application of algorithms.
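
    The employed/onlooker/scout structure of the basic ABC algorithm (without the paper's gradient guidance or genetic operators) can be sketched as follows, minimizing a stand-in benchmark function rather than the hydro-thermal cost model; all parameter values are invented.

```python
import random

random.seed(0)

def sphere(x):
    """Benchmark objective (a stand-in for the hydro-thermal cost function)."""
    return sum(xi * xi for xi in x)

def abc_minimize(f, dim=4, n_food=20, limit=30, cycles=200, lo=-5.0, hi=5.0):
    foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    costs = [f(x) for x in foods]
    trials = [0] * n_food

    def try_neighbor(i):
        # Perturb one coordinate toward/away from a random partner food source.
        k = random.randrange(n_food)
        j = random.randrange(dim)
        cand = foods[i][:]
        cand[j] += random.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        cand[j] = min(hi, max(lo, cand[j]))
        if f(cand) < costs[i]:
            foods[i], costs[i], trials[i] = cand, f(cand), 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(n_food):                # employed bee phase
            try_neighbor(i)
        total = sum(1.0 / (1.0 + c) for c in costs)
        for _ in range(n_food):                # onlooker phase: fitness-proportional
            r, acc = random.random() * total, 0.0
            for i in range(n_food):
                acc += 1.0 / (1.0 + costs[i])
                if acc >= r:
                    try_neighbor(i)
                    break
        for i in range(n_food):                # scout phase: abandon stale sources
            if trials[i] > limit:
                foods[i] = [random.uniform(lo, hi) for _ in range(dim)]
                costs[i], trials[i] = f(foods[i]), 0

    best = min(range(n_food), key=lambda i: costs[i])
    return foods[best], costs[best]

best, best_cost = abc_minimize(sphere)
```

    The paper's enhancements plug into this skeleton: the neighbor move would be biased by the global best and its gradient, and the scout phase would replace stale sources via genetic operators instead of random restarts.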

  15. An enhanced artificial bee colony algorithm (EABC) for solving dispatching of hydro-thermal system (DHTS) problem.

    Directory of Open Access Journals (Sweden)

    Yi Yu

    Full Text Available The dispatching of hydro-thermal system is a nonlinear programming problem with multiple constraints and high dimensions, and the solution techniques for the model have been a research hotspot. Based on the advantage that the artificial bee colony algorithm (ABC) can efficiently solve high-dimensional problems, an improved artificial bee colony algorithm is proposed in this paper to solve the DHTS problem. The improvements of the proposed algorithm include two aspects. On one hand, local search is guided efficiently by the information of the global optimal solution and its gradient in each generation. The global optimal solution improves the search efficiency of the algorithm but loses diversity, while the gradient can weaken the loss of diversity caused by the global optimal solution. On the other hand, inspired by the genetic algorithm, a nectar source which has not been updated within a limit of generations is transformed into a new one by using selection, crossover and mutation, which ensures individual diversity and makes full use of prior information to improve the global search ability of the algorithm. The two improvements of the ABC algorithm are shown to be effective via a classical numerical example, in which the genetic operator contributes significantly to the promotion of the ABC algorithm's performance. The results are also compared with those of other state-of-the-art algorithms; the enhanced ABC algorithm has general advantages in minimum cost, average cost and maximum cost, which shows its usability and effectiveness. The achievements in this paper provide a new method for solving DHTS problems, and also offer a novel reference for the improvement of mechanisms and the application of algorithms.

  16. hi_class: Horndeski in the cosmic linear anisotropy solving system

    Czech Academy of Sciences Publication Activity Database

    Zumalacarregui, M.; Bellini, E.; Sawicki, Ignacy; Lesgourgues, J.; Ferreira, P.G.

    2017-01-01

    Roč. 2017, č. 8 (2017), s. 1-29, č. článku 019. ISSN 1475-7516 R&D Projects: GA MŠk EF15_003/0000437 Grant - others:OP VVV - CoGraDS(XE) CZ.02.1.01/0.0/0.0/15_003/0000437 Institutional support: RVO:68378271 Keywords : modified gravity * gravitational waves * cosmology * large scale structure Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics OBOR OECD: Astronomy (including astrophysics,space science) Impact factor: 4.734, year: 2016

  17. Concept of large scale PV-WT-PSH energy sources coupled with the national power system

    Directory of Open Access Journals (Sweden)

    Jurasz Jakub

    2017-01-01

    Full Text Available Intermittent/non-dispatchable energy sources are characterized by significant variation of their energy yield over time. In the majority of cases their role in energy systems is marginalized. However, even in Poland, which relies heavily on its hard and brown coal fired power plants, wind generation is starting to play a significant role in terms of installed capacity. This paper briefly introduces a concept of wind (WT) and solar (PV) powered pumped storage hydroelectricity (PSH), which appears to be a viable option for managing the variable nature of PV and WT generation. Additionally, we summarize the results of our research to date on the integration of variable renewable energy sources (VRES) into energy systems and present conclusions on the prospects of large-scale PV-WT-PSH operating as part of the Polish energy system.

  18. A Trigonometrically Fitted Block Method for Solving Oscillatory Second-Order Initial Value Problems and Hamiltonian Systems

    Directory of Open Access Journals (Sweden)

    F. F. Ngwane

    2017-01-01

    Full Text Available In this paper, we present a block hybrid trigonometrically fitted Runge-Kutta-Nyström method (BHTRKNM), whose coefficients are functions of the frequency and the step-size, for directly solving general second-order initial value problems (IVPs), including Hamiltonian systems such as the energy conserving equations and systems arising from the semidiscretization of partial differential equations (PDEs). Four discrete hybrid formulas used to formulate the BHTRKNM are provided by a continuous one-step hybrid trigonometrically fitted method with an off-grid point. We implement BHTRKNM in a block-by-block fashion; in this way, the method does not suffer from the disadvantages of requiring starting values and predictors which are inherent in predictor-corrector methods. The stability property of the BHTRKNM is discussed and the performance of the method is demonstrated on some numerical examples to show accuracy and efficiency advantages.

  19. Performance Health Monitoring of Large-Scale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rajamony, Ram [IBM Research, Austin, TX (United States)

    2014-11-20

    This report details the progress made on the ASCR funded project Performance Health Monitoring (PHM) for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main components. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control System then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.

  20. Large Scale GW Calculations on the Cori System

    Science.gov (United States)

    Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven

    The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.

  1. Accumulation Systems of Electric Energy Solved by Multicriteria Analysis Methods IPA and Topsis

    Directory of Open Access Journals (Sweden)

    Zdenek Hradilek

    2008-01-01

    Full Text Available This work deals with the utilization of the multicriteria analysis methods IPA and TOPSIS to assess three storage systems (Fuel Cells, Lead Acid Batteries and Pumped Storage Hydro Plants). The procedures of the IPA and TOPSIS methods are described, as is the calculation for the problem at hand. The storage systems are assessed in terms of four criteria (Start-up Time, Efficiency of Accumulation, Lifetime and Specific Costs per kW of Power Output). The weights of the criteria, suggested by experts and statistically calculated, are also discussed.

  2. A Decentralized Multivariable Robust Adaptive Voltage and Speed Regulator for Large-Scale Power Systems

    Science.gov (United States)

    Okou, Francis A.; Akhrif, Ouassima; Dessaint, Louis A.; Bouchard, Derrick

    2013-05-01

    This paper introduces a decentralized multivariable robust adaptive voltage and frequency regulator to ensure the stability of large-scale interconnected generators. Interconnection parameters (i.e. load, line and transformer parameters) are assumed to be unknown. The proposed design approach requires the reformulation of conventional power system models into a multivariable model with generator terminal voltages as state variables, and excitation and turbine valve inputs as control signals. This model, while suitable for the application of modern control methods, introduces problems with regard to current design techniques for large-scale systems: interconnection terms, which are treated as perturbations, do not meet the common matching condition assumption. A new adaptive method for a certain class of large-scale systems is therefore introduced that does not require the matching condition. The proposed controller consists of nonlinear inputs that cancel some nonlinearities of the model. Auxiliary controls with linear and nonlinear components are used to stabilize the system. They compensate for unknown parameters of the model by updating both the nonlinear component gains and excitation parameters. The adaptation algorithms involve the sigma-modification approach for auxiliary control gains, and the projection approach for excitation parameters to prevent estimation drift. The computation of the matrix gain of the controller's linear component requires the solution of an algebraic Riccati equation and helps to solve the perturbation-mismatching problem. A realistic power system is used to assess the proposed controller's performance. The results show that both stability and transient performance are considerably improved following a severe contingency.

  3. A Discrete-Time Recurrent Neural Network for Solving Rank-Deficient Matrix Equations With an Application to Output Regulation of Linear Systems.

    Science.gov (United States)

    Liu, Tao; Huang, Jie

    2017-04-17

    This paper presents a discrete-time recurrent neural network approach to solving systems of linear equations with two features. First, the system of linear equations may not have a unique solution. Second, the system matrix is not known precisely, but a sequence of matrices that converges to the unknown system matrix exponentially is known. The problem is motivated from solving the output regulation problem for linear systems. Thus, an application of our main result leads to an online solution to the output regulation problem for linear systems.
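
    The paper's specific network is not reproduced here, but its setting can be illustrated by a classical gradient-type iteration in the same spirit: x_{k+1} = x_k - alpha * A_k^T (A_k x_k - b), where A_k converges exponentially to a rank-deficient matrix A. The step size, the perturbation schedule, and the example system below are illustrative assumptions, not the authors' construction.

```python
import numpy as np

def gradient_network_solve(A, b, alpha=0.01, steps=5000):
    """Discrete-time iteration x_{k+1} = x_k - alpha * A_k^T (A_k x_k - b),
    where A_k -> A exponentially, mimicking a setting in which the true
    system matrix is only known as the limit of a matrix sequence."""
    x = np.zeros(A.shape[1])
    for k in range(steps):
        Ak = A + 0.1 * np.exp(-0.01 * k) * np.ones_like(A)  # A_k converges to A
        x -= alpha * Ak.T @ (Ak @ x - b)
    return x

# rank-deficient system (row 2 = 2 * row 1) with a consistent right-hand side
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])
b = A @ np.array([1.0, 1.0, 1.0])
x = gradient_network_solve(A, b)  # a least-squares solution; not unique here
```

    Because the system matrix is singular, the iteration converges to one of infinitely many solutions; the residual, however, is driven to zero for any consistent right-hand side.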

  4. Automatic Flight Control System Design of Level Change Mode for a Large Aircraft

    Directory of Open Access Journals (Sweden)

    Huajun Gong

    2013-02-01

    Full Text Available The level change mode is an essential part of large civil aircraft automatic flight control systems. In cruise, with the decrease of the aircraft's weight caused by fuel consumption and the influence of bad weather, such as thunderstorms, the level change mode is required to adjust the cruise altitude. This work establishes a nonlinear model of a large aircraft and takes a level change from 9500 m to 10100 m as an example to design control laws for the level change mode in cruise. The classical engineering method is used to design longitudinal and lateral control laws synthetically. Flight qualities are considered in the design process. Simulation results indicate that the control laws meet the design requirements and have good anti-gust performance.

  5. A neural network method for solving a system of linear variational inequalities

    International Nuclear Information System (INIS)

    Lan Hengyou; Cui Yishun

    2009-01-01

    In this paper, we transmute the solution for a new system of linear variational inequalities to an equilibrium point of neural networks, and by using analytic technique, some sufficient conditions are presented. Further, the estimation of the exponential convergence rates of the neural networks is investigated. The new and useful results obtained in this paper generalize and improve the corresponding results of recent works.

  6. Systemic-institutional approach to solving the problems of prejudgement in the criminal court procedure

    Directory of Open Access Journals (Sweden)

    Bulat Raisovich Burganov

    2015-09-01

    Full Text Available Objective: to consider the historical and legal aspects of the use of prejudgement in the criminal procedure, to identify its characteristic features as a systemic object, and to disclose the key system elements of the mechanism of legal regulation of prejudgement in the criminal procedure. Methods: the basis of the research was the historical-legal method and a systemic-institutional approach to the analysis of regulatory processes, as well as special and private-law research methods: elements of the structural-functional approach together with formal-logical analysis. Results: the evolution of the formation and development of the prejudgement institution is shown, and the necessity of a systemic-holistic analysis of the legal regulation of prejudgement in the criminal procedure is grounded. According to the author, it is a systematic approach that will allow the creation of an effectively functioning prejudgement institution. The article investigates the content of the prejudgement institution in criminal-procedural law. Scientific novelty: for the first time, the article discusses the prejudgement institution as a system of interrelated elements, and in this context characterizes the substantial side of this system. Practical significance: the research results and conclusions can be used in research, legislative and applied activity for the effective application of the principles of prejudgement in the criminal procedure.

  7. Session IV. Problem Solving. Vehicle lighting system. Four steps in glare reduction.

    NARCIS (Netherlands)

    Schreuder, D.A.

    1972-01-01

    A number of proposals are described that may help to reduce glare. The systems are either only a partial improvement, or they are expensive, or their introduction raises severe problems. One solution, already widely in use, is application of overhead lighting. In order to really improve the

  8. A Portfolio for Optimal Collaboration of Human and Cyber Physical Production Systems in Problem-Solving

    Science.gov (United States)

    Ansari, Fazel; Seidenberg, Ulrich

    2016-01-01

    This paper discusses the complementarity of human and cyber physical production systems (CPPS). The discourse of complementarity is elaborated by defining five criteria for comparing the characteristics of human and CPPS. Finally, a management portfolio matrix is proposed for examining the feasibility of optimal collaboration between them. The…

  9. Young Stars in Orion May Solve Mystery of Our Solar System

    Science.gov (United States)

    2001-09-01

    Scientists may have to give the Sun a little more credit. Exotic isotopes present in the early Solar System--which scientists have long-assumed were sprinkled there by a powerful, nearby star explosion--may have instead been forged locally by our Sun during the colossal solar-flare tantrums of its baby years. The isotopes--special forms of atomic nuclei, such as aluminum-26, calcium-41, and beryllium-10--can form in the X-ray solar flares of young stars in the Orion Nebula, which behave just like our Sun would have at such an early age. The finding, based on observations by the Chandra X-ray Observatory, has broad implications for the formation of our own Solar System. Eric Feigelson, professor of astronomy and astrophysics at Penn State, led a team of scientists on this Chandra observation and presents these results in Washington, D.C., today at a conference entitled "Two Years of Science with Chandra". "The Chandra study of Orion gives us the first chance to study the flaring properties of stars resembling the Sun when our solar system was forming," said Feigelson. "We found a much higher rate of flares than expected, sufficient to explain the production of many unusual isotopes locked away in ancient meteorites. If the young stars in Orion can do it, then our Sun should have been able to do it too." Scientists who study how our Solar System formed from a collapsed cloud of dust and gas have been hard pressed to explain the presence of these extremely unusual chemical isotopes. The isotopes are short-lived and had to have been formed no earlier than the creation of the Solar System, some five billion years ago. Yet these elements cannot be produced by a star as massive as our Sun under normal circumstances. (Other elements, such as silver and gold, were created long before the creation of the solar system.) The perplexing presence of these isotopic anomalies, found in ancient meteoroids orbiting the Earth, led to the theory that a supernova explosion occurred

  10. Biology-inspired microphysiological system approaches to solve the prediction dilemma of substance testing

    NARCIS (Netherlands)

    Marx, Uwe; Andersson, Tommy B; Bahinski, Anthony; Beilmann, Mario; Beken, Sonja; Cassee, Flemming R; Cirit, Murat; Daneshian, Mardas; Fitzpatrick, Susan; Frey, Olivier; Gaertner, Claudia; Giese, Christoph; Griffith, Linda; Hartung, Thomas; Heringa, Minne B; Hoeng, Julia; de Jong, Wim H; Kojima, Hajime; Kuehnl, Jochen; Leist, Marcel; Luch, Andreas; Maschmeyer, Ilka; Sakharov, Dmitry; Sips, Adrienne J A M; Steger-Hartmann, Thomas; Tagle, Danilo A; Tonevitsky, Alexander; Tralau, Tewes; Tsyb, Sergej; van de Stolpe, Anja; Vandebriel, Rob; Vulto, Paul; Wang, Jufeng; Wiest, Joachim; Rodenburg, Marleen; Roth, Adrian

    2016-01-01

    The recent advent of microphysiological systems - microfluidic biomimetic devices that aspire to emulate the biology of human tissues, organs and circulation in vitro - is envisaged to enable a global paradigm shift in drug development. An extraordinary US governmental initiative and various

  11. Solving Ratio-Dependent Predatorprey System with Constant Effort Harvesting Using Variational Iteration Method

    DEFF Research Database (Denmark)

    Ghotbi, Abdoul R; Barari, Amin

    2009-01-01

    Due to wide range of interest in use of bio-economic models to gain insight in to the scientific management of renewable resources like fisheries and forestry, variational iteration method (VIM) is employed to approximate the solution of the ratio-dependent predator-prey system with constant effort...

  12. Recovering from trust breakdowns in large system implementations

    DEFF Research Database (Denmark)

    Schlichter, Bjarne Rerup; Andersen, Povl Erik Rostgård

    2011-01-01

    On the basis of experiences from the Faroese large-scale implementation of integrated healthcare information systems and insights into dynamic aspects of trust, we offer the following lessons learned for the successful management and recovery of trust (breakdowns) in large system implementations: restore relations by turning towards face-to-face events and procedures, assure a well-functioning and available support organization, demonstrate trust in actors to enhance their own self-confidence, and celebrate successes, even the smallest or ones injected by yourself. The propositions are based on a 6

  13. Biology-inspired Microphysiological System Approaches to Solve the Prediction Dilemma of Substance Testing

    Science.gov (United States)

    Marx, Uwe; Andersson, Tommy B.; Bahinski, Anthony; Beilmann, Mario; Beken, Sonja; Cassee, Flemming R.; Cirit, Murat; Daneshian, Mardas; Fitzpatrick, Susan; Frey, Olivier; Gaertner, Claudia; Giese, Christoph; Griffith, Linda; Hartung, Thomas; Heringa, Minne B.; Hoeng, Julia; de Jong, Wim H.; Kojima, Hajime; Kuehnl, Jochen; Luch, Andreas; Maschmeyer, Ilka; Sakharov, Dmitry; Sips, Adrienne J. A. M.; Steger-Hartmann, Thomas; Tagle, Danilo A.; Tonevitsky, Alexander; Tralau, Tewes; Tsyb, Sergej; van de Stolpe, Anja; Vandebriel, Rob; Vulto, Paul; Wang, Jufeng; Wiest, Joachim; Rodenburg, Marleen; Roth, Adrian

    2017-01-01

    Summary The recent advent of microphysiological systems – microfluidic biomimetic devices that aspire to emulate the biology of human tissues, organs and circulation in vitro – is envisaged to enable a global paradigm shift in drug development. An extraordinary US governmental initiative and various dedicated research programs in Europe and Asia have led recently to the first cutting-edge achievements of human single-organ and multi-organ engineering based on microphysiological systems. The expectation is that test systems established on this basis would model various disease stages, and predict toxicity, immunogenicity, ADME profiles and treatment efficacy prior to clinical testing. Consequently, this technology could significantly affect the way drug substances are developed in the future. Furthermore, microphysiological system-based assays may revolutionize our current global programs of prioritization of hazard characterization for any new substances to be used, for example, in agriculture, food, ecosystems or cosmetics, thus, replacing laboratory animal models used currently. Thirty-five experts from academia, industry and regulatory bodies present here the results of an intensive workshop (held in June 2015, Berlin, Germany). They review the status quo of microphysiological systems available today against industry needs, and assess the broad variety of approaches with fit-for-purpose potential in the drug development cycle. Feasible technical solutions to reach the next levels of human biology in vitro are proposed. Furthermore, key organ-on-a-chip case studies, as well as various national and international programs are highlighted. Finally, a roadmap into the future is outlined, to allow for more predictive and regulatory-accepted substance testing on a global scale. PMID:27180100

  14. Solving the Traveling Salesman Problem Based on the Genetic Reactive Bone Route Algorithm with Ant Colony System

    Directory of Open Access Journals (Sweden)

    Majid Yousefikhoshbakht

    2016-07-01

    Full Text Available The TSP is considered one of the most well-known combinatorial optimization tasks, and researchers have paid much attention to it for many years. In this problem, a salesman starts from an arbitrary place called the depot, visits all of the nodes, and finally returns to the depot. The objective is to minimize the total distance traveled by the salesman. Because this problem is non-deterministic polynomial-time hard (NP-hard) in nature, a hybrid metaheuristic algorithm called REACSGA is used for solving the TSP. REACSGA applies a reactive bone route algorithm that uses the ant colony system (ACS) for generating initial diversified solutions and the genetic algorithm (GA) as an improvement procedure. Since the performance of metaheuristic algorithms is significantly influenced by their parameters, the Taguchi method is used to set the parameters of the proposed algorithm. The proposed algorithm is tested on several standard instances from the literature involving 24 to 318 nodes. The computational results show that the proposed algorithm is competitive with other metaheuristic algorithms for solving the TSP in terms of both solution quality and computational time, and that REACSGA is significantly efficient, coming close to the best known solutions for most of the instances and matching the best known solution for thirteen of them.
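
    REACSGA itself (ACS construction plus GA improvement, tuned by the Taguchi method) is beyond a short sketch, but the underlying construct-then-improve pattern can be illustrated with a nearest-neighbour construction followed by 2-opt local improvement. Both components here are standard textbook substitutes, not the paper's operators.

```python
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbor(pts, start=0):
    # construction phase (stand-in for the paper's ACS ants)
    unvisited = set(range(len(pts))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour, pts):
    # improvement phase (stand-in for the paper's GA): reverse a segment
    # whenever doing so shortens the tour, until no improvement remains
    n, improved = len(tour), True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                        < math.dist(pts[a], pts[b]) + math.dist(pts[c], pts[d]) - 1e-12):
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(30)]
initial = nearest_neighbor(pts)
improved = two_opt(initial[:], pts)
```

    The 2-opt pass can only shorten the tour, so the improved tour is never worse than the constructed one; metaheuristics such as REACSGA replace both phases with stochastic, diversity-preserving operators.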

  15. TENSOLVE: A software package for solving systems of nonlinear equations and nonlinear least squares problems using tensor methods

    Energy Technology Data Exchange (ETDEWEB)

    Bouaricha, A. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.; Schnabel, R.B. [Colorado Univ., Boulder, CO (United States). Dept. of Computer Science

    1996-12-31

    This paper describes a modular software package for solving systems of nonlinear equations and nonlinear least squares problems, using a new class of methods called tensor methods. It is intended for small to medium-sized problems, say with up to 100 equations and unknowns, in cases where it is reasonable to calculate the Jacobian matrix or approximate it by finite differences at each iteration. The software allows the user to select between a tensor method and a standard method based upon a linear model. The tensor method models F(x) by a quadratic model, where the second-order term is chosen so that the model is hardly more expensive to form, store, or solve than the standard linear model. Moreover, the software provides two different global strategies, a line search and a two-dimensional trust region approach. Test results indicate that, in general, tensor methods are significantly more efficient and robust than standard methods on small and medium-sized problems, in both iterations and function evaluations.
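
    The "standard method based upon a linear model" that the package compares against is classical Newton iteration with a finite-difference Jacobian, which can be sketched as follows; the example system, step size and tolerances are our own, not taken from the paper.

```python
import numpy as np

def fd_jacobian(F, x, h=1e-7):
    """Approximate the Jacobian of F at x column-by-column with forward differences."""
    n = len(x)
    J = np.zeros((n, n))
    Fx = F(x)
    for j in range(n):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (F(xp) - Fx) / h
    return J

def newton_fd(F, x0, tol=1e-10, maxit=50):
    """Newton's method on the linear model F(x) + J(x) dx = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x - np.linalg.solve(fd_jacobian(F, x), Fx)
    return x

# small illustrative system: x0^2 + x1^2 - 2 = 0, x0 - x1 = 0, with root (1, 1)
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 2.0, x[0] - x[1]])
root = newton_fd(F, [2.0, 0.5])
```

    The tensor methods of the paper augment this linear model with a cheap low-rank second-order term; the globalization strategies (line search, trust region) wrap around the same basic step.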

  16. Study of grounding system of large tokamak device JT-60

    International Nuclear Information System (INIS)

    Arakawa, Kiyotsugu; Shimada, Ryuichi; Kishimoto, Hiroshi; Yabuno, Kohei; Ishigaki, Yukio.

    1982-01-01

    In the critical plasma testing facility JT-60 constructed by the Japan Atomic Energy Research Institute, high voltage and large current are required in an instant. Accordingly, for the protection of personnel and equipment, and for realizing the stable operation of the complex, precise control and measurement system, a large-scale grounding system is required. In the case of the JT-60 experimental facility, equipment with different functions in separate buildings is interconnected; it is therefore important to avoid high potential differences between buildings. In the grounding system for the JT-60, a reticulate grounding electrode is laid for each building, and these electrodes are connected with a low-impedance metallic duct called the grounding trunk line. The power supply cables for the various magnetic field coils, the control lines and the measurement lines are laid in the duct. Grasping the effect of the grounding trunk line quantitatively by analysis is a major problem. The authors analyzed the phenomenon in which large current flows into the grounding system due to a lightning strike or a ground fault. The fundamental construction of the grounding system for the JT-60, the conditions for the analysis and the results of the simulation are reported. (Kako, I.)

  17. New Quasi-Newton Method for Solving Systems of Nonlinear Equations

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Vlček, Jan

    2017-01-01

    Roč. 62, č. 2 (2017), s. 121-134 ISSN 0862-7940 R&D Projects: GA ČR GA13-06684S Institutional support: RVO:67985807 Keywords : nonlinear equations * systems of equations * trust-region methods * quasi-Newton methods * adjoint Broyden methods * numerical algorithms * numerical experiments Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 0.618, year: 2016 http://hdl.handle.net/10338.dmlcz/146699

  18. Monotone methods for solving a boundary value problem of second order discrete system

    Directory of Open Access Journals (Sweden)

    Wang Yuan-Ming

    1999-01-01

    Full Text Available A new concept of a pair of upper and lower solutions is introduced for a boundary value problem of a second-order discrete system. A comparison result is given. An existence theorem for a solution is established in terms of upper and lower solutions. A monotone iterative scheme is proposed, and the monotone convergence rate of the iteration is compared and analyzed. Numerical results are given.
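
    As a concrete instance of the upper/lower-solution machinery, consider the discrete Bratu-type problem -u'' = lambda * exp(u) with zero boundary values: starting the Picard iteration from the lower solution u = 0 produces a monotonically increasing sequence that converges to the minimal solution. The specific equation and parameter values are illustrative, not those of the paper.

```python
import numpy as np

def monotone_bvp(lam=1.0, N=50, iters=60):
    """Discrete -u'' = lam * exp(u), u(0) = u(1) = 0, on a uniform grid.
    Picard iteration u_{m+1} = A^{-1} (h^2 * lam * exp(u_m)) starting from
    the lower solution u = 0 increases monotonically (lam below the
    critical Bratu value) toward the minimal solution."""
    h = 1.0 / N
    n = N - 1  # interior points
    # standard second-difference matrix (an M-matrix, so A^{-1} >= 0,
    # which is what makes the iteration order-preserving)
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    u = np.zeros(n)  # lower solution
    history = [u.copy()]
    for _ in range(iters):
        u = np.linalg.solve(A, h * h * lam * np.exp(u))
        history.append(u.copy())
    return history

hist = monotone_bvp()
```

    Each iterate dominates the previous one because exp is increasing and A^{-1} is entrywise nonnegative; starting instead from an upper solution would give a monotonically decreasing sequence, sandwiching the solution.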

  19. Topological information processing by artificial intelligence systems for solving logic tasks

    Directory of Open Access Journals (Sweden)

    Demyokhin V. V.

    2008-04-01

    Full Text Available A new method of realizing parallel logic gates is described. The implementation of parallel logic for binary patterns is considered on the basis of topological information processing (TIP), which is also used for recognizing visual images in single-layer artificial intelligence systems. Estimates of the main parameters of TIP devices indicate that their performance can reach 10^16 operations/sec, with far fewer structural elements than in known opto-logic devices.

  20. Solving Boundary Value Problem for a Nonlinear Stationary Controllable System with Synthesizing Control

    Directory of Open Access Journals (Sweden)

    Alexander N. Kvitko

    2017-01-01

    Full Text Available An algorithm for constructing a control function that transfers a wide class of stationary nonlinear systems of ordinary differential equations from an initial state to a final state under certain control restrictions is proposed. The algorithm is designed to be convenient for numerical implementation. A constructive criterion of the desired transfer possibility is presented. The problem of an interorbital flight is considered as a test example and it is simulated numerically with the presented method.

  1. A comparison of iterative methods to solve complex valued linear algebraic systems

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Neytcheva, M.; Ahmad, B.

    2013-01-01

    Roč. 66, č. 4 (2013), s. 811-841 ISSN 1017-1398 R&D Projects: GA MŠk ED1.1.00/02.0070 Institutional support: RVO:68145535 Keywords : linear systems * complex symmetric * real valued form * preconditioning Subject RIV: BA - General Mathematics Impact factor: 1.005, year: 2013 http://www.it.uu.se/research/publications/reports/2013-005/2013-005-nc.pdf

  2. Solving Ratio-Dependent Predator-Prey System with Constant Effort Harvesting Using Homotopy Perturbation Method

    Directory of Open Access Journals (Sweden)

    Abdoul R. Ghotbi

    2008-01-01

    Full Text Available Due to the wide range of interest in the use of bioeconomic models to gain insight into the scientific management of renewable resources like fisheries and forestry, the homotopy perturbation method is employed to approximate the solution of the ratio-dependent predator-prey system with constant-effort prey harvesting. The results are compared with those obtained by the Adomian decomposition method and show that the new approach requires fewer computations in comparison to the Adomian decomposition method.
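
    For orientation, one common form of the ratio-dependent predator-prey system with constant-effort prey harvesting can be integrated numerically alongside such series approximations. The functional form, all parameter values and the RK4 scheme below are illustrative assumptions, not the paper's model or method.

```python
def ratio_dependent_rhs(x, y, a=1.0, c=0.5, m=0.5, d=0.4, f=0.8, E=0.1):
    """One common ratio-dependent predator-prey form with constant-effort
    prey harvesting E*x; all parameter values here are illustrative only."""
    denom = m * y + x
    dx = a * x * (1 - x) - c * x * y / denom - E * x
    dy = y * (-d + f * x / denom)
    return dx, dy

def rk4(x, y, t_end=50.0, dt=0.01):
    # classical fourth-order Runge-Kutta integration of the system above
    for _ in range(int(t_end / dt)):
        k1 = ratio_dependent_rhs(x, y)
        k2 = ratio_dependent_rhs(x + dt / 2 * k1[0], y + dt / 2 * k1[1])
        k3 = ratio_dependent_rhs(x + dt / 2 * k2[0], y + dt / 2 * k2[1])
        k4 = ratio_dependent_rhs(x + dt * k3[0], y + dt * k3[1])
        x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

prey, predator = rk4(0.5, 0.3)
```

    With these mild harvesting and predation rates both populations persist; analytic approximations such as HPM or VIM target short-time series expansions of the same trajectories.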

  3. A document preparation system in a large network environment

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, M.; Bouchier, S.; Sanders, C.; Sydoriak, S.; Wheeler, K.

    1988-01-01

    At Los Alamos National Laboratory, we have developed an integrated document preparation system that produces publication-quality documents. This system combines text formatters and computer graphics capabilities that have been adapted to meet the needs of users in a large scientific research laboratory. This paper describes the integration of document processing technology to develop a system architecture, based on a page description language, to provide network-wide capabilities in a distributed computing environment. We describe the Laboratory requirements, the integration and implementation issues, and the challenges we faced developing this system.

  4. From a Proven Correct Microkernel to Trustworthy Large Systems

    Science.gov (United States)

    Andronick, June

    The seL4 microkernel was the world's first general-purpose operating system kernel with a formal, machine-checked proof of correctness. The next big step in the challenge of building truly trustworthy systems is to provide a framework for developing secure systems on top of seL4. This paper first gives an overview of seL4's correctness proof, together with its main implications and assumptions, and then describes our approach to provide formal security guarantees for large, complex systems.

  5. Turbomolecular pump vacuum system for the Princeton Large Torus

    International Nuclear Information System (INIS)

    Dylla, H.F.

    1977-10-01

    A turbomolecular pump vacuum system has been designed and installed on the Princeton Large Torus (PLT). Four vertical-shaft, oil-bearing, 1500 l/s turbomolecular pumps have been interfaced to the 6400 liter PLT vacuum vessel to provide a net pumping speed of 3000 l/s for H2. The particular requirements and problems of tokamak vacuum systems are enumerated. A vacuum control system is described which protects the vacuum vessel from contamination and protects the turbomolecular pumps from damage under a variety of possible failure modes. The performance of the vacuum system is presented in terms of pumping speed measurements and residual gas behavior

  6. Hydrothermal processes above the Yellowstone magma chamber: Large hydrothermal systems and large hydrothermal explosions

    Science.gov (United States)

    Morgan, L.A.; Shanks, W.C. Pat; Pierce, K.L.

    2009-01-01

    and vein-filling; and (5) areal dimensions of many large hydrothermal explosion craters in Yellowstone are similar to those of its active geyser basins and thermal areas. For Yellowstone, our knowledge of hydrothermal craters and ejecta is generally limited to after the Yellowstone Plateau emerged from beneath a late Pleistocene icecap that was roughly a kilometer thick. Large hydrothermal explosions may have occurred earlier, as indicated by multiple episodes of cementation and brecciation commonly observed in hydrothermal ejecta clasts. Critical components for large, explosive hydrothermal systems include a water-saturated system at or near boiling temperatures and an interconnected system of well-developed joints and fractures along which hydrothermal fluids flow. Active deformation of the Yellowstone caldera, active faulting and moderate local seismicity, high heat flow, rapid changes in climate, and regional stresses are factors that have strong influences on the type of hydrothermal system developed. Ascending hydrothermal fluids flow along fractures that have developed in response to active caldera deformation and along edges of low-permeability rhyolitic lava flows. Alteration of the area affected, self-sealing leading to development of a caprock for the hydrothermal system, and dissolution of silica-rich rocks are additional factors that may constrain the distribution and development of hydrothermal fields. A partial low-permeability layer that acts as a cap to the hydrothermal system may produce some over-pressurization, thought to be small in most systems. Any abrupt drop in pressure initiates steam flashing and is rapidly transmitted through interconnected fractures, resulting in a series of multiple large-scale explosions that contribute to the excavation of a larger explosion crater.
Similarities between the size and dimensions of large hydrothermal explosion craters and thermal fields in Yellowstone may indicate that catastrophic events which result in l

  7. A New-Trend Model-Based to Solve the Peak Power Problems in OFDM Systems

    Directory of Open Access Journals (Sweden)

    Ashraf A. Eltholth

    2008-01-01

Full Text Available The high peak to average power ratio (PAR) levels of orthogonal frequency division multiplexing (OFDM) signals have attracted the attention of many researchers during the past decade. Existing approaches that attack this PAR issue are abundant, but no systematic framework or comparison between them exists to date. They sometimes even differ in the problem definition itself and consequently in the basic approach to follow. In this paper, we propose a new trend in mitigating the peak power problem in OFDM systems based on modeling the effects of clipping and amplifier nonlinearities in an OFDM system. We show that the distortion due to these effects is highly related to the dynamic range itself rather than the clipping level or the saturation level of the nonlinear amplifier, and thus we propose two criteria to reduce the dynamic range of the OFDM signal, namely, the use of MSK modulation and the use of the Hadamard transform. Computer simulations of the OFDM system using Matlab match the deduced model in terms of OFDM signal quality metrics such as BER, ACPR, and EVM. Simulation results also show that even though the reduction of PAR using the two proposed criteria is not significant, the reduction in the amount of distortion due to the HPA is truly remarkable.
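The dynamic-range reduction via the Hadamard transform described above can be illustrated with a short, self-contained sketch; the 64-subcarrier QPSK setup and the PAPR definition are illustrative assumptions, not the paper's exact simulation:

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
N = 64  # number of subcarriers (assumed)

def papr_db(x):
    """Peak-to-average power ratio of a discrete-time signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Random QPSK symbols on N subcarriers
sym = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)

plain = np.fft.ifft(sym)                                # conventional OFDM symbol
precoded = np.fft.ifft(hadamard(N) @ sym / np.sqrt(N))  # Hadamard-precoded symbol

print(f"PAPR, plain OFDM:        {papr_db(plain):.2f} dB")
print(f"PAPR, Hadamard-precoded: {papr_db(precoded):.2f} dB")
```

Averaged over many symbols, the precoded signal tends to show a lower PAPR, which is the statistical effect the abstract exploits; a single realization may go either way.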

  8. Finite-Time Stability of Large-Scale Systems with Interval Time-Varying Delay in Interconnection

    Directory of Open Access Journals (Sweden)

    T. La-inchua

    2017-01-01

Full Text Available We investigate finite-time stability of a class of nonlinear large-scale systems with interval time-varying delays in interconnection. Time-delay functions are continuous but not necessarily differentiable. Based on Lyapunov stability theory and a new integral bounding technique, finite-time stability of large-scale systems with interval time-varying delays in interconnection is derived. The finite-time stability criteria are delay-dependent and are given in terms of linear matrix inequalities, which can be solved by various available algorithms. Numerical examples are given to illustrate the effectiveness of the proposed method.

  9. A solution to the economic dispatch using EP based SA algorithm on large scale power system

    Energy Technology Data Exchange (ETDEWEB)

    Christober Asir Rajan, C. [Department of EEE, Pondicherry Engineering College, Pondicherry 605 014 (India)

    2010-07-15

This paper develops a new approach for solving the Economic Load Dispatch (ELD) problem using an integrated algorithm based on Evolutionary Programming (EP) and Simulated Annealing (SA) on large-scale power systems. Classical methods employed for solving Economic Load Dispatch are calculus-based. For generator units having quadratic fuel cost functions, the classical techniques ignore or flatten out portions of the incremental fuel cost curves and so may have difficulties in determining the global optimum solution for non-differentiable fuel cost functions. To overcome these problems, the intelligent techniques, namely Evolutionary Programming and Simulated Annealing, are employed. These optimization techniques are capable of determining the global or near-global optimum dispatch solutions. The validity and effectiveness of the proposed integrated algorithm have been tested on a 66-bus Indian utility system and the IEEE 5-bus, 30-bus, and 118-bus systems, and the test results are compared with the results obtained from other methods. Numerical results show that the proposed integrated algorithm can provide accurate solutions within reasonable time for any type of fuel cost function. (author)
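A minimal simulated-annealing sketch for a quadratic-cost dispatch problem is shown below. The three generator units, their cost coefficients, and the penalty-based handling of the power balance are all hypothetical; the paper's integrated EP-SA algorithm is considerably more elaborate:

```python
import math
import random

random.seed(1)

# Hypothetical 3-unit system: cost_i(P) = a + b*P + c*P^2, with P in [Pmin, Pmax]
units = [(0.0, 2.0, 0.002, 50.0, 200.0),
         (0.0, 1.7, 0.004, 40.0, 150.0),
         (0.0, 2.2, 0.003, 30.0, 120.0)]
demand = 300.0  # MW, assumed

def cost(P):
    fuel = sum(a + b * p + c * p * p for (a, b, c, _, _), p in zip(units, P))
    return fuel + 1e3 * abs(sum(P) - demand)  # penalty enforcing power balance

def neighbour(P):
    Q = P[:]
    i = random.randrange(len(Q))
    lo, hi = units[i][3], units[i][4]
    Q[i] = min(hi, max(lo, Q[i] + random.uniform(-5.0, 5.0)))  # perturb one unit
    return Q

P = [u[3] for u in units]  # start at minimum outputs
best, T = P[:], 100.0
for _ in range(20000):
    Q = neighbour(P)
    d = cost(Q) - cost(P)
    if d < 0 or random.random() < math.exp(-d / T):  # Metropolis acceptance
        P = Q
    if cost(P) < cost(best):
        best = P[:]
    T *= 0.9997  # geometric cooling schedule

print([round(p, 1) for p in best], round(cost(best), 1))
```

With the steep balance penalty, the annealer settles on a dispatch whose total output is close to the demand while trading generation toward the cheaper units.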

  10. Modified Differential Transform Method for Solving the Model of Pollution for a System of Lakes

    Directory of Open Access Journals (Sweden)

    Brahim Benhammouda

    2014-01-01

present the posttreatment of the power series solutions with the Laplace-Padé resummation method as a useful strategy to extend the domain of convergence of the approximate solutions. The Fehlberg fourth-fifth order Runge-Kutta method with degree four interpolant (RKF45) numerical solution of the lakes system problem is used as a reference to compare with the analytical approximations, showing the high accuracy of the results. The main advantage of the proposed technique is that it is based on a few straightforward steps and does not generate secular terms or depend on a perturbation parameter.
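As a rough illustration of using an RK45-type integrator as the reference solution, here is a sketch of a three-lake compartment model with a sinusoidal pollutant input; the flow rates, volumes, and input function are placeholder assumptions rather than the paper's exact model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical channel flow rates (volume/time) and lake volumes; the classic
# three-lake pollution model is a linear compartment system driven by an input.
F13, F21, F32, F31 = 38.0, 18.0, 18.0, 20.0
V1, V2, V3 = 2900.0, 850.0, 1180.0

def rhs(t, x):
    x1, x2, x3 = x
    u = 100.0 * np.sin(t) ** 2  # assumed sinusoidal pollutant input into lake 1
    return [u + F13 * x3 / V3 - (F21 + F31) * x1 / V1,
            F21 * x1 / V1 - F32 * x2 / V2,
            F31 * x1 / V1 + F32 * x2 / V2 - F13 * x3 / V3]

# RK45 (Runge-Kutta-Fehlberg family) reference solution, lakes initially clean
sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0, 0.0],
                method="RK45", rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])  # pollutant amounts in the three lakes at t = 50
```

A useful sanity check on such a compartment model is mass conservation: since the channels only move pollutant between lakes, the total pollutant at any time must equal the integral of the input up to that time.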

  11. Problem-solving tools for analyzing system problems. The affinity map and the relationship diagram.

    Science.gov (United States)

    Lepley, C J

    1998-12-01

    The author describes how to use two management tools, an affinity map and a relationship diagram, to define and analyze aspects of a complex problem in a system. The affinity map identifies the key influencing elements of the problem, whereas the relationship diagram helps to identify the area that is the most important element of the issue. Managers can use the tools to draw a map of problem drivers, graphically display the drivers in a diagram, and use the diagram to develop a cause-and-effect relationship.

  12. Value of flexibility in systems with large wind penetration

    OpenAIRE

    Silva , Vera

    2010-01-01

The focus of this thesis is the quantification of the value of operational flexibility in systems with large penetration of wind generation. This begins with the quantification of the impact of wind generation (WG) uncertainty on the system's needs for frequency regulation and reserve. This is done by combining the stochastic behaviour of wind generation, demand uncertainty and generation outages. Two different approaches are compared to assess the implications of using normal distribution approx...

  13. Algorithm for predicting the evolution of series of dynamics of complex systems in solving information problems

    Science.gov (United States)

    Kasatkina, T. I.; Dushkin, A. V.; Pavlov, V. A.; Shatovkin, R. R.

    2018-03-01

In the development of information systems and programming for predicting series of dynamics, neural network methods have recently been applied. They are more flexible in comparison with existing analogues and are capable of taking into account the nonlinearities of the series. In this paper, we propose a modified algorithm for predicting series of dynamics that includes a method for training neural networks and an approach to describing and presenting input data, based on prediction by the multilayer perceptron method. To construct the neural network, the values of the series at its extremum points and the corresponding time values, formed using the sliding window method, are used as input data. The proposed algorithm can act as an independent approach to predicting series of dynamics, or as one part of a forecasting system. The efficiency of predicting the evolution of the dynamics series for a short-term one-step and a long-term multi-step forecast by the classical multilayer perceptron method and the modified algorithm is compared using synthetic and real data. The result of this modification is the minimization of the iterative error that arises when previously predicted values are fed back as inputs to the neural network, as well as an increase in the accuracy of the iterative prediction of the neural network.
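The sliding-window setup and the iterative (fed-back) multi-step forecast can be sketched as follows. The paper's inputs (series values at extremum points plus their times) are replaced here by a plain sliding window on a toy series purely for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy noisy sinusoid standing in for a real dynamics series
t = np.arange(400)
series = np.sin(0.1 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

W = 12  # sliding-window length (assumed)
X = np.lib.stride_tricks.sliding_window_view(series[:-1], W)  # inputs
y = series[W:]                                                # one-step targets

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
model.fit(X[:300], y[:300])

# Iterative multi-step forecast: each prediction is fed back as an input,
# which is exactly where the iterative error discussed above accumulates.
window = list(series[300:300 + W])
preds = []
for _ in range(20):
    p = model.predict(np.asarray(window[-W:])[None, :])[0]
    preds.append(p)
    window.append(p)
print(preds[:3])
```

Each forecast step consumes the previous step's output, so any one-step bias compounds over the horizon; the modification described in the abstract targets exactly this accumulation.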

  14. Collaboration and Virtualization in Large Information Systems Projects

    Directory of Open Access Journals (Sweden)

    Stefan Ioan NITCHI

    2009-01-01

Full Text Available A project evolves through different phases, from idea and conception to experiments, implementation and maintenance. Globalization, the Internet, the Web and mobile computing have changed many human activities and, in this respect, the realization of Information System (IS) projects. Projects are growing, teams are geographically distributed, and users are heterogeneous. Consequently, the realization of large Information Technology (IT) projects needs collaborative technologies. The distribution of the team, the users' heterogeneity and the project complexity determine the virtualization. This paper is an overview of these aspects for large IT projects. It briefly presents a general framework developed by the authors for collaborative systems in general and adapted to collaborative project management. The general considerations are illustrated with the case of a large IT project in which the authors were involved.

  15. Highly uniform parallel microfabrication using a large numerical aperture system

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zi-Yu; Su, Ya-Hui, E-mail: ustcsyh@ahu.edu.cn, E-mail: dongwu@ustc.edu.cn [School of Electrical Engineering and Automation, Anhui University, Hefei 230601 (China); Zhang, Chen-Chu; Hu, Yan-Lei; Wang, Chao-Wei; Li, Jia-Wen; Chu, Jia-Ru; Wu, Dong, E-mail: ustcsyh@ahu.edu.cn, E-mail: dongwu@ustc.edu.cn [CAS Key Laboratory of Mechanical Behavior and Design of Materials, Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei 230026 (China)

    2016-07-11

In this letter, we report an improved algorithm to produce accurate phase patterns for generating highly uniform diffraction-limited multifocal arrays in a large numerical aperture objective system. It is shown that, based on the original diffraction integral, the uniformity of the diffraction-limited focal arrays can be improved from ∼75% to >97%, owing to the critical consideration of the aperture function and apodization effect associated with a large numerical aperture objective. The experimental results, e.g., 3 × 3 arrays of squares and triangles and seven-microlens arrays with high uniformity, further verify the advantage of the improved algorithm. This algorithm enables laser parallel processing technology to realize uniform microstructures and functional devices in a microfabrication system with a large numerical aperture objective.

  16. An efficient, scalable, and adaptable framework for solving generic systems of level-set PDEs

    Directory of Open Access Journals (Sweden)

    Kishore R. Mosaliganti

    2013-12-01

Full Text Available In the last decade, level-set methods have been actively developed for applications in image registration, segmentation, tracking, and reconstruction. However, the development of a wide variety of level-set PDEs and their numerical discretization schemes, coupled with hybrid combinations of PDE terms, stopping criteria, and reinitialization strategies, has created a software logistics problem. In the absence of an integrative design, current toolkits support only specific types of level-set implementations, which restricts future algorithm development since extensions require significant code duplication and effort. In the new NIH/NLM Insight Toolkit (ITK v4) architecture, we implemented a level-set software design that is flexible to different numerical (continuous, discrete, and sparse) and grid representations (point, mesh, and image-based). Given that a generic PDE is a summation of different terms, we used a set of linked containers to which level-set terms can be added or deleted at any point in the evolution process. This container-based approach allows the user to explore and customize terms in the level-set equation at compile time in a flexible manner. The framework is optimized so that repeated computation of common intensity functions (e.g., gradients and Hessians) across multiple terms is eliminated. The framework further enables the evolution of multiple level-sets for multi-object segmentation and processing of large datasets. For doing so, we restrict level-set domains to subsets of the image domain and use multithreading strategies to process groups of subdomains or level-set functions. Users can also select from a variety of reinitialization policies and stopping criteria. Finally, we developed a visualization framework that shows the evolution of a level-set in real-time to help guide algorithm development and parameter optimization. We demonstrate the power of our new framework using confocal microscopy images of cells in a
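The container-of-terms design can be mimicked in a few lines; the class and term names below are illustrative inventions, not the actual ITK v4 API, and the "PDE" is a trivial stand-in:

```python
from typing import Callable
import numpy as np

# Mock-up of the container-based design: the level-set update is the sum of
# independently registered term objects, so terms can be mixed freely.
class LevelSetEvolution:
    def __init__(self):
        self.terms: list[Callable[[np.ndarray], np.ndarray]] = []

    def add_term(self, term):
        self.terms.append(term)  # terms can be added or removed at any point

    def step(self, phi, dt=0.1):
        # Generic PDE as a summation of terms: phi_{n+1} = phi_n - dt * sum(terms)
        return phi - dt * sum(term(phi) for term in self.terms)

def smoothing_term(phi):
    # Stand-in for a real curvature/regularization term
    return phi - (np.roll(phi, 1) + np.roll(phi, -1)) / 2.0

evo = LevelSetEvolution()
evo.add_term(smoothing_term)
phi = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
phi_next = evo.step(phi)
```

The point of the pattern is that each term is an opaque callable, so hybrid combinations of terms require no changes to the evolution loop itself.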

  17. Local decoherence-resistant quantum states of large systems

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Utkarsh; Sen, Aditi; Sen, Ujjwal, E-mail: ujjwal@hri.res.in

    2015-02-06

    We identify an effectively decoherence-free class of quantum states, each of which consists of a “minuscule” and a “large” sector, against local noise. In particular, the content of entanglement and other quantum correlations in the minuscule to large partition is independent of the number of particles in their large sectors, when all the particles suffer passage through local amplitude and phase damping channels. The states of the large sectors are distinct in terms of markedly different amounts of violation of Bell inequality. In case the large sector is macroscopic, such states are akin to the Schrödinger cat. - Highlights: • We identify an effectively decoherence-free class of quantum states of large systems. • We work with local noise models. • Decay of entanglement as well as information-theoretic quantum correlations considered. • The states are of the form of the Schrödinger cats, with minuscule and large sectors. • The states of the large sector are distinguishable by their violation of Bell inequality.

  18. A Dynamic Optimization Strategy for the Operation of Large Scale Seawater Reverse Osmosis System

    Directory of Open Access Journals (Sweden)

    Aipeng Jiang

    2014-01-01

Full Text Available In this work, a strategy was proposed for the efficient solution of the dynamic model of an SWRO system. Since the dynamic model is formulated as a set of differential-algebraic equations, simultaneous strategies based on collocation on finite elements were used to transform the DAOP into a large-scale nonlinear programming problem, named Opt2. Then, simulation of the RO process and storage tanks was carried out element by element and step by step with fixed control variables. All the obtained values of these variables were then used as the initial values for the optimal solution of the SWRO system. Finally, in order to accelerate the computing efficiency while keeping enough accuracy in the solution of Opt2, a simple but efficient finite element refinement rule was used to reduce the scale of Opt2. The proposed strategy was applied to a large-scale SWRO system with 8 RO plants and 4 storage tanks as a case study. Computing results show that the proposed strategy is quite effective for optimal operation of the large-scale SWRO system; the optimization problem can be successfully solved within tens of iterations and several minutes when load and other operating parameters fluctuate.

  19. Multiarea Transmission Cost Allocation in Large Power Systems Using the Nodal Pricing Control Approach

    Directory of Open Access Journals (Sweden)

    M. Ghayeni

    2010-12-01

Full Text Available This paper proposes an algorithm for transmission cost allocation (TCA) in a large power system based on the nodal pricing approach using a multi-area scheme. The nodal pricing approach is introduced to allocate the transmission costs by the control of nodal prices in a single-area network. As the number of equations depends on the number of buses and generators, this method is very time consuming for large power systems. To solve this problem, the present paper proposes a new algorithm based on a multi-area approach for regulating the nodal prices, so that the simulation time is greatly reduced and the TCA problem with the nodal pricing approach becomes applicable to large power systems. In addition, in this method the transmission costs are allocated to users more equitably, since the higher transmission costs in an area with higher reliability are paid only by the users of that area, in contrast with the single-area method, in which these costs are allocated to all users regardless of their locations. The proposed method is implemented on the IEEE 118-bus test system, which comprises three areas. Results show that with the application of the multi-area approach, the simulation time is greatly reduced, and the transmission costs are allocated to users with less variation in the new nodal prices with respect to the single-area approach.

  20. Steiner systems and large non-Hamiltonian hypergraphs

    Directory of Open Access Journals (Sweden)

    Zsolt Tuza

    2006-10-01

Full Text Available From Steiner systems S(k − 2, 2k − 3, v), we construct k-uniform hypergraphs of large size without Hamiltonian cycles. This improves previous estimates due to G. Y. Katona and H. Kierstead [J. Graph Theory 30 (1999), pp. 205–212].

  1. Participatory Design and the Challenges of Large-Scale Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

    With its 10th biannual anniversary conference, Participatory Design (PD) is leaving its teens and must now be considered ready to join the adult world. In this article we encourage the PD community to think big: PD should engage in large-scale information-systems development and opt for a PD...

  2. Economic viability of large-scale fusion systems

    Energy Technology Data Exchange (ETDEWEB)

    Helsley, Charles E., E-mail: cehelsley@fusionpowercorporation.com; Burke, Robert J.

    2014-01-01

    A typical modern power generation facility has a capacity of about 1 GWe (Gigawatt electric) per unit. This works well for fossil fuel plants and for most fission facilities for it is large enough to support the sophisticated generation infrastructure but still small enough to be accommodated by most utility grid systems. The size of potential fusion power systems may demand a different viewpoint. The compression and heating of the fusion fuel for ignition requires a large driver, even if it is necessary for only a few microseconds or nanoseconds per energy pulse. The economics of large systems, that can effectively use more of the driver capacity, need to be examined. The assumptions used in this model are specific for the Fusion Power Corporation (FPC) SPRFD process but could be generalized for any system. We assume that the accelerator is the most expensive element of the facility and estimate its cost to be $20 billion. Ignition chambers and fuel handling facilities are projected to cost $1.5 billion each with up to 10 to be serviced by one accelerator. At first this seems expensive but that impression has to be tempered by the energy output that is equal to 35 conventional nuclear plants. This means the cost per kWh is actually low. Using the above assumptions and industry data for generators and heat exchange systems, we conclude that a fully utilized fusion system will produce marketable energy at roughly one half the cost of our current means of generating an equivalent amount of energy from conventional fossil fuel and/or fission systems. Even fractionally utilized systems, i.e. systems used at 25% of capacity, can be cost effective in many cases. In conclusion, SPRFD systems can be scaled to a size and configuration that can be economically viable and very competitive in today's energy market. Electricity will be a significant element in the product mix but synthetic fuels and water may also need to be incorporated to make the large system
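The record's round capital figures can be sanity-checked with a few lines of arithmetic. The per-plant fission cost below is our own assumption for comparison purposes, not a number from the abstract:

```python
# Back-of-envelope check of the capital figures quoted in the record
accelerator = 20e9             # $20 billion driver (from the abstract)
chambers = 10 * 1.5e9          # up to 10 ignition chambers at $1.5 billion each
fusion_capital = accelerator + chambers

equivalent_plants = 35         # output stated to equal 35 conventional plants
fission_unit_cost = 6e9        # assumed ~$6 billion per ~1 GWe fission plant

capital_per_equivalent = fusion_capital / equivalent_plants
print(capital_per_equivalent)                  # capital $ per plant-equivalent
print(equivalent_plants * fission_unit_cost)   # fission capital for same output
```

On these numbers the fusion facility's capital per plant-equivalent of output is a small fraction of the fission comparison, which is the direction of the abstract's "cost per kWh is actually low" claim (operating costs and capacity factors are of course omitted here).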

  3. Economic viability of large-scale fusion systems

    International Nuclear Information System (INIS)

    Helsley, Charles E.; Burke, Robert J.

    2014-01-01

    A typical modern power generation facility has a capacity of about 1 GWe (Gigawatt electric) per unit. This works well for fossil fuel plants and for most fission facilities for it is large enough to support the sophisticated generation infrastructure but still small enough to be accommodated by most utility grid systems. The size of potential fusion power systems may demand a different viewpoint. The compression and heating of the fusion fuel for ignition requires a large driver, even if it is necessary for only a few microseconds or nanoseconds per energy pulse. The economics of large systems, that can effectively use more of the driver capacity, need to be examined. The assumptions used in this model are specific for the Fusion Power Corporation (FPC) SPRFD process but could be generalized for any system. We assume that the accelerator is the most expensive element of the facility and estimate its cost to be $20 billion. Ignition chambers and fuel handling facilities are projected to cost $1.5 billion each with up to 10 to be serviced by one accelerator. At first this seems expensive but that impression has to be tempered by the energy output that is equal to 35 conventional nuclear plants. This means the cost per kWh is actually low. Using the above assumptions and industry data for generators and heat exchange systems, we conclude that a fully utilized fusion system will produce marketable energy at roughly one half the cost of our current means of generating an equivalent amount of energy from conventional fossil fuel and/or fission systems. Even fractionally utilized systems, i.e. systems used at 25% of capacity, can be cost effective in many cases. In conclusion, SPRFD systems can be scaled to a size and configuration that can be economically viable and very competitive in today's energy market. Electricity will be a significant element in the product mix but synthetic fuels and water may also need to be incorporated to make the large system economically

  4. VIM for Solving the Pollution Problem of a System of Lakes

    Directory of Open Access Journals (Sweden)

    J. Biazar

    2010-01-01

Full Text Available Pollution has become a very serious threat to our environment. Monitoring pollution is the first step toward planning to save the environment. The use of differential equations for monitoring pollution has become possible. In this paper the pollution problem of three lakes with interconnecting channels has been studied. The variational iteration method has been applied to compute an approximate solution of the system of differential equations governing the problem. Three different types of input models (sinusoidal, impulse, and step) will be considered for monitoring the pollution in the lakes. The results are compared with those obtained by the Adomian decomposition method. This comparison reveals that the variational iteration method is easier to implement.

  5. Solving Environmental Problems

    DEFF Research Database (Denmark)

    Ørding Olsen, Anders; Sofka, Wolfgang; Grimpe, Christoph

    2017-01-01

… dispersed. Hence, firms need to collaborate. We shed new light on collaborative search strategies led by firms in general and for solving environmental problems in particular. Both topics are largely absent in the extant open innovation literature. Using data from the European Seventh Framework Program for Research and Technological Development (FP7), our results indicate that the problem-solving potential of a search strategy increases with the diversity of existing knowledge of the partners in a consortium and with the experience of the partners involved. Moreover, we identify a substantial negative effect…

  6. Hydraulic System Design of Hydraulic Actuators for Large Butterfly Valves

    Directory of Open Access Journals (Sweden)

    Ye HUANG

    2014-09-01

Full Text Available Hydraulic control systems of butterfly valves are presently valve-controlled or pump-controlled. Valve-controlled hydraulic systems have serious power losses and generate much heat during throttling. Pump-controlled hydraulic systems have no overflow or throttling losses but are limited in the speed adjustment of the variable-displacement pump, generate much noise, pollute the environment, and have motor power that does not match load requirements, resulting in low efficiency under light loads and wear of the variable-displacement pump. To overcome these shortcomings, this article designs a closed hydraulic control system in which an AC servo motor drives a fixed-displacement pump that controls a spiral swinging hydraulic cylinder, and analyzes and calculates the structure and parameters of the spiral swinging hydraulic cylinder. The hydraulic system adjusts the servo motor's speed according to the requirements of the control system, and the motor power matches the power required by the components, thus eliminating the throttling losses of hydraulic circuits. The system is compact, produces a large output force, provides stable transmission, has a quick response, and is suitable as the hydraulic control system of a large butterfly valve.

  7. Liquid lithium loop system to solve challenging technology issues for fusion power plant

    Science.gov (United States)

    Ono, M.; Majeski, R.; Jaworski, M. A.; Hirooka, Y.; Kaita, R.; Gray, T. K.; Maingi, R.; Skinner, C. H.; Christenson, M.; Ruzic, D. N.

    2017-11-01

Steady-state fusion power plant designs present major divertor technology challenges, including high divertor heat flux both in steady-state and during transients. In addition to these concerns, there are the unresolved technology issues of long-term dust accumulation and the associated tritium inventory and safety issues. It has been suggested that radiation-based liquid lithium (LL) divertor concepts with a modest lithium loop could provide a possible solution for these outstanding fusion reactor technology issues, while potentially improving reactor plasma performance. The application of lithium (Li) in NSTX resulted in improved H-mode confinement, H-mode power threshold reduction, and reduction in the divertor peak heat flux while maintaining essentially Li-free core plasma operation even during H-modes. These promising results in NSTX and related modeling calculations motivated the radiative liquid lithium divertor concept and its variant, the active liquid lithium divertor concept, taking advantage of the enhanced or non-coronal Li radiation in relatively poorly confined divertor plasmas. To maintain the LL purity in a 1 GW-electric class fusion power plant, a closed LL loop system with a modest circulating capacity of ~1 l s-1 is envisioned. We examined two key technology issues: (1) dust or solid particle removal and (2) real-time recovery of tritium from the LL while keeping the tritium inventory at an acceptable level. Run continuously, the LL loop can carry the dust particles and impurities generated in the vacuum vessel to the outside, where the dust/impurities can be removed by relatively simple dust filters, cold traps and/or centrifugal separation systems. With ~1 l s-1 LL flow, even a small 0.1% dust content by weight (or 0.5 g s-1) suggests that the LL loop could carry away nearly 16 tons of dust per year. In a 1 GW-electric (or ~3 GW fusion power) fusion power plant, about 0.5 g s-1 of tritium is needed to maintain the fusion fuel cycle
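The ~16 tons/year dust figure follows directly from the quoted flow and dust fraction; the lithium density used below is our own assumption (the abstract quotes only the ~1 l/s flow and the 0.5 g/s figure):

```python
# Check of the dust-removal figure: a ~1 l/s liquid-lithium loop carrying
# 0.1% dust by weight, with lithium density ~0.512 kg/l near its melting point.
flow_l_per_s = 1.0
li_density_kg_per_l = 0.512
dust_fraction = 0.001          # 0.1% dust content by weight

dust_kg_per_s = flow_l_per_s * li_density_kg_per_l * dust_fraction  # ~0.5 g/s
seconds_per_year = 365.25 * 24 * 3600
dust_tonnes_per_year = dust_kg_per_s * seconds_per_year / 1000.0
print(round(dust_tonnes_per_year, 1))  # close to the ~16 tons/year in the abstract
```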

  8. Solving the puzzle of yeast survival in ephemeral nectar systems: exponential growth is not enough.

    Science.gov (United States)

    Hausmann, Sebastian L; Tietjen, Britta; Rillig, Matthias C

    2017-12-01

Flower nectar is a sugar-rich ephemeral habitat for microorganisms. Nectar-borne yeasts are part of the microbial community and can affect pollination by changing nectar chemistry, attractiveness to pollinators or flower temperature if yeast population densities are high. Pollinators act as dispersal agents in this system; however, pollination events potentially lead to shrinking nectar yeast populations. Here we examine how sufficiently high cell densities of nectar yeast can develop in a flower. In laboratory experiments, we determined the fraction of nectar yeast cells remaining after nectar removal, and used honeybees to determine the number of yeast cells transmitted from one flower to the next. The results of these experiments fed directly into a simulation model providing insight into the movement and colonization ecology of nectar yeasts. We found that cell densities only reached an ecologically relevant size for an intermediate pollination probability. Too few pollination events reduce the yeast inoculation rate and too many strongly reduce yeast population size. In addition, nectar yeasts need a trait combination of at least an intermediate growth rate and an intermediate remaining fraction to compensate for highly frequent decimations. Our results can be used to predict nectar yeast dispersal, growth and consequently their ecological effects. © FEMS 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Nonterrestrial material processing and manufacturing of large space systems

    Science.gov (United States)

    Von Tiesenhausen, G.

    1979-01-01

    Nonterrestrial processing of materials and manufacturing of large space system components from preprocessed lunar materials at a manufacturing site in space is described. Lunar materials mined and preprocessed at the lunar resource complex will be flown to the space manufacturing facility (SMF), where together with supplementary terrestrial materials, they will be final processed and fabricated into space communication systems, solar cell blankets, radio frequency generators, and electrical equipment. Satellite Power System (SPS) material requirements and lunar material availability and utilization are detailed, and the SMF processing, refining, fabricating facilities, material flow and manpower requirements are described.

  10. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  11. Senior Management Use of Management Control Systems in Large Companies

    DEFF Research Database (Denmark)

    Willert, Jeanette; Israelsen, Poul; Rohde, Carsten

    2017-01-01

The use of management control systems in large companies remains relatively unexplored. Indeed, only a few studies of senior managers' use of management control systems consider multiple controls in companies. This paper explores data from a comprehensive survey of the use of management control systems in 120 strategic business units at some of the largest companies in Denmark. The paper identifies how senior management guides and controls their subordinates to meet their companies' objectives. The presentation and discussion of the results, including citations from executive managers, use...

  12. Large Time Behavior of the Vlasov-Poisson-Boltzmann System

    Directory of Open Access Journals (Sweden)

    Li Li

    2013-01-01

Full Text Available The motion of dilute charged particles can be modeled by the Vlasov-Poisson-Boltzmann (VPB) system. We study the large-time stability of the VPB system. To be precise, we prove that as time goes to infinity, the solution of the VPB system tends to the global Maxwellian state at a rate O(t−∞), by using a method developed for the Boltzmann equation without force in the work of Desvillettes and Villani (2005). The improvement of the present paper is the removal of the condition on the parameter λ used in the work of Li (2008).

  13. A novel reliability evaluation method for large engineering systems

    Directory of Open Access Journals (Sweden)

    Reda Farag

    2016-06-01

    A novel reliability evaluation method for large nonlinear engineering systems excited by dynamic loading applied in the time domain is presented. For this class of problems, the performance functions are expected to be functions of time and implicit in nature. Available first- and second-order reliability methods (FORM/SORM) are challenging to apply when estimating the reliability of such systems. Because of its inefficiency, the classical Monte Carlo simulation (MCS) method also cannot be used for large nonlinear dynamic systems. In the proposed approach, only tens instead of hundreds or thousands of deterministic evaluations at intelligently selected points are used to extract the reliability information. A hybrid approach is proposed, consisting of the stochastic finite element method (SFEM) developed by the author and his research team using FORM, the response surface method (RSM), an interpolation scheme, and advanced factorial schemes. The method is clarified with the help of several numerical examples.
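
The response-surface idea above, in which a handful of expensive limit-state evaluations feed a cheap surrogate for simulation, can be sketched in one dimension. This is a generic illustration, not the record's SFEM/FORM hybrid; the limit state and sample points are invented:

```python
import numpy as np

def surrogate_failure_prob(g, sample_x, n_mc=200_000, seed=0):
    """Fit a quadratic response surface to a few evaluations of an
    (expensive) limit-state function g, then estimate the failure
    probability P(g < 0) by cheap Monte Carlo on the surrogate."""
    coeffs = np.polyfit(sample_x, [g(x) for x in sample_x], deg=2)
    surrogate = np.poly1d(coeffs)
    x = np.random.default_rng(seed).standard_normal(n_mc)
    return float(np.mean(surrogate(x) < 0.0))

# Toy limit state g(x) = 3 - x^2 (invented): it is quadratic, so the
# fitted surface reproduces it exactly; exact P(fail) = 2*(1 - Phi(sqrt(3))).
pf = surrogate_failure_prob(lambda x: 3.0 - x**2, sample_x=[-2.0, 0.0, 2.0])
```

In a real application the deterministic evaluations would be finite element runs, and the surrogate would be refit around the design point found by FORM.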

  14. Understanding large social-ecological systems: introducing the SESMAD project

    Directory of Open Access Journals (Sweden)

    Michael Cox

    2014-08-01

    This article introduces the Social-Ecological Systems Meta-Analysis Database (SESMAD) project, which is the project behind the case studies and synthetic articles contained in this special issue of the International Journal of the Commons. SESMAD is an internationally collaborative meta-analysis project that builds on previous seminal synthetic work on small-scale common-pool resource systems conducted at the Workshop in Political Theory and Policy Analysis at Indiana University. This project is guided by the following research question: can the variables found to be important in explaining outcomes in small-scale systems be scaled up to explain outcomes in large-scale environmental governance? In this special issue we report on our findings thus far through a set of case studies of large-scale environmental governance, a paper that describes our conceptual advances, and a paper that compares these five case studies to further examine our central research question.

  15. Budget management in the system of solving ecological contradictions of development of the national economy: territorial approach

    Directory of Open Access Journals (Sweden)

    Petrushenko Mykola M.

    2013-03-01

    The article analyses organisational and economic factors and formulates a theoretical and methodical approach to budget management in the system of solving ecological contradictions in the development of territorial systems of the national economy. It justifies improving the processes of budget management directed at overcoming ecological contradictions on the basis of the conceptual provisions of result-oriented budgeting. It develops a scheme-model of the organisational and structural resolution of ecological contradictions, reinforcing the integrating role of the budgeting method in the system of managing an administrative-territorial unit, using the example of the Sumy oblast. The proposed theoretical and methodical approach to improving budget management in the territorial-economic system allows the development of principles of trust management in the field of ecological-economic relations, as well as the activation of the practical introduction of managerial instruments for the consensual resolution of ecologically caused conflict situations with the support of specialised management consulting.

  16. Variations Method to Solve Terminal Problems for the Second Order Systems of Canonical Form with State Constraints

    Directory of Open Access Journals (Sweden)

    T. S. Kasatkina

    2015-01-01

    A terminal control problem with fixed finite time for second-order affine systems with state constraints is considered. A solution of this terminal problem is suggested for systems with scalar control of regular canonical form. It is shown that the initial terminal problem is equivalent to the problem of finding an auxiliary function that satisfies certain conditions. The design of such a function consists of two stages. The first stage is the search for a function corresponding to the solution of the terminal control problem without state constraints. This function is designed as a fifth-degree polynomial in the time variable, whose coefficients are defined by the boundary conditions. The second stage modifies the designed function if the trajectory corresponding to it does not satisfy the constraints. The modification is realized by adding a supplementary polynomial to the current function; its influence is adjusted by varying a parameter value. The modification process can take several iterations. After the process terminates, a continuous control is found that solves the initial terminal problem. Using the presented scheme, the terminal control problem for a system describing the oscillations of a mathematical pendulum is solved. This approach can be used to solve terminal control problems with state constraints for affine systems with multi-dimensional control.
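
The fifth-degree polynomial construction described for the first stage can be illustrated directly: six boundary conditions (position, velocity and acceleration at both endpoints) determine the six coefficients through a linear solve. This is a generic sketch of that one step, with invented boundary data, not the record's full algorithm:

```python
import numpy as np
from numpy.polynomial import Polynomial

def quintic_coeffs(x0, v0, a0, xT, vT, aT, T):
    """Coefficients c[0..5] of p(t) = sum c_k t^k matching position,
    velocity and acceleration at t = 0 and t = T (six conditions)."""
    M = np.array([
        [1, 0, 0,    0,      0,        0],
        [0, 1, 0,    0,      0,        0],
        [0, 0, 2,    0,      0,        0],
        [1, T, T**2, T**3,   T**4,     T**5],
        [0, 1, 2*T,  3*T**2, 4*T**3,   5*T**4],
        [0, 0, 2,    6*T,    12*T**2,  20*T**3],
    ], dtype=float)
    return np.linalg.solve(M, np.array([x0, v0, a0, xT, vT, aT], float))

# Rest-to-rest transfer from x=0 to x=1 over T=2 (boundary data invented).
p = Polynomial(quintic_coeffs(0, 0, 0, 1, 0, 0, T=2.0))
```

If the resulting trajectory violates a state constraint, the second stage would add a correction polynomial and tune its weight, as the abstract describes.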

  17. A Power System Optimal Dispatch Strategy Considering the Flow of Carbon Emissions and Large Consumers

    Directory of Open Access Journals (Sweden)

    Jun Yang

    2015-08-01

    The carbon emissions trading market and direct power purchases by large consumers are two promising directions of power system development. To trace the carbon emission flow in the power grid, the theory of carbon emission flow is improved by allocating power loss to the load side. Based on the improved carbon emission flow theory, an optimal dispatch model is proposed to optimize the cost of both large consumers and the power grid, which will benefit from the carbon emissions trading market. Moreover, to better simulate reality, the direct purchase of power by large consumers is also considered in this paper. The OPF (optimal power flow) method is applied to solve the problem. To evaluate the proposed optimal dispatch strategy, an IEEE 30-bus system is used to test the performance. The effects of the price of carbon emissions and the prices of electricity from normal generators and low-carbon generators on the optimal dispatch are analyzed. The simulation results indicate that the proposed strategy can significantly reduce both the operation cost of the power grid and the power utilization cost of large consumers.
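
How a carbon price reshuffles generation can be shown with a stripped-down merit-order model. This toy sketch ignores the network constraints of the record's OPF formulation, and the unit data are invented:

```python
def dispatch(load, units, carbon_price):
    """Toy merit-order dispatch: rank units by fuel cost plus priced
    emissions, then fill the load in that order (no network model)."""
    order = sorted(units, key=lambda u: u["fuel"] + carbon_price * u["co2"])
    out, remaining = {}, load
    for u in order:
        take = min(u["cap"], remaining)
        out[u["name"]] = take
        remaining -= take
    return out

# Invented unit data: capacity in MW, fuel cost in $/MWh, emissions in t/MWh.
units = [
    {"name": "coal", "cap": 100, "fuel": 20.0, "co2": 0.9},
    {"name": "gas",  "cap": 100, "fuel": 35.0, "co2": 0.4},
    {"name": "wind", "cap": 50,  "fuel": 5.0,  "co2": 0.0},
]
cheap_carbon = dispatch(180, units, carbon_price=0.0)   # coal runs first
dear_carbon = dispatch(180, units, carbon_price=60.0)   # gas displaces coal
```

At a zero carbon price coal is dispatched ahead of gas; at $60/t the emission cost inverts their merit order, which is the qualitative effect the paper studies within a full OPF.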

  18. Power oscillation suppression by robust SMES in power system with large wind power penetration

    International Nuclear Information System (INIS)

    Ngamroo, Issarachai; Cuk Supriyadi, A.N.; Dechanupaprittha, Sanchai; Mitani, Yasunori

    2009-01-01

    The large penetration of wind farms into interconnected power systems may cause the severe problem of tie-line power oscillations. To suppress power oscillations, superconducting magnetic energy storage (SMES), which is able to control active and reactive power simultaneously, can be applied. On the other hand, various generating and loading conditions, variation of system parameters, etc., cause uncertainties in the system. An SMES controller designed without considering system uncertainties may fail to suppress power oscillations. To enhance the robustness of the SMES controller against system uncertainties, this paper proposes a robust control design of SMES that takes system uncertainties into account. The inverse additive perturbation is applied to represent the unstructured system uncertainties and is included in the power system modeling. The configuration of the active and reactive power controllers is a first-order lead-lag compensator with single-input feedback. To tune the controller parameters, an optimization problem is formulated based on the enhancement of the robust stability margin. Particle swarm optimization is used to solve the problem and obtain the controller parameters. Simulation studies in a six-area interconnected power system with wind farms confirm the robustness of the proposed SMES under various operating conditions.

  20. Tool Support for Parametric Analysis of Large Software Simulation Systems

    Science.gov (United States)

    Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony

    2008-01-01

    The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
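
The n-factor combinatorial idea can be sketched for n = 2 (pairwise coverage): a greedy generator picks cases until every value pair of every two parameters is covered, needing far fewer cases than the full factorial. The parameter table is invented:

```python
from itertools import combinations, product

def pairwise_cases(params):
    """Greedy test-case generator: cover every value pair of every two
    parameters at least once (2-factor / pairwise coverage)."""
    names = list(params)
    uncovered = {(a, va, b, vb)
                 for a, b in combinations(names, 2)
                 for va, vb in product(params[a], params[b])}
    cases = []
    while uncovered:
        best, best_gain = None, 0
        for values in product(*params.values()):   # fine for small tables
            case = dict(zip(names, values))
            gain = sum(case[a] == va and case[b] == vb
                       for a, va, b, vb in uncovered)
            if gain > best_gain:
                best, best_gain = case, gain
        cases.append(best)
        uncovered = {(a, va, b, vb) for a, va, b, vb in uncovered
                     if not (best[a] == va and best[b] == vb)}
    return cases

# Invented parameter table: 2*2*3 = 12 full-factorial cases; pairwise needs fewer.
params = {"mass": [1, 2], "thrust": ["lo", "hi"], "wind": [0, 5, 10]}
suite = pairwise_cases(params)
```

Production tools use smarter covering-array constructions, but the coverage guarantee is the same: every two-way interaction appears in at least one test case.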

  1. Status and Future Developments in Large Accelerator Control Systems

    International Nuclear Information System (INIS)

    Karen S. White

    2006-01-01

    Over the years, accelerator control systems have evolved from small hardwired systems to complex computer controlled systems with many types of graphical user interfaces and electronic data processing. Today's control systems often include multiple software layers, hundreds of distributed processors, and hundreds of thousands of lines of code. While it is clear that the next generation of accelerators will require much bigger control systems, they will also need better systems. Advances in technology will be needed to ensure the network bandwidth and CPU power can provide reasonable update rates and support the requisite timing systems. Beyond the scaling problem, next generation systems face additional challenges due to growing cyber security threats and the likelihood that some degree of remote development and operation will be required. With a large number of components, the need for high reliability increases and commercial solutions can play a key role towards this goal. Future control systems will operate more complex machines and need to present a well integrated, interoperable set of tools with a high degree of automation. Consistency of data presentation and exception handling will contribute to efficient operations. From the development perspective, engineers will need to provide integrated data management in the beginning of the project and build adaptive software components around a central data repository. This will make the system maintainable and ensure consistency throughout the inevitable changes during the machine lifetime. Additionally, such a large project will require professional project management and disciplined use of well-defined engineering processes. Distributed project teams will make the use of standards, formal requirements and design and configuration control vital. Success in building the control system of the future may hinge on how well we integrate commercial components and learn from best practices used in other industries

  2. Policy Driven Development: Flexible Policy Insertion for Large Scale Systems.

    Science.gov (United States)

    Demchak, Barry; Krüger, Ingolf

    2012-07-01

    The success of a software system depends critically on how well it reflects and adapts to stakeholder requirements. Traditional development methods often frustrate stakeholders by creating long latencies between requirement articulation and system deployment, especially in large scale systems. One source of latency is the maintenance of policy decisions encoded directly into system workflows at development time, including those involving access control and feature set selection. We created the Policy Driven Development (PDD) methodology to address these development latencies by enabling the flexible injection of decision points into existing workflows at runtime, thus enabling policy composition that integrates requirements furnished by multiple, oblivious stakeholder groups. Using PDD, we designed and implemented a production cyberinfrastructure that demonstrates policy and workflow injection that quickly implements stakeholder requirements, including features not contemplated in the original system design. PDD provides a path to quickly and cost effectively evolve such applications over a long lifetime.

  3. Universality in few-body systems with large scattering length

    International Nuclear Information System (INIS)

    Hammer, H.-W.

    2005-01-01

    Effective Field Theory (EFT) provides a powerful framework that exploits a separation of scales in physical systems to perform systematically improvable, model-independent calculations. Particularly interesting are few-body systems with short-range interactions and large two-body scattering length. Such systems display remarkable universal features. In systems with more than two particles, a three-body force with limit cycle behavior is required for consistent renormalization already at leading order. We will review this EFT and some of its applications in the physics of cold atoms and nuclear physics. In particular, we will discuss the possibility of an infrared limit cycle in QCD. Recent extensions of the EFT approach to the four-body system and N-boson droplets in two spatial dimensions will also be addressed

  4. Senior Management Use of Management Control Systems in Large Companies

    DEFF Research Database (Denmark)

    Willert, Jeanette; Israelsen, Poul; Rohde, Carsten

    2017-01-01

    Ferreira and Otley’s (2009) conceptual and holistic framework for performance management systems, supplemented by elements of contextual factors and organisational culture. Further, selected researchers’ perceptions of the purpose of using management control systems are related to practitioners’ ideas......The use of management control systems in large companies remains relatively unexplored. Indeed, only a few studies of senior managers’ use of management control systems consider multiple controls in companies. This paper explores data from a comprehensive survey of the use of management control...... systems in 120 strategic business units at some of the largest companies in Denmark. The paper identifies how senior management guides and controls their subordinates to meet their companies’ objectives. The presentation and discussion of the results, including citations from executive managers, use...

  5. FEMSYN - a code system to solve multigroup diffusion theory equations using a variety of solution techniques. Part 1: Description of code system - input and sample problems

    International Nuclear Information System (INIS)

    Jagannathan, V.

    1985-01-01

    A modular computer code system called FEMSYN has been developed to solve the multigroup diffusion theory equations. The various methods that are incorporated in FEMSYN are (i) finite difference method (FDM) (ii) finite element method (FEM) and (iii) single channel flux synthesis method (SCFS). These methods are described in detail in parts II, III and IV of the present report. In this report, a comparison of the accuracy and the speed of different methods of solution for some benchmark problems are reported. The input preparation and listing of sample input and output are included in the Appendices. The code FEMSYN has been used to solve a wide variety of reactor core problems. It can be used for both LWR and PHWR applications. (author)
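
As a minimal illustration of the finite difference method (FDM) option, a one-group, one-dimensional diffusion equation with zero-flux boundaries reduces to a tridiagonal linear system. This is a generic sketch with invented cross-section data, not FEMSYN itself:

```python
import numpy as np

def diffusion_1d(D, sig_a, S, L, n):
    """Finite-difference solve of -D u'' + sig_a u = S on (0, L) with
    u(0) = u(L) = 0, discretized on n interior points (tridiagonal)."""
    h = L / (n + 1)
    A = (np.diag(np.full(n, 2 * D / h**2 + sig_a))
         + np.diag(np.full(n - 1, -D / h**2), 1)
         + np.diag(np.full(n - 1, -D / h**2), -1))
    return np.linalg.solve(A, np.full(n, S))

# Invented one-group data: the flux flattens toward S/sig_a = 2 in the
# interior and falls to zero at the boundaries.
u = diffusion_1d(D=1.0, sig_a=0.5, S=1.0, L=10.0, n=99)
```

Multigroup, multidimensional codes assemble the same kind of sparse system per group and solve them with iterative methods rather than a dense solve.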

  6. Improved decomposition–coordination and discrete differential dynamic programming for optimization of large-scale hydropower system

    International Nuclear Information System (INIS)

    Li, Chunlong; Zhou, Jianzhong; Ouyang, Shuo; Ding, Xiaoling; Chen, Lu

    2014-01-01

    Highlights: • Optimization of a large-scale hydropower system in the Yangtze River basin. • Improved decomposition–coordination and discrete differential dynamic programming. • Generating the initial solution randomly to reduce generation time. • Proposing a relative coefficient for more power generation. • Proposing an adaptive bias corridor technology to enhance convergence speed. - Abstract: With the construction of major hydro plants, more and more large-scale hydropower systems are gradually taking shape, which raises the challenge of optimizing these systems. Optimization of a large-scale hydropower system (OLHS), which determines the water discharges or water levels of all hydro plants to maximize total power generation subject to many constraints, is a high-dimensional, nonlinear and coupled complex problem. In order to solve the OLHS problem effectively, an improved decomposition–coordination and discrete differential dynamic programming (IDC–DDDP) method is proposed in this paper. A strategy in which the initial solution is generated randomly is adopted to reduce generation time. Meanwhile, a relative coefficient based on maximum output capacity is proposed for more power generation. Moreover, an adaptive bias corridor technology is proposed to enhance convergence speed. The proposed method is applied to long-term optimal dispatches of the large-scale hydropower system (LHS) in the Yangtze River basin. Compared to other methods, IDC–DDDP has competitive performance in not only total power generation but also convergence speed, which provides a new method to solve the OLHS problem.
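
The dynamic-programming core behind such dispatch methods can be illustrated with a toy single-reservoir model (linear generation on an integer storage grid; all data invented — this is not the IDC–DDDP algorithm itself):

```python
from functools import lru_cache

def max_generation(inflows, s0, smax):
    """Backward dynamic program on an integer storage grid. Toy model:
    generation equals release, and water above smax spills."""
    T = len(inflows)

    @lru_cache(maxsize=None)
    def value(t, s):
        if t == T:
            return 0
        avail = min(s + inflows[t], smax)   # inflow arrives, excess spills
        return max(r + value(t + 1, avail - r) for r in range(avail + 1))

    return value(0, s0)

# Invented data: 2 units stored, inflows (3, 1, 2), capacity 4 -> 1 unit of
# unavoidable spill in stage 0, so at most 7 of the 8 units can be released.
total = max_generation(inflows=(3, 1, 2), s0=2, smax=4)
```

Real systems make this intractable directly (many reservoirs, continuous states, nonlinear heads), which is why decomposition–coordination and corridor-narrowing refinements such as DDDP are used.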

  7. The ESPAT tool: a general-purpose DSS shell for solving stochastic optimization problems in complex river-aquifer systems

    Science.gov (United States)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel; Tilmant, Amaury

    2015-04-01

    Stochastic programming methods are better suited to deal with the inherent uncertainty of inflow time series in water resource management. However, one of the most important hurdles in their use in practical implementations is the lack of generalized Decision Support System (DSS) shells, usually based on a deterministic approach. The purpose of this contribution is to present a general-purpose DSS shell, named Explicit Stochastic Programming Advanced Tool (ESPAT), able to build and solve stochastic programming problems for most water resource systems. It implements a hydro-economic approach, optimizing the total system benefits as the sum of the benefits obtained by each user. It has been coded using GAMS, and implements a Microsoft Excel interface with a GAMS-Excel link that allows the user to introduce the required data and recover the results. Therefore, no GAMS skills are required to run the program. The tool is divided into four modules according to its capabilities: 1) the ESPATR module, which performs stochastic optimization procedures in surface water systems using a Stochastic Dual Dynamic Programming (SDDP) approach; 2) the ESPAT_RA module, which optimizes coupled surface-groundwater systems using a modified SDDP approach; 3) the ESPAT_SDP module, capable of performing stochastic optimization procedures in small-size surface systems using a standard SDP approach; and 4) the ESPAT_DET module, which implements a deterministic programming procedure using non-linear programming, able to solve deterministic optimization problems in complex surface-groundwater river basins. The case study of the Mijares river basin (Spain) is used to illustrate the method. It consists of two reservoirs in series, one aquifer and four agricultural demand sites currently managed using historical (XIV century) rights, which give priority to the most traditional irrigation district over the XX century agricultural developments. Its size makes it possible to use either the SDP or

  8. A trial of patient-oriented problem-solving system for immunology teaching in China: a comparison with dialectic lectures

    Science.gov (United States)

    2013-01-01

    Background The most common teaching method used in China is lecturing, but recently, efforts have been widely undertaken to promote the transition from teacher-centered to student-centered education. The patient-oriented problem-solving (POPS) system is an innovative teaching-learning method that permits students to work in small groups to solve clinical problems, promotes self-learning, encourages clinical reasoning and develops long-lasting memory. To the best of our knowledge, however, POPS has never been applied in teaching immunology in China. The aim of this study was to develop POPS in teaching immunology and assess students’ and teachers’ perceptions of POPS. Methods 321 second-year medical students were divided into two groups: I and II. Group I, comprising 110 students, was taught by POPS, and 16 immunology teachers witnessed the whole teaching process. Group II, including the remaining 211 students, was taught through traditional lectures. The results of the pre- and post-tests of both groups were compared. Group I students and teachers then completed a self-structured feedback questionnaire for analysis before a discussion meeting attended only by the teachers was held. Results Significant improvement in the mean difference between the pre- and post-test scores of those in Groups I and II was seen, demonstrating the effectiveness of POPS teaching. Most students responded that POPS facilitates self-learning, helps them to understand topics and creates interest, and 88.12% of students favored POPS over simple lectures. Moreover, while they responded that POPS facilitated student learning better than lectures, teachers pointed out that limited teaching resources would make it difficult to apply POPS widely in China. Conclusions While POPS can break up the monotony of dialectic lectures and serve as a better teaching method, it may not be feasible for the current educational environment in China. The main reason for this is the relative shortage of teaching

  9. A trial of patient-oriented problem-solving system for immunology teaching in China: a comparison with dialectic lectures.

    Science.gov (United States)

    Zhang, Zhiren; Liu, Wei; Han, Junfeng; Guo, Sheng; Wu, Yuzhang

    2013-01-28

    The most common teaching method used in China is lecturing, but recently, efforts have been widely undertaken to promote the transition from teacher-centered to student-centered education. The patient-oriented problem-solving (POPS) system is an innovative teaching-learning method that permits students to work in small groups to solve clinical problems, promotes self-learning, encourages clinical reasoning and develops long-lasting memory. To the best of our knowledge, however, POPS has never been applied in teaching immunology in China. The aim of this study was to develop POPS in teaching immunology and assess students' and teachers' perceptions of POPS. 321 second-year medical students were divided into two groups: I and II. Group I, comprising 110 students, was taught by POPS, and 16 immunology teachers witnessed the whole teaching process. Group II, including the remaining 211 students, was taught through traditional lectures. The results of the pre- and post-tests of both groups were compared. Group I students and teachers then completed a self-structured feedback questionnaire for analysis before a discussion meeting attended only by the teachers was held. Significant improvement in the mean difference between the pre- and post-test scores of those in Groups I and II was seen, demonstrating the effectiveness of POPS teaching. Most students responded that POPS facilitates self-learning, helps them to understand topics and creates interest, and 88.12% of students favored POPS over simple lectures. Moreover, while they responded that POPS facilitated student learning better than lectures, teachers pointed out that limited teaching resources would make it difficult to apply POPS widely in China. While POPS can break up the monotony of dialectic lectures and serve as a better teaching method, it may not be feasible for the current educational environment in China. The main reason for this is the relative shortage of teaching resources such as space, library facilities

  10. Verifying large modular systems using iterative abstraction refinement

    International Nuclear Information System (INIS)

    Lahtinen, Jussi; Kuismin, Tuomas; Heljanko, Keijo

    2015-01-01

    Digital instrumentation and control (I&C) systems are increasingly used in the nuclear engineering domain. The exhaustive verification of these systems is challenging, and the usual verification methods such as testing and simulation are typically insufficient. Model checking is a formal method that is able to exhaustively analyse the behaviour of a model against a formally written specification. If the model checking tool detects a violation of the specification, it will give out a counter-example that demonstrates how the specification is violated in the system. Unfortunately, sometimes real life system designs are too big to be directly analysed by traditional model checking techniques. We have developed an iterative technique for model checking large modular systems. The technique uses abstraction based over-approximations of the model behaviour, combined with iterative refinement. The main contribution of the work is the concrete abstraction refinement technique based on the modular structure of the model, the dependency graph of the model, and a refinement sampling heuristic similar to delta debugging. The technique is geared towards proving properties, and outperforms BDD-based model checking, the k-induction technique, and the property directed reachability algorithm (PDR) in our experiments. - Highlights: • We have developed an iterative technique for model checking large modular systems. • The technique uses BDD-based model checking, k-induction, and PDR in parallel. • We have tested our algorithm by verifying two models with it. • The technique outperforms classical model checking methods in our experiments
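
A refinement sampling heuristic "similar to delta debugging" can be sketched as the classic ddmin reduction over a list of model modules: repeatedly drop chunks while the property of interest still holds. The module names and the failure predicate here are invented:

```python
def ddmin(modules, fails):
    """Delta-debugging-style reduction: repeatedly drop chunks of the
    module list while the predicate `fails` still holds on the remainder."""
    n = 2
    while len(modules) >= 2:
        chunk = max(1, len(modules) // n)
        reduced = False
        for i in range(0, len(modules), chunk):
            candidate = modules[:i] + modules[i + chunk:]   # drop one chunk
            if candidate and fails(candidate):
                modules, n, reduced = candidate, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(modules):
                break                       # granularity exhausted: minimal
            n = min(len(modules), n * 2)    # refine: smaller chunks
    return modules

# Hypothetical system: the counter-example needs "pump" and "valve" together.
culprit = ddmin(["pump", "valve", "hmi", "logger", "bus"],
                lambda ms: "pump" in ms and "valve" in ms)
```

In the model-checking setting, the predicate would re-run the checker on an abstraction that keeps only the candidate modules concrete, isolating the modules a property actually depends on.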

  11. Large area high-speed metrology SPM system

    International Nuclear Information System (INIS)

    Klapetek, P; Valtr, M; Martinek, J; Picco, L; Payton, O D; Miles, M; Yacoot, A

    2015-01-01

    We present a large area high-speed measuring system capable of rapidly generating nanometre resolution scanning probe microscopy data over mm² regions. The system combines a slow moving but accurate large area XYZ scanner with a very fast but less accurate small area XY scanner. This arrangement enables very large areas to be scanned by stitching together the small, rapidly acquired, images from the fast XY scanner while simultaneously moving the slow XYZ scanner across the region of interest. In order to successfully merge the image sequences together two software approaches for calibrating the data from the fast scanner are described. The first utilizes the low uncertainty interferometric sensors of the XYZ scanner while the second implements a genetic algorithm with multiple parameter fitting during the data merging step of the image stitching process. The basic uncertainty components related to these high-speed measurements are also discussed. Both techniques are shown to successfully enable high-resolution, large area images to be generated at least an order of magnitude faster than with a conventional atomic force microscope. (paper)
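
One standard way to register overlapping tiles during stitching is to locate the peak of an FFT-based cross-correlation. This generic sketch recovers a known integer shift between two synthetic tiles; it is not the record's interferometric or genetic-algorithm calibration:

```python
import numpy as np

def integer_shift(ref, img):
    """Estimate the integer (dy, dx) translation of img relative to ref
    from the peak of their FFT-based circular cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Wrap indices in the upper half back to negative shifts.
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

# Synthetic tile pair with a known circular shift of (5, -3) pixels.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
img = np.roll(ref, shift=(5, -3), axis=(0, 1))
shift = integer_shift(ref, img)
```

Real stitching additionally handles sub-pixel offsets, drift and intensity differences, which is where the calibration approaches described in the record come in.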

  12. Large area high-speed metrology SPM system

    Science.gov (United States)

    Klapetek, P.; Valtr, M.; Picco, L.; Payton, O. D.; Martinek, J.; Yacoot, A.; Miles, M.

    2015-02-01

    We present a large area high-speed measuring system capable of rapidly generating nanometre resolution scanning probe microscopy data over mm² regions. The system combines a slow moving but accurate large area XYZ scanner with a very fast but less accurate small area XY scanner. This arrangement enables very large areas to be scanned by stitching together the small, rapidly acquired, images from the fast XY scanner while simultaneously moving the slow XYZ scanner across the region of interest. In order to successfully merge the image sequences together two software approaches for calibrating the data from the fast scanner are described. The first utilizes the low uncertainty interferometric sensors of the XYZ scanner while the second implements a genetic algorithm with multiple parameter fitting during the data merging step of the image stitching process. The basic uncertainty components related to these high-speed measurements are also discussed. Both techniques are shown to successfully enable high-resolution, large area images to be generated at least an order of magnitude faster than with a conventional atomic force microscope.

  13. Bake-Out Mobile Controls for Large Vacuum Systems

    CERN Document Server

    Blanchard, S; Gomes, P; Pereira, H; Kopylov, L; Merker, S; Mikheev, M

    2014-01-01

    Large vacuum systems at CERN (Large Hadron Collider - LHC, Low Energy Ion Rings - LEIR...) require bake-out to achieve ultra-high vacuum specifications. The bake-out cycle is used to decrease the outgassing rate of the vacuum vessel and to activate the Non-Evaporable Getter (NEG) thin film. Bake-out control is a Proportional-Integral-Derivative (PID) regulation with complex recipes, interlocks and troubleshooting management and remote control. It is based on mobile Programmable Logic Controller (PLC) cabinets, fieldbus network and Supervisory Control and Data Acquisition (SCADA) application. The CERN vacuum installations include more than 7 km of baked vessels; using mobile cabinets reduces considerably the cost of the control system. The cabinets are installed close to the vacuum vessels during the time of the bake-out cycle. Mobile cabinets can be used in any of the CERN vacuum facilities. Remote control is provided through a fieldbus network and a SCADA application
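
The PID regulation at the heart of bake-out control can be sketched against a toy first-order thermal model. All gains and plant constants below are invented for illustration and are not CERN's tuning:

```python
def simulate_bakeout(setpoint=200.0, t_amb=20.0, steps=4000, dt=1.0,
                     kp=8.0, ki=0.02, kd=0.0, tau=600.0, gain=0.05):
    """PID loop driving a toy first-order thermal plant:
    dT/dt = (t_amb - T)/tau + gain * heater_power."""
    T, integral, prev_err = t_amb, 0.0, setpoint - t_amb
    for _ in range(steps):
        err = setpoint - T
        integral += err * dt
        deriv = (err - prev_err) / dt
        # Heater power is clamped at zero: the loop can heat but not cool.
        power = max(0.0, kp * err + ki * integral + kd * deriv)
        prev_err = err
        T += dt * ((t_amb - T) / tau + gain * power)
    return T

final_temp = simulate_bakeout()   # settles near the 200 degC setpoint
```

A production bake-out controller layers recipes (ramp rates, plateaus, NEG activation steps) and interlocks on top of such a loop, as the record describes.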

  14. Quarkonia production in small and large systems measured by ATLAS

    CERN Document Server

    Lopez, Jorge; The ATLAS collaboration

    2018-01-01

    The experimentally observed dissociation and regeneration of bound quarkonium states in heavy-ion collisions provide a powerful tool to probe the dynamics of the hot, dense plasma. These measurements are sensitive to the effects of color screening, color recombination, or other, new suppression mechanisms. In the large-statistics Run 2 lead-lead and proton-lead collision data, these phenomena can be probed with unprecedented precision. Measurements of the ground and excited quarkonia states, as well as their separation into prompt and non-prompt components, provide further opportunities to study the dynamics of heavy parton energy loss in these large systems. In addition, quarkonium production rates, and their excited to ground states ratios, in small, asymmetric systems are an interesting probe of cold nuclear matter effects. In this talk, the latest ATLAS results on quarkonia production will be presented, including new, differential measurements of charmonium suppression and azimuthal modulation in lead-lea...

  15. Dynamics of Large Systems of Nonlinearly Evolving Units

    Science.gov (United States)

    Lu, Zhixin

    The dynamics of large systems of many nonlinearly evolving units is a general research area of great importance for many areas of science and technology, including biology, computation by artificial neural networks, statistical mechanics, flocking in animal groups, the dynamics of coupled neurons in the brain, and many others. While universal principles and techniques are largely lacking in this broad area of research, there is one particular phenomenon that seems to be broadly applicable: the idea of emergence, by which is meant macroscopic behaviors that "emerge" from a large system of many "smaller or simpler entities such that...large entities" [i.e., macroscopic behaviors] arise which "exhibit properties the smaller/simpler entities do not exhibit." In this thesis we investigate mechanisms and manifestations of emergence in four dynamical systems consisting of many nonlinearly evolving units. These four systems are as follows. (a) We first study the motion of a large ensemble of many noninteracting particles in a slowly changing Hamiltonian system that undergoes a separatrix crossing. In such systems, we find that separatrix crossing induces a counterintuitive effect. Specifically, numerical simulation of two sets of densely sprinkled initial conditions on two energy curves appears to suggest that the two energy curves, one originally enclosing the other, seemingly interchange their positions. This, however, is topologically forbidden. We resolve this paradox by introducing a numerical simulation method we call "robust" and studying its consequences. (b) We next study the collective dynamics of oscillatory pacemaker neurons in the Suprachiasmatic Nucleus (SCN), which, through synchrony, govern the circadian rhythm of mammals. We start from a high-dimensional description of the many coupled oscillatory neuronal units within the SCN. This description is based on a forced Kuramoto model. We then reduce the system dimensionality by using

  16. A Classification Framework for Large-Scale Face Recognition Systems

    OpenAIRE

    Zhou, Ziheng; Deravi, Farzin

    2009-01-01

    This paper presents a generic classification framework for large-scale face recognition systems. Within the framework, a data sampling strategy is proposed to tackle the data imbalance when image pairs are sampled from thousands of face images for preparing a training dataset. A modified kernel Fisher discriminant classifier is proposed to make it computationally feasible to train the kernel-based classification method using tens of thousands of training samples. The framework is tested in an...

  17. A remote joint system for large vacuum ducts

    International Nuclear Information System (INIS)

    Hagmann, D.B.; Coughlan, J.B.

    1983-01-01

    A large remote vacuum duct joining system that uses several two-jaw, screw-driven clamps has been developed for fusion machines. The preferred location for clamp installation is inside the vacuum duct, where access space is available for the actuating device; this also decreases the space needed for handling operations outside the duct. The clamp system is unique in that it is low cost, applies force directly over the seal, permits leak testing of the seal annulus, is highly reliable, can be remotely replaced, and is usable in a variety of other applications

  18. The system of computer simulation and organizational management of large enterprises activity

    Directory of Open Access Journals (Sweden)

    E. D. Chertov

    2016-01-01

    A study of the construction of integrated technical support is carried out using the example of organizational information systems for the administrative and economic management of large organizations. Within a management information system, the integrated technical support is related to the other parts of the system, first of all to the information database management system, which covers all types of information required for planning and management, and to the algorithms for processing this information. This means not only that the control system determines the required set of technical means, but also that its features have a significant effect on the composition and organization of the management information system database. A feature of integrated technical support is the variety of hardware functions, the large number of device types, the different ways in which the operator and the equipment interact, and the possibility of different line-ups and aggregations of devices. The complex of technical means of a management information system has all the features of a complex system: versatility, the presence of feedback, multicriteriality, a hierarchical structure, and the presence of distinct parts connected to each other by complex interactions, whose behavior is uncertain as a result of the finite reliability of technical means and the influence of environmental disturbances. For this reason, the tasks associated with the creation of integrated technical support for a management information system should be solved with a systems approach. To maximize the efficiency of the management information system, the technological complex should be constructed with minimal installation and operation costs, which leads to the need to choose the optimal variant of technical means from the possible ones. The solution of the main objectives of integrated technical support can be reduced to the construction of a joint set of languages - character sets or alphabets describing the input

  19. Electron cyclotron beam measurement system in the Large Helical Device

    Energy Technology Data Exchange (ETDEWEB)

    Kamio, S., E-mail: kamio@nifs.ac.jp; Takahashi, H.; Kubo, S.; Shimozuma, T.; Yoshimura, Y.; Igami, H.; Ito, S.; Kobayashi, S.; Mizuno, Y.; Okada, K.; Osakabe, M.; Mutoh, T. [National Institute for Fusion Science, Toki 509-5292 (Japan)

    2014-11-15

    In order to evaluate the electron cyclotron (EC) heating power inside the Large Helical Device vacuum vessel and to investigate the physics of the interaction between the EC beam and the plasma, a direct measurement system for the EC beam transmitted through the plasma column was developed. The system consists of an EC beam target plate, which is made of isotropic graphite and faces the EC beam through the plasma, and an IR camera that measures the temperature increase of the target plate caused by the transmitted EC beam. The system is applicable at high magnetic field (up to 2.75 T) and plasma density (up to 0.8 × 10¹⁹ m⁻³). It successfully evaluated the transmitted EC beam profile and its refraction.

  20. Large capacity, high-speed multiparameter multichannel analysis system

    International Nuclear Information System (INIS)

    Hendricks, R.W.; Seeger, P.A.; Scheer, J.W.; Suehiro, S.

    1980-01-01

    A data acquisition system for recording multiparameter digital data into a large memory array at over 2.5 MHz is described. The system consists of a MOSTEK MK8600 2048K x 24-bit memory system, I/O ports to various external devices including the CAMAC dataway, a memory incrementer/adder and a daisy-chain of experiment-specific modules which calculate the memory address which is to be incremented. The design of the daisy-chain permits multiple modules and provides for easy modification as experimental needs change. The system has been designed for use in multiparameter, multichannel analysis of high-speed data gathered by position-sensitive detectors at conventional and synchrotron x-ray sources as well as for fixed energy and time-of-flight diffraction at continuous and pulsed neutron sources

  1. Separate Poles Mode for Large-Capacity HVDC System

    Science.gov (United States)

    Zhu, Lin; Gao, Qin

    2017-05-01

    This paper proposes a novel connection mode, the separate poles mode (SPM), for large-capacity HVDC systems. The proposed mode focuses on the core issues of HVDC connection in interconnected power grids and principally aims at increasing the effective electric distance between poles, which helps to mitigate the interaction problems between the AC system and the DC system. Under this mode, the receiving end of a bipolar HVDC link is divided into different inverter stations, which significantly alleviates the difficulties of power transmission and consumption in receiving-end AC grids. An investigation of the changes in multi-feed short-circuit ratio (MISCR) shows that an HVDC link with SPM has significant impacts on itself and on other HVDC systems using the conventional connection mode, demonstrating that SPM can strike a balance between increasing MISCR and limiting short-circuit current.

  2. Two-level systems driven by large-amplitude fields

    Science.gov (United States)

    Nori, F.; Ashhab, S.; Johansson, J. R.; Zagoskin, A. M.

    2009-03-01

    We analyze the dynamics of a two-level system subject to driving by large-amplitude external fields, focusing on the resonance properties in the case of driving around the region of avoided level crossing. In particular, we consider three main questions that characterize resonance dynamics: (1) the resonance condition, (2) the frequency of the resulting oscillations on resonance, and (3) the width of the resonance. We identify the regions of validity of different approximations. In a large region of the parameter space, we use a geometric picture in order to obtain both a simple understanding of the dynamics and quantitative results. The geometric approach is obtained by dividing the evolution into discrete time steps, with each time step described by either a phase shift on the basis states or a coherent mixing process corresponding to a Landau-Zener crossing. We compare the results of the geometric picture with those of a rotating wave approximation. We also comment briefly on the prospects of employing strong driving as a useful tool to manipulate two-level systems. S. Ashhab, J.R. Johansson, A.M. Zagoskin, F. Nori, Two-level systems driven by large-amplitude fields, Phys. Rev. A 75, 063414 (2007). S. Ashhab et al, unpublished.
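The discrete-step geometric picture described in this abstract can be sketched as alternating unitaries: a relative-phase step on the two basis states followed by a coherent Landau-Zener "beam-splitter" step at the crossing. The transition probability, phases, and step count below are illustrative assumptions, not values from the paper:

```python
import cmath
import math

# Toy version of the geometric picture: evolution split into discrete steps,
# alternating phase accumulation with coherent Landau-Zener mixing.
# p_lz, phi, and the Stokes phase are illustrative parameters.

def phase_step(state, phi):
    """Accumulate relative phase phi between the two basis states."""
    a, b = state
    return (a * cmath.exp(-1j * phi / 2), b * cmath.exp(1j * phi / 2))

def lz_step(state, p_lz, stokes_phase=0.0):
    """Unitary beam-splitter for one Landau-Zener crossing."""
    a, b = state
    t = math.sqrt(1.0 - p_lz)            # amplitude to stay on the level
    r = math.sqrt(p_lz)                  # amplitude to cross
    phase = cmath.exp(1j * stokes_phase)
    return (t * a - r * phase.conjugate() * b,
            r * phase * a + t * b)

state = (1.0 + 0j, 0j)                   # start in the ground state
for _ in range(50):                      # 50 driving periods
    state = phase_step(state, phi=0.7)
    state = lz_step(state, p_lz=0.2)
norm = abs(state[0]) ** 2 + abs(state[1]) ** 2
```

Because each step is unitary, the total population (`norm`) is conserved while the relative phase between successive crossings controls whether the transfer interferes constructively or destructively, which is exactly the resonance mechanism the abstract refers to.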

  3. Static analysis of large-scale multibody system using joint coordinates and spatial algebra operator.

    Science.gov (United States)

    Omar, Mohamed A

    2014-01-01

    Initial transient oscillations exhibited in the dynamic simulation responses of multibody systems can lead to inaccurate results, unrealistic load prediction, or simulation failure. These transients can result from incompatible initial conditions, initial constraint violations, and inadequate kinematic assembly. Performing a static equilibrium analysis before the dynamic simulation can eliminate these transients and lead to a stable simulation. Most existing multibody formulations determine the static equilibrium position by minimizing the system potential energy. This paper presents a new general-purpose approach for solving the static equilibrium of large-scale articulated multibody systems. The proposed approach introduces an energy drainage mechanism based on the Baumgarte constraint stabilization approach to determine the static equilibrium position. The spatial algebra operator is used to express the kinematic and dynamic equations of the closed-loop multibody system. The proposed multibody system formulation uses the joint coordinates and modal elastic coordinates as the system generalized coordinates. The recursive nonlinear equations of motion are formulated using the Cartesian coordinates and the joint coordinates to form an augmented set of differential algebraic equations. The system connectivity matrix is then derived from the system topological relations and used to project the Cartesian quantities into the joint subspace, leading to a minimum set of differential equations.
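As a toy illustration of the Baumgarte constraint stabilization ingredient referenced above (the paper's actual formulation uses spatial algebra and joint coordinates, well beyond a short sketch), consider a planar pendulum modeled in Cartesian coordinates with a stabilized length constraint; the gains and the simple integrator are illustrative assumptions:

```python
import math

# Planar pendulum in Cartesian coordinates with a Baumgarte-stabilized
# length constraint C = (x^2 + y^2 - L^2)/2. Baumgarte replaces C = 0 by
# Cddot + 2*alpha*Cdot + beta^2*C = 0 so numerical drift decays.

def simulate_pendulum(steps=20000, dt=1e-3, alpha=10.0, beta=10.0):
    m, g, length = 1.0, 9.81, 1.0
    x, y, vx, vy = length, 0.0, 0.0, 0.0    # start horizontal, at rest
    for _ in range(steps):
        C = 0.5 * (x * x + y * y - length * length)   # constraint violation
        Cdot = x * vx + y * vy
        # Acceleration-level constraint: J*a = -Jdot*v - 2*alpha*Cdot - beta^2*C
        rhs = -(vx * vx + vy * vy) - 2.0 * alpha * Cdot - beta * beta * C
        f_x, f_y = 0.0, -m * g                         # applied force: gravity
        # With J = [x, y] and M = m*I: (J M^-1 J^T) lam = rhs - J M^-1 f
        lam = (rhs - (x * f_x + y * f_y) / m) / ((x * x + y * y) / m)
        ax = (f_x + lam * x) / m
        ay = (f_y + lam * y) / m
        vx += ax * dt; vy += ay * dt                   # semi-implicit Euler
        x += vx * dt;  y += vy * dt
    return math.hypot(x, y)    # distance from pivot; should stay near L
```

Without the `alpha`/`beta` terms the constraint drifts steadily under integration error; with them, any violation decays on a time scale of roughly `1/alpha`, which is the stabilization idea the paper builds its energy-drainage mechanism on.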

  4. Design of central control system for large helical device (LHD)

    International Nuclear Information System (INIS)

    Yamazaki, K.; Kaneko, H.; Yamaguchi, S.; Watanabe, K.Y.; Taniguchi, Y.; Motojima, O.

    1993-11-01

    The world's largest superconducting fusion machine, LHD (Large Helical Device), is under construction in Japan, aiming at steady-state operation. Its basic control system consists of UNIX computers, FDDI/Ethernet LANs, VME multiprocessors and the VxWorks real-time OS. For flexible and reliable operation of the LHD machine, a cooperative distributed system with more than 30 experimental devices is controlled by the central computer and the main timing system, and is supervised by the main protective interlock system. Intelligent control systems, such as applications of fuzzy logic and neural networks, are planned to be adopted for flexible feedback control of plasma configurations besides the classical PID control scheme. Design studies of the control system and related R and D programs with coil-plasma simulation systems are now being performed. The construction of the LHD Control Building on a new site will begin in 1995, after the construction of the LHD Experimental Building is finished, and the hardware construction of the LHD central control equipment will start in 1996. First plasma production by means of this control system is expected in 1997. (author)

  5. Stability and Control of Large-Scale Dynamical Systems A Vector Dissipative Systems Approach

    CERN Document Server

    Haddad, Wassim M

    2011-01-01

    Modern complex large-scale dynamical systems exist in virtually every aspect of science and engineering, and are associated with a wide variety of physical, technological, environmental, and social phenomena, including aerospace, power, communications, and network systems, to name just a few. This book develops a general stability analysis and control design framework for nonlinear large-scale interconnected dynamical systems, and presents the most complete treatment on vector Lyapunov function methods, vector dissipativity theory, and decentralized control architectures. Large-scale dynami

  6. Hybrid and Cooperative Strategies Using Harmony Search and Artificial Immune Systems for Solving the Nurse Rostering Problem

    Directory of Open Access Journals (Sweden)

    Suk Ho Jin

    2017-06-01

    The nurse rostering problem is an important search problem that features many constraints. In a nurse rostering problem, these constraints are defined by processes such as maintaining work regulations, assigning nurse shifts, and considering nurse preferences. A number of approaches to address these constraints, such as penalty function methods, have been investigated in the literature. We propose two types of hybrid metaheuristic approaches for solving the nurse rostering problem, based on combining harmony search techniques and artificial immune systems to balance local and global searches and to prevent slow convergence and prematurity. The proposed algorithms are evaluated against a benchmark dataset of nurse rostering problems; the results show that they identify better or best-known solutions compared to those identified in other studies for most instances. The results also show that the combination of harmony search and artificial immune systems is better suited than single metaheuristics or other hybridization methods for finding upper-bound solutions for nurse rostering problems and discrete optimization problems.
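For reference, the core harmony-search loop used as one half of the hybrid above can be sketched on a toy continuous objective. The HMCR/PAR parameters follow standard harmony-search terminology, but the objective and settings are illustrative and unrelated to the paper's rostering model:

```python
import random

# Toy harmony search minimizing a simple sphere function. Each iteration
# improvises a new "harmony": per dimension, either recall a value from
# harmony memory (rate HMCR), optionally pitch-adjust it (rate PAR), or
# draw a fresh random value; the new harmony replaces the worst if better.

def harmony_search(objective, dim, bounds, hm_size=20, hmcr=0.9, par=0.3,
                   bandwidth=0.1, iterations=2000, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hm_size)]
    memory.sort(key=objective)                     # best first, worst last
    for _ in range(iterations):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                # memory consideration
                value = rng.choice(memory)[d]
                if rng.random() < par:             # pitch adjustment
                    value += rng.uniform(-bandwidth, bandwidth)
            else:                                  # random consideration
                value = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, value)))
        if objective(new) < objective(memory[-1]): # replace the worst
            memory[-1] = new
            memory.sort(key=objective)
    return memory[0]

best = harmony_search(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
```

In the hybrid approaches described above, a population-based immune mechanism would then diversify or clone the harmonies between such improvisation rounds; here only the harmony-search half is shown.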

  7. On Lattice Sequential Decoding for Large MIMO Systems

    KAUST Repository

    Ali, Konpal S.

    2014-04-01

    Due to their ability to provide high data rates, Multiple-Input Multiple-Output (MIMO) wireless communication systems have become increasingly popular. Decoding of these systems with acceptable error performance is computationally very demanding. In the case of large overdetermined MIMO systems, we employ the Sequential Decoder using the Fano Algorithm. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity and vice versa for higher bias values. We attempt to bound the error by bounding the bias, using the minimum distance of a lattice. Also, a particular trend is observed with increasing SNR: a region of low complexity and high error, followed by a region of high complexity and error falling, and finally a region of low complexity and low error. For lower bias values, the stages of the trend are incurred at lower SNR than for higher bias values. This has the important implication that a low enough bias value, at low to moderate SNR, can result in low error and low complexity even for large MIMO systems. Our work is compared against Lattice Reduction (LR) aided Linear Decoders (LDs). Another impressive observation for low bias values that satisfy the error bound is that the Sequential Decoder's error is seen to fall with increasing system size, while it grows for the LR-aided LDs. For the case of large underdetermined MIMO systems, Sequential Decoding with two preprocessing schemes is proposed – 1) Minimum Mean Square Error Generalized Decision Feedback Equalization (MMSE-GDFE) preprocessing 2) MMSE-GDFE preprocessing, followed by Lattice Reduction and Greedy Ordering. Our work is compared against previous work which employs Sphere Decoding preprocessed using MMSE-GDFE, Lattice Reduction and Greedy Ordering. For the case of large systems, this results in high complexity and difficulty in choosing the sphere radius. Our schemes

  8. Remote collaboration system based on large scale simulation

    International Nuclear Information System (INIS)

    Kishimoto, Yasuaki; Sugahara, Akihiro; Li, J.Q.

    2008-01-01

    Large-scale simulation using super-computers, which generally requires long CPU time and produces large amounts of data, has been extensively studied as a third pillar in various advanced science fields, in parallel with theory and experiment. Such simulations are expected to lead to new scientific discoveries through the elucidation of various complex phenomena that are hardly identified by conventional theoretical and experimental approaches alone. In order to assist such large simulation studies, in which many collaborators working at geographically different places participate and contribute, we have developed a unique remote collaboration system, referred to as SIMON (simulation monitoring system), which is based on client-server control and introduces the idea of update processing, in contrast to the widely used post-processing. As a key ingredient, we have developed a trigger method that transmits requests for update processing from the simulation (client) running on a super-computer to a workstation (server). Namely, the simulation running on the super-computer actively controls the timing of update processing. The server, having received requests from the ongoing simulation for data transfer, data analyses, visualizations, etc., starts the corresponding operations during the simulation. The server makes the latest results available to web browsers, so that collaborators can monitor the results at any place and time in the world. By applying the system to a specific simulation project on laser-matter interaction, we have confirmed that the system works well and plays an important role as a collaboration platform on which many collaborators work with one another

  9. Large Scale Emerging Properties from Non Hamiltonian Complex Systems

    Directory of Open Access Journals (Sweden)

    Marco Bianucci

    2017-06-01

    The concept of "large scale" obviously depends on the phenomenon we are interested in. For example, in the field of the foundation of thermodynamics from microscopic dynamics, the large spatial and time scales are of the order of fractions of a millimetre and of microseconds, respectively, or less, and are defined in relation to the spatial and time scales of the microscopic systems. In large-scale oceanography or global climate dynamics problems, the scales of interest are of the order of thousands of kilometres in space and many years in time, and are compared to the local and daily/monthly scales of atmosphere and ocean dynamics. In all these cases a Zwanzig projection approach is, at least in principle, an effective tool to obtain a class of universal smooth "large scale" dynamics for the few degrees of freedom of interest, starting from the complex dynamics of the whole (usually many-degrees-of-freedom) system. The projection approach leads to a very complex calculus with differential operators, which is drastically simplified when the basic dynamics of the system of interest is Hamiltonian, as happens in foundation-of-thermodynamics problems. However, in geophysical fluid dynamics, biology, and most physical problems, the fundamental building-block equations of motion have a non-Hamiltonian structure. Thus, to continue to apply the useful projection approach in these cases as well, we exploit the generalization of the Hamiltonian formalism given by the Lie algebra of dissipative differential operators. In this way, we are able to deal analytically with the series of differential operators stemming from the projection approach applied to these general cases. We then apply this formalism to obtain some relevant results concerning the statistical properties of the El Niño Southern Oscillation (ENSO).

  10. On distributed wavefront reconstruction for large-scale adaptive optics systems.

    Science.gov (United States)

    de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel

    2016-05-01

    The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach, with a speedup that scales quadratically with the number of partitions. D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds that of CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.

  11. Final report LDRD project 105816 : model reduction of large dynamic systems with localized nonlinearities.

    Energy Technology Data Exchange (ETDEWEB)

    Lehoucq, Richard B.; Segalman, Daniel Joseph; Hetmaniuk, Ulrich L. (University of Washington, Seattle, WA); Dohrmann, Clark R.

    2009-10-01

    Advanced computing hardware and software written to exploit massively parallel architectures greatly facilitate the computation of extremely large problems. On the other hand, these tools, though enabling higher-fidelity models, have often resulted in much longer run-times and turn-around-times in providing answers to engineering problems. The impediments include smaller elements and consequently smaller time steps, much larger systems of equations to solve, and the inclusion of nonlinearities that had been ignored in days when lower-fidelity models were the norm. The research effort reported here focuses on accelerating the analysis process for structural dynamics through combinations of model reduction and mitigation of some factors that lead to over-meshing.

  12. ON THE WAYS OF AUTOMATED PROCESSING OF SPATIAL GEOMETRY OF THE SYSTEM “GATE-CASTING” FOR SOLVING OF THE CLASSIFICATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    A. N. Chichko

    2007-01-01

    A system parameterization of castings, which makes it possible to formalize the spatial geometry of a casting, is proposed. The taxonomy algorithm, which can be used for solving casting classification problems in computer-aided design systems for foundry technologies, is described. The method is verified on castings of the "cover" type.

  13. Glass badge dosimetry system for large scale personal monitoring

    International Nuclear Information System (INIS)

    Norimichi Juto

    2002-01-01

    The Glass Badge, based on a silver-activated phosphate glass dosemeter, was specially developed for large-scale personal monitoring, and dosimetry systems such as an automatic reader and a dose calculation algorithm were developed at the same time to achieve reliable personal monitoring. In large-scale personal monitoring, both dosimetric precision and confidence in handling large amounts of personal data are very important. The silver-activated phosphate glass dosemeter has excellent basic characteristics for dosimetry, such as homogeneous and stable sensitivity and negligible fading. The Glass Badge was designed to measure photons in the 10 keV - 10 MeV range, beta radiation in the 300 keV - 3 MeV range, and neutrons in the 0.025 eV - 15 MeV range by an included SSNTD. The developed Glass Badge dosimetry system has not only these basic characteristics but also many features that maintain good precision in dosimetry and data handling. In this presentation, features of the Glass Badge dosimetry system and examples of practical personal monitoring systems will be presented. (Author)

  14. Buffer provisioning for large-scale data-acquisition systems

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Froening, Holger; Vandelli, Wainer

    2018-01-01

    The data acquisition system of the ATLAS experiment, a major experiment of the Large Hadron Collider (LHC) at CERN, will go through a major upgrade in the next decade. The upgrade is driven by experimental physics requirements, calling for increased data rates on the order of 6 TB/s. By contrast, the data rate of the existing system is 160 GB/s. Among the changes in the upgraded system will be a very large buffer with a projected size on the order of 70 PB. The buffer role will be decoupling of data production from on-line data processing, storing data for periods of up to 24 hours until it can be analyzed by the event processing system. The larger buffer will allow a new data recording strategy, providing additional margins to handle variable data rates. At the same time it will provide sensible trade-offs between buffering space and on-line processing capabilities. This compromise between two resources will be possible since the data production cycle includes time periods where the experiment will not produ...

  15. A Simple Instrumentation System for Large Structure Vibration Monitoring

    Directory of Open Access Journals (Sweden)

    Didik R. Santoso

    2010-12-01

    Traditional instrumentation systems used for monitoring the vibration of large-scale infrastructure such as bridges, railways, and other structural buildings generally have a complex design. Making them simple would be very useful in terms of both low cost and easy maintenance. This paper describes how to develop such an instrumentation system. The system is built on a distributed network with a field bus topology, using a single-master multi-slave architecture. The master is a control unit built around a PC equipped with an RS-485 interface. The slave is a sensing unit; each slave was built by integrating a 3-axis vibration sensor with a microcontroller-based data acquisition system. The vibration sensor is designed around a MEMS accelerometer as the main component. The software is developed for two functions: hardware control and data processing. To verify the performance of the developed instrumentation system, several laboratory tests have been performed. The results show that the system has good performance.
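The single-master/multi-slave polling used on such an RS-485 field bus can be sketched as follows; the frame layout, sync byte, and checksum below are hypothetical illustrations, not the paper's actual protocol:

```python
# Sketch of single-master polling on a shared bus: the master addresses
# each sensing unit in turn; every slave sees every frame but only the
# addressed one replies with a 3-axis sample. Frame format is illustrative.

def checksum(payload: bytes) -> int:
    """One-byte additive checksum, a common minimal integrity check."""
    return sum(payload) & 0xFF

def make_request(address: int) -> bytes:
    frame = bytes([0x55, address])           # sync byte + slave address
    return frame + bytes([checksum(frame)])

class SensingUnit:
    """One slave: answers only frames carrying its own address."""
    def __init__(self, address, sample):
        self.address, self.sample = address, sample

    def respond(self, frame: bytes):
        if frame[1] != self.address or frame[2] != checksum(frame[:2]):
            return None                      # not addressed to us: stay silent
        payload = bytes([self.address]) + bytes(self.sample)
        return payload + bytes([checksum(payload)])

def poll_all(addresses, bus):
    """Master side: poll each address and collect validated replies."""
    readings = {}
    for addr in addresses:
        request = make_request(addr)
        for slave in bus:                    # shared medium: all slaves listen
            reply = slave.respond(request)
            if reply and reply[-1] == checksum(reply[:-1]):
                readings[addr] = tuple(reply[1:4])
    return readings

bus = [SensingUnit(1, (10, 20, 30)), SensingUnit(2, (4, 5, 6))]
data = poll_all([1, 2], bus)
```

Because only the addressed slave ever drives the bus, this scheme needs no arbitration, which is what keeps a single-master RS-485 design simple and cheap.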

  16. Advanced energy systems (APU) for large commercial aircraft

    Energy Technology Data Exchange (ETDEWEB)

    Westenberger, A.; Bleil, J.; Arendt, M. [Airbus Deutschland GmbH, Hamburg (Germany)

    2013-06-01

    The use of a highly integrated component based on fuel cell technology, installed on board large commercial passenger aircraft to generate onboard power for the systems demand during an entire aircraft mission, was the subject of several studies. The results of these studies are based on simulation of the whole system in the context of an aircraft system environment. The work began with analyses of different fuel cell technologies and of the aircraft system environment. Today, onboard power is provided on the ground by an APU and in flight by the main engines. In order to compare fuel cell technology with today's usual gas turbine, operational characteristics have been analysed. A second analysis was devoted to the system demand for typical aircraft categories. The MEA system concept was assumed in all cases. The favoured concept represented an aircraft propelled by conventional engines with starter generator units, providing AC electrical power and covering in total approximately half of the power demand, and a component based on fuel cell technology. This component provided electrical DC power, clean potable water, thermal energy at 180 degrees Celsius, and nitrogen-enriched air for fire suppression and fire extinguishing. In contrast to a usual gas-turbine-based APU, this new unit was operated as the primary power system. (orig.)

  17. Exchanging large data object in multi-agent systems

    Science.gov (United States)

    Al-Yaseen, Wathiq Laftah; Othman, Zulaiha Ali; Nazri, Mohd Zakree Ahmad

    2016-08-01

    One of the Business Intelligence solutions currently in use is the Multi-Agent System (MAS). Communication is one of the most important elements in MAS, especially for exchanging large amounts of low-level data between physically distributed agents. The Agent Communication Language in JADE has been offered as a secure method for sending data, whereby the data is defined as an object. However, such an object cannot be used to send data to an agent in a different location. Therefore, the aim of this paper is to propose a method for exchanging large low-level data as an object by creating a proxy agent, known as a Delivery Agent, which temporarily imitates the Receiver Agent. The results show that the proposed method is able to send large amounts of data. The experiments were conducted using 16 datasets ranging from 100,000 to 7 million instances. For the proposed method, the RAM and CPU of the Receiver Agent's machine had to be slightly increased, but the latency was not significantly different compared to the Java Socket method (non-agent and less secure). With these results, it is concluded that the proposed method can be used to securely send large data between agents.
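The Delivery Agent idea can be sketched as a proxy that streams a serialized object in fixed-size chunks and a receiver that reassembles it; the chunk size and the in-memory mailbox below stand in for JADE's actual message transport and are purely illustrative:

```python
import pickle

# Sketch of the proxy-agent pattern: instead of one oversized message, the
# Delivery Agent serializes the object and hands it over chunk by chunk;
# the receiver reassembles the chunks into the original object.

CHUNK = 64 * 1024        # illustrative chunk size

def delivery_agent(obj, mailbox):
    """Serialize obj and post it to the mailbox in fixed-size chunks."""
    blob = pickle.dumps(obj)
    for start in range(0, len(blob), CHUNK):
        mailbox.append(blob[start:start + CHUNK])
    mailbox.append(None)                       # end-of-transfer marker

def receiver_agent(mailbox):
    """Drain the mailbox and rebuild the original object."""
    parts = []
    while True:
        chunk = mailbox.pop(0)
        if chunk is None:
            break
        parts.append(chunk)
    return pickle.loads(b"".join(parts))

mailbox = []
dataset = [{"id": i, "value": i * 0.5} for i in range(100000)]
delivery_agent(dataset, mailbox)
received = receiver_agent(mailbox)
```

Chunking keeps the peak per-message size bounded regardless of the dataset size, which is the property that lets the proxy forward data the normal object-passing path cannot carry.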

  18. Distributed system for large-scale remote research

    International Nuclear Information System (INIS)

    Ueshima, Yutaka

    2002-01-01

    In advanced photon research, large-scale simulations and high-resolution observations are powerful tools. In numerical and real experiments, real-time visualization and steering is considered a promising method of data analysis. This approach is valid for one-off analyses or low-cost experiments and simulations. In research on an unknown problem, however, the output data must be analyzed many times, because a conclusive analysis is difficult to reach in a single pass. Consequently, output data should be filed so that it can be referred to and analyzed at any time. To support such research, automatic functions are needed for transporting data files from the data generator to data storage, analyzing data, tracking the history of data handling, and so on. The supporting system will be a functionally distributed system. (author)

  19. SIMON: Remote collaboration system based on large scale simulation

    International Nuclear Information System (INIS)

    Sugawara, Akihiro; Kishimoto, Yasuaki

    2003-01-01

    Development of the SIMON (SImulation MONitoring) system is described. SIMON aims to investigate physical phenomena of tokamak-type nuclear fusion plasma by simulation, and to exchange information and carry out joint research with scientists around the world via the internet. The characteristics of SIMON are the following: 1) reduced simulation load through a trigger sending method; 2) visualization of simulation results and a hierarchical structure of analysis; 3) a reduced number of licenses by using the command line when software is used; 4) improved support for network use of simulation data output through HTML (Hyper Text Markup Language); 5) avoidance of complex built-in work in the client part; and 6) small-sized, portable software. The visualization method for large-scale simulation, the remote collaboration system based on HTML, the trigger sending method, the hierarchical analytical method, the introduction into a three-dimensional electromagnetic transportation code, and the technologies of the SIMON system are explained. (S.Y.)

  20. Cryogenic control system of the large COMPASS polarized target

    CERN Document Server

    Gautheron, F; Baum, G; Berglund, P; Doshita, N; Görtz, S; Gustafsson, K K; Horikawa, N; Kisselev, Yu V; Koivuniemi, J H; Kondo, K; Meyer, Werner T; Reicherz, G

    2004-01-01

    The dilution refrigerator used to cool the large COMPASS polarized target is monitored through a PC running LabVIEW 6.1 under Windows 2000. About 60 parameters of the target (temperatures, pressures, flow rates) are continuously plotted and checked. They are periodically recorded in an Oracle database and in a data file. An alarm for every parameter can be individually activated and optionally connected to a GSM (Global System for Mobile Communication) message delivery system. A web server receives and publishes the online status of the target, with online tables and graphics, on a dedicated COMPASS polarized target information web site. A Siemens programmable logic controller (PLC) powered by an uninterruptible source keeps the cryogenic system safe and stable during the long beam periods by controlling valves and interlocks. This safety feature protects the dilution refrigerator against potential damage in case of power failure.

  1. Magnetic Properties of Large-Scale Nanostructured Graphene Systems

    DEFF Research Database (Denmark)

    Gregersen, Søren Schou

    The on-going progress in two-dimensional (2D) materials and nanostructure fabrication motivates the study of altered and combined materials. Graphene, the most studied material of the 2D family, displays unique electronic and spintronic properties. Exceptionally high electron mobilities, which surpass those in conventional materials such as silicon, make graphene a very interesting material for high-speed electronics. Simultaneously, long spin-diffusion lengths and spin lifetimes make graphene an eligible spin-transport channel. In this thesis, we explore fundamental features of nanostructured graphene systems using large-scale modeling techniques. Graphene perforations, or antidots, have received substantial interest in the prospect of opening large band gaps in the otherwise gapless graphene. Motivated by recent improvements of fabrication processes, such as forming graphene antidots and layer...

  2. Adaptive Neural Networks Decentralized FTC Design for Nonstrict-Feedback Nonlinear Interconnected Large-Scale Systems Against Actuator Faults.

    Science.gov (United States)

    Li, Yongming; Tong, Shaocheng

    The problem of active fault-tolerant control (FTC) is investigated for the large-scale nonlinear systems in nonstrict-feedback form. The nonstrict-feedback nonlinear systems considered in this paper consist of unstructured uncertainties, unmeasured states, unknown interconnected terms, and actuator faults (e.g., bias fault and gain fault). A state observer is designed to solve the unmeasurable state problem. Neural networks (NNs) are used to identify the unknown lumped nonlinear functions so that the problems of unstructured uncertainties and unknown interconnected terms can be solved. By combining the adaptive backstepping design principle with the combination Nussbaum gain function property, a novel NN adaptive output-feedback FTC approach is developed. The proposed FTC controller can guarantee that all signals in all subsystems are bounded, and the tracking errors for each subsystem converge to a small neighborhood of zero. Finally, numerical results of practical examples are presented to further demonstrate the effectiveness of the proposed control strategy.

  3. Efficient network monitoring for large data acquisition systems

    International Nuclear Information System (INIS)

    Savu, D.O.; Martin, B.; Al-Shabibi, A.; Sjoen, R.; Batraneanu, S.M.; Stancu, S.N.

    2012-01-01

    Though constantly evolving and improving, the available network monitoring solutions have limitations when applied to the infrastructure of a high speed realtime data acquisition (DAQ) system. DAQ networks are particular computer networks where experts have to pay attention to both individual subsections as well as system wide traffic flows while monitoring the network. The ATLAS Network at the Large Hadron Collider (LHC) has more than 200 switches interconnecting 3500 hosts and totaling 8500 high speed links. The use of heterogeneous tools for monitoring various infrastructure parameters, in order to assure optimal DAQ system performance, proved to be a tedious and time consuming task for experts. To alleviate this problem we used our networking and DAQ expertise to build a flexible and scalable monitoring system providing an intuitive user interface with the same look and feel irrespective of the data provider that is used. Our system uses custom developed components for critical performance monitoring and seamlessly integrates complementary data from auxiliary tools, such as NAGIOS, information services or custom databases. A number of techniques (e.g. normalization, aggregation and data caching) were used in order to improve the user interface response time. The end result is a unified monitoring interface, for fast and uniform access to system statistics, which significantly reduced the time spent by experts for ad-hoc and post-mortem analysis. (authors)
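The abstract names normalization, aggregation, and data caching as the techniques used to keep the monitoring interface responsive. A minimal Python sketch of two of them, time-bucket aggregation of raw link statistics and a TTL read cache in front of an expensive query, is given below; all names and parameters are hypothetical illustrations, not the ATLAS tooling:

```python
import time
from collections import defaultdict

def aggregate(samples, bucket_s=60):
    """Down-sample raw (timestamp, value) statistics into per-bucket
    averages, so a dashboard query touches far fewer points than the
    raw feed contains."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[int(ts // bucket_s) * bucket_s].append(value)
    return {b: sum(v) / len(v) for b, v in sorted(buckets.items())}

class TTLCache:
    """Cache computed results for a short time-to-live, so repeated
    UI requests do not re-query the underlying data providers."""
    def __init__(self, ttl_s=10.0):
        self.ttl_s = ttl_s
        self._store = {}

    def get(self, key, compute):
        hit = self._store.get(key)
        if hit is not None and time.monotonic() - hit[0] < self.ttl_s:
            return hit[1]          # fresh enough: serve cached value
        value = compute()          # stale or missing: recompute
        self._store[key] = (time.monotonic(), value)
        return value

if __name__ == "__main__":
    raw = [(0, 1.0), (30, 3.0), (60, 5.0)]
    print(aggregate(raw))  # {0: 2.0, 60: 5.0}
```

In a monitoring UI the `compute` callable would wrap the slow path (e.g. a NAGIOS or database query), and the aggregation step would run before plotting.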

  4. Issues in visual support to real-time space system simulation solved in the Systems Engineering Simulator

    Science.gov (United States)

    Yuen, Vincent K.

    1989-01-01

    The Systems Engineering Simulator has addressed the major issues in providing visual data to its real-time man-in-the-loop simulations. Out-the-window views and CCTV views are provided by three scene systems to give the astronauts their real-world views. To expand the window coverage for the Space Station Freedom workstation a rotating optics system is used to provide the widest field of view possible. To provide video signals to as many viewpoints as possible, windows and CCTVs, with a limited amount of hardware, a video distribution system has been developed to time-share the video channels among viewpoints at the selection of the simulation users. These solutions have provided the visual simulation facility for real-time man-in-the-loop simulations for the NASA space program.

  5. On the Evaluation of Computational Results Obtained from Solving Systems of Linear Equations with Matlab: The Dual Affine Scaling Interior Point Method

    International Nuclear Information System (INIS)

    Murfi, Hendri; Basaruddin, T.

    2001-01-01

    The interior point method for linear programming has gained extraordinary interest as an alternative to the simplex method since Karmarkar presented a polynomial-time algorithm for linear programming based on the interior point method. In implementations of this method, two things have a heavy impact on the performance of the algorithm: the data structure and the method used to solve the linear equation system inside the algorithm. This paper describes solving the linear equation system in a variant of the algorithm called the dual affine scaling algorithm. We then experimentally evaluate the results of several methods, both direct and iterative. The experimental evaluation used Matlab.
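The paper's comparison of direct versus iterative solvers can be illustrated in a few lines of Python/NumPy. Each dual affine scaling iteration solves a symmetric positive definite system of the form (A D² Aᵀ) y = rhs, where D is a positive diagonal scaling matrix, so both a direct factorization solve and conjugate gradients apply. The actual experiments used Matlab; the matrix sizes and data below are arbitrary:

```python
import numpy as np

def conjugate_gradient(M, b, tol=1e-10, max_iter=1000):
    # Plain CG for a symmetric positive definite matrix M.
    x = np.zeros_like(b)
    r = b - M @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Mp = M @ p
        alpha = rs / (p @ Mp)
        x = x + alpha * p
        r = r - alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Build a system of the form (A D^2 A^T) y = rhs: with A of full
# row rank and D positive diagonal, the matrix is SPD by construction.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
d = rng.uniform(0.1, 1.0, 50)
M = (A * d**2) @ A.T                  # A D^2 A^T without forming D
rhs = rng.standard_normal(20)

y_direct = np.linalg.solve(M, rhs)    # direct (factorization) solve
y_cg = conjugate_gradient(M, rhs)     # iterative solve
assert np.allclose(y_direct, y_cg, atol=1e-6)
```

For well-conditioned scalings both routes agree to high accuracy; the interesting trade-offs appear as D becomes ill-conditioned near the boundary of the feasible region, which is exactly what such an experimental evaluation probes.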

  6. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Willcox, Karen [MIT; Marzouk, Youssef [MIT

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to

  7. Market penetration of large wind/diesel systems

    International Nuclear Information System (INIS)

    Kronborg, T.

    1992-01-01

    Burmeister & Wain is developing a large-size wind/diesel package in collaboration with Micon, the Danish wind turbine manufacturer, and the Danish utility NESA. The package comprises an initial calculation of the technical feasibility and the economic viability of an actual project, installation of the optimum number of large wind turbines, and service, operation, and maintenance as needed. The concept should be seen as an addition to existing diesel-based power stations. Wind turbines are especially advantageous in smaller diesel-based electrical systems in the 1-20 MW range, because such systems can have high fuel costs and expensive maintenance. Analysis of the market for the wind/diesel concept indicates that islands and remote areas with limited population are likely candidates for implementation of wind/diesel systems. An example of an economic analysis of a wind/diesel application on an isolated island is presented, showing the cost savings possible. To obtain practical experience and to demonstrate the wind/diesel concept, a MW-size demonstration plant is being constructed in Denmark

  8. Large Scale Self-Organizing Information Distribution System

    National Research Council Canada - National Science Library

    Low, Steven

    2005-01-01

    This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with a large number of high-capacity nodes and transmission links, shared by a large number of users...

  9. Reliable pipeline repair system for very large pipe size

    Energy Technology Data Exchange (ETDEWEB)

    Charalambides, John N.; Sousa, Alexandre Barreto de [Oceaneering International, Inc., Houston, TX (United States)

    2004-07-01

    The oil and gas industry worldwide has mainly depended on the long-term reliability of rigid pipelines to ensure the transportation of hydrocarbons, crude oil, gas, fuel, etc. Many other methods are also utilized onshore and offshore (e.g. flexible lines, FPSOs, etc.), but when it comes to the underwater transportation of very high volumes of oil and gas, the industry commonly uses large rigid pipelines (i.e. steel pipes). Oil and gas operators have learned to depend on the long-lasting integrity of these very large pipelines, and often forget or disregard that even steel pipelines degrade over time and, more than that, are also susceptible to various forms of damage (minor or major, environmental or external, etc.). Over recent years the industry has recognized the need to implement an 'emergency repair plan' to account for such unforeseen events, and oil and gas operators have become 'smarter' by being 'pro-active' in order to ensure 'flow assurance'. For very large diameter steel pipelines, such as 42" and 48" nominal pipe size (NPS), the industry worldwide does not provide 'ready-made', 'off-the-shelf' repair hardware that can be easily shipped to the offshore location to effect a major repair within acceptable time frames and avoid substantial profit losses due to 'down-time' in production. The typical time required to establish a solid repair system for large pipe diameters could be as long as six or more months (depending on the availability of raw materials). This paper will present in detail the Emergency Pipeline Repair Systems (EPRS) that Oceaneering successfully designed, manufactured, tested and provided to two major oil and gas operators, located on two different continents (Gulf of Mexico, U.S.A. and Arabian Gulf, U.A.E.), for two different very large pipe sizes (42" and 48" Nominal Pipe Sizes

  10. A trial of patient-oriented problem-solving system for immunology teaching in China: a comparison with dialectic lectures

    Directory of Open Access Journals (Sweden)

    Zhang Zhiren

    2013-01-01

    Background: The most common teaching method used in China is lecturing, but recently, efforts have been widely undertaken to promote the transition from teacher-centered to student-centered education. The patient-oriented problem-solving (POPS) system is an innovative teaching-learning method that permits students to work in small groups to solve clinical problems, promotes self-learning, encourages clinical reasoning and develops long-lasting memory. To the best of our knowledge, however, POPS has never been applied in teaching immunology in China. The aim of this study was to develop POPS for teaching immunology and to assess students' and teachers' perceptions of POPS. Methods: 321 second-year medical students were divided into two groups. Group I, comprising 110 students, was taught by POPS, and 16 immunology teachers observed the whole teaching process. Group II, including the remaining 211 students, was taught through traditional lectures. The results of the pre- and post-tests of both groups were compared. Group I students and teachers then completed a self-structured feedback questionnaire for analysis, before a discussion meeting attended only by the teachers was held. Results: A significant improvement in the mean difference between the pre- and post-test scores of Groups I and II was seen, demonstrating the effectiveness of POPS teaching. Most students responded that POPS facilitates self-learning, helps them to understand topics and creates interest, and 88.12% of students favored POPS over simple lectures. Moreover, while the students responded that POPS facilitated their learning better than lectures, the teachers pointed out that limited teaching resources would make wide application of POPS in China difficult. Conclusions: While POPS can break up the monotony of dialectic lectures and serve as a better teaching method, it may not be feasible in the current educational environment in China. The main reason for this is the

  11. Solving Immunology?

    Science.gov (United States)

    Vodovotz, Yoram; Xia, Ashley; Read, Elizabeth L; Bassaganya-Riera, Josep; Hafler, David A; Sontag, Eduardo; Wang, Jin; Tsang, John S; Day, Judy D; Kleinstein, Steven H; Butte, Atul J; Altman, Matthew C; Hammond, Ross; Sealfon, Stuart C

    2017-02-01

    Emergent responses of the immune system result from the integration of molecular and cellular networks over time and across multiple organs. High-content and high-throughput analysis technologies, concomitantly with data-driven and mechanistic modeling, hold promise for the systematic interrogation of these complex pathways. However, connecting genetic variation and molecular mechanisms to individual phenotypes and health outcomes has proven elusive. Gaps remain in data, and disagreements persist about the value of mechanistic modeling for immunology. Here, we present the perspectives that emerged from the National Institute of Allergy and Infectious Disease (NIAID) workshop 'Complex Systems Science, Modeling and Immunity' and subsequent discussions regarding the potential synergy of high-throughput data acquisition, data-driven modeling, and mechanistic modeling to define new mechanisms of immunological disease and to accelerate the translation of these insights into therapies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Surface Nuclear Magnetic Resonance Imaging of Large Systems

    International Nuclear Information System (INIS)

    Weichman, P.B.; Lavely, E.M.; Ritzwoller, M.H.

    1999-01-01

    The general theory of surface NMR imaging of large electromagnetically active systems is considered, motivated by geophysical applications. A general imaging equation is derived for the NMR voltage response, valid for arbitrary transmitter and receiver loop geometry and arbitrary conductivity structure of the sample. When the conductivity grows to the point where the electromagnetic skin depth becomes comparable to the sample size, significant diffusive retardation effects occur that strongly affect the signal. Accounting for these now allows more accurate imaging than previously possible. It is shown that the time constant T₁ may in principle be inferred directly from the diffusive tail of the signal. copyright 1999 The American Physical Society

  13. The weight and angle of depression detection and control system of a large portal crane

    Science.gov (United States)

    Shi, Lian-Wen; Xie, Hongxia; Wang, Meijing; Guan, Yankui; Leng, Gengxin

    2008-12-01

    In order to prevent overturning accidents, the lifted weight and the angle of depression must be monitored while a large portal crane is working in a shipyard. However, the weight sensor and the angle-of-depression detection part are located far away from the central control room. The long signal transmission distance introduces considerable interference, and can even bring the system down. To solve these problems, a high-precision analog signal amplifier and a voltage/current (V/I) converting circuit are placed at the sensor used to detect the weight. After the sensor signals have been amplified, they are converted into 4 to 20 mA current signals for transmission, which overcomes the interference accumulated along the long transmission path. A WXJ-3 potentiometer is applied to detect the angle of depression; this device has the advantages of high repeat-position accuracy, good stability and strong anti-fatigue properties. After processing by a current-strengthening circuit, the transmitted voltage signals gain the noise immunity of current transmission owing to the large current value, so the anti-jamming capability is stronger. The weight and angle-of-depression signals are sent to an A/D converter, digitized, and passed to a control system built around a PLC. The PLC calculates the current rated lifting weight for the present angle of depression, and when the measured weight is greater than the rated one, the PLC sends control signals to stop the lifting; the crane can then only put the load down. The safety of the large portal crane is thus effectively guaranteed. At present, the system has been applied to the 70-ton large portal cranes of the Tianjin Xingang Shipyard, with 10 years of safe operation.
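The interlock logic the abstract describes, scaling a 4-20 mA loop signal back to engineering units and blocking hoisting when the measured weight exceeds the angle-dependent rated load, can be sketched as follows. The rating curve, boom length, and moment budget below are hypothetical placeholders; the paper gives no numeric rating data:

```python
import math

def current_to_value(i_ma, lo, hi):
    # Standard 4-20 mA scaling: 4 mA maps to lo, 20 mA maps to hi.
    # Current-loop signals tolerate long, noisy cable runs far better
    # than raw voltage outputs, which is why the crane uses them.
    return lo + (i_ma - 4.0) / 16.0 * (hi - lo)

BOOM_LENGTH_M = 40.0      # hypothetical jib length
MOMENT_LIMIT_TM = 2000.0  # hypothetical overturning-moment budget (tonne-metres)

def rated_load_t(angle_deg):
    # As the boom is lowered (smaller angle of depression from
    # vertical reach), its horizontal reach grows and the same weight
    # produces a larger overturning moment, so the rated load shrinks.
    reach_m = BOOM_LENGTH_M * math.cos(math.radians(angle_deg))
    return MOMENT_LIMIT_TM / reach_m

def hoisting_permitted(weight_t, angle_deg):
    # The PLC blocks hoisting (lowering the load stays allowed)
    # whenever the measured weight exceeds the rated load.
    return weight_t <= rated_load_t(angle_deg)

if __name__ == "__main__":
    weight = current_to_value(12.0, 0.0, 100.0)  # 12 mA -> 50 t
    print(hoisting_permitted(weight, 60.0))      # True: rated 100 t at 60 deg
    print(hoisting_permitted(70.0, 20.0))        # False: rated ~53 t at 20 deg
```

A real PLC implements this as ladder logic or a function block with hysteresis and sensor-fault handling; the Python above only shows the comparison the abstract describes.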

  14. The Pathological Spectrum of Systemic Anaplastic Large Cell Lymphoma (ALCL)

    Directory of Open Access Journals (Sweden)

    Ivonne A. Montes-Mojarro

    2018-04-01

    Anaplastic large cell lymphoma (ALCL) represents a group of malignant T-cell lymphoproliferations that share morphological and immunophenotypical features, namely strong CD30 expression and variable loss of T-cell markers, but differ in clinical presentation and prognosis. The recognition of anaplastic lymphoma kinase (ALK) fusion proteins as a result of chromosomal translocations or inversions was the starting point for the distinction of different subgroups of ALCL. According to their distinct clinical settings and molecular findings, the 2016 revised World Health Organization (WHO) classification recognizes four different entities: systemic ALK-positive ALCL (ALK+ ALCL), systemic ALK-negative ALCL (ALK− ALCL), primary cutaneous ALCL (pC-ALCL), and breast implant-associated ALCL (BI-ALCL), the latter included as a provisional entity. ALK is rearranged in approximately 80% of systemic ALCL cases with one of its partner genes, most commonly NPM1, and is associated with favorable prognosis, whereas systemic ALK− ALCL shows heterogeneous clinical, phenotypical, and genetic features, underlining the different oncogenesis of these two entities. Recognition of the pathological spectrum of ALCL is crucial to understanding its pathogenesis and its boundaries with other entities. In this review, we will focus on the morphological, immunophenotypical, and molecular features of systemic ALK+ and ALK− ALCL. In addition, BI-ALCL will be discussed.

  15. An integrated system for large scale scanning of nuclear emulsions

    Energy Technology Data Exchange (ETDEWEB)

    Bozza, Cristiano, E-mail: kryss@sa.infn.it [University of Salerno and INFN, via Ponte Don Melillo, Fisciano 84084 (Italy); D’Ambrosio, Nicola [Laboratori Nazionali del Gran Sasso, S.S. 17 BIS km 18.910, Assergi (AQ) 67010 (Italy); De Lellis, Giovanni [University of Napoli and INFN, Complesso Universitario di Monte Sant' Angelo, via Cintia Ed. G, Napoli 80126 (Italy); De Serio, Marilisa [University of Bari and INFN, via E. Orabona 4, Bari 70125 (Italy); Di Capua, Francesco [INFN Napoli, Complesso Universitario di Monte Sant' Angelo, via Cintia Ed. G, Napoli 80126 (Italy); Di Crescenzo, Antonia [University of Napoli and INFN, Complesso Universitario di Monte Sant' Angelo, via Cintia Ed. G, Napoli 80126 (Italy); Di Ferdinando, Donato [INFN Bologna, viale B. Pichat 6/2, Bologna 40127 (Italy); Di Marco, Natalia [Laboratori Nazionali del Gran Sasso, S.S. 17 BIS km 18.910, Assergi (AQ) 67010 (Italy); Esposito, Luigi Salvatore [Laboratori Nazionali del Gran Sasso, now at CERN, Geneva (Switzerland); Fini, Rosa Anna [INFN Bari, via E. Orabona 4, Bari 70125 (Italy); Giacomelli, Giorgio [University of Bologna and INFN, viale B. Pichat 6/2, Bologna 40127 (Italy); Grella, Giuseppe [University of Salerno and INFN, via Ponte Don Melillo, Fisciano 84084 (Italy); Ieva, Michela [University of Bari and INFN, via E. Orabona 4, Bari 70125 (Italy); Kose, Umut [INFN Padova, via Marzolo 8, Padova (PD) 35131 (Italy); Longhin, Andrea; Mauri, Nicoletta [INFN Laboratori Nazionali di Frascati, via E. Fermi 40, Frascati (RM) 00044 (Italy); Medinaceli, Eduardo [University of Padova and INFN, via Marzolo 8, Padova (PD) 35131 (Italy); Monacelli, Piero [University of L' Aquila and INFN, via Vetoio Loc. Coppito, L' Aquila (AQ) 67100 (Italy); Muciaccia, Maria Teresa; Pastore, Alessandra [University of Bari and INFN, via E. Orabona 4, Bari 70125 (Italy); and others

    2013-03-01

    The European Scanning System, developed to analyse nuclear emulsions at high speed, has been completed with the development of a high level software infrastructure to automate and support large-scale emulsion scanning. In one year, an average installation is capable of performing data-taking and online analysis on a total surface ranging from a few m² to tens of m², acquiring many billions of tracks, corresponding to several TB. This paper focuses on the procedures that have been implemented and on their impact on physics measurements. The system proved robust, reliable, fault-tolerant and user-friendly, and seldom needs assistance. A dedicated relational Data Base system is the backbone of the whole infrastructure, storing the data themselves and not only catalogues of data files, as in common practice, a unique case among high-energy physics DAQ systems. The logical organisation of the system is described and a summary is given of the physics measurements that are readily available by automated processing.

  16. Large quantum systems: a mathematical and numerical perspective

    International Nuclear Information System (INIS)

    Lewin, M.

    2009-06-01

    This thesis is devoted to the mathematical study of variational models for large quantum systems. The mathematical methods are that of nonlinear analysis, calculus of variations, partial differential equations, spectral theory, and numerical analysis. The first part contains some results on finite systems. We study several approximations of the N-body Schroedinger equation for electrons in an atom or a molecule, and then the so-called Hartree-Fock- Bogoliubov model for a system of fermions interacting via the gravitational force. In a second part, we propose a new method allowing to prove the existence of the thermodynamic limit of Coulomb quantum systems. Then, we construct two Hartree-Fock-type models for infinite systems. The first is a relativistic theory deduced from Quantum Electrodynamics, allowing to describe the behavior of electrons, coupled to that of Dirac's vacuum which can become polarized. The second model describes a nonrelativistic quantum crystal in the presence of a charged defect. A new numerical method is also proposed. The last part of the thesis is devoted to spectral pollution, a phenomenon which is observed when trying to approximate eigenvalues in a gap of the essential spectrum of a self-adjoint operator, for instance for periodic Schroedinger operators or Dirac operators. (author)

  17. An integrated system for large scale scanning of nuclear emulsions

    International Nuclear Information System (INIS)

    Bozza, Cristiano; D’Ambrosio, Nicola; De Lellis, Giovanni; De Serio, Marilisa; Di Capua, Francesco; Di Crescenzo, Antonia; Di Ferdinando, Donato; Di Marco, Natalia; Esposito, Luigi Salvatore; Fini, Rosa Anna; Giacomelli, Giorgio; Grella, Giuseppe; Ieva, Michela; Kose, Umut; Longhin, Andrea; Mauri, Nicoletta; Medinaceli, Eduardo; Monacelli, Piero; Muciaccia, Maria Teresa; Pastore, Alessandra

    2013-01-01

    The European Scanning System, developed to analyse nuclear emulsions at high speed, has been completed with the development of a high level software infrastructure to automate and support large-scale emulsion scanning. In one year, an average installation is capable of performing data-taking and online analysis on a total surface ranging from a few m² to tens of m², acquiring many billions of tracks, corresponding to several TB. This paper focuses on the procedures that have been implemented and on their impact on physics measurements. The system proved robust, reliable, fault-tolerant and user-friendly, and seldom needs assistance. A dedicated relational Data Base system is the backbone of the whole infrastructure, storing the data themselves and not only catalogues of data files, as in common practice, a unique case among high-energy physics DAQ systems. The logical organisation of the system is described and a summary is given of the physics measurements that are readily available by automated processing.

  18. The construction of arbitrary order ERKN methods based on group theory for solving oscillatory Hamiltonian systems with applications

    Energy Technology Data Exchange (ETDEWEB)

    Mei, Lijie, E-mail: bxhanm@126.com; Wu, Xinyuan, E-mail: xywu@nju.edu.cn

    2016-10-15

    In general, extended Runge–Kutta–Nyström (ERKN) methods are more effective than traditional Runge–Kutta–Nyström (RKN) methods in dealing with oscillatory Hamiltonian systems. However, the theoretical analysis for ERKN methods, such as the order conditions, the symplectic conditions and the symmetric conditions, becomes much more complicated than that for RKN methods. Therefore, it is a bottleneck to construct high-order ERKN methods efficiently. In this paper, we first establish the ERKN group Ω for ERKN methods and the RKN group G for RKN methods, respectively. We then rigorously show that ERKN methods are a natural extension of RKN methods, that is, there exists an epimorphism η of the ERKN group Ω onto the RKN group G. This epimorphism gives a global insight into the structure of the ERKN group by the analysis of its kernel and the corresponding RKN group G. Meanwhile, we establish a particular mapping φ of G into Ω so that each image element is an ideal representative element of the congruence class in Ω. Furthermore, an elementary theoretical analysis shows that this map φ can preserve many structure-preserving properties, such as the order, the symmetry and the symplecticity. From the epimorphism η together with its section φ, we may gain knowledge about the structure of the ERKN group Ω via the RKN group G. In light of the theoretical analysis of this paper, we obtain high-order structure-preserving ERKN methods in an effective way for solving oscillatory Hamiltonian systems. Numerical experiments are carried out and the results are very promising, which strongly support our theoretical analysis presented in this paper.

  19. Participatory Design of Large-Scale Information Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

In this article we discuss how to engage in large-scale information systems development by applying a participatory design (PD) approach that acknowledges the unique situated work practices conducted by the domain experts of modern organizations. We reconstruct the iterative prototyping approach into a PD process model that (1) emphasizes PD experiments as transcending traditional prototyping by evaluating fully integrated systems exposed to real work practices; (2) incorporates improvisational change management including anticipated, emergent, and opportunity-based change; and (3) extends initial design and development into a sustained and ongoing stepwise implementation that constitutes an overall technology-driven organizational change. The process model is presented through a large-scale PD experiment in the Danish healthcare sector, and we reflect on our experiences from this experiment.

  20. The Liquid Argon Calorimeter system for the SLC Large Detector

    International Nuclear Information System (INIS)

    Haller, G.M.; Fox, J.D.; Smith, S.R.

    1988-09-01

    In this paper the physical packaging and the logical organization of the Liquid Argon Calorimeter (LAC) electronics system for the Stanford Linear Collider Large Detector (SLD) at SLAC are described. This system processes signals from approximately 44,000 calorimeter towers and is unusual in that most electronic functions are packaged within the detector itself as opposed to an external electronics support rack. The signal path from the towers in the liquid argon through the vacuum to the outside of the detector is explained. The organization of the control logic, analog electronics, power regulation, analog-to-digital conversion circuits, and fiber optic drivers mounted directly on the detector are described. Redundancy considerations for the electronics and cooling issues are discussed. 12 refs., 5 figs

  1. System concept for a moderate cost Large Deployable Reflector (LDR)

    Science.gov (United States)

    Swanson, P. N.; Breckinridge, J. B.; Diner, A.; Freeland, R. E.; Irace, W. R.; Mcelroy, P. M.; Meinel, A. B.; Tolivar, A. F.

    1986-01-01

A study was carried out at JPL during the first quarter of 1985 to develop a system concept for NASA's LDR. Major features of the concept are a four-mirror, two-stage optical system; a lightweight structural composite segmented primary reflector; and a deployable truss backup structure with integral thermal shield. The two-stage optics uses active figure control at the quaternary reflector located at the primary reflector exit pupil, allowing the large primary to be passive. The lightweight composite reflector panels limit the short-wavelength operation to approximately 30 microns but reduce the total primary reflector weight by a factor of 3 to 4 over competing technologies. On-orbit thermal analysis indicates a primary reflector equilibrium temperature of less than 200 K with a maximum gradient of about 5 °C across the 20-m aperture. Weight and volume estimates are consistent with a single Shuttle launch, and are based on Space Station assembly and checkout.

  2. Large linear magnetoresistivity in strongly inhomogeneous planar and layered systems

    International Nuclear Information System (INIS)

    Bulgadaev, S.A.; Kusmartsev, F.V.

    2005-01-01

Explicit expressions for the magnetoresistance R of planar and layered, strongly inhomogeneous two-phase systems are obtained using an exact dual transformation connecting the effective conductivities of in-plane isotropic two-phase systems with and without a magnetic field. These expressions allow one to describe the magnetoresistance of various inhomogeneous media at arbitrary concentrations x and magnetic fields H. All expressions show a large linear magnetoresistance effect with different dependencies on the phase concentrations. The corresponding plots of the x- and H-dependencies of R(x,H) are presented for various values of magnetic field and concentration at several values of the inhomogeneity parameter. The obtained results show a remarkable similarity with the existing experimental data on linear magnetoresistance in the silver chalcogenides Ag2+δSe. A possible physical explanation of this similarity is proposed. It is shown that random, stripe-type structures of inhomogeneities are the most suitable for the fabrication of magnetic sensors and the storage of information at room temperature.

  3. Human visual system automatically represents large-scale sequential regularities.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-03-04

Our brain recordings reveal that large-scale sequential regularities defined across non-adjacent stimuli can be automatically represented in visual sensory memory. To show this, we adapted to the visual domain an auditory paradigm developed by Sussman, Ritter, and Vaughan (1998; NeuroReport, 9, 4167-4170) and Sussman and Gumenyuk (2005; NeuroReport, 16, 1519-1523), presenting task-irrelevant infrequent luminance-deviant stimuli (D, 20%) inserted among task-irrelevant frequent stimuli of standard luminance (S, 80%) in randomized (randomized condition, SSSDSSSSSDSSSSD...) and fixed manners (fixed condition, SSSSDSSSSDSSSSD...). Comparison of the visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the human visual sensory system, revealed that the visual MMN elicited by deviant stimuli was reduced in the fixed compared to the randomized condition. Thus, the large-scale sequential regularity present in the fixed condition (SSSSD) must have been represented in visual sensory memory. Interestingly, this effect did not occur in conditions with stimulus-onset asynchronies (SOAs) of 480 and 800 ms but was confined to the 160-ms SOA condition, supporting the hypothesis that large-scale regularity extraction was based on perceptual grouping of the five successive stimuli defining the regularity.

  4. AI tools in computer based problem solving

    Science.gov (United States)

    Beane, Arthur J.

    1988-01-01

    The use of computers to solve value oriented, deterministic, algorithmic problems, has evolved a structured life cycle model of the software process. The symbolic processing techniques used, primarily in research, for solving nondeterministic problems, and those for which an algorithmic solution is unknown, have evolved a different model, much less structured. Traditionally, the two approaches have been used completely independently. With the advent of low cost, high performance 32 bit workstations executing identical software with large minicomputers and mainframes, it became possible to begin to merge both models into a single extended model of computer problem solving. The implementation of such an extended model on a VAX family of micro/mini/mainframe systems is described. Examples in both development and deployment of applications involving a blending of AI and traditional techniques are given.

  5. Two-level systems driven by large-amplitude fields

    International Nuclear Information System (INIS)

    Ashhab, S.; Johansson, J. R.; Zagoskin, A. M.; Nori, Franco

    2007-01-01

We analyze the dynamics of a two-level system subject to driving by large-amplitude external fields, focusing on the resonance properties in the case of driving around the region of avoided level crossing. In particular, we consider three main questions that characterize resonance dynamics: (1) the resonance condition, (2) the frequency of the resulting oscillations on resonance, and (3) the width of the resonance. We identify the regions of validity of different approximations. In a large region of the parameter space, we use a geometric picture in order to obtain both a simple understanding of the dynamics and quantitative results. The geometric approach is obtained by dividing the evolution into discrete time steps, with each time step described by either a phase shift on the basis states or a coherent mixing process corresponding to a Landau-Zener crossing. We compare the results of the geometric picture with those of a rotating wave approximation. We also comment briefly on the prospects of employing strong driving as a useful tool to manipulate two-level systems.
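The discrete geometric picture described above, evolution as alternating phase-accumulation and Landau-Zener mixing steps, can be sketched with 2×2 amplitude updates. This is a toy illustration, not the paper's quantitative treatment; the mixing-angle parametrization is an assumption.

```python
import cmath
import math

def evolve_geometric(phases, thetas, state=(1.0 + 0j, 0.0 + 0j)):
    """Evolve a two-level amplitude pair through alternating steps:
    relative-phase accumulation between crossings, then a coherent
    Landau-Zener-like mixing at each avoided-crossing passage."""
    a, b = state
    for phi, theta in zip(phases, thetas):
        # Phase step: diagonal evolution in the diabatic basis.
        a *= cmath.exp(-1j * phi / 2)
        b *= cmath.exp(+1j * phi / 2)
        # Mixing step: 2x2 rotation acting as a beam-splitter analogue.
        c, s = math.cos(theta), math.sin(theta)
        a, b = c * a - s * b, s * a + c * b
    return a, b
```

Two π/4 mixings with no intermediate phase interfere constructively into full population transfer, the discrete analogue of a resonance; a π phase between them would instead interfere destructively.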

  6. Potential large missions enabled by NASA's space launch system

    Science.gov (United States)

    Stahl, H. Philip; Hopkins, Randall C.; Schnell, Andrew; Smith, David A.; Jackman, Angela; Warfield, Keith R.

    2016-07-01

    Large space telescope missions have always been limited by their launch vehicle's mass and volume capacities. The Hubble Space Telescope (HST) was specifically designed to fit inside the Space Shuttle and the James Webb Space Telescope (JWST) is specifically designed to fit inside an Ariane 5. Astrophysicists desire even larger space telescopes. NASA's "Enduring Quests Daring Visions" report calls for an 8- to 16-m Large UV-Optical-IR (LUVOIR) Surveyor mission to enable ultra-high-contrast spectroscopy and coronagraphy. AURA's "From Cosmic Birth to Living Earth" report calls for a 12-m class High-Definition Space Telescope to pursue transformational scientific discoveries. NASA's "Planning for the 2020 Decadal Survey" calls for a Habitable Exoplanet Imaging (HabEx) and a LUVOIR as well as Far-IR and an X-Ray Surveyor missions. Packaging larger space telescopes into existing launch vehicles is a significant engineering complexity challenge that drives cost and risk. NASA's planned Space Launch System (SLS), with its 8 or 10-m diameter fairings and ability to deliver 35 to 45-mt of payload to Sun-Earth-Lagrange-2, mitigates this challenge by fundamentally changing the design paradigm for large space telescopes. This paper reviews the mass and volume capacities of the planned SLS, discusses potential implications of these capacities for designing large space telescope missions, and gives three specific mission concept implementation examples: a 4-m monolithic off-axis telescope, an 8-m monolithic on-axis telescope and a 12-m segmented on-axis telescope.

  7. Potential Large Decadal Missions Enabled by NASA's Space Launch System

    Science.gov (United States)

    Stahl, H. Philip; Hopkins, Randall C.; Schnell, Andrew; Smith, David Alan; Jackman, Angela; Warfield, Keith R.

    2016-01-01

    Large space telescope missions have always been limited by their launch vehicle's mass and volume capacities. The Hubble Space Telescope (HST) was specifically designed to fit inside the Space Shuttle and the James Webb Space Telescope (JWST) is specifically designed to fit inside an Ariane 5. Astrophysicists desire even larger space telescopes. NASA's "Enduring Quests Daring Visions" report calls for an 8- to 16-m Large UV-Optical-IR (LUVOIR) Surveyor mission to enable ultra-high-contrast spectroscopy and coronagraphy. AURA's "From Cosmic Birth to Living Earth" report calls for a 12-m class High-Definition Space Telescope to pursue transformational scientific discoveries. NASA's "Planning for the 2020 Decadal Survey" calls for a Habitable Exoplanet Imaging (HabEx) and a LUVOIR as well as Far-IR and an X-Ray Surveyor missions. Packaging larger space telescopes into existing launch vehicles is a significant engineering complexity challenge that drives cost and risk. NASA's planned Space Launch System (SLS), with its 8 or 10-m diameter fairings and ability to deliver 35 to 45-mt of payload to Sun-Earth-Lagrange-2, mitigates this challenge by fundamentally changing the design paradigm for large space telescopes. This paper reviews the mass and volume capacities of the planned SLS, discusses potential implications of these capacities for designing large space telescope missions, and gives three specific mission concept implementation examples: a 4-m monolithic off-axis telescope, an 8-m monolithic on-axis telescope and a 12-m segmented on-axis telescope.

  8. Regionally Implicit Discontinuous Galerkin Methods for Solving the Relativistic Vlasov-Maxwell System Submitted to Iowa State University

    Science.gov (United States)

    Guthrey, Pierson Tyler

    The relativistic Vlasov-Maxwell system (RVM) models the behavior of collisionless plasma, where electrons and ions interact via the electromagnetic fields they generate. In the RVM system, electrons could accelerate to significant fractions of the speed of light. An idea that is actively being pursued by several research groups around the globe is to accelerate electrons to relativistic speeds by hitting a plasma with an intense laser beam. As the laser beam passes through the plasma it creates plasma wakes, much like a ship passing through water, which can trap electrons and push them to relativistic speeds. Such setups are known as laser wakefield accelerators, and have the potential to yield particle accelerators that are significantly smaller than those currently in use. Ultimately, the goal of such research is to harness the resulting electron beams to generate electromagnetic waves that can be used in medical imaging applications. High-order accurate numerical discretizations of kinetic Vlasov plasma models are very effective at yielding low-noise plasma simulations, but are computationally expensive to solve because of the high dimensionality. In addition to the general difficulties inherent to numerically simulating Vlasov models, the relativistic Vlasov-Maxwell system has unique challenges not present in the non-relativistic case. One such issue is that operator splitting of the phase gradient leads to potential instabilities, thus we require an alternative to operator splitting of the phase. The goal of the current work is to develop a new class of high-order accurate numerical methods for solving kinetic Vlasov models of plasma. The main discretization in configuration space is handled via a high-order finite element method called the discontinuous Galerkin method (DG). One difficulty is that standard explicit time-stepping methods for DG suffer from time-step restrictions that are significantly worse than what a simple Courant-Friedrichs-Lewy (CFL
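As a rough illustration of the time-step restriction mentioned above, the commonly quoted linear-advection bound for explicit DG, dt ≤ CFL·h/(c·(2p+1)), can be encoded directly. The constant and exact form vary by scheme; this is a generic sketch, not the thesis' regionally implicit result.

```python
def dg_max_timestep(h, wave_speed, order, cfl=1.0):
    """Commonly quoted explicit time-step bound for DG with polynomial degree p:
    dt <= CFL * h / (c * (2p + 1)). Degree p = 0 recovers the finite-volume bound."""
    return cfl * h / (wave_speed * (2 * order + 1))
```

The (2p+1) factor is what makes high-order explicit DG "significantly worse" than the plain CFL estimate and motivates implicit or locally implicit alternatives.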

  9. A large capacity, high-speed multiparameter multichannel analysis system

    International Nuclear Information System (INIS)

    Hendricks, R.W.; Suehiro, S.; Seeger, P.A.; Scheer, J.W.

    1982-01-01

    A data acquisition system for recording multiparameter digital data into a large memory array at over 2.5 MHz is described. The system consists of a MOSTEK MK 8600 2048 K x 24-bit memory system, I/O ports to various external devices including the CAMAC dataway, a memory incrementer/adder and a daisy-chain of experiment-specific modules which calculate the memory address which is to be incremented. The design of the daisy-chain permits multiple modules and provides for easy modification as experimental needs change. The system has been designed for use in multiparameter, multichannel analysis of high-speed data gathered by position-sensitive detectors at conventional and synchrotron X-ray sources as well as for fixed energy and time-of-flight diffraction at continuous and pulsed neutron sources. Modules which have been developed to date include a buffer for two-dimensional position-sensitive detectors, a mapper for high-speed coordinate transformations, a buffered time-of-flight clock, a time-correlator for synchronized diffraction experiments, and a display unit for data bus diagnostics. (orig.)

  10. Staghorn: An Automated Large-Scale Distributed System Analysis Platform

    Energy Technology Data Exchange (ETDEWEB)

    Gabert, Kasimir [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Burns, Ian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Elliott, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kallaher, Jenna [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vail, Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-09-01

Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.

  11. Tensor-GMRES method for large sparse systems of nonlinear equations

    Science.gov (United States)

    Feng, Dan; Pulliam, Thomas H.

    1994-01-01

    This paper introduces a tensor-Krylov method, the tensor-GMRES method, for large sparse systems of nonlinear equations. This method is a coupling of tensor model formation and solution techniques for nonlinear equations with Krylov subspace projection techniques for unsymmetric systems of linear equations. Traditional tensor methods for nonlinear equations are based on a quadratic model of the nonlinear function, a standard linear model augmented by a simple second order term. These methods are shown to be significantly more efficient than standard methods both on nonsingular problems and on problems where the Jacobian matrix at the solution is singular. A major disadvantage of the traditional tensor methods is that the solution of the tensor model requires the factorization of the Jacobian matrix, which may not be suitable for problems where the Jacobian matrix is large and has a 'bad' sparsity structure for an efficient factorization. We overcome this difficulty by forming and solving the tensor model using an extension of a Newton-GMRES scheme. Like traditional tensor methods, we show that the new tensor method has significant computational advantages over the analogous Newton counterpart. Consistent with Krylov subspace based methods, the new tensor method does not depend on the factorization of the Jacobian matrix. As a matter of fact, the Jacobian matrix is never needed explicitly.
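A minimal Jacobian-free Newton–GMRES sketch conveys the key point above: the Jacobian is never formed explicitly, since GMRES only needs its action on a vector, which a forward difference supplies. This illustrates the Newton–GMRES building block, not the tensor-model extension itself; function names, tolerances and the test problem are assumptions.

```python
import numpy as np

def gmres_solve(matvec, b, max_iter=30, tol=1e-12):
    """Unrestarted GMRES for A x = b, where A is available only via matvec(v)."""
    n = b.size
    beta = np.linalg.norm(b)
    if beta == 0.0:
        return np.zeros(n)
    Q = np.zeros((n, max_iter + 1))
    H = np.zeros((max_iter + 1, max_iter))
    Q[:, 0] = b / beta
    y = np.zeros(0)
    for j in range(max_iter):
        w = matvec(Q[:, j])
        for i in range(j + 1):          # Arnoldi orthogonalization
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        e1 = np.zeros(j + 2)
        e1[0] = beta
        # Small least-squares problem: min || beta*e1 - H y ||.
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        residual = np.linalg.norm(H[:j + 2, :j + 1] @ y - e1)
        if H[j + 1, j] < 1e-14 or residual < tol:
            return Q[:, :j + 1] @ y
        Q[:, j + 1] = w / H[j + 1, j]
    return Q[:, :max_iter] @ y

def newton_gmres(F, x0, newton_iters=20, f_tol=1e-10, fd_eps=1e-7):
    """Inexact Newton: each step solves J(x) s = -F(x) with GMRES, where the
    Jacobian-vector product is approximated by a forward difference."""
    x = np.asarray(x0, dtype=float)
    for _ in range(newton_iters):
        f = F(x)
        if np.linalg.norm(f) < f_tol:
            break
        matvec = lambda v, x=x, f=f: (F(x + fd_eps * v) - f) / fd_eps
        x = x + gmres_solve(matvec, -f)
    return x
```

Because only matrix-vector products are needed, the scheme never factorizes the Jacobian, which is exactly the property the tensor-GMRES method inherits.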

  12. Fluid-structure interaction in non-rigid pipeline systems - large scale validation experiments

    International Nuclear Information System (INIS)

    Heinsbroek, A.G.T.J.; Kruisbrink, A.C.H.

    1993-01-01

The fluid-structure interaction computer code FLUSTRIN, developed by DELFT HYDRAULICS, enables the user to determine dynamic fluid pressures, structural stresses and displacements in a liquid-filled pipeline system under transient conditions. As such, the code is a useful tool for process and mechanical engineers in the safe design and operation of pipeline systems in nuclear power plants. To validate FLUSTRIN, experiments have been performed in a large-scale 3D test facility. The test facility consists of a flexible pipeline system which is suspended by wires, bearings and anchors. Pressure surges, which excite the system, are generated by a fast-acting shut-off valve. Dynamic pressures, structural displacements and strains (70 signals in total) have been measured under well-determined initial and boundary conditions. The experiments have been simulated with FLUSTRIN, which solves the acoustic equations using the method of characteristics (fluid) and the finite element method (structure). The agreement between experiments and simulations is shown to be good: frequencies, amplitudes and wave phenomena are well predicted by the numerical simulations. It is demonstrated that an uncoupled water hammer computation would render unreliable and useless results. (author)
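The fluid side's method of characteristics can be illustrated for the frictionless single-pipe case: a textbook sketch, not FLUSTRIN's coupled solver, with made-up geometry and parameters. Instantaneous valve closure reproduces the Joukowsky head rise ΔH = aV₀/g at the valve.

```python
def moc_waterhammer(L=100.0, a=1000.0, V0=1.0, H0=50.0, n=10, steps=1):
    """Frictionless method-of-characteristics water hammer in a single pipe:
    reservoir upstream, instantaneously closed valve downstream.
    Grid of n+1 nodes; the implied time step is dt = (L/n) / a."""
    g = 9.81
    H = [H0] * (n + 1)   # piezometric head at each node
    V = [V0] * (n + 1)   # velocity at each node
    B = a / g            # characteristic impedance in head units
    for _ in range(steps):
        Hn, Vn = H[:], V[:]
        for i in range(1, n):
            # Intersection of the C+ (from i-1) and C- (from i+1) characteristics.
            Hn[i] = 0.5 * (H[i - 1] + H[i + 1]) - 0.5 * B * (V[i + 1] - V[i - 1])
            Vn[i] = 0.5 * (V[i - 1] + V[i + 1]) - 0.5 / B * (H[i + 1] - H[i - 1])
        # Upstream reservoir: fixed head, C- characteristic gives velocity.
        Hn[0] = H0
        Vn[0] = V[1] + (H0 - H[1]) / B
        # Downstream closed valve: zero velocity, C+ characteristic gives head.
        Vn[n] = 0.0
        Hn[n] = H[n - 1] + B * V[n - 1]
        H, V = Hn, Vn
    return H, V
```

After the first time step the head at the valve has risen by B·V₀ = aV₀/g ≈ 102 m for these parameters, the classical Joukowsky surge that a coupled FSI computation would then redistribute through pipe motion.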

  13. A very high performance stabilization system for large mass bolometer experiments

    Energy Technology Data Exchange (ETDEWEB)

    Arnaboldi, C. [Sezione INFN di Milano Bicocca, Piazza della Scienza 3, I-20126 Milano (Italy); Universita di Milano Bicocca, Piazza della Scienza 3, I-20126 Milano (Italy); Giachero, A., E-mail: Andrea.Giachero@mib.infn.it [Sezione INFN di Milano Bicocca, Piazza della Scienza 3, I-20126 Milano (Italy); Universita di Milano Bicocca, Piazza della Scienza 3, I-20126 Milano (Italy); Gotti, C. [Sezione INFN di Milano Bicocca, Piazza della Scienza 3, I-20126 Milano (Italy); Universita di Firenze, Dipartimento di Elettronica e Telecomunicazioni, Via S. Marta 3, I-50139 Firenze (Italy); Pessina, G. [Sezione INFN di Milano Bicocca, Piazza della Scienza 3, I-20126 Milano (Italy); Universita di Milano Bicocca, Piazza della Scienza 3, I-20126 Milano (Italy)

    2011-10-01

CUORE is a large-mass bolometric experiment, composed of 988 crystals, under construction in Hall A of the Gran Sasso Underground Laboratories (LNGS). Its main aim is the study of neutrinoless double beta decay of ¹³⁰Te. Each bolometer is a 760 g crystal of tellurium dioxide on which a Neutron Transmutation Doped Ge thermistor (NTD-Ge) is glued with proper thermal contact. The stability of the system is mandatory over many years of data taking. To meet this requirement, a heating resistor is glued on each detector, across which a voltage pulse can be injected at will to develop a known, calibrated heating power. We present the design solution for a pulse generator system to be used for the injection of such small and short voltage pulses across the heaters. The system is composed of custom PCB boards, each with multiple independent output channels that are fully remotely programmable in pulse width and amplitude from the acquisition system through an on-board ARM7 microcontroller. Pulse amplitudes must be selectable in order to handle each detector over its full dynamic range. The resolution of the output voltage is 12 bits over a 10 V range. An additional 4-step programmable voltage attenuator is added at every output. The width of any pulse can range from 100 μs to 25.5 ms. The main features of the final system are stability and precision in pulse generation (at the level of less than a ppm/°C), low cost (thanks to the use of commercial components) and compact implementation.

  14. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  15. System design of a large fuel cell hybrid locomotive

    Science.gov (United States)

    Miller, A. R.; Hess, K. S.; Barnes, D. L.; Erickson, T. L.

Fuel cell power for locomotives combines the environmental benefits of a catenary-electric locomotive with the higher overall energy efficiency and lower infrastructure costs of a diesel-electric. A North American consortium, a public-private partnership, is developing a prototype hydrogen-fueled fuel cell-battery hybrid switcher locomotive for urban and military-base rail applications. Switcher locomotives are used in rail yards for assembling and disassembling trains and moving trains from one point to another. At 127 tonnes (280,000 lb), with continuous power of 250 kW from its proton exchange membrane (PEM) fuel cell prime mover and transient power well in excess of 1 MW, the hybrid locomotive will be the heaviest and most powerful fuel cell land vehicle yet. This fast-paced project calls for completion of the vehicle itself near the end of 2007. Several technical challenges not found in the development of smaller vehicles arise when designing and developing such a large fuel cell vehicle. Weight, center of gravity, packaging, and safety were design factors leading to, among other features, the roof location of the lightweight 350 bar compressed hydrogen storage system. Harsh operating conditions, especially shock loads during coupling to railcars, require component mounting systems capable of absorbing high energy. Vehicle scale-up by increasing mass, density, or power presents new challenges primarily related to issues of system layout, hydrogen storage, heat transfer, and shock loads.

  16. Optical technologies for data communication in large parallel systems

    International Nuclear Information System (INIS)

    Ritter, M B; Vlasov, Y; Kash, J A; Benner, A

    2011-01-01

    Large, parallel systems have greatly aided scientific computation and data collection, but performance scaling now relies on chip and system-level parallelism. This has happened because power density limits have caused processor frequency growth to stagnate, driving the new multi-core architecture paradigm, which would seem to provide generations of performance increases as transistors scale. However, this paradigm will be constrained by electrical I/O bandwidth limits; first off the processor card, then off the processor module itself. We will present best-estimates of these limits, then show how optical technologies can help provide more bandwidth to allow continued system scaling. We will describe the current status of optical transceiver technology which is already being used to exceed off-board electrical bandwidth limits, then present work on silicon nanophotonic transceivers and 3D integration technologies which, taken together, promise to allow further increases in off-module and off-card bandwidth. Finally, we will show estimated limits of nanophotonic links and discuss breakthroughs that are needed for further progress, and will speculate on whether we will reach Exascale-class machine performance at affordable powers.

  17. Characteristics of large thermal energy storage systems in Poland

    Science.gov (United States)

    Zwierzchowski, Ryszard

    2017-11-01

In District Heating Systems (DHS) there are significant fluctuations in demand for heat by consumers during both the heating and the summer seasons. These variations are considered primarily in the 24-hour time horizon. These problems are aggravated further if the DHS is supplied by a CHP plant, because fluctuations in heat demand significantly impair the stable production of electricity at high overall efficiency. Therefore, introducing Thermal Energy Storage (TES) would be highly recommended on these grounds alone. The characteristics of large (i.e. over 10 000 m³) TES systems in operation in Poland are presented. Information is given regarding new projects (currently in design or construction) that apply TES technology in DHS in Poland. The paper looks at the methodology used in Poland to select the TES system for a particular DHS, i.e., the procedure for calculating the capacity of the TES tank and the system that prevents water stored in the tank from absorbing oxygen from atmospheric air. Implementation of TES in DHS is treated as a recommended technology in the Polish District Heating sector. This technology offers great opportunities to improve the operating conditions of DHS, cutting energy production costs and emissions of pollutants to the atmosphere.
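At its core, the tank-sizing step reduces to a sensible-heat balance Q = ρVc_pΔT. A minimal sketch with assumed hot-water properties, not the specific Polish design procedure:

```python
def tes_capacity_mwh(volume_m3, delta_t_k, rho=971.0, cp=4190.0):
    """Sensible-heat capacity of a hot-water TES tank: Q = rho * V * cp * dT.
    Defaults assume water around 80 C; the result is returned in MWh."""
    joules = rho * volume_m3 * cp * delta_t_k
    return joules / 3.6e9  # 1 MWh = 3.6e9 J
```

For a 10 000 m³ tank cycled over a 30 K temperature swing this gives roughly 340 MWh, which is the scale of storage needed to decouple CHP electricity production from a day of heat-demand fluctuation.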

  18. Optimization of large-scale industrial systems : an emerging method

    Energy Technology Data Exchange (ETDEWEB)

    Hammache, A.; Aube, F.; Benali, M.; Cantave, R. [Natural Resources Canada, Varennes, PQ (Canada). CANMET Energy Technology Centre

    2006-07-01

This paper reviewed optimization methods of large-scale industrial production systems and presented a novel systematic multi-objective and multi-scale optimization methodology. The methodology was based on a combined local optimality search with global optimality determination, and advanced system decomposition and constraint handling. The proposed method focused on the simultaneous optimization of the energy, economy and ecology aspects of industrial systems (E³-ISO). The aim of the methodology was to provide guidelines for decision-making strategies. The approach was based on evolutionary algorithms (EA) with specifications including hybridization of global optimality determination with a local optimality search; a self-adaptive algorithm to account for the dynamic changes of operating parameters and design variables occurring during the optimization process; interactive optimization; advanced constraint handling and decomposition strategy; and object-oriented programming and parallelization techniques. Flowcharts of the working principles of the basic EA were presented. It was concluded that the EA uses a novel decomposition and constraint handling technique to enhance the Pareto solution search procedure for multi-objective problems. 6 refs., 9 figs.
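At the heart of any Pareto solution search is a non-dominance test over objective vectors. A minimal sketch using the minimization convention, not the authors' E³-ISO implementation:

```python
def dominates(u, v):
    """u dominates v (minimization): no worse in every objective, strictly
    better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

A multi-objective EA applies exactly this filter to its population each generation, keeping the trade-off surface between, say, energy cost and emissions rather than a single scalar optimum.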

  19. A measurement system for large, complex software programs

    Science.gov (United States)

    Rone, Kyle Y.; Olson, Kitty M.; Davis, Nathan E.

    1994-01-01

    This paper describes measurement systems required to forecast, measure, and control activities for large, complex software development and support programs. Initial software cost and quality analysis provides the foundation for meaningful management decisions as a project evolves. In modeling the cost and quality of software systems, the relationship between the functionality, quality, cost, and schedule of the product must be considered. This explicit relationship is dictated by the criticality of the software being developed. This balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and developers with respect to the processes being employed.
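The size-driven effort relationship described above is typified by the classic basic COCOMO family, E = a·KLOC^b, shown here purely as an illustration of that model family; the paper's own models additionally tie effort to criticality and the IV&V labor fraction.

```python
def cocomo_effort_pm(kloc, mode="organic"):
    """Basic COCOMO-style effort estimate in person-months: E = a * KLOC**b.
    Coefficients are the classic Boehm values, used only as an illustration."""
    a, b = {
        "organic": (2.4, 1.05),       # small teams, familiar problem domain
        "semidetached": (3.0, 1.12),  # intermediate size and constraints
        "embedded": (3.6, 1.20),      # tight hardware/operational constraints
    }[mode]
    return a * kloc ** b
```

The exponent b > 1 captures the diseconomy of scale in large programs: doubling the code size more than doubles the estimated labor, and the effect grows with criticality.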

  20. Large resistivity modulation in mixed-phase metallic systems.

    Science.gov (United States)

    Lee, Yeonbae; Liu, Z Q; Heron, J T; Clarkson, J D; Hong, J; Ko, C; Biegalski, M D; Aschauer, U; Hsu, S L; Nowakowski, M E; Wu, J; Christen, H M; Salahuddin, S; Bokor, J B; Spaldin, N A; Schlom, D G; Ramesh, R

    2015-01-07

    In numerous systems, giant physical responses have been discovered when two phases coexist; for example, near a phase transition. An intermetallic FeRh system undergoes a first-order antiferromagnetic to ferromagnetic transition above room temperature and shows two-phase coexistence near the transition. Here we have investigated the effect of an electric field on FeRh/PMN-PT heterostructures and report an 8% change in the electrical resistivity of FeRh films. Such a 'giant' electroresistance (GER) response is striking in metallic systems, in which external electric fields are screened, and thus only weakly influence the carrier concentrations and mobilities. We show that our FeRh films comprise coexisting ferromagnetic and antiferromagnetic phases with different resistivities and that the origin of the GER effect is the strain-mediated change in their relative proportions. The observed behaviour is reminiscent of colossal magnetoresistance in perovskite manganites and illustrates the role of mixed-phase coexistence in achieving large changes in physical properties with low-energy external perturbation.

  1. Optical technologies for data communication in large parallel systems

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, M B; Vlasov, Y; Kash, J A [IBM T.J. Watson Research Center, Yorktown Heights, NY (United States); Benner, A, E-mail: mritter@us.ibm.com [IBM Poughkeepsie, Poughkeepsie, NY (United States)

    2011-01-15

    Large, parallel systems have greatly aided scientific computation and data collection, but performance scaling now relies on chip and system-level parallelism. This has happened because power density limits have caused processor frequency growth to stagnate, driving the new multi-core architecture paradigm, which would seem to provide generations of performance increases as transistors scale. However, this paradigm will be constrained by electrical I/O bandwidth limits; first off the processor card, then off the processor module itself. We will present best estimates of these limits, then show how optical technologies can help provide more bandwidth to allow continued system scaling. We will describe the current status of optical transceiver technology which is already being used to exceed off-board electrical bandwidth limits, then present work on silicon nanophotonic transceivers and 3D integration technologies which, taken together, promise to allow further increases in off-module and off-card bandwidth. Finally, we will show estimated limits of nanophotonic links and discuss breakthroughs that are needed for further progress, and will speculate on whether we will reach Exascale-class machine performance at affordable powers.

  2. Decentralized Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2013-01-01

    problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...
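The decomposition idea sketched in this abstract, local subproblems coordinated through an aggregator, resembles classical dual decomposition. A minimal sketch under assumed quadratic unit costs (the cost coefficients and price-update rule are illustrative, not the paper's exact formulation):

```python
# Illustrative dual-decomposition sketch: each unit solves a small local
# problem for a given price; the aggregator adjusts the price until the
# total production meets demand. Cost coefficients are invented.

def local_response(a, price):
    # Unit minimizes a*p**2 - price*p  ->  optimum at p = price / (2*a)
    return price / (2.0 * a)

def balance(costs, demand, step=0.1, iters=2000):
    price = 0.0
    for _ in range(iters):
        total = sum(local_response(a, price) for a in costs)
        price += step * (demand - total)   # subgradient price update
    return price, [local_response(a, price) for a in costs]

price, dispatch = balance(costs=[1.0, 2.0, 4.0], demand=10.0)
print(round(sum(dispatch), 3))  # total production converges to the demand
```

Because each unit only needs the current price, the per-unit computation can be distributed exactly as the abstract describes.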

  3. Investigation of propulsion system for large LNG ships

    Science.gov (United States)

    Sinha, R. P.; Nik, Wan Mohd Norsani Wan

    2012-09-01

    The requirement to move away from coal for power generation has made LNG the most sought-after fuel source, raising steep demands on its supply and production. Added to this scenario is the gradual depletion of the offshore oil and gas fields, which is pushing future exploration and production activities far away into the hostile environment of the deep sea. Production of gas in such an environment has great technical and commercial impacts on the gas business. For instance, laying gas pipes from the deep sea to distant receiving terminals will be technically and economically challenging. The alternative to laying gas pipes will require installing re-liquefaction units on board FPSOs to convert gas into liquid for transportation by sea. But then, because of the increased distance between gas sources and receiving terminals, the current medium-size LNG ships will no longer remain economical to operate. Recognizing this business scenario, shipowners are making huge investments in the acquisition of large LNG ships. As the power needs of large LNG ships are very different from those of the current smaller ones, a variety of propulsion derivatives such as UST, DFDE, 2-Stroke DRL and Combined cycle GT have been proposed by leading engine manufacturers. Since the propulsion system constitutes a major element of the ship's capital and life-cycle cost, which of these options is most suited for large LNG ships is currently a major concern of the shipping industry and must be thoroughly assessed. In this paper the authors investigate the relative merits of these propulsion options against the benchmark performance criteria of BOG disposal, fuel consumption, gas emissions, plant availability and overall life-cycle cost.

  4. Large-Scale Traveling Weather Systems in Mars’ Southern Extratropics

    Science.gov (United States)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2017-10-01

    Between late fall and early spring, Mars’ middle- and high-latitude atmosphere supports strong mean equator-to-pole temperature contrasts and an accompanying mean westerly polar vortex. Observations from both the MGS Thermal Emission Spectrometer (TES) and the MRO Mars Climate Sounder (MCS) indicate that a mean baroclinicity-barotropicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). Such extratropical weather disturbances are critical components of the global circulation as they serve as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of such traveling extratropical synoptic disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively-lifted and radiatively-active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to the northern-hemisphere counterparts, the southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are investigated, in addition to large-scale up-slope/down-slope flows and the diurnal cycle. A southern storm zone in late winter and early spring develops in the western hemisphere through orographic influences from the Tharsis highlands and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics are constructed that illuminate dynamical differences amongst the simulations, and these are presented.

  5. Investigation of propulsion system for large LNG ships

    International Nuclear Information System (INIS)

    Sinha, R P; Wan Nik, Wan Mohd Norsani

    2012-01-01

    The requirement to move away from coal for power generation has made LNG the most sought-after fuel source, raising steep demands on its supply and production. Added to this scenario is the gradual depletion of the offshore oil and gas fields, which is pushing future exploration and production activities far away into the hostile environment of the deep sea. Production of gas in such an environment has great technical and commercial impacts on the gas business. For instance, laying gas pipes from the deep sea to distant receiving terminals will be technically and economically challenging. The alternative to laying gas pipes will require installing re-liquefaction units on board FPSOs to convert gas into liquid for transportation by sea. But then, because of the increased distance between gas sources and receiving terminals, the current medium-size LNG ships will no longer remain economical to operate. Recognizing this business scenario, shipowners are making huge investments in the acquisition of large LNG ships. As the power needs of large LNG ships are very different from those of the current smaller ones, a variety of propulsion derivatives such as UST, DFDE, 2-Stroke DRL and Combined cycle GT have been proposed by leading engine manufacturers. Since the propulsion system constitutes a major element of the ship's capital and life-cycle cost, which of these options is most suited for large LNG ships is currently a major concern of the shipping industry and must be thoroughly assessed. In this paper the authors investigate the relative merits of these propulsion options against the benchmark performance criteria of BOG disposal, fuel consumption, gas emissions, plant availability and overall life-cycle cost.

  6. Large-Scale Traveling Weather Systems in Mars Southern Extratropics

    Science.gov (United States)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2017-01-01

    Between late fall and early spring, Mars' middle- and high-latitude atmosphere supports strong mean equator-to-pole temperature contrasts and an accompanying mean westerly polar vortex. Observations from both the MGS Thermal Emission Spectrometer (TES) and the MRO Mars Climate Sounder (MCS) indicate that a mean baroclinicity-barotropicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). Such extratropical weather disturbances are critical components of the global circulation as they serve as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of such traveling extratropical synoptic disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively-lifted and radiatively-active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to the northern-hemisphere counterparts, the southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are investigated, in addition to large-scale up-slope/down-slope flows and the diurnal cycle. A southern storm zone in late winter and early spring develops in the western hemisphere through orographic influences from the Tharsis highlands and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics are constructed that illuminate dynamical differences amongst the simulations, and these are presented.

  7. Large strip RPCs for the LEPS2 TOF system

    Energy Technology Data Exchange (ETDEWEB)

    Tomida, N., E-mail: natsuki@scphys.kyoto-u.ac.jp [Department of Physics, Kyoto University, Kyoto 606-8502 (Japan); Niiyama, M. [Department of Physics, Kyoto University, Kyoto 606-8502 (Japan); Ohnishi, H. [RIKEN (The Institute of Physical and Chemical Research), Wako, Saitama 351-0198 (Japan); Tran, N. [Research Center for Nuclear Physics (RCNP), Osaka University, Ibaraki, Osaka 567-0047 (Japan); Hsieh, C.-Y.; Chu, M.-L.; Chang, W.-C. [Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan (China); Chen, J.-Y. [National Synchrotron Radiation Research Center (NSRRC), Hsinchu 30076, Taiwan (China)

    2014-12-01

    High time-resolution resistive plate chambers (RPCs) with large-size readout strips are developed for the time-of-flight (TOF) detector system of the LEPS2 experiment at SPring-8. The experimental requirement is a 50-ps time resolution for a strip size larger than 100 cm{sup 2}/channel. We are able to achieve 50-ps time resolutions with 2.5×100 cm{sup 2} strips by directly connecting the amplifiers to strips. With the same time resolution, the number of front-end electronics (FEE) is also reduced by signal addition. - Highlights: • Find a way to achieve a good time resolution with a large strip RPC. • 2.5 cm narrow strips have better resolutions than 5.0 cm ones. • The 0.5 mm narrow strip interval shows flat time resolutions between strips. • FEEs directly connected to strips make the signal reflection at the strip edge small. • A time resolution of 50 ps was achieved with 2.5 cm×100 cm strips.

  8. Aberrations and focusability in large solid-state-laser systems

    International Nuclear Information System (INIS)

    Simmons, W.W.

    1981-01-01

    Solid state lasers for fusion experiments must reliably deliver maximum power to small (approximately 0.5 mm) targets from stand-off focal distances of 1 m or more. This requirement places stringent limits upon the optical quality of the several major components - amplifiers, Faraday isolators, spatial filters - in each amplifier train. Residual static aberrations in optical components are transferred to the beam as it traverses the optical amplifier chain. Although individual component wavefront errors are typically less than lambda/20 for clear apertures under 10 cm, and less than lambda/10 for clear apertures under 20 cm, the large number of such components in optical series results in a cumulative wavefront error that may exceed one wave for modern solid state lasers. For pulsed operation, the focal spot is additionally broadened by intensity-dependent nonlinearities. Specific examples of the performance of large aperture components will be presented within the context of the Argus and Shiva laser systems, which are presently operational at Lawrence Livermore National Laboratory. Design requirements upon the larger aperture Nova laser components, up to 74 cm in clear aperture, will also be discussed; these pose a significant challenge to the optical industry

  9. Large-scale visualization system for grid environment

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2007-01-01

    Center for Computational Science and E-systems of Japan Atomic Energy Agency (CCSE/JAEA) has been conducting R and Ds of distributed computing (grid computing) environments: Seamless Thinking Aid (STA), Information Technology Based Laboratory (ITBL) and Atomic Energy Grid InfraStructure (AEGIS). In these R and Ds, we have developed the visualization technology suitable for the distributed computing environment. As one of the visualization tools, we have developed the Parallel Support Toolkit (PST) which can execute the visualization process in parallel on a computer. Now, we have improved PST so that it is executable simultaneously on multiple heterogeneous computers using Seamless Thinking Aid Message Passing Interface (STAMPI). STAMPI, which we have developed in these R and Ds, is an MPI library executable in a heterogeneous computing environment. The improvement realizes the visualization of extremely large-scale data and enables more efficient visualization processes in a distributed computing environment. (author)

  10. Risk-benefit evaluation for large technological systems

    International Nuclear Information System (INIS)

    Okrent, D.

    1979-01-01

    The related topics of risk-benefit analysis, risk analysis, and risk-acceptance criteria (How safe is safe enough?) are of growing importance. An interdisciplinary study on various aspects of these topics, including applications to nuclear power, was recently completed at the University of California, Los Angeles (UCLA), with the support of the National Science Foundation. In addition to more than 30 topical reports and various open-literature publications, a final report (UCLA-ENG-7777) to the study, titled "A Generalized Evaluation Approach to Risk-Benefit for Large Technological Systems and Its Application to Nuclear Power", was issued in early 1978. This article briefly summarizes portions of the final report dealing with general aspects of risk-benefit methodology, societal knowledge and perception of risk, and risk-acceptance criteria

  11. Program system RALLY - for probabilistic safety analysis of large technical systems

    International Nuclear Information System (INIS)

    Gueldner, W.; Polke, H.; Spindler, H.; Zipf, G.

    1982-03-01

    This report describes the program system RALLY to compute the reliability of large and intermeshed technical systems. In addition to a short explanation of the different programs, the possible applications of the program system RALLY are demonstrated. Finally, the most important studies carried out so far on RALLY are discussed. (orig.) [de

  12. Pool fires in a large scale ventilation system

    International Nuclear Information System (INIS)

    Smith, P.R.; Leslie, I.H.; Gregory, W.S.; White, B.

    1991-01-01

    A series of pool fire experiments was carried out in the Large Scale Flow Facility of the Mechanical Engineering Department at New Mexico State University. The various experiments burned alcohol, hydraulic cutting oil, kerosene, and a mixture of kerosene and tributylphosphate. Gas temperature and wall temperature measurements as a function of time were made throughout the 23.3 m{sup 3} burn compartment and the ducts of the ventilation system. The mass of the smoke particulate deposited upon the ventilation system 0.61 m x 0.61 m high efficiency particulate air filter for the hydraulic oil, kerosene, and kerosene-tributylphosphate mixture fires was measured using an in situ null balance. Significant increases in filter resistance were observed for all three fuels for burning time periods ranging from 10 to 30 minutes. This was found to be highly dependent upon initial ventilation system flow rate, fuel type, and flow configuration. The experimental results were compared to simulated results predicted by the Los Alamos National Laboratory FIRAC computer code. In general, the experimental and the computer results were in reasonable agreement, despite the fact that the fire compartment for the experiments was an insulated steel tank with 0.32 cm walls, while the compartment model FIRIN of FIRAC assumes 0.31 m thick concrete walls. This difference in configuration apparently caused FIRAC to consistently underpredict the measured temperatures in the fire compartment. The predicted deposition of soot proved to be insensitive to ventilation system flow rate, but the measured values showed flow rate dependence. However, predicted soot deposition was of the same order of magnitude as measured soot deposition

  13. Special data base of Informational - Computational System 'INM RAS - Black Sea' for solving inverse and data assimilation problems

    Science.gov (United States)

    Zakharova, Natalia; Piskovatsky, Nicolay; Gusev, Anatoly

    2014-05-01

    Development of Informational-Computational Systems (ICS) for data assimilation procedures is a multidisciplinary problem. To study and solve these problems one needs to apply modern results from different disciplines and recent developments in: mathematical modeling; theory of adjoint equations and optimal control; inverse problems; numerical methods theory; numerical algebra and scientific computing. The above problems are studied in the Institute of Numerical Mathematics of the Russian Academy of Sciences (INM RAS) in ICS for personal computers. In this work the results on the Special data base development for ICS "INM RAS - Black Sea" are presented. In the presentation the input information for ICS is discussed, and some special data processing procedures are described. In this work the results of forecasts using ICS "INM RAS - Black Sea" with operational observation data assimilation are presented. This study was supported by the Russian Foundation for Basic Research (project No 13-01-00753) and by Presidium Program of Russian Academy of Sciences (project P-23 "Black sea as an imitational ocean model"). References 1. V.I. Agoshkov, M.V. Assovskii, S.A. Lebedev, Numerical simulation of Black Sea hydrothermodynamics taking into account tide-forming forces. Russ. J. Numer. Anal. Math. Modelling (2012) 27, No.1, pp. 5-31. 2. E.I. Parmuzin, V.I. Agoshkov, Numerical solution of the variational assimilation problem for sea surface temperature in the model of the Black Sea dynamics. Russ. J. Numer. Anal. Math. Modelling (2012) 27, No.1, pp. 69-94. 3. V.B. Zalesny, N.A. Diansky, V.V. Fomin, S.N. Moshonkin, S.G. Demyshev, Numerical model of the circulation of Black Sea and Sea of Azov. Russ. J. Numer. Anal. Math. Modelling (2012) 27, No.1, pp. 95-111. 4. Agoshkov V.I.,Assovsky M.B., Giniatulin S. V., Zakharova N.B., Kuimov G.V., Parmuzin E.I., Fomin V.V. 
Informational Computational system of variational assimilation of observation data "INM RAS - Black sea"// Ecological

  14. Medical isotope identification with large mobile detection systems

    Science.gov (United States)

    Mukhopadhyay, Sanjoy; Maurer, Richard

    2012-10-01

    The Remote Sensing Laboratory (RSL) of National Security Technologies Inc. has built an array of large (5.08 cm x 10.16 cm x 40.6 cm) thallium-doped sodium iodide (NaI:Tl) scintillators to locate and screen gamma-ray emitting radioisotopes that are of interest to radiological emergency responders [1]. These vehicle mounted detectors provide the operators with rapid, simple, specific information for radiological threat assessment. Applications include large area inspection, customs inspection, border protection, emergency response, and monitoring of radiological facilities. These RSL mobile units are currently being upgraded to meet the Defense Threat Reduction Agency mission requirements for a next-generation system capable of detecting and identifying nuclear threat materials. One of the challenging problems faced by these gamma-ray detectors is the unambiguous identification of medical isotopes like 131I (364.49 keV [81.7%], 636.99 keV [7.17%]), 99mTc (140.51 keV [89.1%]) and 67Ga (184.6 keV [19.7%], 300.2 keV [16.0%], 393.5 keV [4.5%]) that are used in radionuclide therapy and often have overlapping gamma-ray energy regions of interest (ROI). The problem is made worse by the short (about 5 seconds) acquisition time of the spectral data necessary for dynamic mobile detectors. This article describes attempts to identify medical isotopes from data collected from this mobile detection system in a short period of time (not exceeding 5 seconds) and at a large standoff distance (typically 10 meters). The mobile units offer identification capabilities that are based on hardware auto-stabilization of the amplifier gain. The 1461 keV gamma-energy line from 40K is tracked. The system uses gamma-ray energy windowing along with the embedded mobile Gamma Detector Response and Analysis Software (GADRAS) [2] to deconvolve any overlapping gamma-energy ROIs. These high-sensitivity detectors are capable of resolving complex masking scenarios and exceed all ANSI N42.34 (2006) requirements.
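The energy-windowing step described above can be sketched as a simple region-of-interest tally; the window bounds and sample spectrum below are rough illustrative choices around the listed photopeaks, not RSL's calibrated windows:

```python
# Hedged sketch of gamma-ray energy windowing: tally detected energies
# into per-isotope regions of interest (ROIs). Bounds are illustrative.

ROIS = {
    "Tc-99m": (120.0, 160.0),   # around the 140.5 keV photopeak
    "I-131":  (340.0, 390.0),   # around the 364.5 keV photopeak
    "Ga-67":  (165.0, 205.0),   # around the 184.6 keV photopeak
}

def window_counts(energies_kev, rois=ROIS):
    """Tally detected gamma energies into each isotope's ROI."""
    counts = {name: 0 for name in rois}
    for e in energies_kev:
        for name, (lo, hi) in rois.items():
            if lo <= e <= hi:
                counts[name] += 1
    return counts

spectrum = [141.0, 139.5, 364.0, 184.0, 500.0]  # invented event energies, keV
print(window_counts(spectrum))  # {'Tc-99m': 2, 'I-131': 1, 'Ga-67': 1}
```

In practice overlapping ROIs make raw window counts ambiguous, which is why the abstract pairs windowing with a response-function deconvolution (GADRAS); this sketch shows only the windowing half.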

  15. Appreciative Problem Solving

    DEFF Research Database (Denmark)

    Hansen, David

    2012-01-01

    Many industrial production work systems have increased in complexity, and their new business models compete on innovation, rather than low cost. At a medical device production facility committed to Lean Production, a research project was carried out to use Appreciative Inquiry to better engage...... employee strengths in continuous improvements of the work system. The research question was: “How can Lean problem solving and Appreciative Inquiry be combined for optimized work system innovation?” The research project was carried out as a co-creation process with close cooperation between researcher...

  16. Exshall: A Turkel-Zwas explicit large time-step FORTRAN program for solving the shallow-water equations in spherical coordinates

    Science.gov (United States)

    Navon, I. M.; Yu, Jian

    A FORTRAN computer program is presented and documented applying the Turkel-Zwas explicit large time-step scheme to a hemispheric barotropic model with constraint restoration of integral invariants of the shallow-water equations. We then proceed to detail the algorithms embodied in the code EXSHALL in this paper, particularly algorithms related to the efficiency and stability of T-Z scheme and the quadratic constraint restoration method which is based on a variational approach. In particular we provide details about the high-latitude filtering, Shapiro filtering, and Robert filtering algorithms used in the code. We explain in detail the various subroutines in the EXSHALL code with emphasis on algorithms implemented in the code and present the flowcharts of some major subroutines. Finally, we provide a visual example illustrating a 4-day run using real initial data, along with a sample printout and graphic isoline contours of the height field and velocity fields.
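The Robert (Asselin) time filter mentioned above has a standard form that can be sketched in a few lines; the filter coefficient used here is an illustrative choice, and this is a language-neutral sketch, not a transcription of the EXSHALL FORTRAN code:

```python
# Sketch of the Robert (Asselin) time filter used with leapfrog time
# stepping to damp the spurious computational mode. nu = 0.1 is an
# illustrative coefficient, not the value used in EXSHALL.

def robert_filter(u_prev, u_now, u_next, nu=0.1):
    """Return the filtered value at the middle time level."""
    return u_now + nu * (u_prev - 2.0 * u_now + u_next)

# A grid-scale 2-dt oscillation (the computational mode) is damped:
print(robert_filter(0.0, 1.0, 0.0))  # 0.8
```

A smooth (slowly varying) solution passes through nearly unchanged, since the bracketed second difference is then small; only the oscillatory mode is strongly damped.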

  17. Trophic modeling of Eastern Boundary Current Systems: a review and prospectus for solving the “Peruvian Puzzle”

    Directory of Open Access Journals (Sweden)

    Marc H. Taylor

    2013-04-01

    Full Text Available Eastern Boundary Current systems (EBCSs are among the most productive fishing areas in the world. High primary and secondary productivity supports a large biomass of small planktivorous pelagic fish, “small pelagics”, which are important drivers of production to the entire system whereby they can influence both higher and lower trophic levels. Environmental variability causes changes in plankton (food quality and quantity, which can affect population sizes, distribution and dominance among small pelagics. This variability combined with impacts from the fishery complicate the development of management strategies. Consequently, much recent work has been in the development of multispecies trophic models to better understand interdependencies and system dynamics. Despite similarities in extent, structure and primary productivity between EBCSs, the Peruvian system greatly differs from the others in the magnitude of fish catches, due mainly to the incredible production of the anchovy Engraulis ringens. This paper reviews literature concerning EBCS dynamics and the state-of-the-art in the trophic modeling of EBCSs. The objective is to critically analyze the potential of this approach for system understanding and management and to adapt existing steady-state models of the Peruvian system for use in (future) dynamic simulations. A guideline for the construction of trophodynamic models is presented taking into account the important trophic and environmental interactions. In consideration of the importance of small pelagics for the system dynamics, emphasis is placed on developing appropriate model compartmentalization and spatial delineation that facilitates dynamic simulations. Methods of model validation against historical changes are presented to support hypotheses concerning EBCS dynamics and as a critical step to the development of predictive models. Finally, the identification of direct model links to easily obtainable abiotic parameters is

  18. REQUIREMENTS FOR SYSTEMS DEVELOPMENT LIFE CYCLE MODELS FOR LARGE-SCALE DEFENSE SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kadir Alpaslan DEMIR

    2015-10-01

    Full Text Available Large-scale defense system projects are strategic for maintaining and increasing the national defense capability. Therefore, governments spend billions of dollars in the acquisition and development of large-scale defense systems. The scale of defense systems is always increasing and the costs to build them are skyrocketing. Today, defense systems are software intensive and they are either a system of systems or a part of it. Historically, the project performances observed in the development of these systems have been significantly poor when compared to other types of projects. It is obvious that the currently used systems development life cycle models are insufficient to address today’s challenges of building these systems. Using a systems development life cycle model that is specifically designed for large-scale defense system developments and is effective in dealing with today’s and near-future challenges will help to improve project performances. The first step in the development of a large-scale defense systems development life cycle model is the identification of requirements for such a model. This paper contributes to the body of literature in the field by providing a set of requirements for system development life cycle models for large-scale defense systems. Furthermore, a research agenda is proposed.

  19. RF system considerations for large high-duty-factor linacs

    International Nuclear Information System (INIS)

    Lynch, M.T.; Ziomek, C.D.; Tallerico, P.J.; Regan, A.H.; Eaton, L.; Lawrence, G.

    1994-01-01

    RF systems are often a major cost item for linacs, but this is especially true for large high-duty-factor linacs (up to and including CW) such as the Accelerator for Production of Tritium (APT) or the Accelerator for Transmutation of Nuclear Waste (ATW). In addition, the high energy and high average beam current of these machines (approximately 1 GeV, 100--200 mA) leads to a need for excellent control of the accelerating fields in order to minimize the possibility of beam loss in the accelerator and the resulting activation. This paper will address the key considerations and limitations in the design of the RF system. These considerations impact the design of both the high power RF components and the RF controls. As might be expected, the two concerns sometimes lead to conflicting design requirements. For example, minimum RF operating costs lead to a desire for operation near saturation of the high power RF generators in order to maximize the operating efficiency. Optimal control of the RF fields leads to a desire for maximum overdrive capability in those same generators in order to respond quickly to disturbances of the accelerator fields
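The efficiency-versus-headroom tension described above can be illustrated with a toy proportional control loop whose corrective drive is clipped at the generator's available overdrive limit; the gains, limits, and units below are invented for illustration and do not model any specific RF plant:

```python
# Toy illustration: a proportional field-control loop whose corrective
# drive is clipped by the generator's overdrive headroom. Less headroom
# (operation closer to saturation) means slower recovery from an error.

def control_step(field, setpoint, gain, drive_max):
    """One iteration: proportional drive, clipped at the generator limit."""
    drive = gain * (setpoint - field)
    return max(-drive_max, min(drive_max, drive))

def settle(setpoint=1.0, gain=0.5, drive_max=0.05, steps=10):
    field = 0.0
    for _ in range(steps):
        field += control_step(field, setpoint, gain, drive_max)
    return field

# Little headroom -> slow recovery; generous headroom -> fast recovery:
print(round(settle(drive_max=0.05), 3))  # 0.5  (still far from setpoint)
print(round(settle(drive_max=0.5), 3))   # 0.999
```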

  20. Modeling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...
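The buffering role of the Storage Handler described above can be illustrated with a back-of-the-envelope occupancy estimate; the rates and run length below are assumptions for illustration, not ATLAS figures:

```python
# Toy occupancy model: a buffer that decouples real-time data taking
# from event filtering, fed at a fixed input rate and drained at a
# slower filtering rate. All numbers are invented.

def buffer_occupancy(in_rate_tb_s, out_rate_tb_s, run_seconds):
    """Backlog (TB) accumulated after a run; never below zero."""
    return max(0.0, (in_rate_tb_s - out_rate_tb_s) * run_seconds)

# e.g. 5 TB/s in, 4 TB/s filtered, 60 s of continuous data taking:
print(buffer_occupancy(5.0, 4.0, 60.0))  # 60.0 TB of backlog to absorb
```

A dynamic operational model of the kind the abstract describes would replace these constant rates with measured, time-varying ones, but the sizing question is the same: the buffer must absorb the worst-case integral of input minus drain.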