WorldWideScience

Sample records for minimization problem blmp

  1. Analysis list: blmp-1 [Chip-atlas[Archive

    Lifescience Database Archive (English)

    Full Text Available blmp-1 Embryo,Larvae + ce10 http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/target/...blmp-1.1.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/target/blmp-1.5.tsv http://dbarchive.bioscience...dbc.jp/kyushu-u/ce10/target/blmp-1.10.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/colo/blmp-1.Embryo....tsv,http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/colo/blmp-1.Larvae.tsv http://dbarchive.bioscience...dbc.jp/kyushu-u/ce10/colo/Embryo.gml,http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/colo/Larvae.gml ...

  2. One-dimensional Gromov minimal filling problem

    International Nuclear Information System (INIS)

    Ivanov, Alexandr O; Tuzhilin, Alexey A

    2012-01-01

    The paper is devoted to a new branch in the theory of one-dimensional variational problems with branching extremals, the investigation of one-dimensional minimal fillings introduced by the authors. On the one hand, this problem is a one-dimensional version of a generalization of Gromov's minimal fillings problem to the case of stratified manifolds. On the other hand, this problem is interesting in itself and also can be considered as a generalization of another classical problem, the Steiner problem on the construction of a shortest network connecting a given set of terminals. Besides the statement of the problem, we discuss several properties of the minimal fillings and state several conjectures. Bibliography: 38 titles.

  3. Gravitino problem in minimal supergravity inflation

    Energy Technology Data Exchange (ETDEWEB)

    Hasegawa, Fuminori [Institute for Cosmic Ray Research, The University of Tokyo, Kashiwa, Chiba 277-8582 (Japan); Mukaida, Kyohei [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583 (Japan); Nakayama, Kazunori [Department of Physics, Faculty of Science, The University of Tokyo, Bunkyo-ku, Tokyo 133-0033 (Japan); Terada, Takahiro, E-mail: terada@kias.re.kr [School of Physics, Korea Institute for Advanced Study (KIAS), Seoul 02455 (Korea, Republic of); Yamada, Yusuke [Stanford Institute for Theoretical Physics and Department of Physics, Stanford University, Stanford, CA 94305 (United States)

    2017-04-10

    We study non-thermal gravitino production in the minimal supergravity inflation. In this minimal model utilizing orthogonal nilpotent superfields, the particle spectrum includes only graviton, gravitino, inflaton, and goldstino. We find that a substantial fraction of the cosmic energy density can be transferred to the longitudinal gravitino due to non-trivial change of its sound speed. This implies either a breakdown of the effective theory after inflation or a serious gravitino problem.

  4. Gravitino problem in minimal supergravity inflation

    Directory of Open Access Journals (Sweden)

    Fuminori Hasegawa

    2017-04-01

    Full Text Available We study non-thermal gravitino production in the minimal supergravity inflation. In this minimal model utilizing orthogonal nilpotent superfields, the particle spectrum includes only graviton, gravitino, inflaton, and goldstino. We find that a substantial fraction of the cosmic energy density can be transferred to the longitudinal gravitino due to non-trivial change of its sound speed. This implies either a breakdown of the effective theory after inflation or a serious gravitino problem.

  5. OPTIM, Minimization of Band-Width of Finite Elements Problems

    International Nuclear Information System (INIS)

    Huart, M.

    1977-01-01

    1 - Nature of the physical problem solved: To minimize the band-width of finite element problems. 2 - Method of solution: A surface is constructed from the x-y-coordinates of each node using its node number as z-value. This surface consists of triangles. Nodes are renumbered in such a way as to minimize the surface area. 3 - Restrictions on the complexity of the problem: This program is applicable to 2-D problems. It is dimensioned for a maximum of 1000 elements
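    The quantity OPTIM reduces is the band-width induced by a node numbering of a finite element mesh. Below is a minimal sketch, in Python, of how that band-width can be computed for a given numbering; the element connectivity and the two numberings are illustrative and not taken from the program.

```python
# Sketch: band-width of a finite-element connectivity under a node numbering.
# The element list and the numberings below are illustrative, not from OPTIM.

def bandwidth(elements, numbering):
    """Max |i - j| over node pairs sharing an element, under the given numbering."""
    width = 0
    for elem in elements:
        for a in elem:
            for b in elem:
                width = max(width, abs(numbering[a] - numbering[b]))
    return width

# Four triangular elements on five nodes (node labels are arbitrary).
elements = [("A", "B", "C"), ("B", "C", "D"), ("C", "D", "E"), ("A", "C", "E")]

naive = {"A": 0, "B": 1, "C": 2, "D": 3, "E": 4}
reordered = {"A": 0, "C": 1, "B": 2, "E": 3, "D": 4}

print(bandwidth(elements, naive))      # band-width under the original numbering (4)
print(bandwidth(elements, reordered))  # a renumbering can reduce it (3)
```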

  6. Global Sufficient Optimality Conditions for a Special Cubic Minimization Problem

    Directory of Open Access Journals (Sweden)

    Xiaomei Zhang

    2012-01-01

    Full Text Available We present some sufficient global optimality conditions for a special cubic minimization problem with box constraints or binary constraints by extending the global subdifferential approach proposed by V. Jeyakumar et al. (2006). The present conditions generalize the results developed in the work of V. Jeyakumar et al., where a quadratic minimization problem with box constraints or binary constraints was considered. In addition, a special diagonal matrix is constructed, which is used to provide a convenient method for justifying the proposed sufficient conditions. Then, the reformulation of the sufficient conditions follows. It is worth noting that this reformulation is also applicable to the quadratic minimization problem with box or binary constraints considered in the works of V. Jeyakumar et al. (2006) and Y. Wang et al. (2010). Finally, some examples demonstrate that our optimality conditions can effectively be used for identifying global minimizers of certain nonconvex cubic minimization problems.

  7. Minimization In Digital Design As A Meta-Planning Problem

    Science.gov (United States)

    Ho, William P. C.; Wu, Jung-Gen

    1987-05-01

    In our model-based expert system for automatic digital system design, we formalize the design process into three sub-processes - compiling high-level behavioral specifications into primitive behavioral operations, grouping primitive operations into behavioral functions, and grouping functions into modules. Consideration of design minimization explicitly controls decision-making in the last two sub-processes. Design minimization, a key task in the automatic design of digital systems, is complicated by the high degree of interaction among the time sequence and content of design decisions. In this paper, we present an AI approach which directly addresses these interactions and their consequences by modeling the minimization problem as a planning problem, and the management of design decision-making as a meta-planning problem.

  8. Minimal surfaces, stratified multivarifolds, and the plateau problem

    CERN Document Server

    Thi, Dao Trong; Primrose, E J F; Silver, Ben

    1991-01-01

    Plateau's problem is a scientific trend in modern mathematics that unites several different problems connected with the study of minimal surfaces. In its simplest version, Plateau's problem is concerned with finding a surface of least area that spans a given fixed one-dimensional contour in three-dimensional space--perhaps the best-known example of such surfaces is provided by soap films. From the mathematical point of view, such films are described as solutions of a second-order partial differential equation, so their behavior is quite complicated and has still not been thoroughly studied. Soap films, or, more generally, interfaces between physical media in equilibrium, arise in many applied problems in chemistry, physics, and also in nature. In applications, one finds not only two-dimensional but also multidimensional minimal surfaces that span fixed closed "contours" in some multidimensional Riemannian space. An exact mathematical statement of the problem of finding a surface of least area or volume requir...

  9. Canonical Primal-Dual Method for Solving Non-convex Minimization Problems

    OpenAIRE

    Wu, Changzhi; Li, Chaojie; Gao, David Yang

    2012-01-01

    A new primal-dual algorithm is presented for solving a class of non-convex minimization problems. This algorithm is based on canonical duality theory such that the original non-convex minimization problem is first reformulated as a convex-concave saddle point optimization problem, which is then solved by a quadratically perturbed primal-dual method. It is proved that the popular SDP method is indeed a special case of the canonical duality theory. Numerical examples are illustrated. Comparing...

  10. Free time minimizers for the three-body problem

    Science.gov (United States)

    Moeckel, Richard; Montgomery, Richard; Sánchez Morgado, Héctor

    2018-03-01

    Free time minimizers of the action (called "semi-static" solutions by Mañe in International congress on dynamical systems in Montevideo (a tribute to Ricardo Mañé), vol 362, pp 120-131, 1996) play a central role in the theory of weak KAM solutions to the Hamilton-Jacobi equation (Fathi in Weak KAM Theorem in Lagrangian Dynamics Preliminary Version Number 10, 2017). We prove that any solution to Newton's three-body problem which is asymptotic to Lagrange's parabolic homothetic solution is eventually a free time minimizer. Conversely, we prove that every free time minimizer tends to Lagrange's solution, provided the mass ratios lie in a certain large open set of mass ratios. We were inspired by the work of Da Luz and Maderna (Math Proc Camb Philos Soc 156:209-227, 1980) which showed that every free time minimizer for the N-body problem is parabolic and therefore must be asymptotic to the set of central configurations. We exclude being asymptotic to Euler's central configurations by a second variation argument. Central configurations correspond to rest points for the McGehee blown-up dynamics. The large open set of mass ratios are those for which the linearized dynamics at each Euler rest point has a complex eigenvalue.

  11. Numerical solution of large nonlinear boundary value problems by quadratic minimization techniques

    International Nuclear Information System (INIS)

    Glowinski, R.; Le Tallec, P.

    1984-01-01

    The objective of this paper is to describe the numerical treatment of large highly nonlinear two or three dimensional boundary value problems by quadratic minimization techniques. In all the different situations where these techniques were applied, the methodology remains the same and is organized as follows: 1) derive a variational formulation of the original boundary value problem, and approximate it by Galerkin methods; 2) transform this variational formulation into a quadratic minimization problem (least squares methods) or into a sequence of quadratic minimization problems (augmented lagrangian decomposition); 3) solve each quadratic minimization problem by a conjugate gradient method with preconditioning, the preconditioning matrix being sparse, positive definite, and fixed once for all in the iterative process. This paper will illustrate the methodology above on two different examples: the description of least squares solution methods and their application to the solution of the unsteady Navier-Stokes equations for incompressible viscous fluids; the description of augmented lagrangian decomposition techniques and their application to the solution of equilibrium problems in finite elasticity
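    Step 3 of the methodology is a preconditioned conjugate gradient iteration applied to a quadratic minimization problem. A minimal sketch of that step, on the model problem of minimizing 0.5 xᵀAx − bᵀx with a fixed diagonal (Jacobi) preconditioner, is shown below; the matrix, right-hand side, and preconditioner choice are illustrative and not taken from the paper.

```python
# Sketch of step 3: preconditioned conjugate gradient for min_x 0.5 x^T A x - b^T x.
# A, b, and the Jacobi preconditioner below are illustrative assumptions.
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv @ r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

n = 50
A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))              # sparse, positive definite
b = np.ones(n)
M_inv = np.diag(1.0 / np.diag(A))                 # preconditioner fixed once for all
x = pcg(A, b, M_inv)
print(np.linalg.norm(A @ x - b))                  # residual should be ~0
```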

  12. Iterative Schemes for Convex Minimization Problems with Constraints

    Directory of Open Access Journals (Sweden)

    Lu-Chuan Ceng

    2014-01-01

    Full Text Available We first introduce and analyze one implicit iterative algorithm for finding a solution of the minimization problem for a convex and continuously Fréchet differentiable functional, with constraints of several problems: the generalized mixed equilibrium problem, the system of generalized equilibrium problems, and finitely many variational inclusions in a real Hilbert space. We prove a strong convergence theorem for the iterative algorithm under suitable conditions. On the other hand, we also propose another implicit iterative algorithm for finding a fixed point of infinitely many nonexpansive mappings with the same constraints, and derive its strong convergence under mild assumptions.

  13. Scheduling stochastic two-machine flow shop problems to minimize expected makespan

    Directory of Open Access Journals (Sweden)

    Mehdi Heydari

    2013-07-01

    Full Text Available During the past few years, despite tremendous contributions on the deterministic flow shop problem, there are only a limited number of works dedicated to stochastic cases. This paper examines stochastic scheduling problems in a two-machine flow shop environment for expected makespan minimization, where the processing times of jobs are normally distributed. Since jobs have stochastic processing times, to minimize the expected makespan, the expected sum of the second machine's free times is minimized. In other words, by minimizing waiting times for the second machine, it is possible to reach the minimum of the objective function. A mathematical method is proposed which utilizes the properties of the normal distribution. Furthermore, this method can be used as a heuristic method for other distributions, as long as the means and variances are available. The performance of the proposed method is explored using some numerical examples.
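    The objective being minimized is the expected makespan of a job sequence under normally distributed processing times. A minimal Monte-Carlo sketch of that objective for a fixed sequence is given below; the job means and standard deviations are made up, and the paper itself works with analytical properties of the normal distribution rather than simulation.

```python
# Sketch: Monte-Carlo estimate of the expected makespan of a fixed job sequence
# in a two-machine flow shop with normally distributed processing times.
# Job data are illustrative assumptions.
import random

jobs = [  # (mean_m1, std_m1, mean_m2, std_m2) for each job, in processing order
    (4.0, 0.5, 3.0, 0.4),
    (2.0, 0.3, 5.0, 0.6),
    (6.0, 0.8, 2.0, 0.3),
]

def makespan(times):
    """times: list of (p1, p2) per job in sequence order."""
    c1 = c2 = 0.0
    for p1, p2 in times:
        c1 += p1                 # completion on machine 1
        c2 = max(c2, c1) + p2    # machine 2 waits for machine 1 if it is idle
    return c2

def expected_makespan(jobs, n_samples=20000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        sample = [(max(0.0, rng.gauss(m1, s1)), max(0.0, rng.gauss(m2, s2)))
                  for m1, s1, m2, s2 in jobs]
        total += makespan(sample)
    return total / n_samples

print(round(expected_makespan(jobs), 3))
```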

  14. Minimization under entropy conditions, with applications in lower bound problems

    International Nuclear Information System (INIS)

    Toft, Joachim

    2004-01-01

    We minimize the functional f ↦ ∫ a f dμ under the entropy condition E(f) = -∫ f log f dμ ≥ E, ∫ f dμ = 1 and f ≥ 0, where E ∈ R is fixed. We prove that the minimum is attained for f = e^{-sa}/∫ e^{-sa} dμ, where s ∈ R is chosen such that E(f) = E. We apply the result to minimizing problems in pseudodifferential calculus, where we minimize the harmonic oscillator
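    On a discrete measure the stated minimizer f = e^{-sa}/∫ e^{-sa} dμ can be checked directly, choosing s so that the entropy constraint is active. The sketch below uses the counting measure on five points; the values of a and the target entropy level are illustrative assumptions.

```python
# Sketch: the minimizer f = e^{-s a} / ∫ e^{-s a} dμ on a discrete measure,
# with s found by bisection so that the entropy constraint E(f) = E holds.
# The measure, the values of a, and the target entropy are illustrative.
import numpy as np

mu = np.ones(5)                           # discrete reference measure (counting measure)
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # the function a integrated against f
E_target = 1.2                            # required entropy level (max here is log 5 ≈ 1.61)

def f_of_s(s):
    w = np.exp(-s * a)
    return w / np.sum(w * mu)             # enforces ∫ f dμ = 1

def entropy(f):
    return -np.sum(f * np.log(f) * mu)

# Bisection on s: for this a, the entropy decreases as s increases.
lo, hi = 0.0, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if entropy(f_of_s(mid)) > E_target:
        lo = mid
    else:
        hi = mid

f = f_of_s(lo)
print(entropy(f), np.sum(f * mu), np.sum(a * f * mu))  # ≈ E_target, 1, and the minimal ∫ a f dμ
```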

  15. Quantum N-body problem with a minimal length

    International Nuclear Information System (INIS)

    Buisseret, Fabien

    2010-01-01

    The quantum N-body problem is studied in the context of nonrelativistic quantum mechanics with a one-dimensional deformed Heisenberg algebra of the form [x,p] = i(1+βp²), leading to the existence of a minimal observable length √(β). For a generic pairwise interaction potential, analytical formulas are obtained that allow estimation of the ground-state energy of the N-body system by finding the ground-state energy of a corresponding two-body problem. It is first shown that in the harmonic oscillator case, the β-dependent term grows faster with increasing N than the β-independent term. Then, it is argued that such a behavior should also be observed with generic potentials and for D-dimensional systems. Consequently, quantum N-body bound states might be interesting places to look at nontrivial manifestations of a minimal length, since the more particles that are present, the more the system deviates from standard quantum-mechanical predictions.

  16. Free-energy minimization and the dark-room problem.

    Science.gov (United States)

    Friston, Karl; Thornton, Christopher; Clark, Andy

    2012-01-01

    Recent years have seen the emergence of an important new fundamental theory of brain function. This theory brings information-theoretic, Bayesian, neuroscientific, and machine learning approaches into a single framework whose overarching principle is the minimization of surprise (or, equivalently, the maximization of expectation). The most comprehensive such treatment is the "free-energy minimization" formulation due to Karl Friston (see e.g., Friston and Stephan, 2007; Friston, 2010a,b - see also Fiorillo, 2010; Thornton, 2010). A recurrent puzzle raised by critics of these models is that biological systems do not seem to avoid surprises. We do not simply seek a dark, unchanging chamber, and stay there. This is the "Dark-Room Problem." Here, we describe the problem and further unpack the issues to which it speaks. Using the same format as the prolog of Eddington's Space, Time, and Gravitation (Eddington, 1920) we present our discussion as a conversation between: an information theorist (Thornton), a physicist (Friston), and a philosopher (Clark).

  17. NP-hardness of the cluster minimization problem revisited

    Science.gov (United States)

    Adib, Artur B.

    2005-10-01

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.
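    The underlying task is to find particle positions minimizing a total energy built from a pairwise, distance-dependent potential. A minimal sketch of a *local* minimization of a tiny Lennard-Jones cluster is given below (the hard part discussed in the record is certifying the global minimum); the potential choice, particle count, and starting geometry are illustrative assumptions.

```python
# Sketch: local minimization of a 4-particle cluster under a pairwise
# Lennard-Jones potential (a distance-dependent potential as discussed above).
# This only finds a local minimum; the global problem is the hard one.
import numpy as np
from scipy.optimize import minimize

def lj_energy(flat_coords):
    x = flat_coords.reshape(-1, 3)
    e = 0.0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            r = np.linalg.norm(x[i] - x[j])
            e += 4.0 * (r**-12 - r**-6)
    return e

# Start near a regular tetrahedron so the local search behaves well.
x0 = np.array([
    [0.00, 0.00, 0.00],
    [1.10, 0.00, 0.00],
    [0.55, 0.95, 0.00],
    [0.55, 0.32, 0.90],
]).ravel()

res = minimize(lj_energy, x0, method="BFGS")
print(res.fun)   # energy of a local minimum (the LJ4 global minimum is about -6.0)
```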

  18. NP-hardness of the cluster minimization problem revisited

    International Nuclear Information System (INIS)

    Adib, Artur B

    2005-01-01

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested

  19. NP-hardness of the cluster minimization problem revisited

    Energy Technology Data Exchange (ETDEWEB)

    Adib, Artur B [Physics Department, Brown University, Providence, RI 02912 (United States)

    2005-10-07

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.

  20. Minimal investment risk of a portfolio optimization problem with budget and investment concentration constraints

    Science.gov (United States)

    Shinzato, Takashi

    2017-02-01

    In the present paper, the minimal investment risk for a portfolio optimization problem with imposed budget and investment concentration constraints is considered using replica analysis. Since the minimal investment risk is influenced by the investment concentration constraint (as well as the budget constraint), it is intuitive that the minimal investment risk for the problem with an investment concentration constraint can be larger than that without the constraint (that is, with only the budget constraint). Moreover, a numerical experiment shows the effectiveness of our proposed analysis. In contrast, the standard operations research approach failed to identify accurately the minimal investment risk of the portfolio optimization problem.
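    One plausible numerical reading of the constraint structure (budget plus investment concentration) is sketched below as a small constrained quadratic program; the covariance matrix, the risk definition, and the parameter values are assumptions for illustration and need not match the replica-analysis model used in the paper.

```python
# Sketch: minimal-risk portfolio under a budget constraint sum(w) = N and an
# investment-concentration constraint sum(w^2) = tau*N, solved numerically.
# Sigma, N, and tau are illustrative assumptions, not the paper's model.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, N, tau = 10, 10.0, 1.5
A = rng.normal(size=(n, n))
Sigma = A @ A.T / n + np.eye(n)           # a positive-definite covariance matrix

risk = lambda w: 0.5 * w @ Sigma @ w
constraints = [
    {"type": "eq", "fun": lambda w: np.sum(w) - N},           # budget
    {"type": "eq", "fun": lambda w: np.sum(w**2) - tau * N},  # concentration
]
res = minimize(risk, x0=np.full(n, N / n), method="SLSQP", constraints=constraints)
print(res.fun, np.sum(res.x), np.sum(res.x**2))
```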

  1. On the uniqueness of minimizers for a class of variational problems with Polyconvex integrand

    KAUST Repository

    Awi, Romeo

    2017-02-05

    We prove existence and uniqueness of minimizers for a family of energy functionals that arises in Elasticity and involves polyconvex integrands over a certain subset of displacement maps. This work extends previous results by Awi and Gangbo to a larger class of integrands. First, we study these variational problems over displacements for which the determinant is positive. Second, we consider a limit case in which the functionals are degenerate. In that case, the set of admissible displacements reduces to that of incompressible displacements which are measure preserving maps. Finally, we establish that the minimizer over the set of incompressible maps may be obtained as a limit of minimizers corresponding to a sequence of minimization problems over general displacements provided we have enough regularity on the dual problems. We point out that these results defy the direct methods of the calculus of variations.

  2. On the minimizers of calculus of variations problems in Hilbert spaces

    KAUST Repository

    Gomes, Diogo A.

    2014-01-19

    The objective of this paper is to discuss existence, uniqueness and regularity issues of minimizers of one dimensional calculus of variations problem in Hilbert spaces. © 2014 Springer-Verlag Berlin Heidelberg.

  3. On the minimizers of calculus of variations problems in Hilbert spaces

    KAUST Repository

    Gomes, Diogo A.; Nurbekyan, Levon

    2014-01-01

    The objective of this paper is to discuss existence, uniqueness and regularity issues of minimizers of one dimensional calculus of variations problem in Hilbert spaces. © 2014 Springer-Verlag Berlin Heidelberg.

  4. Minimizers of a Class of Constrained Vectorial Variational Problems: Part I

    KAUST Repository

    Hajaiej, Hichem

    2014-04-18

    In this paper, we prove the existence of minimizers of a class of multiconstrained variational problems. We consider systems involving a nonlinearity that does not satisfy compactness, monotonicity, or symmetry properties. Our approach hinges on the concentration-compactness approach. In the second part, we will treat orthogonal constrained problems for another class of integrands using the density matrices method. © 2014 Springer Basel.

  5. Minimizers of a Class of Constrained Vectorial Variational Problems: Part I

    KAUST Repository

    Hajaiej, Hichem; Markowich, Peter A.; Trabelsi, Saber

    2014-01-01

    In this paper, we prove the existence of minimizers of a class of multiconstrained variational problems. We consider systems involving a nonlinearity that does not satisfy compactness, monotonicity, or symmetry properties. Our approach hinges

  6. Minimization of Linear Functionals Defined on Solutions of Large-Scale Discrete Ill-Posed Problems

    DEFF Research Database (Denmark)

    Elden, Lars; Hansen, Per Christian; Rojas, Marielba

    2003-01-01

    The minimization of linear functionals defined on the solutions of discrete ill-posed problems arises, e.g., in the computation of confidence intervals for these solutions. In 1990, Elden proposed an algorithm for this minimization problem based on a parametric-programming reformulation involving the solution of a sequence of trust-region problems, and using matrix factorizations. In this paper, we describe MLFIP, a large-scale version of this algorithm where a limited-memory trust-region solver is used on the subproblems. We illustrate the use of our algorithm in connection with an inverse heat...

  7. Sensitivity computation of the l1 minimization problem and its application to dictionary design of ill-posed problems

    International Nuclear Information System (INIS)

    Horesh, L; Haber, E

    2009-01-01

    The l1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application for inverse problems. Most studies have concentrated in devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties which are associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
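    The inner problem referred to above, sparse representation of a signal over a *fixed* dictionary, is commonly solved by iterative soft-thresholding (ISTA). A minimal sketch is given below; the random dictionary, signal, and regularization weight are illustrative assumptions, and the paper's actual contribution, designing the dictionary itself, is the harder outer problem.

```python
# Sketch: sparse coding over a fixed dictionary via ISTA for
#   min_x 0.5*||D x - y||^2 + lam*||x||_1.
# Dictionary, signal, and lam are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 30, 80, 4
D = rng.normal(size=(m, n))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = D @ x_true

lam = 0.05
L = np.linalg.norm(D, 2) ** 2             # Lipschitz constant of the smooth part's gradient
x = np.zeros(n)
for _ in range(500):
    grad = D.T @ (D @ x - y)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

print(np.linalg.norm(D @ x - y), np.count_nonzero(np.abs(x) > 1e-3))
```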

  8. Sensitivity computation of the ℓ1 minimization problem and its application to dictionary design of ill-posed problems

    Science.gov (United States)

    Horesh, L.; Haber, E.

    2009-09-01

    The ℓ1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application for inverse problems. Most studies have concentrated in devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties which are associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.

  9. Solving Minimal Covering Location Problems with Single and Multiple Node Coverage

    Directory of Open Access Journals (Sweden)

    Darko DRAKULIĆ

    2016-12-01

    Full Text Available Location science represents a very attractive research field in combinatorial optimization, and it has been in expansion over the last five decades. The main objective of location problems is determining the best position for facilities in a given set of nodes. Location science includes techniques for modelling problems and methods for solving them. This paper presents results of solving two types of minimal covering location problems, with single and multiple node coverage, by using the CPLEX optimizer and the Particle Swarm Optimization method.
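    The core combinatorial structure, choosing facility positions so that demand nodes are covered, can be illustrated on a toy instance by brute force. The coverage sets below are made up, and the exact covering variant studied in the paper (single versus multiple coverage, solved with CPLEX and PSO) may differ from this simplified sketch.

```python
# Sketch: a tiny covering location instance solved by brute-force enumeration.
# cover[f] = set of demand nodes that a facility placed at f would cover (made up).
from itertools import combinations

demand = {1, 2, 3, 4, 5, 6}
cover = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {4, 5, 6},
    "D": {1, 6},
    "E": {2, 5},
}

best = None
for r in range(1, len(cover) + 1):
    for subset in combinations(cover, r):
        covered = set().union(*(cover[f] for f in subset))
        if covered >= demand:
            best = subset
            break
    if best:
        break

print(best)   # a minimum-cardinality set of facilities covering all demand nodes
```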

  10. A Hybrid ACO Approach to the Matrix Bandwidth Minimization Problem

    Science.gov (United States)

    Pintea, Camelia-M.; Crişan, Gloria-Cerasela; Chira, Camelia

    The evolution of human society raises more and more difficult endeavors. For some real-life problems, computing-time restrictions enhance their complexity. The Matrix Bandwidth Minimization Problem (MBMP) seeks a simultaneous permutation of the rows and the columns of a square matrix in order to keep its nonzero entries close to the main diagonal. The MBMP is a highly investigated NP-complete problem, as it has broad applications in industry, logistics, artificial intelligence, and information recovery. This paper describes a new attempt to use the Ant Colony Optimization framework in tackling the MBMP. The introduced model is based on the hybridization of the Ant Colony System technique with new local search mechanisms. Computational experiments confirm a good performance of the proposed algorithm for the considered set of MBMP instances.

  11. Localised controlled release of simvastatin from porous chitosan–gelatin scaffolds engrafted with simvastatin loaded PLGA-microparticles for bone tissue engineering application

    Energy Technology Data Exchange (ETDEWEB)

    Gentile, Piergiorgio [Department of Mechanical and Aerospace Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin (Italy); School of Clinical Dentistry, University of Sheffield, 19 Claremont Crescent, Sheffield (United Kingdom); Nandagiri, Vijay Kumar [Department of Mechanical and Aerospace Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin (Italy); School of Pharmacy, Royal College of Surgeons in Ireland, 123, St. Stephen Green, Dublin 2 (Ireland); Daly, Jacqueline [Division of Biology, Department of Anatomy, Royal College of Surgeons in Ireland, 123, St. Stephen Green, Dublin 2 (Ireland); Chiono, Valeria; Mattu, Clara; Tonda-Turo, Chiara; Ciardelli, Gianluca [Department of Mechanical and Aerospace Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin (Italy); Ramtoola, Zebunnissa, E-mail: zramtoola@rcsi.ie [School of Pharmacy, Royal College of Surgeons in Ireland, 123, St. Stephen Green, Dublin 2 (Ireland)

    2016-02-01

    Localised controlled release of simvastatin from porous freeze-dried chitosan–gelatin (CH–G) scaffolds was investigated by incorporating simvastatin loaded poly-(DL-lactide-co-glycolide) acid (PLGA) microparticles (MSIMs) into the scaffolds. MSIMs at 10% w/w simvastatin loading were prepared using a single emulsion-solvent evaporation method. The MSIM optimal amount to be incorporated into the scaffolds was selected by analysing the effect of embedding increasing amounts of blank PLGA microparticles (BL-MPs) on the scaffold physical properties and on the in vitro cell viability using a clonal human osteoblastic cell line (hFOB). Increasing the BL-MP content from 0% to 33.3% w/w showed a significant decrease in swelling degree (from 1245 ± 56% to 570 ± 35%). Scaffold pore size and distribution changed significantly as a function of BL-MP loading. Compressive modulus of scaffolds increased with increasing BL-MP amount up to 16.6% w/w (23.0 ± 1.0 kPa). No significant difference in cell viability was observed with increasing BL-MP loading. Based on these results, a content of 16.6% w/w MSIM particles was incorporated successfully in CH–G scaffolds, showing a controlled localised release of simvastatin able to influence the hFOB cell proliferation and the osteoblastic differentiation after 11 days. - Highlights: • Simvastatin loaded PLGA microparticle engrafted porous CH–G scaffolds were produced. • The microparticle optimal amount to be incorporated into the scaffolds was studied. • Physical properties of scaffolds changed as a function of microparticle loading. • The level of simvastatin released enhanced cell proliferation and mineralisation.

  12. Localised controlled release of simvastatin from porous chitosan–gelatin scaffolds engrafted with simvastatin loaded PLGA-microparticles for bone tissue engineering application

    International Nuclear Information System (INIS)

    Gentile, Piergiorgio; Nandagiri, Vijay Kumar; Daly, Jacqueline; Chiono, Valeria; Mattu, Clara; Tonda-Turo, Chiara; Ciardelli, Gianluca; Ramtoola, Zebunnissa

    2016-01-01

    Localised controlled release of simvastatin from porous freeze-dried chitosan–gelatin (CH–G) scaffolds was investigated by incorporating simvastatin loaded poly-(DL-lactide-co-glycolide) acid (PLGA) microparticles (MSIMs) into the scaffolds. MSIMs at 10% w/w simvastatin loading were prepared using a single emulsion-solvent evaporation method. The MSIM optimal amount to be incorporated into the scaffolds was selected by analysing the effect of embedding increasing amounts of blank PLGA microparticles (BL-MPs) on the scaffold physical properties and on the in vitro cell viability using a clonal human osteoblastic cell line (hFOB). Increasing the BL-MP content from 0% to 33.3% w/w showed a significant decrease in swelling degree (from 1245 ± 56% to 570 ± 35%). Scaffold pore size and distribution changed significantly as a function of BL-MP loading. Compressive modulus of scaffolds increased with increasing BL-MP amount up to 16.6% w/w (23.0 ± 1.0 kPa). No significant difference in cell viability was observed with increasing BL-MP loading. Based on these results, a content of 16.6% w/w MSIM particles was incorporated successfully in CH–G scaffolds, showing a controlled localised release of simvastatin able to influence the hFOB cell proliferation and the osteoblastic differentiation after 11 days. - Highlights: • Simvastatin loaded PLGA microparticle engrafted porous CH–G scaffolds were produced. • The microparticle optimal amount to be incorporated into the scaffolds was studied. • Physical properties of scaffolds changed as a function of microparticle loading. • The level of simvastatin released enhanced cell proliferation and mineralisation.

  13. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    Science.gov (United States)

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295

  14. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    Directory of Open Access Journals (Sweden)

    Shih-Wei Lin

    2014-01-01

    Full Text Available Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set.
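    The iterated greedy idea, destroy part of the current solution, then greedily re-insert the removed elements, can be sketched on a toy berth-allocation instance as below. The ship data, number of berths, destruction size, and the improvement-only acceptance rule are illustrative assumptions rather than the paper's benchmark setup.

```python
# Sketch of the iterated greedy (IG) loop on a toy discrete berth allocation:
# destroy (remove a few ships), then greedily re-insert each at its best position.
# Ship data, berth count, and IG parameters are illustrative assumptions.
import random

ships = [  # (arrival_time, handling_time)
    (0, 5), (1, 3), (2, 6), (3, 2), (4, 4), (5, 3), (6, 5), (7, 2),
]
n_berths = 2

def total_service_time(solution):
    """solution: list of berths, each berth a list of ship indices in service order."""
    total = 0
    for berth in solution:
        t = 0
        for s in berth:
            arrival, handling = ships[s]
            start = max(t, arrival)
            t = start + handling
            total += t - arrival          # service time = waiting + handling
    return total

def greedy_insert(solution, ship):
    best = None
    for b in range(n_berths):
        for pos in range(len(solution[b]) + 1):
            cand = [list(x) for x in solution]
            cand[b].insert(pos, ship)
            cost = total_service_time(cand)
            if best is None or cost < best[0]:
                best = (cost, cand)
    return best[1]

random.seed(0)
sol = [[] for _ in range(n_berths)]       # initial greedy construction
for s in range(len(ships)):
    sol = greedy_insert(sol, s)

best_sol, best_cost = sol, total_service_time(sol)
for _ in range(200):                      # IG main loop: destruct + reconstruct
    removed = random.sample(range(len(ships)), 3)
    partial = [[s for s in berth if s not in removed] for berth in best_sol]
    for s in removed:
        partial = greedy_insert(partial, s)
    cost = total_service_time(partial)
    if cost < best_cost:                  # improvement-only acceptance (a simplification)
        best_sol, best_cost = partial, cost

print(best_cost, best_sol)
```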

  15. Limit behavior of mass critical Hartree minimization problems with steep potential wells

    Science.gov (United States)

    Guo, Yujin; Luo, Yong; Wang, Zhi-Qiang

    2018-06-01

    We consider minimizers of the following mass critical Hartree minimization problem: eλ(N) ≔ inf{ Eλ(u) : u ∈ H¹(ℝ^d), ‖u‖₂² = N }, where d ≥ 3, λ > 0, and the Hartree energy functional Eλ(u) is defined by Eλ(u) ≔ ∫_{ℝ^d} |∇u(x)|² dx + λ ∫_{ℝ^d} g(x) u²(x) dx − (1/2) ∫_{ℝ^d} ∫_{ℝ^d} u²(x) u²(y) / |x − y|² dx dy. Here the steep potential g(x) satisfies 0 = g(0) = inf_{ℝ^d} g(x) ≤ g(x) ≤ 1 and 1 − g(x) ∈ L^{d/2}(ℝ^d). We prove that there exists a constant N* > 0, independent of λ and g(x), such that if N ≥ N*, then eλ(N) does not admit minimizers for any λ > 0; if 0 < N < N*, then there exists a constant λ*(N) > 0 such that eλ(N) admits minimizers for any λ > λ*(N) and eλ(N) does not admit minimizers for 0 < λ ≤ λ*(N). For any given 0 < N < N*, the limit behavior of positive minimizers for eλ(N) is also studied as λ → ∞, where the mass concentrates at the bottom of g(x).

  16. A new mathematical model for single machine batch scheduling problem for minimizing maximum lateness with deteriorating jobs

    Directory of Open Access Journals (Sweden)

    Ahmad Zeraatkar Moghaddam

    2012-01-01

    Full Text Available This paper presents a mathematical model for the problem of minimizing the maximum lateness on a single machine when deteriorating jobs are delivered to each customer in various size batches. In reality, this issue may arise within a supply chain in which delivering goods to customers entails cost. Under such a situation, keeping completed jobs to deliver in batches may result in reduced delivery costs. In the batch scheduling literature, minimizing the maximum lateness is known to be NP-Hard; therefore the present problem, which aims at minimizing delivery costs in addition to the aforementioned objective function, remains NP-Hard. In order to solve the proposed model, a simulated annealing meta-heuristic is used, where the parameters are calibrated by a Taguchi approach and the results are compared to the global optimal values generated by Lingo 10 software. Furthermore, in order to check the efficiency of the proposed method on larger problem instances, a lower bound is generated. The results are also analyzed based on the effective factors of the problem. A computational study validates the efficiency and the accuracy of the presented model.
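    A stripped-down sketch of the simulated annealing component, here applied only to maximum-lateness sequencing on a single machine, is shown below. Batching, delivery costs, and job deterioration from the paper are omitted, and the job data, neighbourhood, and cooling schedule are illustrative assumptions.

```python
# Sketch: simulated annealing for minimizing maximum lateness on a single machine.
# Job data and SA parameters are illustrative; batching and deterioration are omitted.
import math
import random

jobs = [  # (processing_time, due_date)
    (4, 10), (2, 6), (6, 18), (3, 9), (5, 20), (1, 4),
]

def max_lateness(seq):
    t, worst = 0, float("-inf")
    for j in seq:
        p, d = jobs[j]
        t += p
        worst = max(worst, t - d)
    return worst

random.seed(1)
seq = list(range(len(jobs)))
best, best_cost = seq[:], max_lateness(seq)
cost, temp = best_cost, 10.0
for _ in range(5000):
    i, k = random.sample(range(len(seq)), 2)
    cand = seq[:]
    cand[i], cand[k] = cand[k], cand[i]            # swap-neighbourhood move
    c = max_lateness(cand)
    if c < cost or random.random() < math.exp((cost - c) / temp):
        seq, cost = cand, c
        if c < best_cost:
            best, best_cost = cand, c
    temp *= 0.999                                   # geometric cooling

print(best, best_cost)   # for this relaxed 1||Lmax problem, EDD order is optimal
```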

  17. A discrete firefly meta-heuristic with local search for makespan minimization in permutation flow shop scheduling problems

    Directory of Open Access Journals (Sweden)

    Nader Ghaffari-Nasab

    2010-07-01

    Full Text Available During the past two decades, there has been increasing interest in the permutation flow shop problem with different types of objective functions, such as minimizing the makespan, the weighted mean flow-time, etc. The permutation flow shop is formulated as a mixed integer program and is classified as an NP-Hard problem. Therefore, a direct solution is not available and meta-heuristic approaches need to be used to find near-optimal solutions. In this paper, we present a new discrete firefly meta-heuristic to minimize the makespan for the permutation flow shop scheduling problem. The results of the implementation of the proposed method are compared with an existing ant colony optimization technique. The preliminary results indicate that the new proposed method performs better than the ant colony for some well-known benchmark problems.
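    The objective the firefly heuristic minimizes is the makespan of a job permutation. A minimal sketch of that makespan recursion, together with brute-force enumeration on a tiny instance, is given below; the processing-time matrix is made up and is far smaller than the benchmark instances mentioned above.

```python
# Sketch: makespan of a permutation flow shop (the objective minimized above),
# plus brute-force search over permutations for a tiny illustrative instance.
from itertools import permutations

# p[j][m] = processing time of job j on machine m (3 jobs, 3 machines, made up)
p = [
    [3, 2, 4],
    [2, 5, 1],
    [4, 1, 3],
]

def makespan(order):
    n_machines = len(p[0])
    completion = [0] * n_machines          # completion time of the last job on each machine
    for j in order:
        for m in range(n_machines):
            prev = completion[m - 1] if m > 0 else 0
            completion[m] = max(completion[m], prev) + p[j][m]
    return completion[-1]

best = min(permutations(range(len(p))), key=makespan)
print(best, makespan(best))
```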

  18. Minimal surfaces

    CERN Document Server

    Dierkes, Ulrich; Sauvigny, Friedrich; Jakob, Ruben; Kuster, Albrecht

    2010-01-01

    Minimal Surfaces is the first volume of a three volume treatise on minimal surfaces (Grundlehren Nr. 339-341). Each volume can be read and studied independently of the others. The central theme is boundary value problems for minimal surfaces. The treatise is a substantially revised and extended version of the monograph Minimal Surfaces I, II (Grundlehren Nr. 295 & 296). The first volume begins with an exposition of basic ideas of the theory of surfaces in three-dimensional Euclidean space, followed by an introduction of minimal surfaces as stationary points of area, or equivalently

  19. The analytic solution of the firm's cost-minimization problem with box constraints and the Cobb-Douglas model

    Science.gov (United States)

    Bayón, L.; Grau, J. M.; Ruiz, M. M.; Suárez, P. M.

    2012-12-01

    One of the most well-known problems in the field of Microeconomics is the Firm's Cost-Minimization Problem. In this paper we establish the analytical expression for the cost function using the Cobb-Douglas model and considering maximum constraints for the inputs. Moreover we prove that it belongs to the class C1.
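    The same cost-minimization problem can be checked numerically against the analytic expression the paper derives. Below is a minimal sketch with a Cobb-Douglas technology and box (capacity) constraints on the inputs; all prices, exponents, the output target, and the bounds are illustrative assumptions.

```python
# Sketch: the firm's cost-minimization problem with a Cobb-Douglas technology
# and box constraints on the inputs, solved numerically. The paper derives the
# analytic cost function; all numbers below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

w = np.array([2.0, 3.0])        # input prices
a, b, A = 0.4, 0.5, 1.0         # Cobb-Douglas exponents and scale
Q = 4.0                         # required output level
upper = np.array([10.0, 4.0])   # box constraints (maximum input levels)

cost = lambda x: w @ x
cons = [{"type": "ineq", "fun": lambda x: A * x[0]**a * x[1]**b - Q}]  # produce at least Q
bounds = [(1e-6, upper[0]), (1e-6, upper[1])]

res = minimize(cost, x0=np.array([5.0, 3.0]), method="SLSQP",
               bounds=bounds, constraints=cons)
print(res.x, res.fun)           # cost-minimizing input bundle (x2 hits its bound here) and minimal cost
```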

  20. Mathematical models for a batch scheduling problem to minimize earliness and tardiness

    Directory of Open Access Journals (Sweden)

    Basar Ogun

    2018-05-01

    Full Text Available Purpose: Today’s manufacturing facilities are challenged by highly customized products and just-in-time manufacturing and delivery of these products. In this study, a batch scheduling problem is addressed to provide on-time completion of customer orders in the environment of lean manufacturing. The problem is to optimize partitioning of product components into batches and scheduling of the resulting batches, where each customer order is received as a set of products made of various components. Design/methodology/approach: Three different mathematical models for minimization of total earliness and tardiness of customer orders are developed to provide on-time completion of customer orders and also to avoid inventory of final products. The first model is a non-linear integer programming model, while the second is a linearized version of the first. Finally, to solve larger sized instances of the problem, an alternative linear integer model is presented. Findings: A computational study using a suite of test instances showed that the alternative linear integer model is able to solve all test instances of varying sizes within quite short computer times compared to the other two models. It was also shown that the alternative model can solve moderate-sized real-world problems. Originality/value: The problem under study differs from existing batch scheduling problems in the literature since it includes new circumstances which may arise in real-world applications. This research also contributes to the batch scheduling literature by presenting new optimization models.

  1. A novel discrete PSO algorithm for solving job shop scheduling problem to minimize makespan

    Science.gov (United States)

    Rameshkumar, K.; Rajendran, C.

    2018-02-01

    In this work, a discrete version of the PSO algorithm is proposed to minimize the makespan of a job shop. A novel schedule builder has been utilized to generate active schedules. The discrete PSO is tested using well-known benchmark problems available in the literature. The solutions produced by the proposed algorithm are compared with the best known solutions published in the literature and also with a hybrid particle swarm algorithm and a variable neighborhood search PSO algorithm. The solution construction methodology adopted in this study is found to be effective in producing good quality solutions for the various benchmark job-shop scheduling problems.

  2. Triple Hierarchical Variational Inequalities with Constraints of Mixed Equilibria, Variational Inequalities, Convex Minimization, and Hierarchical Fixed Point Problems

    Directory of Open Access Journals (Sweden)

    Lu-Chuan Ceng

    2014-01-01

    Full Text Available We introduce and analyze a hybrid iterative algorithm by virtue of Korpelevich's extragradient method, the viscosity approximation method, the hybrid steepest-descent method, and the averaged mapping approach to the gradient-projection algorithm. It is proven that under appropriate assumptions, the proposed algorithm converges strongly to a common element of the fixed point set of infinitely many nonexpansive mappings, the solution set of finitely many generalized mixed equilibrium problems (GMEPs), the solution set of finitely many variational inequality problems (VIPs), the solution set of the general system of variational inequalities (GSVI), and the set of minimizers of the convex minimization problem (CMP), which is just a unique solution of a triple hierarchical variational inequality (THVI) in a real Hilbert space. In addition, we also consider the application of the proposed algorithm to solve a hierarchical fixed point problem with constraints of finitely many GMEPs, finitely many VIPs, the GSVI, and the CMP. The results obtained in this paper improve and extend the corresponding results announced by many others.

  3. A new smoothing modified three-term conjugate gradient method for [Formula: see text]-norm minimization problem.

    Science.gov (United States)

    Du, Shouqiang; Chen, Miao

    2018-01-01

    We consider a kind of nonsmooth optimization problem with [Formula: see text]-norm minimization, which has many applications in compressed sensing, signal reconstruction, and related engineering problems. Using smoothing approximation techniques, this kind of nonsmooth optimization problem can be transformed into a general unconstrained optimization problem, which can be solved by the proposed smoothing modified three-term conjugate gradient method. The smoothing modified three-term conjugate gradient method is based on the Polak-Ribière-Polyak conjugate gradient method. Since the Polak-Ribière-Polyak conjugate gradient method has good numerical properties, the proposed method possesses the sufficient descent property without any line searches, and it is also proved to be globally convergent. Finally, numerical experiments show the efficiency of the proposed method.

  4. Minimizing total weighted tardiness for the single machine scheduling problem with dependent setup time and precedence constraints

    Directory of Open Access Journals (Sweden)

    Hamidreza Haddad

    2012-04-01

    Full Text Available This paper tackles the single machine scheduling problem with dependent setup times and precedence constraints. The primary objective of this paper is the minimization of total weighted tardiness. Since the resulting problem is NP-hard, we use a metaheuristic method to solve the resulting model. The proposed model of this paper uses a genetic algorithm to solve the problem in a reasonable amount of time. Because of the high sensitivity of GA to its initial parameter values, a Taguchi approach is presented to calibrate its parameters. Computational experiments validate the effectiveness and capability of the proposed method.

  5. Regularity of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht

    2010-01-01

    "Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t

  6. MOCUS, Minimal Cut Sets and Minimal Path Sets from Fault Tree Analysis

    International Nuclear Information System (INIS)

    Fussell, J.B.; Henry, E.B.; Marshall, N.H.

    1976-01-01

    1 - Description of problem or function: From a description of the Boolean failure logic of a system, called a fault tree, and control parameters specifying the minimal cut set length to be obtained MOCUS determines the system failure modes, or minimal cut sets, and the system success modes, or minimal path sets. 2 - Method of solution: MOCUS uses direct resolution of the fault tree into the cut and path sets. The algorithm used starts with the main failure of interest, the top event, and proceeds to basic independent component failures, called primary events, to resolve the fault tree to obtain the minimal sets. A key point of the algorithm is that an and gate alone always increases the number of path sets; an or gate alone always increases the number of cut sets and increases the size of path sets. Other types of logic gates must be described in terms of and and or logic gates. 3 - Restrictions on the complexity of the problem: Output from MOCUS can include minimal cut and path sets for up to 20 gates
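    The top-down resolution described above (OR gates multiply the rows, AND gates enlarge them, followed by removal of non-minimal sets) can be sketched on a toy fault tree as below; the gate structure and event names are made up, not from MOCUS itself.

```python
# Sketch of top-down fault-tree resolution in the spirit of MOCUS: expand from the
# top event (OR gates add cut sets, AND gates enlarge them), then drop supersets.
# The fault tree below is an illustrative assumption.

gates = {  # gate -> (type, inputs); inputs are gates or primary events (strings)
    "TOP": ("OR",  ["G1", "E3"]),
    "G1":  ("AND", ["E1", "G2"]),
    "G2":  ("OR",  ["E2", "E3"]),
}

def cut_sets(node):
    if node not in gates:                     # primary event
        return [{node}]
    kind, inputs = gates[node]
    if kind == "OR":                          # union of the inputs' cut sets
        result = []
        for child in inputs:
            result.extend(cut_sets(child))
        return result
    # AND gate: cross-product, merging one cut set from each input
    result = [set()]
    for child in inputs:
        result = [r | c for r in result for c in cut_sets(child)]
    return result

def minimal(sets):
    return [s for s in sets if not any(t < s for t in sets)]

print(minimal(cut_sets("TOP")))   # e.g. [{'E1', 'E2'}, {'E3'}]
```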

  7. Exact and Heuristic Solutions to Minimize Total Waiting Time in the Blood Products Distribution Problem

    Directory of Open Access Journals (Sweden)

    Amir Salehipour

    2012-01-01

    Full Text Available This paper presents a novel application of operations research to support decision making in blood distribution management. The rapidly increasing and dynamic demand, the criticality of the product, storage, handling, and distribution requirements, and the different geographical locations of hospitals and medical centers have made blood distribution a complex and important problem. In this study, a real blood distribution problem containing 24 hospitals was tackled by the authors, and an exact approach was presented. The objective of the problem is to distribute blood and its products among hospitals and medical centers such that the total waiting time of those requiring the product is minimized. Following the exact solution, a hybrid heuristic algorithm is proposed. Computational experiments showed that optimal solutions could be obtained for medium size instances, while for larger instances the proposed hybrid heuristic is very competitive.

  8. Convex Minimization with Constraints of Systems of Variational Inequalities, Mixed Equilibrium, Variational Inequality, and Fixed Point Problems

    Directory of Open Access Journals (Sweden)

    Lu-Chuan Ceng

    2014-01-01

    Full Text Available We introduce and analyze one iterative algorithm by a hybrid shrinking projection method for finding a solution of the minimization problem for a convex and continuously Fréchet differentiable functional, with constraints of several problems: finitely many generalized mixed equilibrium problems, finitely many variational inequalities, the general system of variational inequalities and the fixed point problem of an asymptotically strict pseudocontractive mapping in the intermediate sense in a real Hilbert space. We prove a strong convergence theorem for the iterative algorithm under suitable conditions. On the other hand, we also propose another iterative algorithm by a hybrid shrinking projection method for finding a fixed point of infinitely many nonexpansive mappings with the same constraints, and derive its strong convergence under mild assumptions.

  9. Inverse atmospheric radiative transfer problems - A nonlinear minimization search method of solution. [aerosol pollution monitoring

    Science.gov (United States)

    Fymat, A. L.

    1976-01-01

    The paper studies the inversion of the radiative transfer equation describing the interaction of electromagnetic radiation with atmospheric aerosols. The interaction can be considered as the propagation in the aerosol medium of two light beams: the direct beam in the line-of-sight attenuated by absorption and scattering, and the diffuse beam arising from scattering into the viewing direction, which propagates more or less in random fashion. The latter beam has single scattering and multiple scattering contributions. In the former case and for single scattering, the problem is reducible to first-kind Fredholm equations, while for multiple scattering it is necessary to invert partial integrodifferential equations. A nonlinear minimization search method, applicable to the solution of both types of problems has been developed, and is applied here to the problem of monitoring aerosol pollution, namely the complex refractive index and size distribution of aerosol particles.

  10. An information geometric approach to least squares minimization

    Science.gov (United States)

    Transtrum, Mark; Machta, Benjamin; Sethna, James

    2009-03-01

    Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
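    The standard algorithm referred to above, Levenberg-Marquardt, is readily available in common libraries. A minimal sketch of a nonlinear least-squares fit with it is shown below; the exponential model, synthetic data, and noise level are illustrative assumptions, not the examples used in the record.

```python
# Sketch: nonlinear least-squares parameter estimation with the standard
# Levenberg-Marquardt algorithm discussed above (model and data are made up).
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 4.0, 40)
true_params = (2.0, 1.3)
rng = np.random.default_rng(0)
y = true_params[0] * np.exp(-true_params[1] * t) + 0.02 * rng.normal(size=t.size)

def residuals(params):
    amp, rate = params
    return amp * np.exp(-rate * t) - y     # model prediction minus data

fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")
print(fit.x)   # should be close to (2.0, 1.3)
```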

  11. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  12. Sequential unconstrained minimization algorithms for constrained optimization

    International Nuclear Information System (INIS)

    Byrne, Charles

    2008-01-01

    The problem of minimizing a function f(x): R^J → R, subject to constraints on the vector variable x, occurs frequently in inverse problems. Even without constraints, finding a minimizer of f(x) may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the kth step we minimize the function G_k(x) = f(x) + g_k(x) to obtain x^k. The auxiliary functions g_k(x): D ⊂ R^J → R_+ are nonnegative on the set D, each x^k is assumed to lie within D, and the objective is to minimize the continuous function f: R^J → R over x in the set C = D̄, the closure of D. We assume that such minimizers exist, and denote one such by x̂. We assume that the functions g_k(x) satisfy the inequalities 0 ≤ g_k(x) ≤ G_{k-1}(x) − G_{k-1}(x^{k-1}), for k = 2, 3, .... Using this assumption, we show that the sequence {f(x^k)} is decreasing and converges to f(x̂). If the restriction of f(x) to D has bounded level sets, which happens if x̂ is unique and f(x) is closed, proper and convex, then the sequence {x^k} is bounded, and f(x*) = f(x̂) for any cluster point x*. Therefore, if x̂ is unique, x* = x̂ and {x^k} → x̂. When x̂ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton–Raphson method. The proof techniques used for SUMMA can be extended to obtain related results
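    One of the particular cases named above, the barrier-function method, can be sketched in a line or two: each step minimizes G_k(x) = f(x) + g_k(x) without constraints, and the iterates approach the constrained minimizer. The objective, constraint set, and barrier weights below are illustrative assumptions, and no claim is made that this toy barrier satisfies the SUMMA inequalities literally.

```python
# Sketch: the barrier-function method (a particular case mentioned above).
# Minimize f(x) = (x - 2)^2 over C = {x <= 1} via a sequence of unconstrained
# problems G_k(x) = f(x) + g_k(x). All numbers are illustrative assumptions.
import math
from scipy.optimize import minimize_scalar

f = lambda x: (x - 2.0) ** 2
barrier = lambda x: -math.log(1.0 - x) if x < 1.0 else float("inf")

x_k = 0.0
for k in range(1, 30):
    eps = 1.0 / k                                   # barrier weight shrinks with k
    G_k = lambda x, eps=eps: f(x) + eps * barrier(x)
    res = minimize_scalar(G_k, bounds=(-5.0, 1.0 - 1e-12), method="bounded")
    x_k = res.x

print(x_k)   # approaches the constrained minimizer x̂ = 1
```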

  13. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue.

    Science.gov (United States)

    Jezernik, Saso; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.

  14. CONSIDERATIONS ON THE MINI-CONSTITUENT PROPOSAL AND THE MINIMIZATION OF PROBLEMS (FROM CONSTITUTIONAL DIRIGISME TO THE FRUSTRATION OF EXPECTATIONS)

    OpenAIRE

    Padua, Átila Andrade

    2015-01-01

    With the June 2013 protest movements, the proposal to convene a "mini constituent" assembly was fostered as a possible way to minimize the problems experienced by Brazilian society. Considering the constitutional work of the influential Portuguese constitutionalist José Joaquim Gomes Canotilho, this constitutional model, and the breaking of paradigms represented by the turgid Brazilian and Portuguese constitutions, special attention is dedicated to the differentiation of programmatic norms from the constitutional di...

  15. Minimizing the Carbon Footprint for the Time-Dependent Heterogeneous-Fleet Vehicle Routing Problem with Alternative Paths

    Directory of Open Access Journals (Sweden)

    Wan-Yu Liu

    2014-07-01

    Full Text Available To respond to the reduction of greenhouse gas emissions and global warming, this paper investigates the minimal-carbon-footprint time-dependent heterogeneous-fleet vehicle routing problem with alternative paths (MTHVRPP). This finds a route with the smallest carbon footprint, instead of the shortest route distance, which is the conventional approach, to serve a number of customers with a heterogeneous fleet of vehicles in cases where there may not be only one path between each pair of customers, and the vehicle speed differs at different times of the day. Inheriting the NP-hardness of the vehicle routing problem, the MTHVRPP is also NP-hard. This paper further proposes a genetic algorithm (GA) to solve this problem. The solution represented by our GA determines the customer serving order of each vehicle type. Then, a capacity check is used to classify the multiple routes of each vehicle type, and path selection determines the detailed paths of each route. Additionally, this paper improves the energy consumption model used for calculating the carbon footprint amount more precisely. Compared with the results without alternative paths, our experimental results show that the alternative paths in this experiment have a significant impact on the experimental results in terms of carbon footprint.

  16. Theories of minimalism in architecture: Post scriptum

    Directory of Open Access Journals (Sweden)

    Stevanović Vladimir

    2012-01-01

    Full Text Available Owing to the period of intensive development in the last decade of the XX century, the architectural phenomenon called Minimalism in Architecture was remembered as the Style of the Nineties, which is characterized, morphologically speaking, by simplicity and formal reduction. Simultaneously with its development in practice, several dominant interpretative models were able to establish themselves on a theoretical level. The new millennium and time distance bring new problems; therefore this paper discusses specific theorizations related to Minimalism in Architecture that can bear the designation of post scriptum, because their development starts after the constitutional period of architectural minimalist discourse. In XXI century theories, the problem of the definition of minimalism remains an important topic, approached by theorists through resolving it on the axis: Modernism - Minimal Art - Postmodernism - Minimalism in Architecture. With regard to this, the analyzed texts can be categorized in two groups: (1) texts of an affirmative nature and a historical-associative approach, in which minimalism is identified with anything that is simple and reduced, in an idealizing manner, relying mostly on existing hypotheses; (2) critically oriented texts, in which the authors reconsider the adequacy of the very term 'minimalism' in the context of architecture and take a metacritical attitude towards previous texts.

  17. Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction.

    Science.gov (United States)

    Nikolova, Mila; Ng, Michael K; Tam, Chi-Pan

    2010-12-01

    Nonconvex nonsmooth regularization has advantages over convex regularization for restoring images with neat edges. However, its practical interest used to be limited by the difficulty of the computational stage, which requires a nonconvex nonsmooth minimization. In this paper, we deal with nonconvex nonsmooth minimization methods for image restoration and reconstruction. Our theoretical results show that the solution of the nonconvex nonsmooth minimization problem is composed of constant regions surrounded by closed contours and neat edges. The main goal of this paper is to develop fast minimization algorithms to solve the nonconvex nonsmooth minimization problem. Our experimental results show the effectiveness and efficiency of the proposed algorithms.

  18. Minimal quantization and confinement

    International Nuclear Information System (INIS)

    Ilieva, N.P.; Kalinowskij, Yu.L.; Nguyen Suan Han; Pervushin, V.N.

    1987-01-01

    A ''minimal'' version of the Hamiltonian quantization based on the explicit solution of the Gauss equation and on the gauge-invariance principle is considered. Using the example of the one-particle Green function, we show that the requirement of gauge invariance leads to relativistic covariance of the theory and to a more proper definition of the Faddeev–Popov integral that does not depend on the gauge choice. The ''minimal'' quantization is applied to consider the gauge-ambiguity problem and a new topological mechanism of confinement

  19. Statistically Efficient Construction of α-Risk-Minimizing Portfolio

    Directory of Open Access Journals (Sweden)

    Hiroyuki Taniai

    2012-01-01

    Full Text Available We propose a semiparametrically efficient estimator for α-risk-minimizing portfolio weights. Based on the work of Bassett et al. (2004), an α-risk-minimizing portfolio optimization is formulated as a linear quantile regression problem. The quantile regression method uses a pseudolikelihood based on an asymmetric Laplace reference density, and asymptotic properties such as consistency and asymptotic normality are obtained. We apply the results of Hallin et al. (2008) to the problem of constructing α-risk-minimizing portfolios using residual signs and ranks and a general reference density. Monte Carlo simulations assess the performance of the proposed method. Empirical applications are also investigated.
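    The reformulation as a quantile regression can be sketched concretely. The fragment below is a toy illustration only (synthetic returns, α = 0.05, a generic Nelder–Mead solver, and no target-return constraint are all assumptions; the paper's actual estimator is semiparametric and sign-and-rank based): portfolio weights and the quantile level ξ are chosen jointly to minimize the empirical pinball (check-function) loss of the portfolio return.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    R = rng.normal(0.001, 0.02, size=(500, 4))       # hypothetical daily returns of 4 assets

    alpha = 0.05                                     # tail level of the alpha-risk

    def pinball(u, a):
        """Koenker's check function, averaged over the sample."""
        return np.mean(u * (a - (u < 0)))

    def objective(theta):
        w_free, xi = theta[:-1], theta[-1]
        w = np.append(w_free, 1.0 - w_free.sum())    # full-investment constraint on the weights
        return pinball(R @ w - xi, alpha)            # empirical alpha-risk criterion (up to scaling)

    theta0 = np.zeros(R.shape[1])                    # three free weights plus the quantile xi
    res = minimize(objective, theta0, method="Nelder-Mead",
                   options={"maxiter": 20000, "fatol": 1e-12, "xatol": 1e-10})
    w_free = res.x[:-1]
    weights = np.append(w_free, 1.0 - w_free.sum())
    print("alpha-risk-minimizing weights:", np.round(weights, 3))
    ```

    In the linear-programming formulation of Bassett et al. the same pinball objective is minimized exactly; the derivative-free solver here is only a convenient stand-in for a short, dependency-light sketch.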

  20. A Hybrid Metaheuristic Approach for Minimizing the Total Flow Time in A Flow Shop Sequence Dependent Group Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Antonio Costa

    2014-07-01

    Full Text Available Production processes in Cellular Manufacturing Systems (CMS) often involve groups of parts sharing the same technological requirements in terms of tooling and setup. The issue of scheduling such parts through a flow-shop production layout is known as the Flow-Shop Group Scheduling (FSGS) problem or, where setup times are sequence-dependent, the Flow-Shop Sequence-Dependent Group Scheduling (FSDGS) problem. This paper addresses the FSDGS issue, proposing a hybrid metaheuristic procedure integrating features from Genetic Algorithms (GAs) and Biased Random Sampling (BRS) search techniques with the aim of minimizing the total flow time, i.e., the sum of completion times of all jobs. A well-known benchmark of test cases, entailing problems with two, three, and six machines, is employed both for tuning the relevant parameters of the developed procedure and for assessing its performance against two metaheuristic algorithms recently presented in the literature. The obtained results and a properly arranged ANOVA analysis highlight the superiority of the proposed approach in tackling the scheduling problem under investigation.

  1. Probabilistic Properties of Rectilinear Steiner Minimal Trees

    Directory of Open Access Journals (Sweden)

    V. N. Salnikov

    2015-01-01

    Full Text Available This work concerns the properties of Steiner minimal trees for the Manhattan plane in the context of introducing a probability measure. This problem is important because exact algorithms to solve the Steiner problem are computationally expensive (NP-hard) and the solution (especially in the case of a big number of points to be connected) has a diversity of practical applications. That is why this work considers a possibility to rank the possible topologies of the minimal trees with respect to the probability of their usage. For this, the known facts about the structural properties of minimal trees for selected metrics have been analyzed to see their usefulness for the problem in question. For a small number of boundary (fixed) vertices, the paper offers a way to introduce a probability measure as a corollary of a proved theorem about some structural properties of the minimal trees. This work is intended to further the previous similar activity concerning the problem of searching for minimal fillings, and it is a door opener to the more general (complicated) task. The stated method demonstrates the possibility to reach the final result analytically, which gives a chance of its applicability to the case of a bigger number of boundary vertices (probably with the use of computer engineering). The introduced definition of an essential Steiner point allowed a considerable restriction of the ambiguity of the initial problem solution and, at the same time, a comparison of such an approach with more classical works in the field concerned. The paper also lists the main barriers of classical approaches preventing their use for the task of introducing a probability measure. In prospect, the application areas of the described method are expected to be wider both in terms of system enlargement (the number of boundary vertices) and in terms of other metric spaces (the Euclidean case is of especial interest). The main interest is to find the classes of topologies with significantly

  2. Minimally conscious state or cortically mediated state?

    Science.gov (United States)

    Naccache, Lionel

    2018-04-01

    Durable impairments of consciousness are currently classified in three main neurological categories: comatose state, vegetative state (also recently coined unresponsive wakefulness syndrome) and minimally conscious state. While the introduction of the minimally conscious state, in 2002, was major progress in helping clinicians recognize complex non-reflexive behaviours in the absence of functional communication, it raises several problems. The most important issue related to the minimally conscious state lies in its criteria: while the behavioural definition of the minimally conscious state lacks any direct evidence of the patient's conscious content or conscious state, it includes the adjective 'conscious'. I discuss this major problem in this review and propose a novel interpretation of the minimally conscious state: its criteria do not inform us about the potential residual consciousness of patients, but they do inform us with certainty about the presence of a cortically mediated state. Based on this constructive critical review, I suggest three proposals aiming at improving the way we describe the subjective and cognitive state of non-communicating patients. In particular, I present a tentative new classification of impairments of consciousness that combines behavioural evidence with functional brain imaging data, in order to probe directly and univocally residual conscious processes.

  3. Minimal Time Problem with Impulsive Controls

    Energy Technology Data Exchange (ETDEWEB)

    Kunisch, Karl, E-mail: karl.kunisch@uni-graz.at [University of Graz, Institute for Mathematics and Scientific Computing (Austria); Rao, Zhiping, E-mail: zhiping.rao@ricam.oeaw.ac.at [Austrian Academy of Sciences, Radon Institute of Computational and Applied Mathematics (Austria)

    2017-02-15

    Time optimal control problems for systems with impulsive controls are investigated. Sufficient conditions for the existence of time optimal controls are given. A dynamic programming principle is derived and Lipschitz continuity of an appropriately defined value functional is established. The value functional satisfies a Hamilton–Jacobi–Bellman equation in the viscosity sense. A numerical example for a rider-swing system is presented and it is shown that the reachable set is enlarged by allowing for impulsive controls, when compared to nonimpulsive controls.

  4. The Quest for Minimal Quotients for Probabilistic Automata

    DEFF Research Database (Denmark)

    Eisentraut, Christian; Hermanns, Holger; Schuster, Johann

    2013-01-01

    One of the prevailing ideas in applied concurrency theory and verification is the concept of automata minimization with respect to strong or weak bisimilarity. The minimal automata can be seen as canonical representations of the behaviour modulo the bisimilarity considered. Together with congruence results wrt. process algebraic operators, this can be exploited to alleviate the notorious state space explosion problem. In this paper, we aim at identifying minimal automata and canonical representations for concurrent probabilistic models. We present minimality and canonicity results for probabilistic automata wrt. strong and weak bisimilarity, together with polynomial time minimization algorithms.

  5. Classical strings and minimal surfaces

    International Nuclear Information System (INIS)

    Urbantke, H.

    1986-01-01

    Real Lorentzian forms of some complex or complexified Euclidean minimal surfaces are obtained as an application of H.A. Schwarz' solution to the initial value problem or a search for surfaces admitting a group of Poincare transformations. (Author)

  6. Distributed Submodular Minimization And Motion Coordination Over Discrete State Space

    KAUST Repository

    Jaleel, Hassan

    2017-09-21

    Submodular set-functions are extensively used in large-scale combinatorial optimization problems arising in complex networks and machine learning. While there has been significant interest in distributed formulations of convex optimization, distributed minimization of submodular functions has not received significant attention. Thus, our main contribution is a framework for minimizing submodular functions in a distributed manner. The proposed framework is based on the ideas of the Lovasz extension of submodular functions and distributed optimization of convex functions. The framework exploits a fundamental property of submodularity: the Lovasz extension of a submodular function is a convex function and can be computed efficiently. Moreover, a minimizer of a submodular function can be computed by computing the minimizer of its Lovasz extension. In the proposed framework, we employ a consensus-based distributed optimization algorithm to minimize set-valued submodular functions as well as general submodular functions defined over set products. We also identify distributed motion coordination in multiagent systems as a new application domain for submodular function minimization. For demonstrating the key ideas of the proposed framework, we select a complex setup of the capture the flag game, which offers a variety of challenges relevant to multiagent systems. We formulate the problem as a submodular minimization problem and verify through extensive simulations that the proposed framework results in feasible policies for the agents.
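    The central fact the framework relies on, that the Lovasz extension of a submodular function is convex and computable by sorting, is easy to sketch. The fragment below evaluates the Lovasz extension of an assumed graph-cut function (the graph and its weights are hypothetical); a projected subgradient scheme on this convex extension over [0,1]^n, followed by thresholding, would then yield a minimizer of the set function.

    ```python
    import numpy as np

    EDGES = {(0, 1): 1.0, (0, 2): 2.0, (1, 2): 0.5}    # hypothetical weighted graph on 3 nodes

    def F(S):
        """Graph-cut set function: total weight of edges crossing S (a standard submodular function)."""
        return sum(w for (u, v), w in EDGES.items() if (u in S) != (v in S))

    def lovasz_extension(F, x):
        """Evaluate the Lovasz extension of F at x in [0,1]^n via the sorted-threshold formula."""
        order = np.argsort(-x)                         # coordinates in decreasing order
        value, prev_set, prev_F = 0.0, set(), F(frozenset())
        for i in order:
            cur_set = prev_set | {int(i)}
            cur_F = F(frozenset(cur_set))
            value += x[i] * (cur_F - prev_F)           # marginal value of adding element i, weighted by x_i
            prev_set, prev_F = cur_set, cur_F
        return value

    print(lovasz_extension(F, np.array([1.0, 1.0, 0.0])))   # equals F({0, 1}) on an indicator vector
    print(lovasz_extension(F, np.array([0.8, 0.5, 0.1])))   # convex interpolation for fractional x
    ```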

  7. Adoption of waste minimization technology to benefit electroplaters

    Energy Technology Data Exchange (ETDEWEB)

    Ching, E.M.K.; Li, C.P.H.; Yu, C.M.K. [Hong Kong Productivity Council, Kowloon (Hong Kong)

    1996-12-31

    Because of increasingly stringent environmental legislation and enhanced environmental awareness, electroplaters in Hong Kong are paying more heed to protect the environment. To comply with the array of environmental controls, electroplaters can no longer rely solely on the end-of-pipe approach as a means for abating their pollution problems under the particular local industrial environment. The preferred approach is to adopt waste minimization measures that yield both economic and environmental benefits. This paper gives an overview of electroplating activities in Hong Kong, highlights their characteristics, and describes the pollution problems associated with conventional electroplating operations. The constraints of using pollution control measures to achieve regulatory compliance are also discussed. Examples and case studies are given on some low-cost waste minimization techniques readily available to electroplaters, including dragout minimization and water conservation techniques. Recommendations are given as to how electroplaters can adopt and exercise waste minimization techniques in their operations. 1 tab.

  8. Graphical approach for multiple values logic minimization

    Science.gov (United States)

    Awwal, Abdul Ahad S.; Iftekharuddin, Khan M.

    1999-03-01

    Multiple valued logic (MVL) is sought for designing high complexity, highly compact, parallel digital circuits. However, the practical realization of an MVL-based system is dependent on optimization of cost, which directly affects the optical setup. We propose a minimization technique for MVL logic optimization based on graphical visualization, such as a Karnaugh map. The proposed method is utilized to solve signed-digit binary and trinary logic minimization problems. The usefulness of the minimization technique is demonstrated for the optical implementation of MVL circuits.

  9. Minimal Coleman-Weinberg theory explains the diphoton excess

    DEFF Research Database (Denmark)

    Antipin, Oleg; Mojaza, Matin; Sannino, Francesco

    2016-01-01

    It is possible to delay the hierarchy problem, by replacing the standard Higgs-sector by the Coleman-Weinberg mechanism, and at the same time ensure perturbative naturalness through the so-called Veltman conditions. As we showed in a previous study, minimal models of this type require the introduction...

  10. A novel approach to error function minimization for feedforward neural networks

    International Nuclear Information System (INIS)

    Sinkus, R.

    1995-01-01

    Feedforward neural networks with error backpropagation are widely applied to pattern recognition. One general problem encountered with this type of neural network is the uncertainty of whether the minimization procedure has converged to a global minimum of the cost function. To overcome this problem, a novel approach to minimizing the error function is presented. It allows one to monitor the approach to the global minimum, and as an outcome several ambiguities related to the choice of free parameters of the minimization procedure are removed. (orig.)

  11. Matrix interdiction problem

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Feng [Los Alamos National Laboratory; Kasiviswanathan, Shiva [Los Alamos National Laboratory

    2010-01-01

    In the matrix interdiction problem, a real-valued matrix and an integer k are given. The objective is to remove k columns such that the sum over all rows of the maximum entry in each row is minimized. This combinatorial problem is closely related to the bipartite network interdiction problem, which can be applied to prioritize border checkpoints in order to minimize the probability that an adversary can successfully cross the border. After introducing the matrix interdiction problem, we prove that the problem is NP-hard, and even NP-hard to approximate within an additive n^γ factor for a fixed constant γ. We also present an algorithm for this problem that achieves an (n−k) multiplicative approximation ratio.
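    To make the objective concrete, here is a small sketch (the matrix and the greedy heuristic are illustrative assumptions; the paper's approximation algorithm is not reproduced here): the objective removes k columns and sums, over the rows, the maximum of the remaining entries, and the exhaustive search is only feasible for tiny instances since the problem is NP-hard.

    ```python
    import numpy as np
    from itertools import combinations

    def objective(M, removed):
        """Sum over rows of the maximum entry among the columns that were not removed."""
        keep = [j for j in range(M.shape[1]) if j not in removed]
        return M[:, keep].max(axis=1).sum()

    def exact_interdiction(M, k):
        """Exhaustive search over column subsets -- only viable for tiny matrices (NP-hard in general)."""
        return min(((objective(M, set(c)), set(c)) for c in combinations(range(M.shape[1]), k)),
                   key=lambda t: t[0])

    def greedy_interdiction(M, k):
        """Heuristic: repeatedly remove the column whose removal lowers the objective the most."""
        removed = set()
        for _ in range(k):
            best = min((j for j in range(M.shape[1]) if j not in removed),
                       key=lambda j: objective(M, removed | {j}))
            removed.add(best)
        return objective(M, removed), removed

    M = np.array([[3., 1., 4.],
                  [2., 7., 1.],
                  [5., 2., 6.]])
    print(exact_interdiction(M, 1))
    print(greedy_interdiction(M, 1))
    ```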

  12. Generalized monotonicity from global minimization in fourth-order ODEs

    NARCIS (Netherlands)

    M.A. Peletier (Mark)

    2000-01-01

    We consider solutions of the stationary Extended Fisher-Kolmogorov equation with general potential that are global minimizers of an associated variational problem. We present results that relate the global minimization property to a generalized concept of monotonicity of the solutions.

  13. Cognitive radio adaptation for power consumption minimization using biogeography-based optimization

    International Nuclear Information System (INIS)

    Qi Pei-Han; Zheng Shi-Lian; Yang Xiao-Niu; Zhao Zhi-Jin

    2016-01-01

    Adaptation is one of the key capabilities of cognitive radio, which focuses on how to adjust the radio parameters to optimize the system performance based on the knowledge of the radio environment and its capability and characteristics. In this paper, we consider the cognitive radio adaptation problem for power consumption minimization. The problem is formulated as a constrained power consumption minimization problem, and the biogeography-based optimization (BBO) is introduced to solve this optimization problem. A novel habitat suitability index (HSI) evaluation mechanism is proposed, in which both the power consumption minimization objective and the quality of services (QoS) constraints are taken into account. The results show that under different QoS requirement settings corresponding to different types of services, the algorithm can minimize power consumption while still maintaining the QoS requirements. Comparison with particle swarm optimization (PSO) and cat swarm optimization (CSO) reveals that BBO works better, especially at the early stage of the search, which means that the BBO is a better choice for real-time applications. (paper)

  14. A videoscope for use in minimally invasive periodontal surgery.

    Science.gov (United States)

    Harrel, Stephen K; Wilson, Thomas G; Rivera-Hidalgo, Francisco

    2013-09-01

    Minimally invasive periodontal procedures have been reported to produce excellent clinical results. Visualization during minimally invasive procedures has traditionally been obtained by the use of surgical telescopes, surgical microscopes, glass fibre endoscopes or a combination of these devices. All of these methods for visualization are less than fully satisfactory due to problems with access, magnification and blurred imaging. A videoscope for use with minimally invasive periodontal procedures has been developed to overcome some of the difficulties that exist with current visualization approaches. This videoscope incorporates a gas shielding technology that eliminates the problems of fogging and fouling of the optics of the videoscope that has previously prevented the successful application of endoscopic visualization to periodontal surgery. In addition, as part of the gas shielding technology the videoscope also includes a moveable retractor specifically adapted for minimally invasive surgery. The clinical use of the videoscope during minimally invasive periodontal surgery is demonstrated and discussed. The videoscope with gas shielding alleviates many of the difficulties associated with visualization during minimally invasive periodontal surgery. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. On minimizers of causal variational principles

    International Nuclear Information System (INIS)

    Schiefeneder, Daniela

    2011-01-01

    Causal variational principles are a class of nonlinear minimization problems which arise in a formulation of relativistic quantum theory referred to as the fermionic projector approach. This thesis is devoted to a numerical and analytic study of the minimizers of a general class of causal variational principles. We begin with a numerical investigation of variational principles for the fermionic projector in discrete space-time. It is shown that for sufficiently many space-time points, the minimizing fermionic projector induces non-trivial causal relations on the space-time points. We then generalize the setting by introducing a class of causal variational principles for measures on a compact manifold. In our main result we prove under general assumptions that the support of a minimizing measure is either completely timelike, or it is singular in the sense that its interior is empty. In the examples of the circle, the sphere and certain flag manifolds, the general results are supplemented by a more detailed analysis of the minimizers. (orig.)

  16. Energy-efficient approach to minimizing the energy consumption in an extended job-shop scheduling problem

    Science.gov (United States)

    Tang, Dunbing; Dai, Min

    2015-09-01

    Traditional production planning and scheduling problems consider performance indicators like time, cost and quality as optimization objectives in manufacturing processes. However, environmentally-friendly factors like the energy consumption of production have not been completely taken into consideration. Against this background, this paper addresses an approach to modify a given schedule generated by a production planning and scheduling system on a job shop floor, where machine tools can work at different cutting speeds. It can adjust the cutting speeds of the operations while keeping the original assignment and processing sequence of operations of each job fixed in order to obtain energy savings. First, the proposed approach, based on a mixed integer programming mathematical model, changes the total idle time of the given schedule to minimize energy consumption on the job shop floor while accepting the optimal solution of the scheduling objective, makespan. Then, a genetic-simulated annealing algorithm is used to explore the optimal solution due to the fact that the problem is strongly NP-hard. Finally, the effectiveness of the approach is verified on small- and large-size instances, respectively. The experimental results show that the approach can save 5%-10% of the average energy consumption while accepting the optimal solution of the makespan in small-size instances. In addition, the average maximum energy saving ratio can reach 13%. It can also save approximately 1%-4% of the average energy consumption and approximately 2.4% of the average maximum energy while accepting the near-optimal solution of the makespan in large-size instances. The proposed research provides an interesting point from which to explore energy-aware schedule optimization for a traditional production planning and scheduling problem.

  17. Flattening the inflaton potential beyond minimal gravity

    Directory of Open Access Journals (Sweden)

    Lee Hyun Min

    2018-01-01

    Full Text Available We review the status of the Starobinsky-like models for inflation beyond minimal gravity and discuss the unitarity problem due to the presence of a large non-minimal gravity coupling. We show that the induced gravity models allow for a self-consistent description of inflation and discuss the implications of the inflaton couplings to the Higgs field in the Standard Model.

  18. Contribution of Fuzzy Minimal Cost Flow Problem by Possibility Programming

    Directory of Open Access Journals (Sweden)

    S. Fanati Rashidi

    2010-06-01

    Full Text Available Using the concept of possibility proposed by Zadeh, Luhandjula [4,8] and Buckley [1] have proposed possibility programming. The formulation of Buckley results in nonlinear programming problems. Negi [6] re-formulated the approach of Buckley by the use of trapezoidal fuzzy numbers and reduced the problem to a fuzzy linear programming problem. Shih and Lee [7] used the Negi approach to solve a minimum cost flow problem with fuzzy costs and upper and lower bounds. In this paper we consider the general form of this problem, where all of the parameters and variables are fuzzy, and a model for solving it is proposed

  19. Non-minimally coupled tachyon and inflation

    International Nuclear Information System (INIS)

    Piao Yunsong; Huang Qingguo; Zhang Xinmin; Zhang Yuanzhong

    2003-01-01

    In this Letter, we consider a model of the tachyon with a non-minimal coupling to gravity and study its cosmological effects. Regarding inflation, we show that only for a specific coupling of the tachyon to gravity does this model satisfy observations and solve various problems which exist in the single and multi tachyon inflation models. But noting that in string theory the coupling coefficient of the tachyon to gravity is of order g_s, which in general is very small, we can hardly expect that the non-minimal coupling of the tachyon to gravity could provide a reasonable tachyon inflation scenario. Our work may be a meaningful attempt at the cosmological effect of a tachyon non-minimally coupled to gravity

  20. Contribution of Fuzzy Minimal Cost Flow Problem by Possibility Programming

    OpenAIRE

    S. Fanati Rashidi; A. A. Noora

    2010-01-01

    Using the concept of possibility proposed by Zadeh, Luhandjula [4,8] and Buckley [1] have proposed possibility programming. The formulation of Buckley results in nonlinear programming problems. Negi [6] re-formulated the approach of Buckley by the use of trapezoidal fuzzy numbers and reduced the problem to a fuzzy linear programming problem. Shih and Lee [7] used the Negi approach to solve a minimum cost flow problem with fuzzy costs and the upper and lower bound. ...

  1. Integer batch scheduling problems for a single-machine with simultaneous effect of learning and forgetting to minimize total actual flow time

    Directory of Open Access Journals (Sweden)

    Rinto Yusriski

    2015-09-01

    Full Text Available This research discusses an integer batch scheduling problem for a single machine with position-dependent batch processing times due to the simultaneous effect of learning and forgetting. The decision variables are the number of batches, the batch sizes, and the sequence of the resulting batches. The objective is to minimize the total actual flow time, defined as the total interval time between the arrival times of parts in all respective batches and their common due date. Two algorithms are proposed to solve the problem. The first is developed using the Integer Composition method, and it produces an optimal solution. Since the problem can be solved by the first algorithm only with a worst-case time complexity of O(n·2^(n-1)), this research proposes the second algorithm. It is a heuristic algorithm based on the Lagrange Relaxation method. Numerical experiments show that the heuristic algorithm gives outstanding results.

  2. A survey on classical minimal surface theory

    CERN Document Server

    Meeks, William H

    2012-01-01

    Meeks and Pérez present a survey of recent spectacular successes in classical minimal surface theory. The classification of minimal planar domains in three-dimensional Euclidean space provides the focus of the account. The proof of the classification depends on the work of many currently active leading mathematicians, thus making contact with much of the most important results in the field. Through the telling of the story of the classification of minimal planar domains, the general mathematician may catch a glimpse of the intrinsic beauty of this theory and the authors' perspective of what is happening at this historical moment in a very classical subject. This book includes an updated tour through some of the recent advances in the theory, such as Colding-Minicozzi theory, minimal laminations, the ordering theorem for the space of ends, conformal structure of minimal surfaces, minimal annular ends with infinite total curvature, the embedded Calabi-Yau problem, local pictures on the scale of curvature and t...

  3. Does self-help increase rates of help seeking for student mental health problems by minimizing stigma as a barrier?

    Science.gov (United States)

    Levin, Michael E; Krafft, Jennifer; Levin, Crissa

    2018-01-01

    This study examined whether self-help (books, websites, mobile apps) increases help seeking for mental health problems among college students by minimizing stigma as a barrier. A survey was conducted with 200 college students reporting elevated distress from February to April 2017. Intentions to use self-help were low, but a significant portion of students unwilling to see mental health professionals intended to use self-help. Greater self-stigma related to lower intentions to seek professional help, but was unrelated to seeking self-help. Similarly, students who only used self-help in the past reported higher self-stigma than those who sought professional treatment in the past. Although stigma was not a barrier for self-help, alternate barriers were identified. Offering self-help may increase rates of students receiving help for mental health problems, possibly by offering an alternative for students unwilling to seek in-person therapy due to stigma concerns.

  4. Minimizing waste (off-cuts) using a cutting stock model: The case of the one-dimensional cutting stock problem in the wood working industry

    Directory of Open Access Journals (Sweden)

    Gbemileke A. Ogunranti

    2016-09-01

    Full Text Available Purpose: The main objective of this study is to develop a model for solving the one-dimensional cutting stock problem in the wood working industry, and to develop a program for its implementation. Design/methodology/approach: This study adopts the pattern-oriented approach in the formulation of the cutting stock model. A pattern generation algorithm was developed and coded using the Visual Basic.NET language. The cutting stock model developed is a Linear Programming (LP) model constrained by numerous feasible patterns. An LP solver was integrated with the pattern generation program to develop a one-dimensional cutting stock application named GB Cutting Stock Program. Findings and Originality/value: Applying the model to a real-life optimization problem significantly reduces material waste (off-cuts) and minimizes the total stock used. The result yielded about 30.7% cost savings for company I when the total stock material used is compared with the former cutting plan. Also, to evaluate the efficiency of the application, the Case I problem was solved using two top commercial 1D cutting stock software packages. The results show that the GB program performs better when the related results are compared. Research limitations/implications: This study rounds up the linear programming solution for the number of patterns to cut. Practical implications: From a managerial perspective, implementing optimized cutting plans increases productivity by eliminating calculation errors and drastically reducing operator mistakes. Also, financial benefits that can annually amount to millions in cost savings can be achieved through significant material waste reduction. Originality/value: This paper developed a linear programming one-dimensional cutting stock model based on a pattern generation algorithm to minimize waste in the wood working industry. To implement the model, the algorithm was coded using VisualBasic.NET and a linear programming solver called lpsolvedll (dynamic
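    The two ingredients of the pattern-oriented approach, pattern generation and a covering model over the generated patterns, can be sketched briefly. The fragment below is not the GB Cutting Stock Program; the stock length, the demanded piece lengths, and the greedy covering step (used here in place of an LP solver to stay dependency-free) are assumptions for illustration.

    ```python
    STOCK_LEN = 240                       # hypothetical stock length
    DEMAND = {90: 12, 60: 20, 45: 17}     # required piece length -> number of pieces

    def gen_patterns(lengths, stock_len):
        """Recursively enumerate cutting patterns (piece counts) that fit on one stock item."""
        lengths = sorted(lengths, reverse=True)
        patterns = []
        def rec(i, remaining, current):
            if i == len(lengths):
                if any(current):
                    patterns.append(tuple(current))
                return
            for n in range(remaining // lengths[i], -1, -1):
                rec(i + 1, remaining - n * lengths[i], current + [n])
        rec(0, stock_len, [])
        return lengths, patterns

    def greedy_cover(lengths, patterns, demand):
        """Pick, one stock item at a time, the pattern covering the most outstanding demand."""
        remaining, plan = dict(demand), []
        while any(v > 0 for v in remaining.values()):
            best = max(patterns, key=lambda p: sum(min(n, remaining[L]) * L
                                                   for n, L in zip(p, lengths)))
            plan.append(best)
            for n, L in zip(best, lengths):
                remaining[L] = max(0, remaining[L] - n)
        return plan

    lengths, patterns = gen_patterns(DEMAND.keys(), STOCK_LEN)
    plan = greedy_cover(lengths, patterns, DEMAND)
    print(len(patterns), "feasible patterns,", len(plan), "stock items used")
    ```

    In a full implementation the generated patterns become the columns of the LP described in the record, and the LP (or its integer rounding) replaces the greedy covering loop.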

  5. Stabilization of a locally minimal forest

    Science.gov (United States)

    Ivanov, A. O.; Mel'nikova, A. E.; Tuzhilin, A. A.

    2014-03-01

    The method of partial stabilization of locally minimal networks, which was invented by Ivanov and Tuzhilin to construct examples of shortest trees with given topology, is developed. According to this method, boundary vertices of degree 2 are not added to all edges of the original locally minimal tree, but only to some of them. The problem of partial stabilization of locally minimal trees in a finite-dimensional Euclidean space is solved completely in the paper, that is, without any restrictions imposed on the number of edges remaining free of subdivision. A criterion for the realizability of such stabilization is established. In addition, the general problem of searching for the shortest forest connecting a finite family of boundary compact sets in an arbitrary metric space is formalized; it is shown that such forests exist for any family of compact sets if and only if for any finite subset of the ambient space there exists a shortest tree connecting it. The theory developed here allows us to establish further generalizations of the stabilization theorem both for arbitrary metric spaces and for metric spaces with some special properties. Bibliography: 10 titles.

  6. On the Metric-based Approximate Minimization of Markov Chains

    DEFF Research Database (Denmark)

    Bacci, Giovanni; Bacci, Giorgio; Larsen, Kim Guldstrand

    2018-01-01

    In this paper we address the approximate minimization problem of Markov Chains (MCs) from a behavioral metric-based perspective. Specifically, given a finite MC and a positive integer k, we are looking for an MC with at most k states having minimal distance to the original. The metric considered...

  7. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao

    2016-12-07

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.
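    The classical special case that MSM generalizes, solving a linear system by a Gauss–Seidel/SOR matrix splitting, is easy to sketch. The fragment below illustrates only that splitting idea on an assumed small system; it is not the composite-minimization algorithm of the paper.

    ```python
    import numpy as np

    def sor_solve(A, b, omega=1.25, iters=500, tol=1e-12):
        """Successive over-relaxation: split A and sweep through the unknowns in place."""
        x = np.zeros(len(b))
        for _ in range(iters):
            x_old = x.copy()
            for i in range(len(b)):
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                x[i] = (1.0 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
            if np.linalg.norm(x - x_old) < tol:
                break
        return x

    A = np.array([[4., 1., 0.],
                  [1., 3., 1.],
                  [0., 1., 5.]])           # symmetric positive definite, so SOR converges for 0 < omega < 2
    b = np.array([1., 2., 3.])
    print(sor_solve(A, b))
    print(np.linalg.solve(A, b))           # reference solution
    ```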

  8. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao; Zheng, Wei-Shi; Ghanem, Bernard

    2016-01-01

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.

  9. A perturbation technique for shield weight minimization

    International Nuclear Information System (INIS)

    Watkins, E.F.; Greenspan, E.

    1993-01-01

    The radiation shield optimization code SWAN (Ref. 1) was originally developed for minimizing the thickness of a shield that will meet a given dose (or another) constraint or for extremizing a performance parameter of interest (e.g., maximizing energy multiplication or minimizing dose) while maintaining the shield volume constraint. The SWAN optimization process proved to be highly effective (e.g., see Refs. 2, 3, and 4). The purpose of this work is to investigate the applicability of the SWAN methodology to problems in which the weight rather than the volume is the relevant shield characteristic. Such problems are encountered in shield design for space nuclear power systems. The investigation is carried out using SWAN with the coupled neutron-photon cross-section library FLUNG (Ref. 5)

  10. On the Metric-Based Approximate Minimization of Markov Chains

    DEFF Research Database (Denmark)

    Bacci, Giovanni; Bacci, Giorgio; Larsen, Kim Guldstrand

    2017-01-01

    We address the behavioral metric-based approximate minimization problem of Markov Chains (MCs), i.e., given a finite MC and a positive integer k, we are interested in finding a k-state MC of minimal distance to the original. By considering as metric the bisimilarity distance of Desharnais et al...

  11. A minimally-resolved immersed boundary model for reaction-diffusion problems

    OpenAIRE

    Pal Singh Bhalla, A; Griffith, BE; Patankar, NA; Donev, A

    2013-01-01

    We develop an immersed boundary approach to modeling reaction-diffusion processes in dispersions of reactive spherical particles, from the diffusion-limited to the reaction-limited setting. We represent each reactive particle with a minimally-resolved "blob" using many fewer degrees of freedom per particle than standard discretization approaches. More complicated or more highly resolved particle shapes can be built out of a collection of reactive blobs. We demonstrate numerically that the blo...

  12. Minimizing communication cost among distributed controllers in software defined networks

    Science.gov (United States)

    Arlimatti, Shivaleela; Elbreiki, Walid; Hassan, Suhaidi; Habbal, Adib; Elshaikh, Mohamed

    2016-08-01

    Software Defined Networking (SDN) is a new paradigm to increase the flexibility of today's networks by promising a programmable network. The fundamental idea behind this new architecture is to simplify network complexity by decoupling the control plane and data plane of the network devices, and by making the control plane centralized. Recently, controllers have been distributed to solve the problem of a single point of failure, and to increase scalability and flexibility during workload distribution. Even though controllers are flexible and scalable enough to accommodate a larger number of network switches, the problem of intercommunication cost between distributed controllers is still a challenging issue in the Software Defined Network environment. This paper aims to fill the gap by proposing a new mechanism, which minimizes intercommunication cost with a graph partitioning algorithm, an NP-hard problem. The methodology proposed in this paper is the swapping of network elements between controller domains to minimize communication cost by calculating the communication gain. The swapping of elements minimizes inter- and intra-communication cost among network domains. We validate our work with the OMNeT++ simulation environment tool. Simulation results show that the proposed mechanism minimizes the inter-domain communication cost among controllers compared to traditional distributed controllers.
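    The swap-with-gain idea can be sketched with a toy example (topology, traffic weights, two domains and a simple per-domain capacity are all assumptions for illustration, not the paper's mechanism): a switch is moved to the other controller domain whenever the move yields a positive communication gain, i.e. it lowers the total inter-domain traffic.

    ```python
    # hypothetical switch-to-switch traffic and an initial assignment of switches to two controller domains
    EDGES = {("s1", "s2"): 5, ("s2", "s3"): 4, ("s3", "s4"): 1, ("s1", "s4"): 1, ("s2", "s4"): 2}
    domain = {"s1": 0, "s2": 0, "s3": 1, "s4": 1}
    CAP = 3                                     # load constraint: at most 3 switches per domain

    def inter_domain_cost(assign):
        """Total traffic crossing controller domains -- the communication cost to be minimized."""
        return sum(w for (u, v), w in EDGES.items() if assign[u] != assign[v])

    def refine(assign, domains=(0, 1), rounds=10):
        """Greedily move a switch into the other domain whenever the communication gain is positive."""
        for _ in range(rounds):
            improved = False
            for sw in list(assign):
                for d in domains:
                    if d == assign[sw]:
                        continue
                    trial = dict(assign, **{sw: d})
                    if sum(1 for v in trial.values() if v == d) > CAP:
                        continue                                # respect the per-domain load limit
                    gain = inter_domain_cost(assign) - inter_domain_cost(trial)
                    if gain > 0:
                        assign, improved = trial, True
            if not improved:
                break
        return assign

    print("before:", inter_domain_cost(domain), "after:", inter_domain_cost(refine(domain)))
    ```

    The capacity limit stands in for the load-balancing constraints a real controller placement would impose; without some such constraint, moving every switch into a single domain would trivially drive the inter-domain cost to zero.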

  13. On a minimization of the eigenvalues of Schroedinger operator relatively domains

    International Nuclear Information System (INIS)

    Gasymov, Yu.S.; Niftiev, A.A.

    2001-01-01

    Minimization of the eigenvalues plays an important role in the spectral theory of operators. The problem of minimizing the eigenvalues of the Schroedinger operator with respect to domains is considered in this work. An algorithm, analogous to the conditional gradient method, is proposed for the numerical solution of this problem in the general case. The result is generalized to the case of a positive definite completely continuous operator

  14. Structural Identification Problem

    Directory of Open Access Journals (Sweden)

    Suvorov Aleksei

    2016-01-01

    Full Text Available The identification problem of existing structures through the Quasi-Newton and its modification, the Trust region algorithm, is discussed. For structural problems which can be represented by means of mathematical modelling in a finite element code, the discussed method is extremely useful. The nonlinear minimization problem of the L2 norm for structures with linear elastic behaviour is solved by using the Optimization Toolbox of Matlab. The direct and inverse procedures for the composition of the desired function to minimize are illustrated for a spatial 3D truss structure as well as for a problem of plane finite elements. The truss identification problem is solved with 2 and 3 unknown parameters in order to compare the computational effort and for graphical purposes. The particular commands of the Matlab codes are presented in this paper.
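    A toy version of such an identification problem can be sketched without the Matlab toolboxes. The fragment below assumes a two-degree-of-freedom spring model with synthetic measurements (all values are illustrative) and minimizes the L2 norm of the residual with a hand-written Gauss–Newton iteration, which is in the same family as the Quasi-Newton and trust-region solvers mentioned in the record.

    ```python
    import numpy as np

    def model_disp(k, f):
        """Static displacements of a 2-DOF spring chain with stiffnesses k = (k1, k2)."""
        k1, k2 = k
        K = np.array([[k1 + k2, -k2],
                      [-k2,      k2]])
        return np.linalg.solve(K, f)

    f = np.array([0.0, 10.0])                       # known load case
    k_true = np.array([200.0, 120.0])
    u_meas = model_disp(k_true, f)                  # synthetic "measured" displacements

    def residual(k):
        return model_disp(k, f) - u_meas            # identification minimizes the L2 norm of this vector

    def gauss_newton(k, iters=20, eps=1e-6):
        for _ in range(iters):
            r = residual(k)
            # finite-difference Jacobian of the residual with respect to the unknown stiffnesses
            J = np.column_stack([(residual(k + eps * np.eye(2)[j]) - r) / eps for j in range(2)])
            k = k - np.linalg.solve(J.T @ J, J.T @ r)
        return k

    print(gauss_newton(np.array([100.0, 100.0])))   # recovers approximately (200, 120)
    ```

    The direct procedure here is the finite element style forward model (model_disp); the inverse procedure is the least-squares fit of the stiffness parameters to the measured response.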

  15. Minimal conformal model

    Energy Technology Data Exchange (ETDEWEB)

    Helmboldt, Alexander; Humbert, Pascal; Lindner, Manfred; Smirnov, Juri [Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany)

    2016-07-01

    The gauge hierarchy problem is one of the crucial drawbacks of the standard model of particle physics (SM) and thus has triggered model building over the last decades. Its most famous solution is the introduction of low-scale supersymmetry. However, without any significant signs of supersymmetric particles at the LHC to date, it makes sense to devise alternative mechanisms to remedy the hierarchy problem. One such mechanism is based on classically scale-invariant extensions of the SM, in which both the electroweak symmetry and the (anomalous) scale symmetry are broken radiatively via the Coleman-Weinberg mechanism. Apart from giving an introduction to classically scale-invariant models, the talk presents our results on obtaining a theoretically consistent minimal extension of the SM, which reproduces the correct low-scale phenomenology.

  16. An alternating minimization method for blind deconvolution from Poisson data

    International Nuclear Information System (INIS)

    Prato, Marco; La Camera, Andrea; Bonettini, Silvia

    2014-01-01

    Blind deconvolution is a particularly challenging inverse problem since information on both the desired target and the acquisition system have to be inferred from the measured data. When the collected data are affected by Poisson noise, this problem is typically addressed by the minimization of the Kullback-Leibler divergence, in which the unknowns are sought in particular feasible sets depending on the a priori information provided by the specific application. If these sets are separated, then the resulting constrained minimization problem can be addressed with an inexact alternating strategy. In this paper we apply this optimization tool to the problem of reconstructing astronomical images from adaptive optics systems, and we show that the proposed approach succeeds in providing very good results in the blind deconvolution of nondense stellar clusters

  17. Basic Minimal Dominating Functions of Quadratic Residue Cayley ...

    African Journals Online (AJOL)

    Domination arises in the study of numerous facility location problems, where the number of facilities is fixed and one attempts to minimize the number of facilities necessary so that everyone is serviced. This problem reduces to finding a minimum dominating set in the graph corresponding to this network. In this paper we study ...

  18. A convergent overlapping domain decomposition method for total variation minimization

    KAUST Repository

    Fornasier, Massimo

    2010-06-22

    In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. To our knowledge, this is the first successful attempt of addressing such a strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems, respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles. © 2010 Springer-Verlag.

  19. [Minimal emotional dysfunction and first impression formation in personality disorders].

    Science.gov (United States)

    Linden, M; Vilain, M

    2011-01-01

    "Minimal cerebral dysfunctions" are isolated impairments of basic mental functions, which are elements of complex functions like speech. The best described are cognitive dysfunctions such as reading and writing problems, dyscalculia, attention deficits, but also motor dysfunctions such as problems with articulation, hyperactivity or impulsivity. Personality disorders can be characterized by isolated emotional dysfunctions in relation to emotional adequacy, intensity and responsivity. For example, paranoid personality disorders can be characterized by continuous and inadequate distrust, as a disorder of emotional adequacy. Schizoid personality disorders can be characterized by low expressive emotionality, as a disorder of effect intensity, or dissocial personality disorders can be characterized by emotional non-responsivity. Minimal emotional dysfunctions cause interactional misunderstandings because of the psychology of "first impression formation". Studies have shown that in 100 ms persons build up complex and lasting emotional judgements about other persons. Therefore, minimal emotional dysfunctions result in interactional problems and adjustment disorders and in corresponding cognitive schemata.From the concept of minimal emotional dysfunctions specific psychotherapeutic interventions in respect to the patient-therapist relationship, the diagnostic process, the clarification of emotions and reality testing, and especially an understanding of personality disorders as impairment and "selection, optimization, and compensation" as a way of coping can be derived.

  20. KCUT, code to generate minimal cut sets for fault trees

    International Nuclear Information System (INIS)

    Han, Sang Hoon

    2008-01-01

    1 - Description of program or function: KCUT is a software tool to generate minimal cut sets for fault trees. 2 - Methods: Expand a fault tree into cut sets and delete non-minimal cut sets. 3 - Restrictions on the complexity of the problem: Size and complexity of the fault tree
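    The method described in item 2 can be sketched directly. The following Python fragment is not KCUT; it is a minimal illustration on an assumed small fault tree: AND gates take the cross product of their children's cut sets, OR gates take the union, and any cut set that strictly contains another one is deleted as non-minimal.

    ```python
    from itertools import product

    # hypothetical fault tree: TOP = OR(A, AND(A, B), AND(B, C))
    TREE = {
        "TOP": ("OR", ["A", "G1", "G2"]),
        "G1": ("AND", ["A", "B"]),
        "G2": ("AND", ["B", "C"]),
    }

    def cut_sets(node):
        """Expand a gate into its (possibly non-minimal) cut sets."""
        if node not in TREE:                            # leaf: a basic event
            return [frozenset([node])]
        kind, children = TREE[node]
        child_sets = [cut_sets(c) for c in children]
        if kind == "OR":                                # OR: union of the children's cut set lists
            return [cs for sets in child_sets for cs in sets]
        combined = []                                   # AND: cross product of the children's cut sets
        for combo in product(*child_sets):
            combined.append(frozenset().union(*combo))
        return combined

    def minimize_cut_sets(sets):
        """Delete every cut set that strictly contains another cut set."""
        return [s for s in sets if not any(t < s for t in sets)]

    mcs = minimize_cut_sets(cut_sets("TOP"))
    print(sorted(sorted(s) for s in mcs))               # [['A'], ['B', 'C']]
    ```

    The restriction noted in item 3 shows up directly in this sketch: the AND-gate cross product is what makes the number of intermediate cut sets grow with the size and complexity of the tree.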

  1. Local Risk-Minimization for Defaultable Claims with Recovery Process

    International Nuclear Information System (INIS)

    Biagini, Francesca; Cretarola, Alessandra

    2012-01-01

    We study the local risk-minimization approach for defaultable claims with random recovery at default time, seen as payment streams on the random interval [0,τ∧T], where T denotes the fixed time-horizon. We find the pseudo-locally risk-minimizing strategy in the case when the agent information takes into account the possibility of a default event (local risk-minimization with G-strategies) and we provide an application in the case of a corporate bond. We also discuss the problem of finding a pseudo-locally risk-minimizing strategy if we suppose the agent obtains her information only by observing the non-defaultable assets.

  2. Stabilization of a locally minimal forest

    International Nuclear Information System (INIS)

    Ivanov, A O; Mel'nikova, A E; Tuzhilin, A A

    2014-01-01

    The method of partial stabilization of locally minimal networks, which was invented by Ivanov and Tuzhilin to construct examples of shortest trees with given topology, is developed. According to this method, boundary vertices of degree 2 are not added to all edges of the original locally minimal tree, but only to some of them. The problem of partial stabilization of locally minimal trees in a finite-dimensional Euclidean space is solved completely in the paper, that is, without any restrictions imposed on the number of edges remaining free of subdivision. A criterion for the realizability of such stabilization is established. In addition, the general problem of searching for the shortest forest connecting a finite family of boundary compact sets in an arbitrary metric space is formalized; it is shown that such forests exist for any family of compact sets if and only if for any finite subset of the ambient space there exists a shortest tree connecting it. The theory developed here allows us to establish further generalizations of the stabilization theorem both for arbitrary metric spaces and for metric spaces with some special properties. Bibliography: 10 titles

  3. Generalized Bilinear Differential Operators, Binary Bell Polynomials, and Exact Periodic Wave Solution of Boiti-Leon-Manna-Pempinelli Equation

    Directory of Open Access Journals (Sweden)

    Huanhe Dong

    2014-01-01

    Full Text Available We introduce how to obtain the bilinear form and the exact periodic wave solutions of a class of (2+1)-dimensional nonlinear integrable differential equations directly and quickly with the help of the generalized D_p-operators, binary Bell polynomials, and a general Riemann theta function in terms of the Hirota method. As an application, we obtain the periodic wave solution of the BLMP equation, and it can be reduced to a soliton solution via asymptotic analysis when the value of p is 5.

  4. Optimum distributed generation placement with voltage sag effect minimization

    International Nuclear Information System (INIS)

    Biswas, Soma; Goswami, Swapan Kumar; Chatterjee, Amitava

    2012-01-01

    Highlights: ► A new optimal distributed generation placement algorithm is proposed. ► Optimal number, sizes and locations of the DGs are determined. ► Technical factors like loss, voltage sag problem are minimized. ► The percentage savings are optimized. - Abstract: The present paper proposes a new formulation for the optimum distributed generator (DG) placement problem which considers a hybrid combination of technical factors, like minimization of the line loss, reduction in the voltage sag problem, etc., and economical factors, like installation and maintenance cost of the DGs. The new formulation proposed is inspired by the idea that the optimum placement of the DGs can help in reducing and mitigating voltage dips in low voltage distribution networks. The problem is configured as a multi-objective, constrained optimization problem, where the optimal number of DGs, along with their sizes and bus locations, are simultaneously obtained. This problem has been solved using genetic algorithm, a traditionally popular stochastic optimization algorithm. A few benchmark systems radial and networked (like 34-bus radial distribution system, 30 bus loop distribution system and IEEE 14 bus system) are considered as the case study where the effectiveness of the proposed algorithm is aptly demonstrated.

  5. Controllers with Minimal Observation Power (Application to Timed Systems)

    DEFF Research Database (Denmark)

    Bulychev, Petr; Cassez, Franck; David, Alexandre

    2012-01-01

    We consider the problem of controller synthesis under imperfect information in a setting where there is a set of available observable predicates equipped with a cost function. The problem that we address is the computation of a subset of predicates sufficient for control and whose cost is minimal...

  6. A minimal dissipation type-based classification in irreversible thermodynamics and microeconomics

    Science.gov (United States)

    Tsirlin, A. M.; Kazakov, V.; Kolinko, N. A.

    2003-10-01

    We formulate the problem of finding classes of kinetic dependencies in irreversible thermodynamic and microeconomic systems for which minimal dissipation processes belong to the same type. We show that this problem is an inverse optimal control problem and solve it. The commonality of this problem in irreversible thermodynamics and microeconomics is emphasized.

  7. Hydrogen atom in momentum space with a minimal length

    International Nuclear Information System (INIS)

    Bouaziz, Djamil; Ferkous, Nourredine

    2010-01-01

    A momentum representation treatment of the hydrogen atom problem with a generalized uncertainty relation, which leads to a minimal length (ΔX_i)_min = ħ√(3β + β′), is presented. We show that the distance squared operator can be factorized in the case β′ = 2β. We analytically solve the s-wave bound-state equation. The leading correction to the energy spectrum caused by the minimal length depends on √β. An upper bound for the minimal length is found to be about 10^-9 fm.

  8. A Singlet Extension of the Minimal Supersymmetric Standard Model: Towards a More Natural Solution to the Little Hierarchy Problem

    Energy Technology Data Exchange (ETDEWEB)

    de la Puente, Alejandro [Univ. of Notre Dame, IN (United States)

    2012-05-01

    In this work, I present a generalization of the Next-to-Minimal Supersymmetric Standard Model (NMSSM), with an explicit μ-term and a supersymmetric mass for the singlet superfield, as a route to alleviating the little hierarchy problem of the Minimal Supersymmetric Standard Model (MSSM). I analyze two limiting cases of the model, characterized by the size of the supersymmetric mass for the singlet superfield. The small and large limits of this mass parameter are studied, and I find that I can generate masses for the lightest neutral Higgs boson up to 140 GeV with top squarks below the TeV scale, all couplings perturbative up to the gauge unification scale, and with no need to fine-tune parameters in the scalar potential. This model, which I call the S-MSSM, is also embedded in a gauge-mediated supersymmetry breaking scheme. I find that even with a minimal embedding of the S-MSSM into a gauge-mediated scheme, the mass for the lightest Higgs boson can easily be above 114 GeV, while keeping the top squarks below the TeV scale. Furthermore, I also study the forward-backward asymmetry in the tt̄ system within the framework of the S-MSSM. For this purpose, non-renormalizable couplings between the first and third generations of quarks and scalars are introduced. The two limiting cases of the S-MSSM, characterized by the size of the supersymmetric mass for the singlet superfield, are analyzed, and I find that in the region of small singlet supersymmetric mass a large asymmetry can be obtained while remaining consistent with constraints arising from flavor physics, quark masses and top quark decays.

  9. On the convergence of nonconvex minimization methods for image recovery.

    Science.gov (United States)

    Xiao, Jin; Ng, Michael Kwok-Po; Yang, Yu-Fei

    2015-05-01

    Nonconvex nonsmooth regularization method has been shown to be effective for restoring images with neat edges. Fast alternating minimization schemes have also been proposed and developed to solve the nonconvex nonsmooth minimization problem. The main contribution of this paper is to show the convergence of these alternating minimization schemes, based on the Kurdyka-Łojasiewicz property. In particular, we show that the iterates generated by the alternating minimization scheme converge to a critical point of the nonconvex nonsmooth objective function. We also extend the analysis to a nonconvex nonsmooth regularization model with box constraints, and obtain similar convergence results for the related minimization algorithm. Numerical examples are given to illustrate our convergence analysis.

  10. Wastewater minimization in multipurpose batch plants with a regeneration unit: multiple contaminants

    CSIR Research Space (South Africa)

    Adekola, O

    2011-12-01

    Full Text Available Wastewater minimization can be achieved by employing water reuse opportunities. This paper presents a methodology to address the problem of wastewater minimization by extending the concept of water reuse to include a wastewater regenerator...

  11. Recent developments in the DOE Waste Minimization Pollution Prevention Program

    International Nuclear Information System (INIS)

    Hancock, J.K.

    1993-01-01

    The U.S. Department of Energy (DOE) is involved in a wide variety of research and development, remediation, and production activities at more than 100 sites throughout the United States. The wastes generated cover a diverse spectrum of sanitary, hazardous, and radioactive waste streams, including typical office environments, power generation facilities, laboratories, remediation sites, production facilities, and defense facilities. The DOE's initial waste minimization activities pre-date the Pollution Prevention Act of 1990 and focused on the defense program. Little emphasis was placed on nonproduction activities. In 1991 the Office of Waste Management Operations developed the Waste Minimization Division with the intention of coordinating and expanding the waste minimization pollution prevention approach to the entire complex. The diverse nature of DOE activities has led to several unique problems in addressing the needs of waste minimization and pollution prevention. The first problem is developing a program that addresses the geographical and institutional hurdles that exist; the second is developing a monitoring and reporting mechanism that one can use to assess the overall performance of the program

  12. A majorization-minimization approach to design of power distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Jason K [Los Alamos National Laboratory; Chertkov, Michael [Los Alamos National Laboratory

    2010-01-01

    We consider optimization approaches to design cost-effective electrical networks for power distribution. This involves a trade-off between minimizing the power loss due to resistive heating of the lines and minimizing the construction cost (modeled by a linear cost in the number of lines plus a linear cost on the conductance of each line). We begin with a convex optimization method based on the paper 'Minimizing Effective Resistance of a Graph' [Ghosh, Boyd & Saberi]. However, this does not address the Alternating Current (AC) realm and the combinatorial aspect of adding/removing lines of the network. Hence, we consider a non-convex continuation method that imposes a concave cost of the conductance of each line thereby favoring sparser solutions. By varying a parameter of this penalty we extrapolate from the convex problem (with non-sparse solutions) to the combinatorial problem (with sparse solutions). This is used as a heuristic to find good solutions (local minima) of the non-convex problem. To perform the necessary non-convex optimization steps, we use the majorization-minimization algorithm that performs a sequence of convex optimizations obtained by iteratively linearizing the concave part of the objective. A number of examples are presented which suggest that the overall method is a good heuristic for network design. We also consider how to obtain sparse networks that are still robust against failures of lines and/or generators.
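
    The abstract only describes the majorization-minimization step in words. The sketch below (not the authors' code) illustrates the idea on a generic sparse design problem: a concave penalty on each variable is linearized at the current iterate, and the resulting convex surrogate is minimized. The matrix A, the penalty weight lam and the smoothing constant eps are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def mm_sparse_design(A, b, lam=0.5, eps=1e-3, iters=20):
        """Majorization-minimization for min_{x >= 0} ||Ax - b||^2 + lam * sum(log(eps + x)).
        The concave log penalty (which favors sparse designs) is linearized at the
        current iterate; the resulting convex surrogate is solved with L-BFGS-B."""
        n = A.shape[1]
        x = np.ones(n)                                   # start from a dense, non-sparse design
        for _ in range(iters):
            w = lam / (eps + x)                          # slope of the concave part at the current iterate
            def surrogate(z):
                return np.sum((A @ z - b) ** 2) + w @ z
            def grad(z):
                return 2.0 * A.T @ (A @ z - b) + w
            x = minimize(surrogate, x, jac=grad, bounds=[(0.0, None)] * n).x
        return x

    rng = np.random.default_rng(1)
    A = rng.normal(size=(30, 10))
    x_true = np.zeros(10); x_true[[2, 7]] = [1.5, 0.8]
    b = A @ x_true
    print(np.round(mm_sparse_design(A, b), 3))           # most entries are driven to (near) zero
    ```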

  13. Radiological terrorism: problems of prevention and minimization of consequences

    International Nuclear Information System (INIS)

    Bolshov, Leonid; Arutyunyan, Rafael; Pavlovski, Oleg

    2008-01-01

    This paper gives a review of the key factors defining the extent of potential hazard caused by ionizing radiation sources for the purpose of radiological terrorism and the key areas of activities in the field of counteractions and minimization of possible consequences of such acts. The importance of carrying out system analysis of the practical experience of response to radiation accidents and elimination of their consequences is emphasized. The need to develop scientific approaches, methods and software to realistically analyze possible scenarios and predict the scale of consequences of the acts of terrorism involving radioactive materials is pointed out. The importance of improvement of radioactive materials accounting, control and monitoring systems, especially in non-nuclear areas, as well as improvement of the legal and regulatory framework governing all aspects of radiation source application in the national economy is of particular importance. (author)

  14. Properties and solution methods for large location-allocation problems

    DEFF Research Database (Denmark)

    Juel, Henrik; Love, Robert F.

    1982-01-01

    Location-allocation with $l_p$ distances is studied. It is shown that this structure can be expressed as a concave minimization programming problem. Since concave minimization algorithms are not yet well developed, five solution methods are developed which utilize the special properties of the location-allocation problem. Using the rectilinear distance measure, two of these algorithms achieved optimal solutions in all 102 test problems for which solutions were known. The algorithms can be applied to much larger problems than any existing exact methods.

  15. A novel particle swarm optimization algorithm for permutation flow-shop scheduling to minimize makespan

    International Nuclear Information System (INIS)

    Lian Zhigang; Gu Xingsheng; Jiao Bin

    2008-01-01

    It is well known that the flow-shop scheduling problem (FSSP) is a branch of production scheduling and is NP-hard. Many different approaches have been applied to permutation flow-shop scheduling to minimize makespan, but current algorithms cannot guarantee optimality even for moderate-size problems. Several studies apply PSO to continuous optimization problems, but papers applying PSO to discrete scheduling problems are few. In this paper, according to the discrete characteristics of the FSSP, a novel particle swarm optimization (NPSO) algorithm is presented and successfully applied to permutation flow-shop scheduling to minimize makespan. Computational experiments on seven representative instances (Taillard) based on practical data were carried out, and a comparison of the NPSO with a standard GA shows that the NPSO is clearly more efficacious than the standard GA for the FSSP with makespan minimization.
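
    The NPSO itself is not reproduced here, but any heuristic for the permutation flow-shop problem needs the makespan objective it optimizes. The following is a minimal sketch, assuming a processing-time matrix p[job][machine]; it is not taken from the paper.

    ```python
    def fssp_makespan(perm, p):
        """Makespan of a permutation flow-shop schedule.
        perm: job visiting order; p[j][m]: processing time of job j on machine m.
        Every machine processes the jobs in the same order (permutation flow shop)."""
        n_machines = len(p[0])
        finish = [0.0] * n_machines          # completion time of the last scheduled job on each machine
        for j in perm:
            for m in range(n_machines):
                start = max(finish[m], finish[m - 1] if m > 0 else 0.0)
                finish[m] = start + p[j][m]
        return finish[-1]

    # toy instance: 4 jobs, 3 machines; evaluate two candidate job orders
    p = [[3, 2, 4], [2, 5, 1], [4, 1, 3], [2, 3, 2]]
    print(fssp_makespan([0, 1, 2, 3], p), fssp_makespan([1, 3, 0, 2], p))
    ```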

  16. L∞ Variational Problems with Running Costs and Constraints

    International Nuclear Information System (INIS)

    Aronsson, G.; Barron, E. N.

    2012-01-01

    Various approaches are used to derive the Aronsson–Euler equations for L∞ calculus of variations problems with constraints. The problems considered involve holonomic, nonholonomic, isoperimetric, and isosupremic constraints on the minimizer. In addition, we derive the Aronsson–Euler equation for the basic L∞ problem with a running cost and then consider properties of an absolute minimizer. Many open problems are introduced for further study.

  17. On the Support of Minimizers of Causal Variational Principles

    Science.gov (United States)

    Finster, Felix; Schiefeneder, Daniela

    2013-11-01

    A class of causal variational principles on a compact manifold is introduced and analyzed both numerically and analytically. It is proved under general assumptions that the support of a minimizing measure is either completely timelike, or it is singular in the sense that its interior is empty. In the examples of the circle, the sphere and certain flag manifolds, the general results are supplemented by a more detailed and explicit analysis of the minimizers. On the sphere, we get a connection to packing problems and the Tammes distribution. Moreover, the minimal action is estimated from above and below.

  18. The minimally tuned minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Essig, Rouven; Fortin, Jean-Francois

    2008-01-01

    The regions in the Minimal Supersymmetric Standard Model with the minimal amount of fine-tuning of electroweak symmetry breaking are presented for general messenger scale. No a priori relations among the soft supersymmetry breaking parameters are assumed and fine-tuning is minimized with respect to all the important parameters which affect electroweak symmetry breaking. The superpartner spectra in the minimally tuned region of parameter space are quite distinctive with large stop mixing at the low scale and negative squark soft masses at the high scale. The minimal amount of tuning increases enormously for a Higgs mass beyond roughly 120 GeV

  19. Optimizing Processes to Minimize Risk

    Science.gov (United States)

    Loyd, David

    2017-01-01

    NASA, like the other hazardous industries, has suffered very catastrophic losses. Human error will likely never be completely eliminated as a factor in our failures. When you can't eliminate risk, focus on mitigating the worst consequences and recovering operations. Bolstering processes to emphasize the role of integration and problem solving is key to success. Building an effective Safety Culture bolsters skill-based performance that minimizes risk and encourages successful engagement.

  20. The minimal non-minimal standard model

    International Nuclear Information System (INIS)

    Bij, J.J. van der

    2006-01-01

    In this Letter I discuss a class of extensions of the standard model that have a minimal number of possible parameters, but can in principle explain dark matter and inflation. It is pointed out that the so-called new minimal standard model contains a large number of parameters that can be put to zero, without affecting the renormalizability of the model. With the extra restrictions one might call it the minimal (new) non-minimal standard model (MNMSM). A few hidden discrete variables are present. It is argued that the inflaton should be higher-dimensional. Experimental consequences for the LHC and the ILC are discussed

  1. Waste minimization and control: a review of problems and available technologies

    International Nuclear Information System (INIS)

    Butt, W.M.

    1999-01-01

    A country's environmental problems are affected by the level of its economic development, the availability of national resources, and the socio-economic level of its population. Poverty presents special problems for a heavily populated country with limited resources. Environmental problems in Pakistan have become serious and should no longer be neglected. These relate to air and water pollution, particularly in metropolitan and industrial zones; degradation of common property resources, which affects the poor adversely through the degradation of their life-support systems; threats to biodiversity; and inadequate solid waste disposal and sanitation, with consequent adverse impacts on health, infant mortality and the birth rate. These problems impose a serious cost on society, although it is difficult to gauge the full extent of these costs. (author)

  2. On minimizing the maximum broadcast decoding delay for instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.

    2014-09-01

    In this paper, we consider the problem of minimizing the maximum broadcast decoding delay experienced by all the receivers of generalized instantly decodable network coding (IDNC). Unlike the sum decoding delay, the maximum decoding delay as a definition of delay for IDNC allows a more equitable distribution of the delays between the different receivers and thus a better Quality of Service (QoS). In order to solve this problem, we first derive the expressions for the probability distributions of maximum decoding delay increments. Given these expressions, we formulate the problem as a maximum weight clique problem in the IDNC graph. Although this problem is known to be NP-hard, we design a greedy algorithm to perform effective packet selection. Through extensive simulations, we compare the sum decoding delay and the max decoding delay experienced when applying the policies to minimize the sum decoding delay and our policy to reduce the max decoding delay. Simulation results show that our policy achieves a good balance among all the delay aspects in all situations, and even outperforms the sum decoding delay policy in minimizing the sum decoding delay when the channel conditions become harsher. They also show that our definition of delay significantly improves the number of served receivers when they are subject to strict delay constraints.
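
    As a rough illustration of the packet-selection step described above, the sketch below runs a generic greedy heuristic for the maximum weight clique problem; the vertex weights, adjacency structure and toy graph are illustrative assumptions, not the IDNC graph construction from the paper.

    ```python
    def greedy_max_weight_clique(weights, adj):
        """Greedy heuristic for the maximum weight clique problem.
        weights: dict vertex -> weight; adj: dict vertex -> set of neighbours.
        Repeatedly adds the heaviest vertex adjacent to every vertex already in
        the clique (the NP-hard problem is only approximated, as in the paper)."""
        clique = set()
        candidates = set(weights)
        while candidates:
            v = max(candidates, key=lambda u: weights[u])
            clique.add(v)
            candidates = {u for u in candidates if u in adj[v]} - {v}
        return clique

    # toy graph: vertices stand for candidate packet/receiver combinations
    weights = {"a": 3.0, "b": 2.5, "c": 2.0, "d": 1.0}
    adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
    print(greedy_max_weight_clique(weights, adj))   # selects the clique {'a', 'b', 'c'}
    ```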

  3. Embeddings of planar graphs that minimize the number of long face cycles

    NARCIS (Netherlands)

    Woeginger, Gerhard

    2002-01-01

    We consider the problem of finding embeddings of planar graphs that minimize the number of long-face cycles. We prove that for any k≥4, it is NP-complete to find an embedding that minimizes the number of face cycles of length at least k.

  4. Minimalism

    CERN Document Server

    Obendorf, Hartmut

    2009-01-01

    The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.

  5. Minimization and parameter estimation for seminorm regularization models with I-divergence constraints

    International Nuclear Information System (INIS)

    Teuber, T; Steidl, G; Chan, R H

    2013-01-01

    In this paper, we analyze the minimization of seminorms ‖L·‖ on ℝⁿ under the constraint of a bounded I-divergence D(b, H·) for rather general linear operators H and L. The I-divergence is also known as Kullback–Leibler divergence and appears in many models in imaging science, in particular when dealing with Poisson data but also in the case of multiplicative Gamma noise. Often H represents, e.g., a linear blur operator and L is some discrete derivative or frame analysis operator. A central part of this paper consists in proving relations between the parameters of I-divergence constrained and penalized problems. To solve the I-divergence constrained problem, we consider various first-order primal–dual algorithms which reduce the problem to the solution of certain proximal minimization problems in each iteration step. One of these proximation problems is an I-divergence constrained least-squares problem which can be solved based on Morozov’s discrepancy principle by a Newton method. We prove that these algorithms produce not only a sequence of vectors which converges to a minimizer of the constrained problem but also a sequence of parameters which converges to a regularization parameter so that the corresponding penalized problem has the same solution. Furthermore, we derive a rule for automatically setting the constraint parameter for data corrupted by multiplicative Gamma noise. The performance of the various algorithms is finally demonstrated for different image restoration tasks both for images corrupted by Poisson noise and multiplicative Gamma noise. (paper)
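
    For readers unfamiliar with the I-divergence used as the data-fidelity constraint above, the small helper below (an illustrative sketch, not code from the paper) computes D(b, Hx) for nonnegative data:

    ```python
    import numpy as np

    def i_divergence(b, Hx, eps=1e-12):
        """Generalized Kullback-Leibler (I-) divergence
        D(b, Hx) = sum(b * log(b / Hx) - b + Hx), commonly used as the
        data-fidelity term for Poisson-distributed measurements."""
        b = np.asarray(b, dtype=float)
        Hx = np.asarray(Hx, dtype=float)
        return float(np.sum(b * np.log((b + eps) / (Hx + eps)) - b + Hx))

    print(i_divergence([4.0, 2.0, 1.0], [3.5, 2.5, 1.0]))
    ```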

  6. The construction of minimal multilayered perceptrons : a case study for sorting

    NARCIS (Netherlands)

    Zwietering, P.J.; Aarts, E.H.L.; Wessels, J.

    1993-01-01

    We consider the construction of minimal multilayered perceptrons for solving combinatorial optimization problems. Though general in nature, the proposed construction method is presented as a case study for the sorting problem. The presentation starts with an O((n!)^2) three-layered perceptron based

  7. Generalized bi-quasi-variational inequalities for quasi-semi-monotone and bi-quasi-semi-monotone operators with applications in non-compact settings and minimization problems

    Directory of Open Access Journals (Sweden)

    Chowdhury Molhammad SR

    2000-01-01

    Full Text Available Results are obtained on existence theorems of generalized bi-quasi-variational inequalities for quasi-semi-monotone and bi-quasi-semi-monotone operators in both compact and non-compact settings. We shall use the concept of escaping sequences introduced by Border (Fixed Point Theorem with Applications to Economics and Game Theory, Cambridge University Press, Cambridge, 1985) to obtain results in non-compact settings. Existence theorems on non-compact generalized bi-complementarity problems for quasi-semi-monotone and bi-quasi-semi-monotone operators are also obtained. Moreover, as applications of some results of this paper on generalized bi-quasi-variational inequalities, we shall obtain existence of solutions for some kind of minimization problems with quasi-semi-monotone and bi-quasi-semi-monotone operators.

  8. Constructal entransy dissipation minimization for 'volume-point' heat conduction

    International Nuclear Information System (INIS)

    Chen Lingen; Wei Shuhuan; Sun Fengrui

    2008-01-01

    The 'volume to point' heat conduction problem, which can be described as how to determine the optimal distribution of high conductivity material through a given volume such that the heat generated at every point is transferred most effectively to its boundary, has become the focus of attention in the current constructal theory literature. In general, the minimization of the maximum temperature difference in the volume is taken as the optimization objective. A new physical quantity, entransy, has recently been identified as a basis for optimizing heat transfer processes in terms of the analogy between heat and electrical conduction. Heat transfer analyses show that the entransy of an object describes its heat transfer ability, just as the electrical energy in a capacitor describes its charge transfer ability. Entransy dissipation occurs during heat transfer processes, as a measure of the heat transfer irreversibility with the dissipation related thermal resistance. By taking equivalent thermal resistance (it corresponds to the mean temperature difference), which reflects the average heat conduction effect and is defined based on entransy dissipation, as an optimization objective, the 'volume to point' constructal problem is re-analysed and re-optimized in this paper. The constructal shape of the control volume with the best average heat conduction effect is deduced. For the elemental area and the first order construct assembly, when the thermal current density in the high conductive link is linear with the length, the optimized shapes of assembly based on the minimization of entransy dissipation are the same as those based on minimization of the maximum temperature difference, and the mean temperature difference is 2/3 of the maximum temperature difference. For the second and higher order construct assemblies, the thermal current densities in the high conductive link are not linear with the length, and the optimized shapes of the assembly based on the

  9. An optimization based method for line planning to minimize travel time

    DEFF Research Database (Denmark)

    Bull, Simon Henry; Lusby, Richard Martin; Larsen, Jesper

    2015-01-01

    The line planning problem is to select a number of lines from a potential pool which provides sufficient passenger capacity and meets operational requirements, with some objective measure of solution line quality. We model the problem of minimizing the average passenger system time, including...

  10. Replica analysis for the duality of the portfolio optimization problem.

    Science.gov (United States)

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.

  11. Replica analysis for the duality of the portfolio optimization problem

    Science.gov (United States)

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
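
    The replica computation itself is not reproduced here, but the primal problem described above can be checked numerically for a single instance. The sketch below is an assumption-laden illustration, not the paper's method: it solves the equality-constrained mean-variance problem via its KKT linear system, with a unit budget and a target return chosen for the toy data.

    ```python
    import numpy as np

    def min_risk_portfolio(C, r, target_return):
        """Primal problem: min_w (1/2) w^T C w subject to sum(w) = 1 (budget)
        and r @ w = target_return, solved directly via the KKT system."""
        n = len(r)
        ones = np.ones(n)
        K = np.block([[C, ones[:, None], r[:, None]],
                      [ones[None, :], np.zeros((1, 2))],
                      [r[None, :], np.zeros((1, 2))]])
        rhs = np.concatenate([np.zeros(n), [1.0, target_return]])
        sol = np.linalg.solve(K, rhs)
        return sol[:n]                          # optimal portfolio weights

    rng = np.random.default_rng(0)
    A = rng.normal(size=(5, 5))
    C = A @ A.T + np.eye(5)                     # a positive-definite return covariance
    r = rng.uniform(0.01, 0.10, size=5)         # expected returns
    w = min_risk_portfolio(C, r, 0.05)
    print(np.round(w, 3), round(float(r @ w), 3))   # weights and achieved expected return
    ```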

  12. Subspace Correction Methods for Total Variation and $\ell_1$-Minimization

    KAUST Repository

    Fornasier, Massimo

    2009-01-01

    This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. On each subspace an iterative proximity-map algorithm is implemented via oblique thresholding, which is the main new tool introduced in this work. We provide convergence conditions for the algorithm in order to compute minimizers of the target energy. Analogous results are derived for a parallel variant of the algorithm. Applications are presented in domain decomposition methods for degenerate elliptic PDEs arising in total variation minimization and in accelerated sparse recovery algorithms based on ℓ1-minimization. We include numerical examples which show efficient solutions to classical problems in signal and image processing. © 2009 Society for Industrial and Applied Mathematics.

  13. Minimizing the Makespan for a Two-Stage Three-Machine Assembly Flow Shop Problem with the Sum-of-Processing-Time Based Learning Effect

    Directory of Open Access Journals (Sweden)

    Win-Chin Lin

    2018-01-01

    Full Text Available Two-stage production processes and their applications appear in many production environments. Job processing times are usually assumed to be constant throughout the process. In fact, a learning effect accrued from repetitive work experience, which reduces actual job processing times, exists in many production environments. However, the issue of the learning effect is rarely addressed in solving a two-stage assembly scheduling problem. Motivated by this observation, the author studies a two-stage three-machine assembly flow shop problem with a learning effect based on the sum of the processing times of already processed jobs, with the objective of minimizing the makespan. Because this problem is proved to be NP-hard, a branch-and-bound method embedded with some developed dominance propositions and a lower bound is employed to search for optimal solutions. A cloud theory-based simulated annealing (CSA) algorithm and an iterated greedy (IG) algorithm with four different local search methods are used to find near-optimal solutions for small and large numbers of jobs. The performances of the adopted algorithms are subsequently compared through computational experiments and nonparametric statistical analyses, including the Kruskal–Wallis test and a multiple comparison procedure.
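
    The abstract does not spell out the exact learning-effect model, so the sketch below assumes a common sum-of-processing-time form in which a job's actual time is its normal time scaled by (1 + S)^a, where S is the total normal time of the jobs already processed and a < 0 is a learning index; all numbers are illustrative.

    ```python
    def actual_times(sequence, normal, a=-0.2):
        """Sum-of-processing-time based learning effect (illustrative form):
        the actual time of the next scheduled job is normal[j] * (1 + S)**a,
        where S is the sum of normal times of the jobs already processed."""
        done = 0.0
        out = []
        for j in sequence:
            out.append(normal[j] * (1.0 + done) ** a)
            done += normal[j]
        return out

    normal = [5.0, 3.0, 4.0]
    print([round(t, 2) for t in actual_times([0, 1, 2], normal)])  # later jobs take less actual time
    ```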

  14. Improving the performance of minimizers and winnowing schemes.

    Science.gov (United States)

    Marçais, Guillaume; Pellow, David; Bork, Daniel; Orenstein, Yaron; Shamir, Ron; Kingsford, Carl

    2017-07-15

    The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g. too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues. We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of its worst behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, if not possible, a randomized ordering, rather than the lexicographic order. This analysis also settles negatively a conjecture (by Schleimer et al.) on the expected density of minimizers in a random sequence. The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git. gmarcais@cs.cmu.edu or carlk@cs.cmu.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
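
    A minimal sketch of the basic (w, k)-minimizers selection follows; the ordering is passed in as a function so that the lexicographic default can be swapped for a randomized (e.g. hashed) ordering, in the spirit of the recommendation above. This is an illustration, not the authors' tool.

    ```python
    def select_minimizers(seq, k, w, order=None):
        """(w, k)-minimizers: from every window of w consecutive k-mers, keep the
        smallest k-mer under `order` (lexicographic by default). Returns a set of
        (position, k-mer) pairs, so windows sharing a minimizer count it once."""
        order = order or (lambda kmer: kmer)          # swap in e.g. a hash for a randomized ordering
        kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
        chosen = set()
        for start in range(len(kmers) - w + 1):
            window = range(start, start + w)
            best = min(window, key=lambda i: order(kmers[i]))
            chosen.add((best, kmers[best]))
        return chosen

    print(sorted(select_minimizers("ACGTACGTGGA", k=3, w=4)))
    ```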

  15. Surface Reconstruction and Image Enhancement via $L^1$-Minimization

    KAUST Repository

    Dobrev, Veselin; Guermond, Jean-Luc; Popov, Bojan

    2010-01-01

    A surface reconstruction technique based on minimization of the total variation of the gradient is introduced. Convergence of the method is established, and an interior-point algorithm solving the associated linear programming problem is introduced

  16. Enumeration of minimal stoichiometric precursor sets in metabolic networks.

    Science.gov (United States)

    Andrade, Ricardo; Wannagat, Martin; Klein, Cecilia C; Acuña, Vicente; Marchetti-Spaccamela, Alberto; Milreu, Paulo V; Stougie, Leen; Sagot, Marie-France

    2016-01-01

    What an organism needs at least from its environment to produce a set of metabolites, e.g. target(s) of interest and/or biomass, has been called a minimal precursor set. Early approaches to enumerate all minimal precursor sets took into account only the topology of the metabolic network (topological precursor sets). Due to cycles and the stoichiometric values of the reactions, it is often not possible to produce the target(s) from a topological precursor set in the sense that there is no feasible flux. Although considering the stoichiometry makes the problem harder, it enables to obtain biologically reasonable precursor sets that we call stoichiometric. Recently a method to enumerate all minimal stoichiometric precursor sets was proposed in the literature. The relationship between topological and stoichiometric precursor sets had however not yet been studied. Such relationship between topological and stoichiometric precursor sets is highlighted. We also present two algorithms that enumerate all minimal stoichiometric precursor sets. The first one is of theoretical interest only and is based on the above mentioned relationship. The second approach solves a series of mixed integer linear programming problems. We compared the computed minimal precursor sets to experimentally obtained growth media of several Escherichia coli strains using genome-scale metabolic networks. The results show that the second approach efficiently enumerates minimal precursor sets taking stoichiometry into account, and allows for broad in silico studies of strains or species interactions that may help to understand e.g. pathotype and niche-specific metabolic capabilities. sasita is written in Java, uses cplex as LP solver and can be downloaded together with all networks and input files used in this paper at http://www.sasita.gforge.inria.fr.

  17. Minimal changes in health status questionnaires: distinction between minimally detectable change and minimally important change

    Directory of Open Access Journals (Sweden)

    Knol Dirk L

    2006-08-01

    Full Text Available Changes in scores on health status questionnaires are difficult to interpret. Several methods to determine minimally important changes (MICs) have been proposed, which can broadly be divided into distribution-based and anchor-based methods. Comparisons of these methods have led to insight into essential differences between these approaches. Some authors have tried to come to a uniform measure for the MIC, such as 0.5 standard deviation and the value of one standard error of measurement (SEM). Others have emphasized the diversity of MIC values, depending on the type of anchor, the definition of minimal importance on the anchor, and characteristics of the disease under study. A closer look makes clear that some distribution-based methods have been merely focused on minimally detectable changes. For assessing minimally important changes, anchor-based methods are preferred, as they include a definition of what is minimally important. Acknowledging the distinction between minimally detectable and minimally important changes is useful, not only to avoid confusion among MIC methods, but also to gain information on two important benchmarks on the scale of a health status measurement instrument. Appreciating the distinction, it becomes possible to judge whether the minimally detectable change of a measurement instrument is sufficiently small to detect minimally important changes.

  18. Mean-field approximation minimizes relative entropy

    International Nuclear Information System (INIS)

    Bilbro, G.L.; Snyder, W.E.; Mann, R.C.

    1991-01-01

    The authors derive the mean-field approximation from the information-theoretic principle of minimum relative entropy instead of by minimizing Peierls's inequality for the Weiss free energy of statistical physics theory. They show that information theory leads to the statistical mechanics procedure. As an example, they consider a problem in binary image restoration. They find that mean-field annealing compares favorably with the stochastic approach

  19. Fuzzy-TLBO optimal reactive power control variables planning for energy loss minimization

    International Nuclear Information System (INIS)

    Moghadam, Ahmad; Seifi, Ali Reza

    2014-01-01

    Highlights: • A new approach to the problem of optimal reactive power control variables planning is proposed. • The energy loss minimization problem is formulated by modeling the system load as a Load Duration Curve. • To solve the energy loss problem, classic methods and evolutionary methods are used. • A newly proposed fuzzy teaching–learning based algorithm is applied to the energy loss problem. • Simulations are done to show the effectiveness and superiority of the proposed algorithm compared with other methods. - Abstract: This paper offers a new approach to the problem of optimal reactive power control variables planning (ORPVCP). The basic idea is the division of the Load Duration Curve (LDC) into several time intervals with constant active power demand in each interval, and then solving the energy loss minimization (ELM) problem to obtain an optimal initial set of control variables of the system that is valid for all time intervals and can be used as an initial operating condition of the system. In this paper, the ELM problem has been solved by linear programming (LP), fuzzy linear programming (Fuzzy-LP) and evolutionary algorithms, i.e. MHBMO and TLBO, and the results are compared with the proposed Fuzzy-TLBO method. In the proposed method both the objective function and the constraints are evaluated by membership functions. The inequality constraints are embedded into the fitness function by the membership function of the fuzzy decision, and the problem is modeled by fuzzy set theory. The proposed Fuzzy-TLBO method is applied to the IEEE 30-bus test system considering two different LDCs; it is shown that this method achieves a lower objective function value than the original TLBO and other optimization techniques, confirming its potential to solve the ORPVCP problem with ELM as the objective function.

  20. No-go theorems for the minimization of potentials

    International Nuclear Information System (INIS)

    Chang, D.; Kumar, A.

    1985-01-01

    Using a theorem in linear algebra, we prove some no-go theorems in the minimization of potentials related to the problem of symmetry breaking. Some applications in the grand unified model building are mentioned. Another application of the algebraic theorem is also included to demonstrate its usefulness

  1. Linearly convergent stochastic heavy ball method for minimizing generalization error

    KAUST Repository

    Loizou, Nicolas

    2017-10-30

    In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.
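
    A minimal numpy sketch of the method described above, SGD steps with a fixed stepsize plus a heavy ball momentum term, applied to a quadratic least-squares loss; the stepsize, momentum value and toy data are assumptions, not the paper's experimental setup.

    ```python
    import numpy as np

    def stochastic_heavy_ball(A, b, stepsize=0.02, beta=0.5, iters=5000, seed=0):
        """SGD with a heavy ball momentum term on f(x) = (1/2n) ||Ax - b||^2,
        sampling one row per step:
        x_{t+1} = x_t - stepsize * grad_i(x_t) + beta * (x_t - x_{t-1})."""
        rng = np.random.default_rng(seed)
        n, d = A.shape
        x_prev = x = np.zeros(d)
        for _ in range(iters):
            i = rng.integers(n)
            grad_i = (A[i] @ x - b[i]) * A[i]          # stochastic gradient from row i
            x, x_prev = x - stepsize * grad_i + beta * (x - x_prev), x
        return x

    rng = np.random.default_rng(1)
    A = rng.normal(size=(100, 5))
    x_star = rng.normal(size=5)
    b = A @ x_star                                      # consistent system: expected loss minimized at x_star
    print(np.round(stochastic_heavy_ball(A, b) - x_star, 3))   # should be close to zero
    ```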

  2. Minimization of number of setups for mounting machines

    Energy Technology Data Exchange (ETDEWEB)

    Kolman, Pavel; Nchor, Dennis; Hampel, David [Department of Statistics and Operation Analysis, Faculty of Business and Economics, Mendel University in Brno, Zemědělská 1, 603 00 Brno (Czech Republic); Žák, Jaroslav [Institute of Technology and Business, Okružní 517/10, 370 01 České Budejovice (Czech Republic)

    2015-03-10

    The article deals with the problem of minimizing the number of setups for SMT mounting machines. An SMT machine is a device used to assemble components on printed circuit boards (PCBs) during the manufacturing of electronics. Each type of PCB requires a different, obligatory set of components. Components are placed in the SMT tray. The problem consists in the fact that the total number of components used across all products is greater than the size of the tray. Therefore, every change of manufactured product requires a complete change of components in the tray (i.e., a setup change). Currently, the number of setups corresponds to the number of printed circuit board types. Every production change thus entails a setup change and stops production for one shift. Many components occur in several products, so the question arose of how to group the products so as to minimize the number of setups. This would result in a huge increase in production efficiency.

  3. Replica Approach for Minimal Investment Risk with Cost

    Science.gov (United States)

    Shinzato, Takashi

    2018-06-01

    In the present work, the optimal portfolio minimizing the investment risk with cost is discussed analytically, where an objective function is constructed in terms of two negative aspects of investment, the risk and cost. We note the mathematical similarity between the Hamiltonian in the mean-variance model and the Hamiltonians in the Hopfield model and the Sherrington-Kirkpatrick model, show that we can analyze this portfolio optimization problem by using replica analysis, and derive the minimal investment risk with cost and the investment concentration of the optimal portfolio. Furthermore, we validate our proposed method through numerical simulations.

  4. An applied optimization based method for line planning to minimize travel time

    DEFF Research Database (Denmark)

    Bull, Simon Henry; Rezanova, Natalia Jurjevna; Lusby, Richard Martin

    The line planning problem in rail is to select a number of lines from a potential pool which provides sufficient passenger capacity and meets operational requirements, with some objective measure of solution line quality. We model the problem of minimizing the average passenger system time, including...

  5. A Variance Minimization Criterion to Feature Selection Using Laplacian Regularization.

    Science.gov (United States)

    He, Xiaofei; Ji, Ming; Zhang, Chiyuan; Bao, Hujun

    2011-10-01

    In many information processing tasks, one is often confronted with very high-dimensional data. Feature selection techniques are designed to find the meaningful feature subset of the original features which can facilitate clustering, classification, and retrieval. In this paper, we consider the feature selection problem in unsupervised learning scenarios, which is particularly difficult due to the absence of class labels that would guide the search for relevant information. Based on Laplacian regularized least squares, which finds a smooth function on the data manifold and minimizes the empirical loss, we propose two novel feature selection algorithms which aim to minimize the expected prediction error of the regularized regression model. Specifically, we select those features such that the size of the parameter covariance matrix of the regularized regression model is minimized. Motivated from experimental design, we use trace and determinant operators to measure the size of the covariance matrix. Efficient computational schemes are also introduced to solve the corresponding optimization problems. Extensive experimental results over various real-life data sets have demonstrated the superiority of the proposed algorithms.

  6. Transformation of general binary MRF minimization to the first-order case.

    Science.gov (United States)

    Ishikawa, Hiroshi

    2011-06-01

    We introduce a transformation of a general higher-order Markov random field with binary labels into a first-order one that has the same minima as the original. Moreover, we formalize a framework for approximately minimizing higher-order multi-label MRF energies that combines the new reduction with the fusion-move and QPBO algorithms. While many computer vision problems today are formulated as energy minimization problems, they have mostly been limited to using first-order energies, which consist of unary and pairwise clique potentials, with a few exceptions that consider triples. This is because of the lack of efficient algorithms to optimize energies with higher-order interactions. Our algorithm challenges this restriction, which limits the representational power of the models, so that higher-order energies can be used to capture the rich statistics of natural scenes. We also show that some minimization methods can be considered special cases of the present framework, as well as comparing the new method experimentally with other such techniques.

  7. Image denoising by a direct variational minimization

    Directory of Open Access Journals (Sweden)

    Pilipović Stevan

    2011-01-01

    Full Text Available In this article we introduce a novel method for image denoising which combines the mathematical well-posedness of variational modeling with the efficiency of a patch-based approach in the field of image processing. It is based on a direct minimization of an energy functional containing a minimal surface regularizer that uses a fractional gradient. The minimization is carried out on every predefined patch of the image, independently. By doing so, we avoid the use of an artificial-time PDE model with its inherent problems of finding the optimal stopping time, as well as the optimal time step. Moreover, we control the level of image smoothing on each patch (and thus on the whole image) by adapting the Lagrange multiplier using information on the level of discontinuities on a particular patch, which we obtain by pre-processing. In order to reduce the average number of vectors in the approximation generator and still obtain minimal degradation, we combine a Ritz variational method for the actual minimization on a patch with a complementary fractional variational principle. Thus, the proposed method becomes computationally feasible and applicable for practical purposes. We confirm our claims with experimental results, comparing the proposed method with a couple of PDE-based methods, where we obtain significantly better denoising results, especially on oscillatory regions.

  8. Scheduling a maintenance activity under skills constraints to minimize total weighted tardiness and late tasks

    Directory of Open Access Journals (Sweden)

    Djalal Hedjazi

    2015-04-01

    Full Text Available Skill management is a key factor in improving the effectiveness of industrial companies, notably their maintenance services. The problem considered in this paper concerns the scheduling of maintenance tasks under resource (maintenance team) constraints. This problem is generally known as unrelated parallel machine scheduling. We consider the problem with the two objectives of minimizing the total weighted tardiness (TWT) and the number of tardy tasks. Our interest is focused particularly on solving this problem under skill constraints, in which each resource has a skill level. We propose a new efficient heuristic to obtain an approximate solution for this NP-hard problem and demonstrate its effectiveness through computational experiments. This heuristic is designed for a static maintenance scheduling problem (with unequal release dates, processing times and resource skills), while minimizing the aforementioned objective functions.
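
    The heuristic itself is not given in the abstract, but any such method must evaluate the two objectives. The sketch below is illustrative, with an assumed task and assignment encoding: it computes the total weighted tardiness and the number of late tasks for a given assignment of tasks to skill-compatible teams.

    ```python
    def evaluate_schedule(tasks, assignment, n_teams):
        """Total weighted tardiness and number of late tasks on parallel resources.
        tasks[j] = (release, duration, due, weight, min_skill);
        assignment[j] = (team, team_skill); each team runs its tasks in list order."""
        team_free = [0.0] * n_teams
        twt, n_late = 0.0, 0
        for j, (release, duration, due, weight, min_skill) in enumerate(tasks):
            team, skill = assignment[j]
            assert skill >= min_skill, "team lacks the required skill level"
            start = max(team_free[team], release)
            finish = start + duration
            team_free[team] = finish
            tardiness = max(0.0, finish - due)
            twt += weight * tardiness
            n_late += tardiness > 0
        return twt, n_late

    tasks = [(0, 3, 4, 2, 1), (1, 2, 3, 5, 2), (0, 4, 6, 1, 1)]
    assignment = [(0, 2), (1, 2), (0, 1)]
    print(evaluate_schedule(tasks, assignment, n_teams=2))   # (1.0, 1)
    ```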

  9. Stowage Planning in Multiple Ports with Shifting Fee Minimization

    Directory of Open Access Journals (Sweden)

    E. Zhang

    2018-01-01

    Full Text Available This paper studies the problem of stowage planning within a vessel bay on a multiple-port transportation route, aiming at minimizing the total container shifting fee. Since containers in each stack are accessed in top-to-bottom order, reshuffle operations occur when a target container to be unloaded at its destination port is not stowed on the top of a stack at the time. Each container shift via a quay crane incurs one unit of shifting fee that depends on the charge policy of the local container port. Previous studies assume that each container shift incurs a uniform cost in all ports and thus focus on minimizing the total number of shifts or the turnaround time of the vessel. Motivated by the observation that different ports charge different fees for each container shift, we propose a mixed integer programming (MIP) model for the problem to produce an optimal stowage plan with minimum total shifting fee. Moreover, as the considered problem is NP-hard due to the NP-hardness of its counterpart with uniform unit shifting fee, we propose an improved genetic algorithm to solve the problem. The efficiency of the proposed algorithm is demonstrated via numerical experiments.
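
    As a rough illustration of the objective described above, the sketch below counts charged shifts for one bay along a route under a simple restow policy; the data layout and the policy are assumptions for illustration, not the paper's MIP model or genetic algorithm.

    ```python
    def total_shifting_fee(stacks, route, fee):
        """Count charged container shifts along a route for one bay.
        stacks: each stack listed top to bottom by destination port;
        route: port visiting order; fee: port -> cost of one shift at that port.
        At each port, containers stowed above a container due at that port are
        lifted (one charged shift each) and restowed in the same order."""
        stacks = [list(s) for s in stacks]
        total = 0.0
        for port in route:
            for s in stacks:
                if port not in s:
                    continue
                deepest = max(i for i, d in enumerate(s) if d == port)   # deepest container due here
                blockers = [d for d in s[:deepest + 1] if d != port]     # must be lifted out of the way
                total += len(blockers) * fee[port]
                s[:deepest + 1] = blockers                               # restow blockers, unload the rest
        return total

    stacks = [["B", "A", "C"], ["A", "C", "B"]]    # top-to-bottom destination ports
    route = ["A", "B", "C"]
    fee = {"A": 10.0, "B": 25.0, "C": 15.0}
    print(total_shifting_fee(stacks, route, fee))  # 35.0
    ```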

  10. Detection of Cavities by Inverse Heat Conduction Boundary Element Method Using Minimal Energy Technique

    International Nuclear Information System (INIS)

    Choi, C. Y.

    1997-01-01

    A geometrical inverse heat conduction problem is solved for infrared scanning cavity detection by the boundary element method using a minimal energy technique. By minimizing the kinetic energy of the temperature field, the boundary element equations are converted to a quadratic programming problem. A hypothetical inner boundary is defined such that the actual cavity is located interior to the domain. Temperatures at the hypothetical inner boundary are determined to meet the constraints on the measurement error of the surface temperature obtained by infrared scanning, and then boundary element analysis is performed for the position of the unknown boundary (cavity). A cavity detection algorithm is provided, and the effects of the minimal energy technique on the inverse solution method are investigated by means of numerical analysis.

  11. The Construction Solid Waste Minimization Practices among Malaysian Contractors

    Directory of Open Access Journals (Sweden)

    Che Ahmad A.

    2014-01-01

    Full Text Available The purpose of construction solid waste minimization is to reduce or eliminate adverse impacts on the environment and on human health. With population growth driving rapid development, construction solid waste from construction, demolition and renovation works is likely to increase in the near future. Materials such as wood, concrete, paint, brick, roofing, tiles, plastic and other materials all contribute to the construction solid waste problem. Therefore, proper waste minimization is needed to control the quantity of construction solid waste produced. This paper identifies the types of construction solid waste produced and discusses the waste minimization practices of contractors at construction sites in Selangor, Kuala Lumpur and Putrajaya, Malaysia.

  12. Projected Gauss-Seidel subspace minimization method for interactive rigid body dynamics

    DEFF Research Database (Denmark)

    Silcowitz-Hansen, Morten; Abel, Sarah Maria Niebe; Erleben, Kenny

    2010-01-01

    artifacts such as viscous or damped contact response. In this paper, we present a new approach to contact force determination. We formulate the contact force problem as a nonlinear complementarity problem, and discretize the problem to derive the Projected Gauss–Seidel method. We combine the Projected Gauss......–Seidel method with a subspace minimization method. Our new method shows improved qualities and superior convergence properties for specific configurations....

  13. L∞ Variational Problems with Running Costs and Constraints

    Energy Technology Data Exchange (ETDEWEB)

    Aronsson, G., E-mail: gunnar.aronsson@liu.se [Linkoeping University, Department of Mathematics (Sweden); Barron, E. N., E-mail: enbarron@math.luc.edu [Loyola University of Chicago, Department of Mathematics and Statistics (United States)

    2012-02-15

    Various approaches are used to derive the Aronsson-Euler equations for L∞ calculus of variations problems with constraints. The problems considered involve holonomic, nonholonomic, isoperimetric, and isosupremic constraints on the minimizer. In addition, we derive the Aronsson-Euler equation for the basic L∞ problem with a running cost and then consider properties of an absolute minimizer. Many open problems are introduced for further study.

  14. New Exact Penalty Functions for Nonlinear Constrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Bingzhuang Liu

    2014-01-01

    Full Text Available For two kinds of nonlinear constrained optimization problems, we propose two simple penalty functions, respectively, by augmenting the dimension of the primal problem with a variable that controls the weight of the penalty terms. Both of the penalty functions enjoy improved smoothness. Under mild conditions, it can be proved that our penalty functions are both exact in the sense that local minimizers of the associated penalty problem are precisely the local minimizers of the original constrained problem.

  15. Geometric Measure Theory and Minimal Surfaces

    CERN Document Server

    Bombieri, Enrico

    2011-01-01

    W.K. ALLARD: On the first variation of area and generalized mean curvature.- F.J. ALMGREN Jr.: Geometric measure theory and elliptic variational problems.- E. GIUSTI: Minimal surfaces with obstacles.- J. GUCKENHEIMER: Singularities in soap-bubble-like and soap-film-like surfaces.- D. KINDERLEHRER: The analyticity of the coincidence set in variational inequalities.- M. MIRANDA: Boundaries of Caccioppoli sets in the calculus of variations.- L. PICCININI: De Giorgi's measure and thin obstacles.

  16. Knee point search using cascading top-k sorting with minimized time complexity.

    Science.gov (United States)

    Wang, Zheng; Tseng, Shian-Shyong

    2013-01-01

    Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem over the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost in one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
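
    The paper's probability-driven cascading scheme is more elaborate than what fits here; the sketch below only illustrates the underlying idea of growing a top-k partial sort until the knee is guaranteed to lie inside the sorted prefix. As assumptions, the knee is defined as the largest drop between consecutive values of the descending-sorted curve, and the data are taken to be nonnegative.

    ```python
    import heapq

    def find_knee_topk(values, k0=4, grow=2):
        """Locate the largest knee (biggest drop between consecutive values of the
        descending-sorted curve) using partial top-k sorts instead of a full sort.
        For nonnegative values, once the best drop inside the sorted prefix is at
        least the smallest sorted value, no later drop can exceed it."""
        n = len(values)
        k = min(k0, n)
        while True:
            top = heapq.nlargest(k, values)                       # top-k partial sort, descending
            drops = [top[i] - top[i + 1] for i in range(len(top) - 1)]
            knee = max(range(len(drops)), key=drops.__getitem__)  # index of the biggest known drop
            if k == n or drops[knee] >= top[-1]:
                return knee, top[knee]                            # value just above the knee
            k = min(k * grow, n)                                  # cascade: enlarge k and re-sort

    values = [1, 2, 2, 3, 3, 4, 40, 42, 45, 50, 5, 4, 3]
    print(find_knee_topk(values))                                 # (3, 40): the big drop is after 40
    ```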

  17. Energy levels of one-dimensional systems satisfying the minimal length uncertainty relation

    Energy Technology Data Exchange (ETDEWEB)

    Bernardo, Reginald Christian S., E-mail: rcbernardo@nip.upd.edu.ph; Esguerra, Jose Perico H., E-mail: jesguerra@nip.upd.edu.ph

    2016-10-15

    The standard approach to calculating the energy levels for quantum systems satisfying the minimal length uncertainty relation is to solve an eigenvalue problem involving a fourth- or higher-order differential equation in quasiposition space. It is shown that the problem can be reformulated so that the energy levels of these systems can be obtained by solving only a second-order quasiposition eigenvalue equation. Through this formulation the energy levels are calculated for the following potentials: particle in a box, harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well. For the particle in a box, the second-order quasiposition eigenvalue equation is a second-order differential equation with constant coefficients. For the harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well, a method that involves using Wronskians has been used to solve the second-order quasiposition eigenvalue equation. It is observed for all of these quantum systems that the introduction of a nonzero minimal length uncertainty induces a positive shift in the energy levels. It is shown that the calculation of energy levels in systems satisfying the minimal length uncertainty relation is not limited to a small number of problems like particle in a box and the harmonic oscillator but can be extended to a wider class of problems involving potentials such as the Pöschl–Teller and Gaussian wells.

  18. Geothermal Energy: Prospects and Problems

    Science.gov (United States)

    Ritter, William W.

    1973-01-01

    An examination of geothermal energy as a means of increasing the United States power resources with minimal pollution problems. Developed and planned geothermal-electric power installations around the world, capacities, installation dates, etc., are reviewed. Environmental impact, problems, etc. are discussed. (LK)

  19. Energy Cost Minimization in Heterogeneous Cellular Networks with Hybrid Energy Supplies

    Directory of Open Access Journals (Sweden)

    Bang Wang

    2016-01-01

    Full Text Available The ever increasing data demand has led to a significant increase in energy consumption in cellular mobile networks. Recent advancements in heterogeneous cellular networks and green-energy-supplied base stations provide promising solutions for the cellular communications industry. In this article, we first review the motivations and challenges, as well as approaches, to address the energy cost minimization problem for such green heterogeneous networks. Owing to the diversities of mobile traffic and renewable energy, the energy cost minimization problem involves both temporal and spatial optimization of resource allocation. We next present a new solution to illustrate how to combine the optimization of temporal green energy allocation and spatial mobile traffic distribution. The whole optimization problem is decomposed into four subproblems, and correspondingly our proposed solution is divided into four parts: energy consumption estimation, green energy allocation, user association, and green energy reallocation. Simulation results demonstrate that our proposed algorithm can significantly reduce the total energy cost.

  20. A Prospective Randomized Study on Operative Treatment for Simple Distal Tibial Fractures-Minimally Invasive Plate Osteosynthesis Versus Minimal Open Reduction and Internal Fixation.

    Science.gov (United States)

    Kim, Ji Wan; Kim, Hyun Uk; Oh, Chang-Wug; Kim, Joon-Woo; Park, Ki Chul

    2018-01-01

    To compare the radiologic and clinical results of minimally invasive plate osteosynthesis (MIPO) and minimal open reduction and internal fixation (ORIF) for simple distal tibial fractures. Randomized prospective study. Three level 1 trauma centers. Fifty-eight patients with simple and distal tibial fractures were randomized into a MIPO group (treatment with MIPO; n = 29) or a minimal group (treatment with minimal ORIF; n = 29). These numbers were designed to define the rate of soft tissue complication; therefore, validation of superiority in union time or determination of differences in rates of delayed union was limited in this study. Simple distal tibial fractures treated with MIPO or minimal ORIF. The clinical outcome measurements included operative time, radiation exposure time, and soft tissue complications. To evaluate a patient's function, the American Orthopedic Foot and Ankle Society ankle score (AOFAS) was used. Radiologic measurements included fracture alignment, delayed union, and union time. All patients acquired bone union without any secondary intervention. The mean union time was 17.4 weeks and 16.3 weeks in the MIPO and minimal groups, respectively. There was 1 case of delayed union and 1 case of superficial infection in each group. The radiation exposure time was shorter in the minimal group than in the MIPO group. Coronal angulation showed a difference between both groups. The American Orthopedic Foot and Ankle Society ankle scores were 86.0 and 86.7 in the MIPO and minimal groups, respectively. Minimal ORIF resulted in similar outcomes, with no increased rate of soft tissue problems compared to MIPO. Both MIPO and minimal ORIF have high union rates and good functional outcomes for simple distal tibial fractures. Minimal ORIF did not result in increased rates of infection and wound dehiscence. Therapeutic Level II. See Instructions for Authors for a complete description of levels of evidence.

  1. Rule extraction from minimal neural networks for credit card screening.

    Science.gov (United States)

    Setiono, Rudy; Baesens, Bart; Mues, Christophe

    2011-08-01

    While feedforward neural networks have been widely accepted as effective tools for solving classification problems, the issue of finding the best network architecture remains unresolved, particularly so in real-world problem settings. We address this issue in the context of credit card screening, where it is important to not only find a neural network with good predictive performance but also one that facilitates a clear explanation of how it produces its predictions. We show that minimal neural networks with as few as one hidden unit provide good predictive accuracy, while having the added advantage of making it easier to generate concise and comprehensible classification rules for the user. To further reduce model size, a novel approach is suggested in which network connections from the input units to this hidden unit are removed by a straightforward pruning procedure. In terms of predictive accuracy, both the minimized neural networks and the rule sets generated from them are shown to compare favorably with other neural network based classifiers. The rules generated from the minimized neural networks are concise and thus easier to validate in a real-life setting.
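    The record only sketches the pruning idea. A minimal toy illustration (assumed one-hidden-unit architecture and a simple accuracy-preserving magnitude rule, not the authors' exact procedure) might look like:

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy one-hidden-unit network: y = sigmoid(v * tanh(w.x + b) + c)
      n_inputs = 20
      w = rng.normal(size=n_inputs)          # input-to-hidden connection weights
      b, v, c = 0.1, 1.5, -0.2

      def predict(X, weights):
          h = np.tanh(X @ weights + b)
          return 1.0 / (1.0 + np.exp(-(v * h + c)))

      X = rng.normal(size=(500, n_inputs))
      y = predict(X, w) > 0.5                # labels produced by the full network

      # Prune input connections, weakest first, as long as accuracy stays high.
      pruned = w.copy()
      for i in np.argsort(np.abs(w)):
          trial = pruned.copy()
          trial[i] = 0.0
          if np.mean((predict(X, trial) > 0.5) == y) >= 0.95:
              pruned = trial

      print("kept", int(np.count_nonzero(pruned)), "of", n_inputs, "connections")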

  2. Higher Integrability for Minimizers of the Mumford-Shah Functional

    Science.gov (United States)

    De Philippis, Guido; Figalli, Alessio

    2014-08-01

    We prove higher integrability for the gradient of local minimizers of the Mumford-Shah energy functional, providing a positive answer to a conjecture of De Giorgi (Free discontinuity problems in calculus of variations. Frontiers in pure and applied mathematics, North-Holland, Amsterdam, pp 55-62, 1991).

  3. The stochastic goodwill problem

    OpenAIRE

    Marinelli, Carlo

    2003-01-01

    Stochastic control problems related to optimal advertising under uncertainty are considered. In particular, we determine the optimal strategies for the problem of maximizing the utility of goodwill at launch time and minimizing the disutility of a stream of advertising costs that extends until the launch time for some classes of stochastic perturbations of the classical Nerlove-Arrow dynamics. We also consider some generalizations such as problems with constrained budget and with discretionar...

  4. Computers and the Environment: Minimizing the Carbon Footprint

    Science.gov (United States)

    Kaestner, Rich

    2009-01-01

    Computers can be good and bad for the environment; one can maximize the good and minimize the bad. When dealing with environmental issues, it's difficult to ignore the computing infrastructure. With an operations carbon footprint equal to the airline industry's, computer energy use is only part of the problem; everyone is also dealing with the use…

  5. Optimizing Ship Speed to Minimize Total Fuel Consumption with Multiple Time Windows

    Directory of Open Access Journals (Sweden)

    Jae-Gon Kim

    2016-01-01

    Full Text Available We study the ship speed optimization problem with the objective of minimizing the total fuel consumption. We consider multiple time windows for each port call as constraints and formulate the problem as a nonlinear mixed integer program. We derive intrinsic properties of the problem and develop an exact algorithm based on the properties. Computational experiments show that the suggested algorithm is very efficient in finding an optimal solution.
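    The exact model and algorithm are not reproduced in this record. A much-simplified sketch (assuming a hypothetical cubic fuel-rate law, so that fuel over a leg grows with the square of speed, and considering a single leg with alternative arrival windows) illustrates why the slowest feasible speed is preferred:

      def best_speed_for_leg(distance_nm, windows, depart_time, v_min=10.0, v_max=25.0):
          """Pick a fuel-minimal speed for one leg (illustrative sketch only).

          Assumes fuel rate ~ k * v**3, so fuel over the leg ~ k * v**2 * distance,
          which is minimized by sailing as slowly as the time windows allow.
          `windows` is a list of (open, close) arrival times at the next port.
          """
          candidates = []
          for t_open, t_close in windows:
              # slowest speed that still reaches the port before the window closes
              v = max(v_min, distance_nm / (t_close - depart_time))
              if v <= v_max:
                  arrival = depart_time + distance_nm / v
                  wait = max(0.0, t_open - arrival)
                  fuel = 0.001 * v**2 * distance_nm      # hypothetical fuel constant
                  candidates.append((fuel, v, arrival + wait))
          if not candidates:
              raise ValueError("no feasible speed for any time window")
          return min(candidates)          # (fuel, speed, effective arrival time)

      fuel, v, arrive = best_speed_for_leg(300.0, [(20.0, 26.0), (30.0, 40.0)], depart_time=0.0)
      print(f"speed {v:.1f} kn, fuel {fuel:.1f}, arrive {arrive:.1f} h")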

  6. Minimizing makespan for a no-wait flowshop using genetic algorithm

    Indian Academy of Sciences (India)

    This paper explains minimization of makespan or total completion time ... lead to a natural reduction of the no-wait flow shop problem to the travelling salesman ... FCH can also be applied in real-time scheduling and rescheduling for no-wait flow ...

  7. SAR image regularization with fast approximate discrete minimization.

    Science.gov (United States)

    Denis, Loïc; Tupin, Florence; Darbon, Jérôme; Sigelle, Marc

    2009-07-01

    Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task and noise reduction is often a prerequisite for successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modelization provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on the large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the α-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in a few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to joint regularization of the amplitude and interferometric phase in urban area SAR images.

  8. A New Iterative Method for Equilibrium Problems and Fixed Point Problems

    Directory of Open Access Journals (Sweden)

    Abdul Latif

    2013-01-01

    Full Text Available Introducing a new iterative method, we study the existence of a common element of the set of solutions of equilibrium problems for a family of monotone, Lipschitz-type continuous mappings and the sets of fixed points of two nonexpansive semigroups in a real Hilbert space. We establish strong convergence theorems of the new iterative method for the solution of the variational inequality problem which is the optimality condition for the minimization problem. Our results improve and generalize the corresponding recent results of Anh (2012), Cianciaruso et al. (2010), and many others.

  9. The maximum number of minimal codewords in long codes

    DEFF Research Database (Denmark)

    Alahmadi, A.; Aldred, R.E.L.; dela Cruz, R.

    2013-01-01

    Upper bounds on the maximum number of minimal codewords in a binary code follow from the theory of matroids. Random coding provides lower bounds. In this paper, we compare these bounds with analogous bounds for the cycle code of graphs. This problem (in the graphic case) was considered in 1981 by...

  10. Optimum geometry for torque ripple minimization of switched reluctance motors

    NARCIS (Netherlands)

    Sahin, F.; Ertan, H.B.; Leblebicioglu, K.

    2000-01-01

    For switched reluctance motors, one of the major problems is torque ripple which causes increased undesirable acoustic noise and possibly speed ripple. This paper describes an approach to determine optimum magnetic circuit parameters to minimize low speed torque ripple for such motors. The

  11. [Minimally invasive coronary artery surgery].

    Science.gov (United States)

    Zalaquett, R; Howard, M; Irarrázaval, M J; Morán, S; Maturana, G; Becker, P; Medel, J; Sacco, C; Lema, G; Canessa, R; Cruz, F

    1999-01-01

    There is growing interest in performing a left internal mammary artery (LIMA) graft to the left anterior descending coronary artery (LAD) on a beating heart through a minimally invasive access to the chest cavity. To report the experience with minimally invasive coronary artery surgery. Analysis of 11 patients aged 48 to 79 years old with single vessel disease who, between 1996 and 1997, had a LIMA graft to the LAD performed through a minimally invasive left anterior mediastinotomy, without cardiopulmonary bypass. A 6 to 10 cm left parasternal incision was done. The LIMA to the LAD anastomosis was done after pharmacological heart rate and blood pressure control and a period of ischemic preconditioning. Graft patency was confirmed intraoperatively by standard Doppler techniques. Patients were followed for a mean of 11.6 months (7-15 months). All patients were extubated in the operating room and transferred out of the intensive care unit the next morning. Seven patients were discharged on the third postoperative day. Duplex scanning confirmed graft patency in all patients before discharge; in two patients, it was confirmed additionally by arteriography. There was no hospital mortality, no perioperative myocardial infarction and no bleeding problems. After follow-up, ten patients were free of angina, in functional class I and pleased with the surgical and cosmetic results. One patient developed atypical angina in the seventh postoperative month and a selective arteriography confirmed stenosis of the anastomosis. A successful angioplasty of the original LAD lesion was carried out. A minimally invasive left anterior mediastinotomy is a good surgical access to perform a successful LIMA to LAD graft without cardiopulmonary bypass, allowing a shorter hospital stay and earlier postoperative recovery. However, a larger experience and a longer follow-up are required to define its role in the treatment of coronary artery disease.

  12. Global Search Strategies for Solving Multilinear Least-Squares Problems

    Directory of Open Access Journals (Sweden)

    Mats Andersson

    2012-04-01

    Full Text Available The multilinear least-squares (MLLS problem is an extension of the linear least-squares problem. The difference is that a multilinear operator is used in place of a matrix-vector product. The MLLS is typically a large-scale problem characterized by a large number of local minimizers. It originates, for instance, from the design of filter networks. We present a global search strategy that allows for moving from one local minimizer to a better one. The efficiency of this strategy is illustrated by the results of numerical experiments performed for some problems related to the design of filter networks.

  13. Lorentz Invariant Spectrum of Minimal Chiral Schwinger Model

    Science.gov (United States)

    Kim, Yong-Wan; Kim, Seung-Kook; Kim, Won-Tae; Park, Young-Jai; Kim, Kee Yong; Kim, Yongduk

    We study the Lorentz transformation of the minimal chiral Schwinger model in terms of the alternative action. We automatically obtain a chiral constraint, which is equivalent to the frame constraint introduced by McCabe, in order to solve the frame problem in phase space. As a result we obtain the Lorentz invariant spectrum in any moving frame by choosing a frame parameter.

  14. The environmental cost of subsistence: Optimizing diets to minimize footprints

    International Nuclear Information System (INIS)

    Gephart, Jessica A.; Davis, Kyle F.; Emery, Kyle A.; Leach, Allison M.; Galloway, James N.; Pace, Michael L.

    2016-01-01

    The question of how to minimize monetary cost while meeting basic nutrient requirements (a subsistence diet) was posed by George Stigler in 1945. The problem, known as Stigler's diet problem, was famously solved using the simplex algorithm. Today, we are not only concerned with the monetary cost of food, but also the environmental cost. Efforts to quantify environmental impacts led to the development of footprint (FP) indicators. The environmental footprints of food production span multiple dimensions, including greenhouse gas emissions (carbon footprint), nitrogen release (nitrogen footprint), water use (blue and green water footprint) and land use (land footprint), and a diet minimizing one of these impacts could result in higher impacts in another dimension. In this study based on nutritional and population data for the United States, we identify diets that minimize each of these four footprints subject to nutrient constraints. We then calculate tradeoffs by taking the composition of each footprint's minimum diet and calculating the other three footprints. We find that diets for the minimized footprints tend to be similar for the four footprints, suggesting there are generally synergies, rather than tradeoffs, among low footprint diets. Plant-based food and seafood (fish and other aquatic foods) commonly appear in minimized diets and tend to most efficiently supply macronutrients and micronutrients, respectively. Livestock products rarely appear in minimized diets, suggesting these foods tend to be less efficient from an environmental perspective, even when nutrient content is considered. The results' emphasis on seafood is complicated by the environmental impacts of aquaculture versus capture fisheries, increasing in aquaculture, and shifting compositions of aquaculture feeds. While this analysis does not make specific diet recommendations, our approach demonstrates potential environmental synergies of plant- and seafood-based diets. As a result, this study

  15. The environmental cost of subsistence: Optimizing diets to minimize footprints

    Energy Technology Data Exchange (ETDEWEB)

    Gephart, Jessica A.; Davis, Kyle F. [University of Virginia, Department of Environmental Sciences, 291 McCormick Road, Charlottesville, VA 22904 (United States); Emery, Kyle A. [University of Virginia, Department of Environmental Sciences, 291 McCormick Road, Charlottesville, VA 22904 (United States); University of California, Santa Barbara. Marine Science Institute, Santa Barbara, CA 93106 (United States); Leach, Allison M. [University of New Hampshire, 107 Nesmith Hall, 131 Main Street, Durham, NH, 03824 (United States); Galloway, James N.; Pace, Michael L. [University of Virginia, Department of Environmental Sciences, 291 McCormick Road, Charlottesville, VA 22904 (United States)

    2016-05-15

    The question of how to minimize monetary cost while meeting basic nutrient requirements (a subsistence diet) was posed by George Stigler in 1945. The problem, known as Stigler's diet problem, was famously solved using the simplex algorithm. Today, we are not only concerned with the monetary cost of food, but also the environmental cost. Efforts to quantify environmental impacts led to the development of footprint (FP) indicators. The environmental footprints of food production span multiple dimensions, including greenhouse gas emissions (carbon footprint), nitrogen release (nitrogen footprint), water use (blue and green water footprint) and land use (land footprint), and a diet minimizing one of these impacts could result in higher impacts in another dimension. In this study based on nutritional and population data for the United States, we identify diets that minimize each of these four footprints subject to nutrient constraints. We then calculate tradeoffs by taking the composition of each footprint's minimum diet and calculating the other three footprints. We find that diets for the minimized footprints tend to be similar for the four footprints, suggesting there are generally synergies, rather than tradeoffs, among low footprint diets. Plant-based food and seafood (fish and other aquatic foods) commonly appear in minimized diets and tend to most efficiently supply macronutrients and micronutrients, respectively. Livestock products rarely appear in minimized diets, suggesting these foods tend to be less efficient from an environmental perspective, even when nutrient content is considered. The results' emphasis on seafood is complicated by the environmental impacts of aquaculture versus capture fisheries, increasing in aquaculture, and shifting compositions of aquaculture feeds. While this analysis does not make specific diet recommendations, our approach demonstrates potential environmental synergies of plant- and seafood-based diets. As a result

  16. The numerical solution of total variation minimization problems in image processing

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, C.R.; Oman, M.E. [Montana State Univ., Bozeman, MT (United States)

    1994-12-31

    Consider the minimization of penalized least squares functionals of the form $f(u) = \tfrac{1}{2}\|Au - z\|^2 + \alpha \int_\Omega |\nabla u|\,dx$. Here $A$ is a bounded linear operator, $z$ represents data, $\|\cdot\|$ is a Hilbert space norm, $\alpha$ is a positive parameter, $\int_\Omega |\nabla u|\,dx$ represents the total variation (TV) of a function $u \in BV(\Omega)$, the class of functions of bounded variation on a bounded region $\Omega$, and $|\cdot|$ denotes the Euclidean norm. In image processing, $u$ represents an image which is to be recovered from noisy data $z$. Certain "blurring processes" may be represented by the action of an operator $A$ on the image $u$.
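    As a toy illustration of minimizing such a functional (a one-dimensional denoising case with $A$ taken to be the identity and $|\nabla u|$ smoothed by a small ε, which is one simple approach and not the solver studied by the authors):

      import numpy as np

      def tv_denoise_1d(z, alpha=0.3, eps=0.1, step=0.05, iters=2000):
          """Gradient descent on a smoothed 1-D TV functional (toy sketch, A = identity).

          Minimizes 0.5*||u - z||^2 + alpha * sum_i sqrt((u[i+1]-u[i])^2 + eps^2).
          """
          u = z.copy()
          for _ in range(iters):
              du = np.diff(u)
              w = du / np.sqrt(du**2 + eps**2)   # derivative of the smoothed |du|
              grad_tv = np.zeros_like(u)
              grad_tv[:-1] -= w
              grad_tv[1:] += w
              u -= step * ((u - z) + alpha * grad_tv)
          return u

      rng = np.random.default_rng(1)
      signal = np.repeat([0.0, 1.0, 0.3], 50)        # piecewise-constant test signal
      noisy = signal + 0.1 * rng.normal(size=signal.size)
      print(np.round(tv_denoise_1d(noisy)[::25], 2))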

  17. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    Science.gov (United States)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.

  18. Investigations on quantum mechanics with minimal length

    International Nuclear Information System (INIS)

    Chargui, Yassine

    2009-01-01

    We consider a modified quantum mechanics where the coordinates and momenta are assumed to satisfy a non-standard commutation relation of the form $[X_i, P_j] = i\hbar\,(\delta_{ij}(1+\beta P^2) + \beta' P_i P_j)$. Such an algebra results in a generalized uncertainty relation which leads to the existence of a minimal observable length. Moreover, it incorporates UV/IR mixing and a noncommutative position space. We analyse the possible representations in terms of differential operators. The latter are used to study the low-energy effects of the minimal length by considering different quantum systems: the harmonic oscillator, the Klein-Gordon oscillator, the spinless Salpeter Coulomb problem, and the Dirac equation with a linear confining potential. We also discuss whether such effects are observable in precision measurements on a relativistic electron trapped in a strong magnetic field.

  19. Optimal design method to minimize users' thinking mapping load in human-machine interactions.

    Science.gov (United States)

    Huang, Yanqun; Li, Xu; Zhang, Jie

    2015-01-01

    The discrepancy between human cognition and machine requirements/behaviors usually results in serious mental thinking-mapping loads, or even disasters, in product operation. It is important to help people avoid human-machine interaction confusion and difficulties in today's mentally demanding work environments. The objective is to improve the usability of a product and to minimize the user's thinking-mapping and interpreting load in human-machine interactions. An optimal human-machine interface design method is introduced, based on minimizing the mental load of the thinking-mapping process between users' intentions and the affordances of product interface states. By analyzing the users' thinking-mapping problem, an operating action model is constructed. According to human natural instincts and acquired knowledge, an expected ideal design with minimized thinking loads is first uniquely determined. Then, creative alternatives, in terms of the way a human obtains operational information, are provided as digital interface-state datasets. Finally, using cluster analysis, an optimum solution is picked out from the alternatives by calculating the distances between the two datasets. Considering multiple factors to minimize users' thinking-mapping loads, a solution nearest to the ideal value is found in a human-car interaction design case. The clustering results show the method's effectiveness in finding an optimum solution to mental load minimization problems in human-machine interaction design.

  20. Pointing with a One-Eyed Cursor for Supervised Training in Minimally Invasive Robotic Surgery

    DEFF Research Database (Denmark)

    Kibsgaard, Martin; Kraus, Martin

    2016-01-01

    Pointing in the endoscopic view of a surgical robot is a natural and efficient way for instructors to communicate with trainees in robot-assisted minimally invasive surgery. However, pointing in a stereo-endoscopic view can be limited by problems such as video delay, double vision, and arm fatigue ... ...-day training units in robot-assisted minimally invasive surgery on anaesthetised pigs.

  1. A software system for oilfield facility investment minimization

    International Nuclear Information System (INIS)

    Ding, Z.X.; Startzman, R.A.

    1996-01-01

    Minimizing investment in oilfield development is an important subject that has attracted a considerable amount of industry attention. One method to reduce investment involves the optimal placement and selection of production facilities. Because of the large amount of capital used in this process, saving a small percent of the total investment may represent a large monetary value. The literature reports algorithms using mathematical programming techniques that were designed to solve the proposed problem in a global optimal manner. Owing to the high-computational complexity and the lack of user-friendly interfaces for data entry and results display, mathematical programming techniques have not been given enough attention in practice. This paper describes an interactive, graphical software system that provides a global optimal solution to the problem of placement and selection of production facilities in oil-field development processes. This software system can be used as an investment minimization tool and a scenario-study simulator. The developed software system consists of five basic modules: (1) an interactive data-input unit, (2) a cost function generator, (3) an optimization unit, (4) a graphic-output display, and (5) a sensitivity-analysis unit

  2. BACFIRE, Minimal Cut Sets Common Cause Failure Fault Tree Analysis

    International Nuclear Information System (INIS)

    Fussell, J.B.

    1983-01-01

    1 - Description of problem or function: BACFIRE, designed to aid in common cause failure analysis, searches among the basic events of a minimal cut set of the system logic model for common potential causes of failure. The potential cause of failure is called a qualitative failure characteristics. The algorithm searches qualitative failure characteristics (that are part of the program input) of the basic events contained in a set to find those characteristics common to all basic events. This search is repeated for all cut sets input to the program. Common cause failure analysis is thereby performed without inclusion of secondary failure in the system logic model. By using BACFIRE, a common cause failure analysis can be added to an existing system safety and reliability analysis. 2 - Method of solution: BACFIRE searches the qualitative failure characteristics of the basic events contained in the fault tree minimal cut set to find those characteristics common to all basic events by either of two criteria. The first criterion can be met if all the basic events in a minimal cut set are associated by a condition which alone may increase the probability of multiple component malfunction. The second criterion is met if all the basic events in a minimal cut set are susceptible to the same secondary failure cause and are located in the same domain for that cause of secondary failure. 3 - Restrictions on the complexity of the problem - Maxima of: 1001 secondary failure maps, 101 basic events, 10 cut sets
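    The search described above amounts to intersecting the qualitative failure characteristics of the basic events in each minimal cut set. A minimal sketch of that idea (with made-up component names and characteristics, not the BACFIRE code or its input format):

      # Illustrative sketch of the common-cause search idea (hypothetical data,
      # not the actual BACFIRE implementation or its input format).
      failure_characteristics = {
          "PUMP-A":  {"room-101", "vibration", "humidity"},
          "PUMP-B":  {"room-101", "vibration"},
          "VALVE-C": {"room-101", "maintenance-team-2"},
      }

      minimal_cut_sets = [
          {"PUMP-A", "PUMP-B"},
          {"PUMP-A", "VALVE-C"},
          {"PUMP-B", "VALVE-C"},
      ]

      for cut_set in minimal_cut_sets:
          # characteristics shared by every basic event in the cut set
          common = set.intersection(*(failure_characteristics[e] for e in cut_set))
          print(sorted(cut_set), "-> common susceptibilities:", sorted(common) or "none")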

  3. The Electric Traveling Salesman Problem with Time Windows

    DEFF Research Database (Denmark)

    Roberti, Roberto; Wen, Min

    2016-01-01

    To minimize greenhouse gas emissions, the logistic field has seen an increasing usage of electric vehicles. The resulting distribution planning problems present new computational challenges.We address a problem, called Electric Traveling Salesman Problem with Time Windows. We propose a mixed...

  4. Products of Snowflaked Euclidean Lines Are Not Minimal for Looking Down

    Directory of Open Access Journals (Sweden)

    Joseph Matthieu

    2017-11-01

    Full Text Available We show that products of snowflaked Euclidean lines are not minimal for looking down. This question was raised in Fractured fractals and broken dreams, Problem 11.17, by David and Semmes. The proof uses arguments developed by Le Donne, Li and Rajala to prove that the Heisenberg group is not minimal for looking down. By a method of shortcuts, we define a new distance d such that the product of snowflaked Euclidean lines looks down on $(\mathbb{R}^N, d)$, but not vice versa.

  5. Analysis of stationary fluence by minimization of a functional in tension and velocity

    International Nuclear Information System (INIS)

    Loula, A.F.D.; Guerreiro, J.N.C.; Toledo, E.M.

    1989-11-01

    New mixed finite element formulations for plane elasticity problems are presented with no limitation in the choice of conforming finite element spaces. Adding least square residual form of the governing equations to the classical Galerkin formulation the original saddle point problem is transformed into a minimization problem. Stability analysis, error estimates and numerical results are presented, confirming the error estimates and the good performance of this new formulation. (author) [pt

  6. Problem of quality assurance during metal constructions welding via robotic technological complexes

    Science.gov (United States)

    Fominykh, D. S.; Rezchikov, A. F.; Kushnikov, V. A.; Ivashchenko, V. A.; Bogomolov, A. S.; Filimonyuk, L. Yu; Dolinina, O. N.; Kushnikov, O. V.; Shulga, T. E.; Tverdokhlebov, V. A.

    2018-05-01

    The problem of minimizing the probability of critical combinations of events that lead to a loss of welding quality in robotic process automation is examined. The problem is formulated, and models and algorithms for its solution are developed. The problem is solved by minimizing a criterion characterizing the losses caused by defective products. Solving the problem may enhance the quality and accuracy of the operations performed and reduce the losses caused by defective products.

  7. Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.

    Science.gov (United States)

    Shinzato, Takashi

    2015-01-01

    In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach.

  8. Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.

    Directory of Open Access Journals (Sweden)

    Takashi Shinzato

    Full Text Available In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach.

  9. Parallel-Batch Scheduling with Two Models of Deterioration to Minimize the Makespan

    Directory of Open Access Journals (Sweden)

    Cuixia Miao

    2014-01-01

    Full Text Available We consider bounded parallel-batch scheduling with two models of deterioration, in which the processing time is $p_j = a_j + \alpha t$ in the first model and $p_j = a + \alpha_j t$ in the second. The objective is to minimize the makespan. We present $O(n \log n)$-time algorithms for the single-machine problems, respectively, and we propose fully polynomial-time approximation schemes to solve the identical-parallel-machine problem and the uniform-parallel-machine problem, respectively.
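    The algorithms themselves are not given in the record. The following sketch only illustrates the first deterioration model, $p_j = a_j + \alpha t$, by computing the makespan of a given batch sequence on a single bounded parallel-batch machine (the job data are made up; this is not one of the paper's algorithms):

      def batch_makespan(batches, alpha):
          """Makespan of a sequence of parallel batches under linear deterioration.

          Each job j has a normal processing time a_j; when a batch starts at time t,
          every job in it takes a_j + alpha * t, and the batch occupies the machine
          for the longest of those times (illustration of the first model only).
          """
          t = 0.0
          for batch in batches:
              t += max(a + alpha * t for a in batch)
          return t

      # Two possible groupings of six jobs into batches of capacity 3.
      print(batch_makespan([[4.0, 3.0, 2.5], [2.0, 1.0, 0.5]], alpha=0.1))
      print(batch_makespan([[0.5, 1.0, 2.0], [2.5, 3.0, 4.0]], alpha=0.1))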

  10. Non-minimal Higgs inflation and frame dependence in cosmology

    International Nuclear Information System (INIS)

    Steinwachs, Christian F.; Kamenshchik, Alexander Yu.

    2013-01-01

    We investigate a very general class of cosmological models with scalar fields non-minimally coupled to gravity. A particular representative in this class is given by the non-minimal Higgs inflation model in which the Standard Model Higgs boson and the inflaton are described by one and the same scalar particle. While the predictions of the non-minimal Higgs inflation scenario come numerically remarkably close to the recently discovered mass of the Higgs boson, there remains a conceptual problem in this model that is associated with the choice of the cosmological frame. While the classical theory is independent of this choice, we find by an explicit calculation that already the first quantum corrections induce a frame dependence. We give a geometrical explanation of this frame dependence by embedding it into a more general field theoretical context. From this analysis, some conceptual points in the long-standing cosmological debate 'Jordan frame vs. Einstein frame' become more transparent and can, in principle, be resolved in a natural way.

  11. Bilevel formulation of a policy design problem considering multiple objectives and incomplete preferences

    Science.gov (United States)

    Hawthorne, Bryant; Panchal, Jitesh H.

    2014-07-01

    A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.

  12. Scheduling Non-Preemptible Jobs to Minimize Peak Demand

    Directory of Open Access Journals (Sweden)

    Sean Yaw

    2017-10-01

    Full Text Available This paper examines an important problem in smart grid energy scheduling; peaks in power demand are proportionally more expensive to generate and provision for. The issue is exacerbated in local microgrids that do not benefit from the aggregate smoothing experienced by large grids. Demand-side scheduling can reduce these peaks by taking advantage of the fact that there is often flexibility in job start times. We focus attention on the case where the jobs are non-preemptible, meaning once started, they run to completion. The associated optimization problem is called the peak demand minimization problem, and has been previously shown to be NP-hard. Our results include an optimal fixed-parameter tractable algorithm, a polynomial-time approximation algorithm, as well as an effective heuristic that can also be used in an online setting of the problem. Simulation results show that these methods can reduce peak demand by up to 50% versus on-demand scheduling for household power jobs.
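    The record names a fixed-parameter tractable algorithm, an approximation algorithm, and a heuristic but does not spell them out. A simple greedy placement rule in the same spirit (illustrative only, with hypothetical job data) is sketched below; each non-preemptible job is placed at the feasible start time that yields the smallest new peak:

      def greedy_schedule(jobs, horizon):
          """Greedy heuristic sketch (not one of the paper's algorithms).

          Each job is (power, duration, earliest_start, latest_start) and is
          non-preemptible; jobs are assumed to have at least one feasible start.
          """
          load = [0.0] * horizon
          starts = []
          for power, dur, earliest, latest in jobs:
              best_start, best_peak = None, float("inf")
              for s in range(earliest, latest + 1):
                  if s + dur > horizon:
                      continue
                  # peak if the job runs during [s, s + dur)
                  peak = max(max(load[s:s + dur]) + power, max(load))
                  if peak < best_peak:
                      best_start, best_peak = s, peak
              for t in range(best_start, best_start + dur):
                  load[t] += power
              starts.append(best_start)
          return starts, max(load)

      jobs = [(2.0, 3, 0, 5), (1.5, 2, 0, 6), (3.0, 2, 1, 4), (1.0, 4, 0, 4)]
      starts, peak = greedy_schedule(jobs, horizon=10)
      print("start times:", starts, "peak demand:", peak)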

  13. Heuristics for minimizing the maximum within-clusters distance

    Directory of Open Access Journals (Sweden)

    José Augusto Fioruci

    2012-12-01

    Full Text Available The clustering problem consists in finding patterns in a data set in order to divide it into clusters with high within-cluster similarity. This paper presents the study of a problem, here called the MMD problem, which aims at finding a clustering with a predefined number of clusters that minimizes the largest within-cluster distance (diameter) among all clusters. There are two main objectives in this paper: to propose heuristics for the MMD and to evaluate the suitability of the best proposed heuristic results according to the real classification of some data sets. Regarding the first objective, the results obtained in the experiments indicate a good performance of the best proposed heuristic, which outperformed the Complete Linkage algorithm (the most used method from the literature for this problem). Nevertheless, regarding the suitability of the results according to the real classification of the data sets, the proposed heuristic achieved better quality results than the C-Means algorithm, but worse than Complete Linkage.
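    The proposed heuristics are not described in the record, but the Complete Linkage baseline mentioned above can be reproduced with standard tools, reporting the MMD objective (largest within-cluster diameter) for a chosen number of clusters; the data here are synthetic:

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import pdist, squareform

      rng = np.random.default_rng(0)
      # three well-separated synthetic groups in the plane
      X = np.vstack([rng.normal(loc=c, scale=0.5, size=(30, 2)) for c in (0.0, 3.0, 6.0)])

      # Complete-linkage baseline: cut the dendrogram into 3 clusters.
      Z = linkage(X, method="complete")
      labels = fcluster(Z, t=3, criterion="maxclust")

      # MMD objective: largest within-cluster diameter.
      D = squareform(pdist(X))
      diameters = []
      for k in np.unique(labels):
          idx = np.where(labels == k)[0]
          diameters.append(D[np.ix_(idx, idx)].max())
      print("largest within-cluster diameter:", round(max(diameters), 3))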

  14. Design of power controller in CDMA system with power and SIR error minimization

    Institute of Scientific and Technical Information of China (English)

    Shulan KONG; Huanshui ZHANG; Zhaosheng ZHANG; Hongxia WANG

    2007-01-01

    In this paper, an uplink power control problem is considered for code division multiple access (CDMA) systems. A distributed algorithm is proposed based on linear quadratic optimal control theory. The proposed scheme minimizes the sum of the power and the error of signal-to-interference ratio (SIR). A power controller is designed by constructing an optimization problem of a stochastic linear quadratic type in Krein space and solving a Kalman filter problem.

  15. Statistical problems in medical research

    African Journals Online (AJOL)

    STORAGESEVER

    2008-12-29

    Dec 29, 2008 ... medical research, there are some common problems in using statistical methodology which may result ... optimal combination of diagnostic tests for osteoporosis ... randomization used include stratification and minimization ...

  16. The thermodynamic quantity minimized in steady heat and fluid flow processes: A control volume approach

    International Nuclear Information System (INIS)

    Sahin, Ahmet Z.

    2012-01-01

    Highlights: ► The optimality in both heat and fluid flow systems has been investigated. ► A new thermodynamic property has been introduced. ► The second law of thermodynamics was extended to present the temheat balance that included the temheat destruction. ► The principle of temheat destruction minimization was introduced. ► It is shown that the rate of total temheat destruction is minimized in steady heat conduction and fluid flow problems. - Abstract: Heat transfer and fluid flow processes exhibit similarities as they occur naturally and are governed by the same type of differential equations. Natural phenomena always occur in an optimal way. In this paper, the natural optimality that exists in heat transfer and fluid flow processes is investigated. In this regard, heat transfer and fluid flow problems are treated as optimization problems. We discovered a thermodynamic quantity that is optimized during steady heat transfer and fluid flow processes. Consequently, a new thermodynamic property, the so-called temheat, is introduced using the second law of thermodynamics and the definition of entropy. It is shown, through several examples, that overall temheat destruction is always minimized in steady heat and fluid flow processes. The principle of temheat destruction minimization, which is based on the temheat balance equation, provides better insight into how natural flow processes take place.

  17. Infinite periodic minimal surfaces and their crystallography in the hyperbolic plane

    International Nuclear Information System (INIS)

    Sadoc, J.F.; Charvolin, J.

    1989-01-01

    Infinite periodic minimal surfaces are now being introduced to describe some complex structures with large cells, formed by inorganic and organic materials, which can be considered as crystals of surfaces or films. Among them are the spectacular cubic crystalline structures built by amphiphilic molecules in the presence of water. The crystallographic properties of these surfaces are studied from an intrinsic point of view, using operations of groups of symmetry defined by displacements on their surface. This approach takes advantage of the relation existing between these groups and those characterizing the tilings of the hyperbolic plane. First, the general bases of the particular crystallography of the hyperbolic plane are presented. Then the translation subgroups of the hyperbolic plane are determined in one particular case, that of the tiling involved in the problem of cubic structures of liquid crystals. Finally, it is shown that the infinite periodic minimal surfaces used to describe these structures can be obtained from the hyperbolic plane when some translations are forced to identity. This is indeed formally analogous to the simple process of transformation of a Euclidean plane into a cylinder, when a translation of the plane is forced to identity by rolling the plane onto itself. Thus, this approach transforms the 3D problem of infinite periodic minimal surfaces into a 2D problem and, although the latter is to be treated in a non-Euclidean space, provides a relatively simple formalism for the investigation of infinite periodic surfaces in general and the study of the geometrical transformations relating them. (orig.)

  18. Surface Reconstruction and Image Enhancement via $L^1$-Minimization

    KAUST Repository

    Dobrev, Veselin

    2010-01-01

    A surface reconstruction technique based on minimization of the total variation of the gradient is introduced. Convergence of the method is established, and an interior-point algorithm solving the associated linear programming problem is introduced. The reconstruction algorithm is illustrated on various test cases including natural and urban terrain data, and enhancement of low-resolution or aliased images. Copyright © by SIAM.

  19. Minimizing Sum-MSE Implies Identical Downlink and Dual Uplink Power Allocations

    OpenAIRE

    Tenenbaum, Adam J.; Adve, Raviraj S.

    2009-01-01

    In the multiuser downlink, power allocation for linear precoders that minimize the sum of mean squared errors under a sum power constraint is a non-convex problem. Many existing algorithms solve an equivalent convex problem in the virtual uplink and apply a transformation based on uplink-downlink duality to find a downlink solution. In this letter, we analyze the optimality criteria for the power allocation subproblem in the virtual uplink, and demonstrate that the optimal solution leads to i...

  20. Optimal Control of a PEM Fuel Cell for the Inputs Minimization

    Directory of Open Access Journals (Sweden)

    José de Jesús Rubio

    2014-01-01

    Full Text Available The trajectory tracking problem of a proton exchange membrane (PEM) fuel cell is considered. To solve this problem, an optimal controller is proposed. The optimal control technique has the objective that the system states reach the desired trajectories while the inputs are minimized. The proposed controller uses the Hamilton-Jacobi-Bellman method, where its Riccati equation is considered as an adaptive function. The effectiveness of the proposed technique is verified by two simulations.
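    The fuel cell model and the adaptive Riccati formulation are not reproduced in the record. As generic background only, the same kind of objective (tracking desired trajectories while penalizing the inputs) is classically handled by solving an algebraic Riccati equation; the system below is a made-up two-state example, not the PEM model:

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # Hypothetical 2-state, 1-input linear system (not the PEM fuel cell model).
      A = np.array([[0.0, 1.0], [-2.0, -3.0]])
      B = np.array([[0.0], [1.0]])
      Q = np.diag([10.0, 1.0])     # weight on the state (tracking) error
      R = np.array([[0.5]])        # weight penalizing the control input

      P = solve_continuous_are(A, B, Q, R)
      K = np.linalg.solve(R, B.T @ P)         # optimal feedback gain, u = -K x
      print("LQR gain:", K)

      # Closed-loop simulation toward the origin (the "desired trajectory" here).
      x = np.array([1.0, 0.0])
      dt = 0.01
      for _ in range(500):
          u = -K @ x
          x = x + dt * (A @ x + B @ u)
      print("state after 5 s:", np.round(x, 4))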

  1. On minimizing the maximum broadcast decoding delay for instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.; Sorour, Sameh; Alouini, Mohamed-Slim; Ai-Naffouri, Tareq Y.

    2014-01-01

    In this paper, we consider the problem of minimizing the maximum broadcast decoding delay experienced by all the receivers of generalized instantly decodable network coding (IDNC). Unlike the sum decoding delay, the maximum decoding delay as a

  2. Solving large nonlinear generalized eigenvalue problems from Density Functional Theory calculations in parallel

    DEFF Research Database (Denmark)

    Bendtsen, Claus; Nielsen, Ole Holm; Hansen, Lars Bruno

    2001-01-01

    The quantum mechanical ground state of electrons is described by Density Functional Theory, which leads to large minimization problems. An efficient minimization method uses a self-consistent field (SCF) solution of large eigenvalue problems. The iterative Davidson algorithm is often used, and we...

  3. A bicriterion Steiner tree problem on graph

    Directory of Open Access Journals (Sweden)

    Vujošević Mirko B.

    2003-01-01

    Full Text Available This paper presents a formulation of the bicriterion Steiner tree problem, which is stated as a task of finding a Steiner tree with maximal capacity and minimal length. It is considered as a lexicographic multicriteria problem. This means that the bottleneck Steiner tree problem is solved first. After that, the next optimization problem is stated as a classical minimum Steiner tree problem under a constraint on the capacity of the tree. The paper also presents some computational experiments with the multicriteria problem.

  4. Periodical cicadas: A minimal automaton model

    Science.gov (United States)

    de O. Cardozo, Giovano; de A. M. M. Silvestre, Daniel; Colato, Alexandre

    2007-08-01

    The Magicicada spp. life cycles with its prime periods and highly synchronized emergence have defied reasonable scientific explanation since its discovery. During the last decade several models and explanations for this phenomenon appeared in the literature along with a great deal of discussion. Despite this considerable effort, there is no final conclusion about this long standing biological problem. Here, we construct a minimal automaton model without predation/parasitism which reproduces some of these aspects. Our results point towards competition between different strains with limited dispersal threshold as the main factor leading to the emergence of prime numbered life cycles.

  5. Technological Minimalism: A Cost-Effective Alternative for Course Design and Development.

    Science.gov (United States)

    Lorenzo, George

    2001-01-01

    Discusses the use of minimum levels of technology, or technological minimalism, for Web-based multimedia course content. Highlights include cost effectiveness; problems with video streaming, the use of XML for Web pages, and Flash and Java applets; listservs instead of proprietary software; and proper faculty training. (LRW)

  6. Numerical algorithms for contact problems in linear elastostatics

    International Nuclear Information System (INIS)

    Barbosa, H.J.C.; Feijoo, R.A.

    1984-01-01

    In this work contact problems in linear elasticity are analysed by means of Finite Elements and Mathematical Programming Techniques. The principle of virtual work leads in this case to a variational inequality which in turn is equivalent, for Hookean materials and infinitesimal strains, to the minimization of the total potential energy over the set of all admissible virtual displacements. The use of Gauss-Seidel algorithm with relaxation and projection and also Lemke's algorithm and Uzawa's algorithm for solving the minimization problem is discussed. Finally numerical examples are presented. (Author) [pt
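    As an illustration of the Gauss-Seidel algorithm with projection mentioned above, the sketch below solves a small bound-constrained quadratic program standing in for the discretized contact problem (the stiffness matrix, load and obstacle are made up):

      import numpy as np

      def projected_gauss_seidel(K, f, lower, iters=200, omega=1.0):
          """Minimize 0.5*u^T K u - f^T u subject to u >= lower.

          Sketch of the Gauss-Seidel-with-projection idea; K plays the role of a
          stiffness matrix and `lower` the non-penetration (obstacle) constraint.
          """
          u = np.maximum(np.zeros_like(f), lower)
          for _ in range(iters):
              for i in range(len(f)):
                  r = f[i] - K[i] @ u + K[i, i] * u[i]   # residual without u[i]
                  u[i] = max(lower[i], (1 - omega) * u[i] + omega * r / K[i, i])
          return u

      # Hypothetical 1-D "bar over an obstacle" example: tridiagonal stiffness matrix.
      n = 8
      K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      f = np.full(n, -0.05)                 # downward load
      lower = np.full(n, -0.1)              # obstacle below the bar
      print(np.round(projected_gauss_seidel(K, f, lower), 3))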

  7. Minimally extended SILH

    International Nuclear Information System (INIS)

    Chala, Mikael; Grojean, Christophe; Humboldt-Univ. Berlin; Lima, Leonardo de; Univ. Estadual Paulista, Sao Paulo

    2017-03-01

    Higgs boson compositeness is a phenomenologically viable scenario addressing the hierarchy problem. In minimal models, the Higgs boson is the only degree of freedom of the strong sector below the strong interaction scale. We present here the simplest extension of such a framework with an additional composite spin-zero singlet. To this end, we adopt an effective field theory approach and develop a set of rules to estimate the size of the various operator coefficients, relating them to the parameters of the strong sector and its structural features. As a result, we obtain the patterns of new interactions affecting both the new singlet and the Higgs boson's physics. We identify the characteristics of the singlet field which cause its effects on Higgs physics to dominate over the ones inherited from the composite nature of the Higgs boson. Our effective field theory construction is supported by comparisons with explicit UV models.

  8. Minimally extended SILH

    Energy Technology Data Exchange (ETDEWEB)

    Chala, Mikael [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Valencia Univ. (Spain). Dept. de Fisica Teorica y IFIC; Durieux, Gauthier; Matsedonskyi, Oleksii [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Grojean, Christophe [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Humboldt-Univ. Berlin (Germany). Inst. fuer Physik; Lima, Leonardo de [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Univ. Estadual Paulista, Sao Paulo (Brazil). Inst. de Fisica Teorica

    2017-03-15

    Higgs boson compositeness is a phenomenologically viable scenario addressing the hierarchy problem. In minimal models, the Higgs boson is the only degree of freedom of the strong sector below the strong interaction scale. We present here the simplest extension of such a framework with an additional composite spin-zero singlet. To this end, we adopt an effective field theory approach and develop a set of rules to estimate the size of the various operator coefficients, relating them to the parameters of the strong sector and its structural features. As a result, we obtain the patterns of new interactions affecting both the new singlet and the Higgs boson's physics. We identify the characteristics of the singlet field which cause its effects on Higgs physics to dominate over the ones inherited from the composite nature of the Higgs boson. Our effective field theory construction is supported by comparisons with explicit UV models.

  9. Overuse of helicopter transport in the minimally injured: A health care system problem that should be corrected.

    Science.gov (United States)

    Vercruysse, Gary A; Friese, Randall S; Khalil, Mazhar; Ibrahim-Zada, Irada; Zangbar, Bardiya; Hashmi, Ammar; Tang, Andrew; O'Keeffe, Terrence; Kulvatunyou, Narong; Green, Donald J; Gries, Lynn; Joseph, Bellal; Rhee, Peter M

    2015-03-01

    Mortality benefit has been demonstrated for trauma patients transported via helicopter but at great cost. This study identified patients who did not benefit from helicopter transport to our facility and demonstrates potential cost savings when transported instead by ground. We performed a 6-year (2007-2013) retrospective analysis of all trauma patients presenting to our center. Patients with a known mode of transfer were included in the study. Patients with missing data and those who were dead on arrival were excluded from the study. Patients were then dichotomized into helicopter transfer and ground transfer groups. A subanalysis was performed between minimally injured patients (ISS helicopter and 76.7% (3,992) were transferred via ground transport. Helicopter-transferred patients had longer hospital (p = 0.001) and intensive care unit (p = 0.001) stays. There was no difference in mortality between the groups (p = 0.6).On subanalysis of minimally injured patients there was no difference in hospital length of stay (p = 0.1) and early discharge (p = 0.6) between the helicopter transfer and ground transfer group. Average helicopter transfer cost at our center was $18,000, totaling $4,860,000 for 270 minimally injured helicopter-transferred patients. Nearly one third of patients transported by helicopter were minimally injured. Policies to identify patients who do not benefit from helicopter transport should be developed. Significant reduction in transport cost can be made by judicious selection of patients. Education to physicians calling for transport and identification of alternate means of transportation would be both safe and financially beneficial to our system. Epidemiologic study, level III. Therapeutic study, level IV.

  10. SOLUTION OF A MULTIVARIATE STRATIFIED SAMPLING PROBLEM THROUGH CHEBYSHEV GOAL PROGRAMMING

    Directory of Open Access Journals (Sweden)

    Mohd. Vaseem Ismail

    2010-12-01

    Full Text Available In this paper, we consider the problem of minimizing the variances for the various characters with a fixed (given) budget. Each convex objective function is first linearised at its minimal point where it meets the linear cost constraint. The resulting multiobjective linear programming problem is then solved by Chebyshev goal programming. A numerical example is given to illustrate the procedure.
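    The survey data are not included in the record. A generic Chebyshev goal programming step (minimizing the largest shortfall from two hypothetical goals under a budget constraint, with made-up coefficients) can be posed as a linear program:

      from scipy.optimize import linprog

      # Variables: [x1, x2, d], where d is the maximum shortfall to be minimized.
      # Goals (hypothetical): 3*x1 + 2*x2 >= 12 and x1 + 4*x2 >= 10, shortfalls <= d.
      # Budget (hypothetical): 2*x1 + 3*x2 <= 9.
      c = [0.0, 0.0, 1.0]                       # minimize d
      A_ub = [
          [-3.0, -2.0, -1.0],                   # 12 - (3*x1 + 2*x2) <= d
          [-1.0, -4.0, -1.0],                   # 10 - (x1 + 4*x2)   <= d
          [ 2.0,  3.0,  0.0],                   # budget constraint
      ]
      b_ub = [-12.0, -10.0, 9.0]
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
      print("solution:", res.x, "maximum shortfall:", res.fun)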

  11. Minimally invasive surgical treatment of malignant pleural effusions.

    Science.gov (United States)

    Ciuche, Adrian; Nistor, Claudiu; Pantile, Daniel; Prof Horvat, Teodor

    2011-10-01

    Usually the pleural cavity contains a small amount of liquid (approximately 10 ml). Pleural effusions appear when the liquid production rate exceeds the absorption rate, leaving a greater amount of liquid inside the pleural cavity. Between January 1998 and December 2008 we conducted a study in order to establish the appropriate surgical treatment for MPEs. Effective control of a recurrent malignant pleural effusion can greatly improve the quality of life of the cancer patient. The present review collects and examines the clinical results of minimally invasive techniques designed to treat this problem. Patients with MPEs were studied according to several criteria. In our study we observed the superiority of intraoperative talc poudrage, probably due to a more uniform distribution of talc particles over the pleural surface. Minimal pleurotomy with thoracic drainage and instillation of a talc suspension is also a safe and effective technique and should be employed when there are contraindications for the thoracoscopic minimally invasive procedure. On the basis of comparisons involving effectiveness, morbidity, and convenience, we recommend the thoracoscopic insufflation of talc as a fine powder with pleural drainage as the procedure of choice.

  12. Addressing the strong CP problem with quark mass ratios

    Energy Technology Data Exchange (ETDEWEB)

    Diaz-Cruz, J.L.; Saldana-Salazar, U.J. [Benemerita Univ. Autonoma de Puebla (Mexico). Facultad de Ciencias Fisico-Matematicas; Hollik, W.G. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2016-05-15

    The strong CP problem is one of many puzzles in the theoretical description of elementary particles physics that still lacks an explanation. Solutions to that problem usually comprise new symmetries or fields or both. The main problem seems to be how to achieve small CP in the strong interactions despite large CP violation in weak interactions. Observation of CP violation is exclusively through the Higgs-Yukawa interactions. In this letter, we show that with minimal assumptions on the structure of mass (Yukawa) matrices the strong CP problem does not exist in the Standard Model and no extension to solve this is needed. However, to solve the flavor puzzle, models based on minimal SU(3) flavor groups leading to the proposed flavor matrices are favored.

  13. Addressing the strong CP problem with quark mass ratios

    International Nuclear Information System (INIS)

    Diaz-Cruz, J.L.; Saldana-Salazar, U.J.

    2016-05-01

    The strong CP problem is one of many puzzles in the theoretical description of elementary particles physics that still lacks an explanation. Solutions to that problem usually comprise new symmetries or fields or both. The main problem seems to be how to achieve small CP in the strong interactions despite large CP violation in weak interactions. Observation of CP violation is exclusively through the Higgs-Yukawa interactions. In this letter, we show that with minimal assumptions on the structure of mass (Yukawa) matrices the strong CP problem does not exist in the Standard Model and no extension to solve this is needed. However, to solve the flavor puzzle, models based on minimal SU(3) flavor groups leading to the proposed flavor matrices are favored.

  14. An Equivalent Emission Minimization Strategy for Causal Optimal Control of Diesel Engines

    Directory of Open Access Journals (Sweden)

    Stephan Zentner

    2014-02-01

    Full Text Available One of the main challenges during the development of operating strategies for modern diesel engines is the reduction of the CO2 emissions, while complying with ever more stringent limits for the pollutant emissions. The inherent trade-off between the emissions of CO2 and pollutants renders a simultaneous reduction difficult. Therefore, an optimal operating strategy is sought that yields minimal CO2 emissions, while holding the cumulative pollutant emissions at the allowed level. Such an operating strategy can be obtained offline by solving a constrained optimal control problem. However, the final-value constraint on the cumulated pollutant emissions prevents this approach from being adopted for causal control. This paper proposes a framework for causal optimal control of diesel engines. The optimization problem can be solved online when the constrained minimization of the CO2 emissions is reformulated as an unconstrained minimization of the CO2 emissions and the weighted pollutant emissions (i.e., equivalent emissions). However, the weighting factors are not known a priori. A method for the online calculation of these weighting factors is proposed. It is based on the Hamilton–Jacobi–Bellman (HJB) equation and a physically motivated approximation of the optimal cost-to-go. A case study shows that the causal control strategy defined by the online calculation of the equivalence factor and the minimization of the equivalent emissions is only slightly inferior to the non-causal offline optimization, while being applicable to online control.
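
    A rough, hedged sketch of the equivalent-emissions idea (not the paper's HJB-based controller): at each step the operating point with minimal CO2 rate plus weighted pollutant rate is chosen, and here the weight is adapted by a simple integral feedback on the cumulative pollutant budget; the candidate operating points and all numbers are invented for illustration.

      # Illustrative sketch only: pick the operating point minimizing
      # "equivalent emissions" = CO2 rate + w * NOx rate, and adapt the weight w
      # with integral feedback on the cumulative NOx budget. The paper instead
      # derives w from the HJB equation; candidates and numbers below are made up.
      candidates = [
          {"co2": 100.0, "nox": 0.80},   # low CO2, high NOx
          {"co2": 110.0, "nox": 0.40},
          {"co2": 125.0, "nox": 0.15},   # high CO2, low NOx
      ]

      def run(horizon_steps, nox_budget, w0=50.0, gain=5.0):
          w, cum_nox, cum_co2 = w0, 0.0, 0.0
          for k in range(horizon_steps):
              # candidate with minimal equivalent emissions at the current weight
              best = min(candidates, key=lambda c: c["co2"] + w * c["nox"])
              cum_co2 += best["co2"]
              cum_nox += best["nox"]
              # raise w if cumulative NOx runs ahead of the allowed trajectory
              allowed_so_far = nox_budget * (k + 1) / horizon_steps
              w = max(0.0, w + gain * (cum_nox - allowed_so_far))
          return cum_co2, cum_nox, w

      print(run(horizon_steps=1000, nox_budget=300.0))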

  15. Solving the rectangular assignment problem and applications

    NARCIS (Netherlands)

    Bijsterbosch, J.; Volgenant, A.

    2010-01-01

    The rectangular assignment problem is a generalization of the linear assignment problem (LAP): one wants to assign a number of persons to a smaller number of jobs, minimizing the total corresponding costs. Applications are, e.g., in the fields of object recognition and scheduling. Further, we show

  16. Ant Foraging Behavior for Job Shop Problem

    Directory of Open Access Journals (Sweden)

    Mahad Diyana Abdul

    2016-01-01

    Full Text Available Ant Colony Optimization (ACO) is an algorithmic approach inspired by the foraging behavior of real ants. It has frequently been applied to many optimization problems, one of which is the job shop problem (JSP). The JSP consists of a finite set of jobs processed on a finite set of machines, where once a job initiates processing on a given machine it must complete that processing uninterrupted. In job shop scheduling, performance is measured by the amount of time required to complete the jobs, known as the makespan, and minimizing the makespan is the main objective of this study. In this paper, we developed an ACO algorithm to minimize the makespan. A real set of problems from a metal company in Johor Bahru was used, covering 20 parts with jobs involving the processes of clinching, tapping and power press, respectively. The results of this study show that the proposed ACO heuristic managed to produce a good result in a short time.
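
    For concreteness, a minimal sketch of the objective such an ACO would minimize: evaluating the makespan of a job-shop schedule encoded as a sequence of job indices (each occurrence dispatches that job's next operation). The jobs and durations below are hypothetical, not the company data from the paper.

      # Makespan evaluation for a job-shop schedule; jobs[j] is a list of
      # (machine, duration) operations in technological order.
      def makespan(jobs, sequence):
          next_op = [0] * len(jobs)          # next operation index per job
          job_ready = [0.0] * len(jobs)      # time each job becomes free
          mach_ready = {}                    # time each machine becomes free
          for j in sequence:
              machine, duration = jobs[j][next_op[j]]
              start = max(job_ready[j], mach_ready.get(machine, 0.0))
              finish = start + duration
              job_ready[j] = finish
              mach_ready[machine] = finish
              next_op[j] += 1
          return max(job_ready)

      jobs = [
          [("clinch", 3), ("tap", 2), ("press", 2)],   # job 0 (hypothetical)
          [("tap", 2), ("press", 4), ("clinch", 1)],   # job 1 (hypothetical)
      ]
      # one feasible dispatch order: each job index appears once per operation
      print(makespan(jobs, [0, 1, 0, 1, 0, 1]))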

  17. Minimally invasive orthognathic surgery.

    Science.gov (United States)

    Resnick, Cory M; Kaban, Leonard B; Troulis, Maria J

    2009-02-01

    Minimally invasive surgery is defined as the discipline in which operative procedures are performed in novel ways to diminish the sequelae of standard surgical dissections. The goals of minimally invasive surgery are to reduce tissue trauma and to minimize bleeding, edema, and injury, thereby improving the rate and quality of healing. In orthognathic surgery, there are two minimally invasive techniques that can be used separately or in combination: (1) endoscopic exposure and (2) distraction osteogenesis. This article describes the historical developments of the fields of orthognathic surgery and minimally invasive surgery, as well as the integration of the two disciplines. Indications, techniques, and the most current outcome data for specific minimally invasive orthognathic surgical procedures are presented.

  18. Optimization of Control Self Assessment Application to Minimize Fraud

    Directory of Open Access Journals (Sweden)

    Wendy Endrianto

    2016-05-01

    Full Text Available This article discusses a method that a company can use to minimize fraud by applying Control Self Assessment (CSA). The study was conducted by reviewing the literature on the topic, presented descriptively and systematically, working step by step from the initial problem toward its solution. It can be concluded that CSA is a form of auditing practice that emphasizes anticipatory (preventive) action over detection (detective) action, a concept of modern internal auditing that is more precise in application. It is among the most efficient and effective alternatives for reducing fraud.

  19. Non-minimal Higgs inflation and frame dependence in cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Steinwachs, Christian F. [School of Mathematical Sciences, University of Nottingham University Park, Nottingham, NG7 2RD (United Kingdom); Kamenshchik, Alexander Yu. [Dipartimento di Fisica e Astronomia and INFN, Via Irnerio 46, 40126 Bologna, Italy and L.D. Landau Institute for Theoretical Physics of the Russian Academy of Sciences, Kosygin str. 2, 119334 Moscow (Russian Federation)

    2013-02-21

    We investigate a very general class of cosmological models with scalar fields non-minimally coupled to gravity. A particular representative in this class is given by the non-minimal Higgs inflation model, in which the Standard Model Higgs boson and the inflaton are described by one and the same scalar particle. While the predictions of the non-minimal Higgs inflation scenario come numerically remarkably close to the recently discovered mass of the Higgs boson, there remains a conceptual problem in this model that is associated with the choice of the cosmological frame. While the classical theory is independent of this choice, we find by an explicit calculation that already the first quantum corrections induce a frame dependence. We give a geometrical explanation of this frame dependence by embedding it into a more general field theoretical context. From this analysis, some conceptual points in the long-standing cosmological debate 'Jordan frame vs. Einstein frame' become more transparent and can, in principle, be resolved in a natural way.

  20. Time Dependent Heterogeneous Vehicle Routing Problem for Catering Service Delivery Problem

    Science.gov (United States)

    Azis, Zainal; Mawengkang, Herman

    2017-09-01

    The heterogeneous vehicle routing problem (HVRP) is a variant of the vehicle routing problem (VRP) in which various types of vehicles with different capacities serve a set of customers with known geographical locations. This paper considers the optimal delivery of meals by a catering company located in Medan City, Indonesia. Due to road conditions as well as traffic, it is necessary for the company to use different types of vehicles to fulfill customer demand on time. The HVRP incorporates the dependence of travel times on the particular time of day. The objective is to minimize the sum of the costs of travelling and elapsed time over the planning horizon. The problem can be modeled as a linear mixed integer program, and we address a feasible neighbourhood search approach to solve the problem.

  1. Stability of the Minimizers of Least Squares with a Non-Convex Regularization. Part I: Local Behavior

    International Nuclear Information System (INIS)

    Durand, S.; Nikolova, M.

    2006-01-01

    Many estimation problems amount to minimizing a piecewise C^m objective function, with m ≥ 2, composed of a quadratic data-fidelity term and a general regularization term. It is widely accepted that the minimizers obtained using non-convex and possibly non-smooth regularization terms are frequently good estimates. However, few facts are known on the ways to control properties of these minimizers. This work is dedicated to the stability of the minimizers of such objective functions with respect to variations of the data. It consists of two parts: first we consider all local minimizers, whereas in a second part we derive results on global minimizers. In this part we focus on data points such that every local minimizer is isolated and results from a C^(m-1) local minimizer function, defined on some neighborhood. We demonstrate that all data points for which this fails form a set whose closure is negligible.

  2. Dynamical relaxation of the CP phases in next-to-minimal supersymmetry

    International Nuclear Information System (INIS)

    Demir, D.A.

    1999-11-01

    After promoting the phases of the soft masses to dynamical fields corresponding to Goldstone bosons of spontaneously broken global symmetries in the supersymmetry breaking sector, the next-to-minimal supersymmetric model is found to solve the μ problem and the strong CP problem simultaneously with an invisible axion. The domain wall problem persists in the form of axionic domain formation. The relaxation dynamics of the physical CP-violating phases is determined only by the short-distance physics, and their relaxation values are not necessarily close to the CP-conserving points. Consequently, the solution of the supersymmetric CP problem may require heavy enough superpartners and nonminimal flavor structures, where the latter may also be relevant for avoiding the formation of axionic domain walls. (author)

  3. One Improvement Method of Reducing Duration Directly to Solve Time-Cost Tradeoff Problem

    Science.gov (United States)

    Jian-xun, Qi; Dedong, Sun

    Time and cost are two of the most important factors in project planning and schedule management. In particular, the time-cost tradeoff problem is a classical and difficult problem in project scheduling. Methods for solving it mainly include the network-flow method and the method of mending the minimal cost. The latter is intuitive, convenient and computationally cheap, which makes it widely used in practice; its disadvantage is that although the result of each step is optimal, the final result may not be. In this paper, a method for determining the maximal effective amount of duration reduction is designed first; then, on the basis of this method and the method of mending the minimal cost, a method of reducing duration directly is designed to solve the time-cost tradeoff problem. Analysis of the method's validity shows that it can obtain a better result for the problem.

  4. JIT-transportation problem and its algorithm

    Science.gov (United States)

    Bai, Guozhong; Gan, Xiao-Xiong

    2011-12-01

    This article introduces the just-in-time (JIT) transportation problem, which requires that all demanded goods be shipped to their destinations on schedule, at zero or minimal destination-storage cost. The JIT-transportation problem is a special goal programming problem with discrete constraints. This article provides a mathematical model for such a transportation problem and introduces the JIT solution, the deviation solution, the JIT deviation, etc. By introducing the B(λ)-problem, this article establishes the equivalence between the optimal solutions of the B(λ)-problem and the optimal solutions of the JIT-transportation problem, and then provides an algorithm for JIT-transportation problems. This algorithm is proven mathematically and is also illustrated by an example.

  5. Analysis of labor employment assessment on production machine to minimize time production

    Science.gov (United States)

    Hernawati, Tri; Suliawati; Sari Gumay, Vita

    2018-03-01

    Every company, whether in services or manufacturing, is always trying to improve the efficiency of its resource use. One resource that has an important role is labor. Workers have different efficiency levels for different jobs. Problems related to the optimal allocation of labor with different levels of efficiency for different jobs are called assignment problems, a special case of linear programming. In this research, the analysis of labor assignment on production machines to minimize production time at PT PDM is carried out using the Hungarian algorithm. The aim of the research is to obtain the optimal assignment of labor to production machines so as to minimize production time. The results show that the existing labor assignment is not suitable because its completion time is longer than that of the assignment obtained with the Hungarian algorithm; applying the Hungarian algorithm yields time savings of 16%.
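
    A minimal sketch of the assignment step described above, using SciPy's linear_sum_assignment routine (a Hungarian-type algorithm); the time matrix is hypothetical, not the PT PDM data from the paper.

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      # times[i][j] = production time if worker i is assigned to machine j (made-up values)
      times = np.array([
          [12.0,  9.0, 14.0],
          [10.0, 11.0, 13.0],
          [ 8.0, 12.0, 10.0],
      ])

      rows, cols = linear_sum_assignment(times)       # minimizes total time
      for worker, machine in zip(rows, cols):
          print(f"worker {worker} -> machine {machine} ({times[worker, machine]} h)")
      print("total time:", times[rows, cols].sum())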

  6. On the choice of minimization parameters using 4 momentum conservation law for particle momenta improvement

    International Nuclear Information System (INIS)

    Anykeyev, V.B.; Zhigunov, V.P.; Spiridonov, A.A.

    1981-01-01

    A special choice of parameters for minimization is proposed for the problem of improving estimates of particle momenta at the event vertex using the 4-momentum conservation law. This choice permits the use of any unconstrained minimization method instead of the method of Lagrange multipliers. The method is applied to the analysis of data on the reaction K⁻ + p → n + anti-K⁰ + π⁰.

  7. Sparsity regularization for parameter identification problems

    International Nuclear Information System (INIS)

    Jin, Bangti; Maass, Peter

    2012-01-01

    The investigation of regularization schemes with sparsity promoting penalty terms has been one of the dominant topics in the field of inverse problems over the last years, and Tikhonov functionals with ℓ_p-penalty terms for 1 ≤ p ≤ 2 have been studied extensively. The first investigations focused on regularization properties of the minimizers of such functionals with linear operators and on iteration schemes for approximating the minimizers. These results were quickly transferred to nonlinear operator equations, including nonsmooth operators and more general function space settings. The latest results on regularization properties additionally assume a sparse representation of the true solution as well as generalized source conditions, which yield some surprising and optimal convergence rates. The regularization theory with ℓ_p sparsity constraints is relatively complete in this setting; see the first part of this review. In contrast, the development of efficient numerical schemes for approximating minimizers of Tikhonov functionals with sparsity constraints for nonlinear operators is still ongoing. The basic iterated soft shrinkage approach has been extended in several directions and semi-smooth Newton methods are becoming applicable in this field. In particular, the extension to more general non-convex, non-differentiable functionals by variational principles leads to a variety of generalized iteration schemes. We focus on such iteration schemes in the second part of this review. A major part of this survey is devoted to applying sparsity constrained regularization techniques to parameter identification problems for partial differential equations, which we regard as the prototypical setting for nonlinear inverse problems. Parameter identification problems exhibit different levels of complexity and we aim at characterizing a hierarchy of such problems. The operator defining these inverse problems is the parameter-to-state mapping. We first summarize some
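
    For the linear case, the basic iterated soft-shrinkage scheme mentioned above is easy to state; here is a compact sketch for the ℓ1-penalized Tikhonov functional min_x (1/2)||Ax - y||² + α||x||₁ with a matrix A, on randomly generated data used purely for illustration.

      import numpy as np

      def soft_shrink(v, threshold):
          # component-wise soft-thresholding (proximal map of the l1 penalty)
          return np.sign(v) * np.maximum(np.abs(v) - threshold, 0.0)

      def ista(A, y, alpha, iterations=500):
          step = 1.0 / np.linalg.norm(A, 2) ** 2      # step size <= 1/||A||^2
          x = np.zeros(A.shape[1])
          for _ in range(iterations):
              grad = A.T @ (A @ x - y)                # gradient of (1/2)||Ax - y||^2
              x = soft_shrink(x - step * grad, step * alpha)
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((60, 200))
      x_true = np.zeros(200); x_true[[5, 50, 120]] = [1.0, -2.0, 0.5]
      y = A @ x_true + 0.01 * rng.standard_normal(60)
      print(np.nonzero(np.abs(ista(A, y, alpha=0.1)) > 1e-3)[0])   # recovered support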

  8. Minimal Poems Written in 1979

    Directory of Open Access Journals (Sweden)

    Sandra Sirangelo Maggio

    2008-04-01

    Full Text Available The reading of M. van der Slice's Minimal Poems Written in 1979 (the work, actually, has no title) reminded me of a book I have seen a long time ago, called Truth, which had not even a single word printed inside. In either case we have a sample of how often eccentricities can prove efficient means of artistic creativity, in this new literary trend known as Minimalism.

  9. Minimization of the root of a quadratic functional under a system of affine equality constraints with application to portfolio management

    Science.gov (United States)

    Landsman, Zinoviy

    2008-10-01

    We present an explicit closed-form solution of the problem of minimizing the root of a quadratic functional subject to a system of affine constraints. The result generalizes Z. Landsman, Minimization of the root of a quadratic functional under an affine equality constraint, J. Comput. Appl. Math. 2007, to appear (see sciencedirect.com/science/journal/03770427, articles in press), where the optimization problem was solved under only one linear constraint. This is of interest for solving significant problems pertaining to financial economics as well as some classes of feasibility and optimization problems which frequently occur in tomography and other fields. The results are illustrated in the problem of optimal portfolio selection, and the particular case when the expected return of the portfolio is certain is discussed.

  10. Minimizing the Fluid Used to Induce Fracturing

    Science.gov (United States)

    Boyle, E. J.

    2015-12-01

    The less fluid injected to induce fracturing, the less fluid needs to be produced before gas is produced. One method is to inject as fast as possible until the desired fracture length is obtained. Presented is an alternative injection strategy derived by applying optimal system control theory to the macroscopic mass balance. The picture is that the fracture is constant in aperture, fluid is injected at a controlled rate at the near end, and the fracture unzips at the far end until the desired length is obtained. The velocity of the fluid is governed by Darcy's law, with larger permeability for flow along the fracture length. Fracture growth is monitored through micro-seismicity. Since the fluid is assumed to be incompressible, the rate at which fluid is injected is balanced by the rate of fracture growth and the rate of loss to the bounding rock. Minimizing injected fluid loss to the bounding rock is the same as minimizing total injected fluid. How to change the injection rate so as to minimize the total injected fluid is a problem in optimal control. For a given total length, the variation of the injection rate is determined by variations in the overall time needed to obtain the desired fracture length, the length at any time, and the rate at which the fracture is growing at that time. Optimal control theory leads to a boundary condition and an ordinary differential equation in time whose solution is an injection protocol that minimizes the fluid used under the stated assumptions. The method is to monitor the rate at which the square of the fracture length is growing and adjust the injection rate proportionately.

  11. On the Solution of the Eigenvalue Assignment Problem for Discrete-Time Systems

    Directory of Open Access Journals (Sweden)

    El-Sayed M. E. Mostafa

    2017-01-01

    Full Text Available The output feedback eigenvalue assignment problem for discrete-time systems is considered. The problem is formulated first as an unconstrained minimization problem, where a three-term nonlinear conjugate gradient method is proposed to find a local solution. In addition, a cut to the objective function is included, yielding an inequality constrained minimization problem, where a logarithmic barrier method is proposed for finding the local solution. The conjugate gradient method is further extended to tackle the eigenvalue assignment problem for the two cases of decentralized control systems and control systems with time delay. The performance of the methods is illustrated through various test examples.

  12. Modified Approach for Optimization of Real Life Transportation Problem in Neutrosophic Environment

    Directory of Open Access Journals (Sweden)

    Akanksha Singh

    2017-01-01

    Full Text Available To the best of our knowledge, there is only one approach for solving neutrosophic cost minimization transportation problems. Since neutrosophic transportation problems are a new area of research, other researchers may be attracted to extend this approach for solving other types of neutrosophic transportation problems like neutrosophic solid transportation problems, neutrosophic time minimization transportation problems, neutrosophic transshipment problems, and so on. However, after a deep study of the existing approach, it is noticed that a mathematically incorrect assumption has been used in it; therefore there is a need to modify the existing approach. Keeping the same in mind, in this paper, the existing approach is modified. Furthermore, the exact results of some existing transportation problems are obtained by the modified approach.

  13. Correlates of minimal dating.

    Science.gov (United States)

    Leck, Kira

    2006-10-01

    Researchers have associated minimal dating with numerous factors. The present author tested shyness, introversion, physical attractiveness, performance evaluation, anxiety, social skill, social self-esteem, and loneliness to determine the nature of their relationships with 2 measures of self-reported minimal dating in a sample of 175 college students. For women, shyness, introversion, physical attractiveness, self-rated anxiety, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. For men, physical attractiveness, observer-rated social skill, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. The patterns of relationships were not identical for the 2 indicators of minimal dating, indicating the possibility that minimal dating is not a single construct as researchers previously believed. The present author discussed implications and suggestions for future researchers.

  14. A Systematic Procedure for the Generation of Cost-Minimized Designs

    DEFF Research Database (Denmark)

    Becker, Peter W.; Jarkler, Bjorn

    1972-01-01

    We present a procedure for the generation of cost-minimized designs of circuits and systems. Suppose a designer has decided upon the topology of his product. Also suppose he knows the cost and quality of the different grades of the N components required to implement the product. The designer then faces the following problem: How should he proceed to find the combination of grades that will give him the desired manufacturing yield at minimum product cost? We discuss the problem and suggest a policy by which the designer, with a reasonable computational effort, can find a set of "good ...

  15. Waste minimization successes at McGuire Nuclear Station

    International Nuclear Information System (INIS)

    Correll, J.C.; Johnson, G.T.

    1995-01-01

    McGuire Nuclear Station is a two unit, 1125 MWe PWR located 25 miles north of Charlotte, North Carolina. It is a Westinghouse Ice Condenser plant that is owned and operated by Duke Power Company. At Duke Power, "Culture Change" is a common term that we have used to describe the incredible transformation that we are making to become a cost conscious, customer driven, highly competitive business. Nowhere has this change been more evident than in the way we process and dispose of our solid radioactive waste. With top-down management support, we have used team-based, formalized problem-solving methods and have implemented many successful waste minimization programs. Through these programs, we have dramatically increased employees' awareness of the importance of waste minimization. As a result, we have been able to reduce both our burial volumes and our waste processing and disposal costs.

  16. Prospects and problems of dense oxygen permeable membranes

    DEFF Research Database (Denmark)

    Hendriksen, P.V.; Larsen, P.H.; Mogensen, Mogens Bjerg

    2000-01-01

    The prospects of using mixed ionic/electronic conducting ceramics for syngas production in a catalytic membrane reactor are analysed. Problems relating to the limited thermodynamic stability and poor dimensional stability of candidate materials are addressed. The consequences for these problems of flux-improving measures, such as minimization of membrane thickness and minimization of the losses due to oxygen exchange over the membrane surfaces, are discussed. The analysis is conducted on two candidate materials: La0.6Sr0.4Co0.2Fe0.8O3-delta and SrFeCo0.5Ox. Finally, experimental investigations...

  17. Minimization for conditional simulation: Relationship to optimal transport

    Science.gov (United States)

    Oliver, Dean S.

    2014-05-01

    In this paper, we consider the problem of generating independent samples from a conditional distribution when independent samples from the prior distribution are available. Although there are exact methods for sampling from the posterior (e.g. Markov chain Monte Carlo or acceptance/rejection), these methods tend to be computationally demanding when evaluation of the likelihood function is expensive, as it is for most geoscience applications. As an alternative, in this paper we discuss deterministic mappings of variables distributed according to the prior to variables distributed according to the posterior. Although any deterministic mappings might be equally useful, we will focus our discussion on a class of algorithms that obtain implicit mappings by minimization of a cost function that includes measures of data mismatch and model variable mismatch. Algorithms of this type include quasi-linear estimation, randomized maximum likelihood, perturbed observation ensemble Kalman filter, and ensemble of perturbed analyses (4D-Var). When the prior pdf is Gaussian and the observation operators are linear, we show that these minimization-based simulation methods solve an optimal transport problem with a nonstandard cost function. When the observation operators are nonlinear, however, the mapping of variables from the prior to the posterior obtained from those methods is only approximate. Errors arise from neglect of the Jacobian determinant of the transformation and from the possibility of discontinuous mappings.
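
    As a concrete illustration of one of the minimization-based samplers listed above, the following is a hedged sketch of a perturbed-observation ensemble update for a linear observation operator and Gaussian prior; the dimensions, operator and noise levels are arbitrary toy choices, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      n, m, ne = 10, 4, 200                 # state dim, obs dim, ensemble size
      H = rng.standard_normal((m, n))       # linear observation operator
      R = 0.25 * np.eye(m)                  # observation error covariance
      x_true = rng.standard_normal(n)
      d = H @ x_true + rng.multivariate_normal(np.zeros(m), R)

      X = rng.standard_normal((n, ne))      # prior ensemble (standard normal prior)
      Xm = X - X.mean(axis=1, keepdims=True)
      C = Xm @ Xm.T / (ne - 1)              # ensemble covariance of the prior
      K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)          # Kalman gain

      # each member sees its own perturbed copy of the data
      D = d[:, None] + rng.multivariate_normal(np.zeros(m), R, size=ne).T
      X_post = X + K @ (D - H @ X)          # approximate posterior ensemble
      print(X_post.mean(axis=1))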

  18. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    Science.gov (United States)

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. The method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems, and applications of the method are also included. (HM)
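
    A small sketch of the same idea in modern form (not the program described in the article): equilibrium of a toy ideal-gas isomerization A <-> B found by minimizing the total Gibbs free energy subject to a material balance; the standard free energies are made-up numbers.

      import numpy as np
      from scipy.optimize import minimize

      R, T = 8.314, 298.15
      g = np.array([0.0, -2000.0])          # standard molar Gibbs energies of A, B (J/mol)

      def gibbs(n):
          n = np.maximum(n, 1e-12)          # keep logarithms defined
          return float(np.sum(n * (g + R * T * np.log(n / n.sum()))))

      res = minimize(gibbs, x0=[0.5, 0.5], method="SLSQP",
                     bounds=[(1e-9, 1.0)] * 2,
                     constraints=[{"type": "eq", "fun": lambda n: n.sum() - 1.0}])
      print(res.x)                                          # equilibrium mole numbers
      print(np.exp(-(g[1] - g[0]) / (R * T)))               # analytic check: n_B/n_A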

  19. Time and timing in vehicle routing problems

    NARCIS (Netherlands)

    Jabali, O.

    2010-01-01

    The distribution of goods to a set of geographically dispersed customers is a common problem faced by carrier companies, well-known as the Vehicle Routing Problem (VRP). The VRP consists of finding an optimal set of routes that minimizes total travel times for a given number of vehicles with a fixed

  20. A new methodology for minimizing investment in the development of offshore fields

    International Nuclear Information System (INIS)

    Garcia-Diaz, J.C.; Startzman, R.; Hogg, G.L.

    1996-01-01

    The development of an offshore field is often a long, complex, and extremely expensive undertaking. The enormous amount of capital required for making investments of this type motivates one to try to optimize the development of a field. This paper provides an efficient computational method to minimize the initial investment in the development of a field. The problem of minimizing the investment in an offshore field is defined here as the problem of locating a number of offshore facilities and wells and allocating these wells to the facilities at minimum cost. Side constraints include restrictions on the total number of facilities of every type and design and various technology constraints. This problem is modeled as a 0/1 integer program. The solution method is based on an implicit enumeration scheme using efficient mathematical tools, such as Lagrangian relaxation and heuristics, to calculate good bounds and, consequently, to reduce the computation time. The solution method was implemented and tested on some typical field-development cases. Execution times were remarkably small for the size and complexity of the examples. Computational results indicate that the new methodology outperforms existing methods both in execution time and in memory required
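
    To make the 0/1 formulation concrete, here is a hedged toy model in the same spirit (choose which platforms to build and allocate wells to them at minimum cost); all names, costs and capacities are hypothetical, and an off-the-shelf MIP solver (PuLP/CBC) is used instead of the paper's implicit enumeration with Lagrangian-relaxation bounds.

      from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

      platforms = {"P1": 900.0, "P2": 1100.0}                 # fixed building costs
      wells = ["W1", "W2", "W3", "W4"]
      tie_in = {("W1", "P1"): 50, ("W1", "P2"): 90, ("W2", "P1"): 60, ("W2", "P2"): 40,
                ("W3", "P1"): 80, ("W3", "P2"): 30, ("W4", "P1"): 70, ("W4", "P2"): 65}
      capacity = {"P1": 3, "P2": 3}                           # max wells per platform

      model = LpProblem("offshore_development", LpMinimize)
      build = {p: LpVariable(f"build_{p}", cat=LpBinary) for p in platforms}
      assign = {(w, p): LpVariable(f"assign_{w}_{p}", cat=LpBinary)
                for w in wells for p in platforms}

      # objective: building costs plus well tie-in costs
      model += lpSum(platforms[p] * build[p] for p in platforms) + \
               lpSum(tie_in[w, p] * assign[w, p] for w in wells for p in platforms)
      for w in wells:                                          # every well assigned once
          model += lpSum(assign[w, p] for p in platforms) == 1
      for p in platforms:                                      # capacity and linking
          model += lpSum(assign[w, p] for w in wells) <= capacity[p] * build[p]

      model.solve()
      print([p for p in platforms if value(build[p]) > 0.5])
      print([(w, p) for (w, p) in assign if value(assign[w, p]) > 0.5])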

  1. A branch-and-cut-and-price algorithm for the cumulative capacitated vehicle routing problem

    DEFF Research Database (Denmark)

    Wøhlk, Sanne; Lysgaard, Jens

    2014-01-01

    The paper considers the Cumulative Capacitated Vehicle Routing Problem (CCVRP), which is a variation of the well-known Capacitated Vehicle Routing Problem (CVRP). In this problem, the traditional objective of minimizing total distance or time traveled by the vehicles is replaced by minimizing the sum of arrival times at the customers. A branch-and-cut-and-price algorithm for obtaining optimal solutions to the problem is proposed. Computational results based on a set of standard CVRP benchmarks are presented.

  2. Minimal Super Technicolor

    DEFF Research Database (Denmark)

    Antola, M.; Di Chiara, S.; Sannino, F.

    2011-01-01

    We introduce novel extensions of the Standard Model featuring a supersymmetric technicolor sector (supertechnicolor). As the first minimal conformal supertechnicolor model we consider N=4 Super Yang-Mills which breaks to N=1 via the electroweak interactions. This is a well defined, economical ... between unparticle physics and Minimal Walking Technicolor. We consider also other N=1 extensions of the Minimal Walking Technicolor model. The new models allow all the Standard Model matter fields to acquire a mass.

  3. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    Science.gov (United States)

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way of enforcing the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of an augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through fast Fourier transform are derived using the proximal point method to reduce the cost of inner subproblems. Results on simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.

  4. Minimizing Lid Overstows in Master Stowage Plans for Container Vessels is NP-Complete

    DEFF Research Database (Denmark)

    Ajspur, Mai Lise; Jensen, Rune Møller; Guilbert, Nicolas

    Container vessel stowage is a particularly hard combinatorial problem within the shipping industry. The currently most successful approaches decompose the problem hierarchically and first generate a master plan that handles high-level constraints and objectives such as balance and stress moments. We show that it is an NP-complete problem to generate master plans that minimize the number of these lid overstows. Since any efficient approach to container vessel stowage most likely must include a master plan, the implication of this result is that future research must focus on developing good heuristics...

  5. Complexity growth in minimal massive 3D gravity

    Science.gov (United States)

    Qaemmaqami, Mohammad M.

    2018-01-01

    We study the complexity growth by using the "complexity = action" (CA) proposal in the minimal massive 3D gravity (MMG) model, which is proposed for resolving the bulk-boundary clash problem of topologically massive gravity (TMG). We observe that the rate of complexity growth for the Banados-Teitelboim-Zanelli (BTZ) black hole saturates the proposed bound, given by the physical mass of the BTZ black hole in the MMG model, when the angular momentum parameter and the inner horizon of the black hole go to zero.

  6. A Linear Programming Reformulation of the Standard Quadratic Optimization Problem

    NARCIS (Netherlands)

    de Klerk, E.; Pasechnik, D.V.

    2005-01-01

    The problem of minimizing a quadratic form over the standard simplex is known as the standard quadratic optimization problem (SQO). It is NP-hard, and contains the maximum stable set problem in graphs as a special case. In this note we show that the SQO problem may be reformulated as an (exponentially

  7. A linear programming approach to max-sum problem: a review.

    Science.gov (United States)

    Werner, Tomás

    2007-07-01

    The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.

  8. Minimizing embedding impact in steganography using trellis-coded quantization

    Science.gov (United States)

    Filler, Tomáš; Judas, Jan; Fridrich, Jessica

    2010-01-01

    In this paper, we propose a practical approach to minimizing embedding impact in steganography based on syndrome coding and trellis-coded quantization and contrast its performance with bounds derived from appropriate rate-distortion bounds. We assume that each cover element can be assigned a positive scalar expressing the impact of making an embedding change at that element (single-letter distortion). The problem is to embed a given payload with minimal possible average embedding impact. This task, which can be viewed as a generalization of matrix embedding or writing on wet paper, has been approached using heuristic and suboptimal tools in the past. Here, we propose a fast and very versatile solution to this problem that can theoretically achieve performance arbitrarily close to the bound. It is based on syndrome coding using linear convolutional codes with the optimal binary quantizer implemented using the Viterbi algorithm run in the dual domain. The complexity and memory requirements of the embedding algorithm are linear w.r.t. the number of cover elements. For practitioners, we include detailed algorithms for finding good codes and their implementation. Finally, we report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel.

  9. The minimal GUT with inflaton and dark matter unification

    Science.gov (United States)

    Chen, Heng-Yu; Gogoladze, Ilia; Hu, Shan; Li, Tianjun; Wu, Lina

    2018-01-01

    Giving up the solutions to the fine-tuning problems, we propose the non-supersymmetric flipped SU(5)× U(1)_X model based on the minimal particle content principle, which can be constructed from the four-dimensional SO(10) models, five-dimensional orbifold SO(10) models, and local F-theory SO(10) models. To achieve gauge coupling unification, we introduce one pair of vector-like fermions, which form a complete SU(5)× U(1)_X representation. The proton lifetime is around 5 × 10^{35} years, neutrino masses and mixing can be explained via the seesaw mechanism, baryon asymmetry can be generated via leptogenesis, and the vacuum stability problem can be solved as well. In particular, we propose that the inflaton and dark matter particles can be unified into a real scalar field with Z_2 symmetry, which is not an axion and does not have a non-minimal coupling to gravity. This kind of scenario can be applied to generic scalar dark matter models. Also, we find that the vector-like particle corrections to the B_s^0 masses might be about 6.6%, while their corrections to the K^0 and B_d^0 masses are negligible.

  10. The minimal GUT with inflaton and dark matter unification

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Heng-Yu; Gogoladze, Ilia [University of Delaware, Department of Physics and Astronomy, Bartol Research Institute, Newark, DE (United States); Hu, Shan [Hubei University, Department of Physics, Faculty of Physics and Electronic Sciences, Wuhan (China); Li, Tianjun [Chinese Academy of Sciences, Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Beijing (China); University of Chinese Academy of Sciences, School of Physical Sciences, Beijing (China); Wu, Lina [Chinese Academy of Sciences, Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Beijing (China); University of Electronic Science and Technology of China, School of Physical Electronics, Chengdu (China)

    2018-01-15

    Giving up the solutions to the fine-tuning problems, we propose the non-supersymmetric flipped SU(5) x U(1)_X model based on the minimal particle content principle, which can be constructed from the four-dimensional SO(10) models, five-dimensional orbifold SO(10) models, and local F-theory SO(10) models. To achieve gauge coupling unification, we introduce one pair of vector-like fermions, which form a complete SU(5) x U(1)_X representation. The proton lifetime is around 5 x 10^35 years, neutrino masses and mixing can be explained via the seesaw mechanism, baryon asymmetry can be generated via leptogenesis, and the vacuum stability problem can be solved as well. In particular, we propose that the inflaton and dark matter particles can be unified into a real scalar field with Z_2 symmetry, which is not an axion and does not have a non-minimal coupling to gravity. This kind of scenario can be applied to generic scalar dark matter models. Also, we find that the vector-like particle corrections to the B_s^0 masses might be about 6.6%, while their corrections to the K^0 and B_d^0 masses are negligible. (orig.)

  11. Analytical solution and experimental validation of the energy management problem for fuel cell hybrid vehicles

    NARCIS (Netherlands)

    P.P.J. van den Bosch; Edwin Tazelaar; M. Grimminck; Stijn Hoppenbrouwers; Bram Veenhuizen

    2011-01-01

    The objective of an energy management strategy for fuel cell hybrid propulsion systems is to minimize the fuel needed to provide the required power demand. This minimization is defined as an optimization problem. Methods such as dynamic programming numerically solve this optimization problem.

  12. Two-Agent Scheduling to Minimize the Maximum Cost with Position-Dependent Jobs

    Directory of Open Access Journals (Sweden)

    Long Wan

    2015-01-01

    Full Text Available This paper investigates a single-machine two-agent scheduling problem to minimize the maximum costs with position-dependent jobs. There are two agents, each with a set of independent jobs, competing to perform their jobs on a common machine. In our scheduling setting, the actual position-dependent processing time of a job is characterized by a variable function dependent on the position of the job in the sequence. Each agent wants to fulfil the objective of minimizing the maximum cost of its own jobs. We develop a feasible method to achieve all the Pareto optimal points in polynomial time.

  13. Search for minimal paths in modified networks

    International Nuclear Information System (INIS)

    Yeh, W.-C.

    2002-01-01

    The problem of searching for all minimal paths (MPs) in a network obtained by modifying the original network, e.g. for network expansion or reinforcement, is discussed and solved in this study. The existing best-known method to solve this problem was a straightforward approach. It needed extensive comparison and verification, and failed to solve some special but important cases. Therefore, a more efficient, intuitive and generalized method to search for all MPs without an extensive research procedure is proposed. In this presentation, first we develop an intuitive algorithm based upon the reformation of all MPs in the original network to search for all MPs in a modified network. Next, the computational complexity of the proposed algorithm is analyzed and compared with the existing methods. Finally, examples illustrate how all MPs are generated in a modified network based upon the reformation of all of the MPs in the corresponding original network
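
    For a small two-terminal network, the minimal paths (MPs) correspond to the simple source-terminal paths, so a brute-force enumeration shows what the reformation-based algorithm produces; this sketch uses networkx on a made-up example (including a made-up modification edge) and is not the paper's method, which avoids re-enumeration.

      import networkx as nx

      G = nx.Graph([("s", "a"), ("a", "t"), ("s", "b"), ("b", "t"), ("a", "b")])
      original_mps = [tuple(p) for p in nx.all_simple_paths(G, "s", "t")]

      G.add_edge("b", "c"); G.add_edge("c", "t")   # network modification (expansion)
      modified_mps = [tuple(p) for p in nx.all_simple_paths(G, "s", "t")]

      print(sorted(original_mps))
      print(sorted(set(modified_mps) - set(original_mps)))   # MPs created by the modification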

  14. Features for Exploiting Black-Box Optimization Problem Structure

    DEFF Research Database (Denmark)

    Tierney, Kevin; Malitsky, Yuri; Abell, Tinus

    2013-01-01

    landscape of BBO problems and show how an algorithm portfolio approach can exploit these general, problem-independent features and outperform the utilization of any single minimization search strategy. We test our methodology on data from the GECCO Workshop on BBO Benchmarking 2012, which contains 21

  15. The Dynamic Multi-Period Vehicle Routing Problem

    DEFF Research Database (Denmark)

    Wen, Min; Cordeau, Jean-Francois; Laporte, Gilbert

    This paper considers the Dynamic Multi-Period Vehicle Routing Problem, which deals with the distribution of orders from a depot to a set of customers over a multi-period time horizon. Customer orders and their feasible service periods are dynamically revealed over time. The objectives are to minimize total travel costs and customer waiting, and to balance the daily workload over the planning horizon. This problem originates from a large distributor operating in Sweden. It is modeled as a mixed integer linear program, and solved by means of a three-phase heuristic that works over a rolling planning horizon. The multi-objective aspect of the problem is handled through a scalar technique approach. Computational results show that our solutions improve upon those of the Swedish distributor.

  16. One-machine job-scheduling with non-constant capacity - Minimizing weighted completion times

    NARCIS (Netherlands)

    Amaddeo, H.F.; Amaddeo, H.F.; Nawijn, W.M.; van Harten, Aart

    1997-01-01

    In this paper an n-job one-machine scheduling problem is considered, in which the machine capacity is time-dependent and jobs are characterized by their work content. The objective is to minimize the sum of weighted completion times. A necessary optimality condition is presented and we discuss some

  17. Radwaste minimization successes at Duke Power Company

    International Nuclear Information System (INIS)

    Lan, C.D.; Johnson, G.T.; Groves, D.C.; Smith, T.A.

    1996-01-01

    At Duke Power Company, "Culture Change" is a common term that we have used to describe the incredible transformation: we are becoming a cost conscious, customer driven, highly competitive business. Nowhere has this change been more evident than in the way we process and dispose of our solid radioactive waste. With top-down management support, we have used team-based, formalized problem solving methods and have implemented many successful waste minimization programs. Through these programs, we have dramatically increased employees' awareness of the importance of waste minimization. As a result, we have been able to reduce both our burial volumes and our waste processing and disposal costs. In June, 1994, we invited EPRI to conduct assessments of our waste minimization programs at Oconee and Catawba nuclear stations. Included in the assessments were in-depth looks at contamination control, an inventory of items in the plant, the volume of waste generated in the plant and how it was processed, laundry reject data, site waste-handling operations, and plant "housekeeping" routines and processes. One of the most important aspects of the assessment is the "dumpster dive," which is an evaluation of site dry active waste composition by sorting through approximately fifteen bags of radioactive waste. Finally, there was an evaluation of consumables used at each site in order to gain knowledge of items that could be standardized at all stations. With EPRI recommendations, we made several changes and standardized the items used. We have made significant progress in waste reduction. We realize, however, that we are aiming at a moving target and we still have room for improvement. As the price of processing and disposal (or storage) increases, we will continue to evaluate our waste minimization programs.

  18. Minimizing Mutual Coupling

    DEFF Research Database (Denmark)

    2010-01-01

    Disclosed herein are techniques, systems, and methods relating to minimizing mutual coupling between a first antenna and a second antenna.

  19. Green open location-routing problem considering economic and environmental costs

    Directory of Open Access Journals (Sweden)

    Eliana M. Toro

    2016-12-01

    Full Text Available This paper introduces a new bi-objective vehicle routing problem that integrates the Open Location Routing Problem (OLRP), recently presented in the literature, coupled with the growing need for fuel consumption minimization, named Green OLRP (G-OLRP). Open routing problems (ORP) are known to be NP-hard problems, in which vehicles start from the set of existing depots and are not required to return to the starting depot after completing their service. The OLRP is a strategic-level problem involving the selection of one or many depots from a set of candidate locations and the planning of delivery radial routes from the selected depots to a set of customers. The concept of radial paths allows us to use a set of constraints focused on maintaining the radiality condition of the paths, which significantly simplifies the set of constraints associated with the connectivity and capacity requirements and provides a suitable alternative to the sub-tour elimination constraints traditionally addressed in the literature. The emphasis in the paper will be placed on modeling rather than solution methods. The model proposed is formulated as a bi-objective problem, considering the minimization of operational costs and the minimization of environmental effects, and it is solved by using the epsilon-constraint technique. The results illustrate that the proposed model is able to generate a set of trade-off solutions leading to interesting conclusions about the relationship between operational costs and environmental impact.
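
    The epsilon-constraint technique mentioned above is easy to illustrate on a toy bi-objective linear program (this is not the G-OLRP model itself, and all coefficients are arbitrary): minimize the "operational cost" objective while bounding the "environmental" objective by a sweep of epsilon values.

      import numpy as np
      from scipy.optimize import linprog

      c1 = np.array([4.0, 3.0])          # operational cost coefficients (made up)
      c2 = np.array([1.0, 2.0])          # fuel/emission coefficients (made up)
      A_ub = np.array([[-1.0, -1.0]])    # demand coverage: x1 + x2 >= 5
      b_ub = np.array([-5.0])

      pareto = []
      for eps in np.linspace(7.0, 10.0, 4):
          # add the epsilon constraint c2.x <= eps and minimize c1.x
          res = linprog(c1, A_ub=np.vstack([A_ub, c2]), b_ub=np.append(b_ub, eps),
                        bounds=[(0, None), (0, None)])
          if res.success:
              pareto.append((round(res.fun, 2), round(float(c2 @ res.x), 2)))
      print(pareto)   # trade-off curve between cost and emissions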

  20. Dynamic Restructuring Of Problems In Artificial Intelligence

    Science.gov (United States)

    Schwuttke, Ursula M.

    1992-01-01

    "Dynamic tradeoff evaluation" (DTE) denotes proposed method and procedure for restructuring problem-solving strategies in artificial intelligence to satisfy need for timely responses to changing conditions. Detects situations in which optimal problem-solving strategies cannot be pursued because of real-time constraints, and effects tradeoffs among nonoptimal strategies in such way to minimize adverse effects upon performance of system.

  1. A bottom-up approach to the strong CP problem

    Science.gov (United States)

    Diaz-Cruz, J. L.; Hollik, W. G.; Saldana-Salazar, U. J.

    2018-05-01

    The strong CP problem is one of many puzzles in the theoretical description of elementary particle physics that still lacks an explanation. While top-down solutions to that problem usually comprise new symmetries or fields or both, we want to present a rather bottom-up perspective. The main problem seems to be how to achieve small CP violation in the strong interactions despite the large CP violation in weak interactions. In this paper, we show that with minimal assumptions on the structure of mass (Yukawa) matrices, they do not contribute to the strong CP problem and thus we can provide a pathway to a solution of the strong CP problem within the structures of the Standard Model and no extension at the electroweak scale is needed. However, to address the flavor puzzle, models based on minimal SU(3) flavor groups leading to the proposed flavor matrices are favored. Though we refrain from an explicit UV completion of the Standard Model, we provide a simple requirement for such models not to show a strong CP problem by construction.

  2. Minimal models for axion and neutrino

    Directory of Open Access Journals (Sweden)

    Y.H. Ahn

    2016-01-01

    Full Text Available The PQ mechanism resolving the strong CP problem and the seesaw mechanism explaining the smallness of neutrino masses may be related in such a way that the PQ symmetry breaking scale and the seesaw scale arise from a common origin. Depending on how the PQ symmetry and the seesaw mechanism are realized, one has different predictions for the color and electromagnetic anomalies, which could be tested in future axion dark matter search experiments. Motivated by this, we construct various PQ seesaw models which are minimally extended from the (non-)supersymmetric Standard Model and thus set up different benchmark points on the axion–photon–photon coupling in comparison with the standard KSVZ and DFSZ models.

  3. Computational problems in engineering

    CERN Document Server

    Mladenov, Valeri

    2014-01-01

    This book provides readers with modern computational techniques for solving a variety of problems from electrical, mechanical, civil and chemical engineering. Mathematical methods are presented in a unified manner, so they can be applied consistently to problems in applied electromagnetics, strength of materials, fluid mechanics, heat and mass transfer, environmental engineering, biomedical engineering, signal processing, automatic control and more. • Features contributions from distinguished researchers on significant aspects of current numerical methods and computational mathematics; • Presents actual results and innovative methods that provide numerical solutions, while minimizing computing times; • Includes new and advanced methods and modern variations of known techniques that can solve difficult scientific problems efficiently.

  4. The Multiple-Minima Problem in Protein Folding

    Science.gov (United States)

    Scheraga, Harold A.

    1991-10-01

    The conformational energy surface of a polypeptide or protein has many local minima, and conventional energy minimization procedures reach only a local minimum (near the starting point of the optimization algorithm) instead of the global minimum (the multiple-minima problem). Several procedures have been developed to surmount this problem, the most promising of which are: (a) the build-up procedure, (b) optimization of electrostatics, (c) Monte Carlo-plus-energy minimization, (d) electrostatically-driven Monte Carlo, (e) inclusion of distance restraints, (f) adaptive importance-sampling Monte Carlo, (g) relaxation of dimensionality, (h) pattern recognition, and (i) the diffusion equation method. These procedures have been applied to a variety of polypeptide structural problems, and the results of such computations are presented. These include the computation of the structures of open-chain and cyclic peptides, fibrous proteins and globular proteins. Present efforts are being devoted to scaling up these procedures from small polypeptides to proteins, to try to compute the three-dimensional structure of a protein from its amino acid sequence.
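
    The "Monte Carlo-plus-energy-minimization" idea (item c above) can be sketched on a toy multiple-minima surface with SciPy's basin-hopping routine; this is a generic illustration, not a protein energy function.

      import numpy as np
      from scipy.optimize import basinhopping

      def energy(x):
          # simple 1-D surface with many local minima; global minimum near x = -0.195
          return np.cos(14.5 * x[0] - 0.3) + (x[0] + 0.2) * x[0]

      # random perturbation followed by local minimization, repeated many times
      result = basinhopping(energy, x0=[1.0], niter=200,
                            minimizer_kwargs={"method": "L-BFGS-B"})
      print(result.x, result.fun)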

  5. Approximation algorithms for the parallel flow shop problem

    NARCIS (Netherlands)

    X. Zhang (Xiandong); S.L. van de Velde (Steef)

    2012-01-01

    We consider the NP-hard problem of scheduling n jobs in m two-stage parallel flow shops so as to minimize the makespan. This problem decomposes into two subproblems: assigning the jobs to parallel flow shops; and scheduling the jobs assigned to the same flow shop by use of Johnson's

  6. On a Calculus Textbook Problem

    OpenAIRE

    Kitover, Arkady; Orhon, Mehmet

    2015-01-01

    We consider generalizations of a well-known elementary problem. A wire of fixed length is cut into two pieces; one piece is bent into a circle and the other into a square. What dimensions of the circle and the square will minimize their total area?
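
    For reference, the base problem the paper generalizes works out as follows (a standard worked solution, not taken from the paper): with wire length L and a cut at x, the piece of length x forms the circle and the remainder the square.

      \[
        A(x) \;=\; \underbrace{\frac{x^{2}}{4\pi}}_{\text{circle}}
              \;+\; \underbrace{\frac{(L-x)^{2}}{16}}_{\text{square}},
        \qquad 0 \le x \le L ,
      \]
      \[
        A'(x) = \frac{x}{2\pi} - \frac{L-x}{8} = 0
        \;\Longrightarrow\;
        x^{*} = \frac{\pi L}{\pi + 4},
        \qquad
        A(x^{*}) = \frac{L^{2}}{4(\pi+4)} ,
      \]
      % i.e. the minimum uses a circle of radius L/(2(pi+4)) and a square of side L/(pi+4);
      % the maximum over [0, L] is attained at the endpoint x = L (all wire to the circle).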

  7. Optimal Allocation of Renewable Energy Sources for Energy Loss Minimization

    Directory of Open Access Journals (Sweden)

    Vaiju Kalkhambkar

    2017-03-01

    Full Text Available Optimal allocation of renewable distributed generation (RDG), i.e., solar and wind, in a distribution system becomes challenging due to intermittent generation and uncertainty of loads. This paper proposes an optimal allocation methodology for single and hybrid RDGs for energy loss minimization. The deterministic generation-load model integrated with optimal power flow provides optimal solutions for single and hybrid RDG. Considering the complexity of the proposed nonlinear, constrained optimization problem, it is solved by a robust and high-performance meta-heuristic, the Symbiotic Organisms Search (SOS) algorithm. Results obtained from the SOS algorithm offer better solutions than the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Firefly Algorithm (FFA). Economic analysis is carried out to quantify the economic benefits of energy loss minimization over the life span of RDGs.

  8. About an Optimal Visiting Problem

    Energy Technology Data Exchange (ETDEWEB)

    Bagagiolo, Fabio, E-mail: bagagiol@science.unitn.it; Benetton, Michela [Unversita di Trento, Dipartimento di Matematica (Italy)

    2012-02-15

    In this paper we are concerned with the optimal control problem consisting in minimizing the time for reaching (visiting) a fixed number of target sets, in particular more than one target. Such a problem is of course reminiscent of the famous 'Traveling Salesman Problem' and brings all its computational difficulties. Our aim is to apply the dynamic programming technique in order to characterize the value function of the problem as the unique viscosity solution of a suitable Hamilton-Jacobi equation. We introduce some 'external' variables, one per target, which keep in memory whether the corresponding target has already been visited or not, and we transform the visiting problem into a suitable Mayer problem. This fact allows us to overcome the lack of a Dynamic Programming Principle for the original problem. The external variables evolve with a hysteresis law and the Hamilton-Jacobi equation turns out to be discontinuous.

  9. ILUCG algorithm which minimizes in the Euclidean norm

    International Nuclear Information System (INIS)

    Petravic, M.; Kuo-Petravic, G.

    1978-07-01

    An algorithm is presented which solves sparse systems of linear equations of the form Ax = Y, where A is non-symmetric, by the Incomplete LU Decomposition-Conjugate Gradient (ILUCG) method. The algorithm minimizes the error in the Euclidean norm ||x_i - x||_2, where x_i is the solution vector after the i-th iteration and x the exact solution vector. The results of a test on one real problem indicate that the algorithm is likely to be competitive with the best existing algorithms of its type

  10. A criterion for flatness in minimal area metrics that define string diagrams

    International Nuclear Information System (INIS)

    Ranganathan, K.; Massachusetts Inst. of Tech., Cambridge, MA

    1992-01-01

    It has been proposed that the string diagrams of closed string field theory be defined by a minimal area problem that requires that all nontrivial homotopy curves have length greater than or equal to 2π. Consistency requires that the minimal area metric be flat in a neighbourhood of the punctures. The theorem proven in this paper, yields a criterion which if satisfied, will ensure this requirement. The theorem states roughly that the metric is flat in an open set, U if there is a unique closed curve of length 2π through every point in U and all of these closed curves are in the same free homotopy class. (orig.)

  11. Flowshop Scheduling Problems with a Position-Dependent Exponential Learning Effect

    Directory of Open Access Journals (Sweden)

    Mingbao Cheng

    2013-01-01

    Full Text Available We consider a permutation flowshop scheduling problem with a position-dependent exponential learning effect. The objective is to minimize the performance criteria of makespan and the total flow time. For the two-machine flow shop scheduling case, we show that Johnson’s rule is not an optimal algorithm for minimizing the makespan given the exponential learning effect. Furthermore, by using the shortest total processing times first (STPT rule, we construct the worst-case performance ratios for both criteria. Finally, a polynomial-time algorithm is proposed for special cases of the studied problem.

  12. Radiotherapy problem under fuzzy theoretic approach

    International Nuclear Information System (INIS)

    Ammar, E.E.; Hussein, M.L.

    2003-01-01

    A fuzzy set theoretic approach is used for radiotherapy problem. The problem is faced with two goals: the first is to maximize the fraction of surviving normal cells and the second is to minimize the fraction of surviving tumor cells. The theory of fuzzy sets has been employed to formulate and solve the problem. A linguistic variable approach is used for treating the first goal. The solutions obtained by the modified approach are always efficient and best compromise. A sensitivity analysis of the solutions to the differential weights is given

  13. Legal incentives for minimizing waste

    International Nuclear Information System (INIS)

    Clearwater, S.W.; Scanlon, J.M.

    1991-01-01

    Waste minimization, or pollution prevention, has become an integral component of federal and state environmental regulation. Minimizing waste offers many economic and public relations benefits. In addition, waste minimization efforts can also dramatically reduce potential criminal liability. This paper addresses the legal incentives for minimizing waste under current and proposed environmental laws and regulations

  14. A Volume Constrained Variational Problem with Lower-Order Terms

    International Nuclear Information System (INIS)

    Morini, M.; Rieger, M.O.

    2003-01-01

    We study a one-dimensional variational problem with two or more level set constraints. The existence of global and local minimizers turns out to be dependent on the regularity of the energy density. A complete characterization of local minimizers and the underlying energy landscape is provided. The Γ-limit when the phases exhaust the whole domain is computed

  15. Waste minimization and pollution prevention awareness plan. Revision 1

    International Nuclear Information System (INIS)

    1994-07-01

    The purpose of this plan is to document Lawrence Livermore National Laboratory (LLNL) projections for present and future waste minimization and pollution prevention. The plan specifies those activities and methods that are or will be used to reduce the quantity and toxicity of wastes generated at the site. It is intended to satisfy Department of Energy (DOE) requirements. This Waste Minimization and Pollution Prevention Awareness Plan provides an overview of projected activities from FY 1994 through FY 1999. The plans are broken into site-wide and problem-specific activities. All directorates at LLNL have had an opportunity to contribute input, estimate budgets, and review the plan. In addition to the above, this plan records LLNL's goals for pollution prevention, regulatory drivers for those activities, assumptions on which the cost estimates are based, analyses of the strengths of the projects, and the barriers to increasing pollution prevention activities

  16. Some extensions of the discrete lotsizing and scheduling problem

    NARCIS (Netherlands)

    M. Salomon (Marc); L.G. Kroon (Leo); R. Kuik (Roelof); L.N. van Wassenhove (Luk)

    1991-01-01

    textabstractIn this paper the Discrete Lotsizing and Scheduling Problem (DLSP) is considered. DLSP relates to capacitated lotsizing as well as to job scheduling problems and is concerned with determining a feasible production schedule with minimal total costs in a single-stage manufacturing process.

  17. A Heuristic Algorithm for Constrain Single-Source Problem with Constrained Customers

    Directory of Open Access Journals (Sweden)

    S. A. Raisi Dehkordi∗

    2012-09-01

    Full Text Available The Fermat-Weber location problem is to find a point in R^n that minimizes the sum of the weighted Euclidean distances from m given points in R^n. In this paper we consider the Fermat-Weber problem of one new facility with respect to n unknown customers, in order to minimize the sum of transportation costs between this facility and the customers. We assume that each customer is located in a nonempty convex, closed and bounded subset of R^n.
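
    As background, the classical unconstrained Fermat-Weber problem is usually solved by the Weiszfeld iteration sketched below; the constrained variant studied in the paper, where each customer lies in a convex set, is not implemented here, and the points and weights are illustrative.

    ```python
    # Weiszfeld iteration for the classical (unconstrained) Fermat-Weber problem:
    # the next iterate is the weighted average of the customer points, with weights
    # w_i / ||x_k - a_i||. Points and weights below are illustrative.
    import numpy as np

    def weiszfeld(points, weights, x0=None, tol=1e-9, max_iter=1000):
        a = np.asarray(points, dtype=float)
        w = np.asarray(weights, dtype=float)
        x = a.mean(axis=0) if x0 is None else np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            d = np.linalg.norm(a - x, axis=1)
            if np.any(d < 1e-12):            # iterate landed on a customer point
                return x
            coef = w / d
            x_new = (coef[:, None] * a).sum(axis=0) / coef.sum()
            if np.linalg.norm(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    facility = weiszfeld([(0, 0), (4, 0), (2, 3)], [1.0, 1.0, 2.0])
    ```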

  18. Optimized Runge-Kutta methods with minimal dispersion and dissipation for problems arising from computational acoustics

    International Nuclear Information System (INIS)

    Tselios, Kostas; Simos, T.E.

    2007-01-01

    In this Letter a new explicit fourth-order seven-stage Runge-Kutta method with a combination of minimal dispersion and dissipation error and maximal accuracy and stability limit along the imaginary axis is developed. This method was produced from a general function that was constructed to satisfy all the above requirements and from which all the existing fourth-order six-stage RK methods can be produced. The new method is more efficient than the other optimized methods for acoustic computations

  19. Waste minimization of a process fluid through effective control under various controllers tuning

    International Nuclear Information System (INIS)

    Younas, M.; Gul, S.; Naveed, S.

    2005-01-01

    Whenever a process is disturbed, either by a servo (setpoint) change or a regulatory (load) disturbance, control action is applied to track the desired point. An efficient controller setting should be selected in order to get a speedy response within the product quality constraints. Effective control action is desired to utilize the maximum of the raw material and to minimize the waste. This is a critical problem in cases where the raw material or product is valuable and costly, e.g. pharmaceuticals. This problem has been addressed in this work on a laboratory-scale plant. The plant consists of a feed tank, pumps, a plate-and-frame heat exchanger and a hot water re-circulator tank. The system responses were logged with a computer while the controller was tuned with Ziegler-Nichols (Z-N) and Cohen-Coon (C-C) tunings. A detailed study indicates that the Ziegler-Nichols controller tuning is better than Cohen-Coon, as waste production was minimized. (author)
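
    For context, the classical closed-loop Ziegler-Nichols settings are fixed ratios of the ultimate gain Ku and ultimate period Pu observed at sustained oscillation. The sketch below tabulates them; the numeric values are purely illustrative, since the abstract does not report the measured Ku and Pu of the rig.

    ```python
    # Classical closed-loop Ziegler-Nichols rules: given the ultimate gain Ku and
    # ultimate period Pu found at sustained oscillation, the controller settings are
    # read from fixed ratios. Values returned are (Kp, Ti, Td).
    def ziegler_nichols(Ku, Pu, controller="PID"):
        rules = {
            "P":   (0.5 * Ku,  float("inf"), 0.0),
            "PI":  (0.45 * Ku, Pu / 1.2,     0.0),
            "PID": (0.6 * Ku,  Pu / 2.0,     Pu / 8.0),
        }
        return rules[controller]

    Kp, Ti, Td = ziegler_nichols(Ku=2.4, Pu=30.0)  # illustrative numbers
    ```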

  20. Is non-minimal inflation eternal?

    International Nuclear Information System (INIS)

    Feng, Chao-Jun; Li, Xin-Zhou

    2010-01-01

    The possibility that non-minimally coupled inflation could be eternal is investigated. We calculate the quantum fluctuation of the inflaton in a Hubble time and find that it has the same value as in the minimal case in the slow-roll limit. Armed with this result, we have studied some concrete non-minimal inflationary models, including chaotic inflation and natural inflation, in which the inflaton is non-minimally coupled to gravity. We find that non-minimal coupling inflation could be eternal in some regions of parameter space.

  1. On the fine-tuning problem in minimal SO(10) SUSY-GUT

    International Nuclear Information System (INIS)

    Hempfling, R.

    1994-05-01

    In grand unified theories (GUT) based on SO(10) all fermions of one generation are embedded in a single representation. As a result, the top quark, the bottom quark, and the τ lepton have the same Yukawa coupling at the GUT scale. This implies a very large ratio of Higgs vacuum expectation values, tan β ≅ m_t/m_b. In this letter we show that GUT threshold correction to the universal Higgs mass parameter can solve the fine-tuning problem associated with such large values of tan β. (orig.)

  2. A Generalized Robust Minimization Framework for Low-Rank Matrix Recovery

    Directory of Open Access Journals (Sweden)

    Wen-Ze Shao

    2014-01-01

    Full Text Available This paper considers the problem of recovering low-rank matrices which are heavily corrupted by outliers or large errors. To improve the robustness of existing recovery methods, the problem is solved by formulating it as a generalized nonsmooth nonconvex minimization functional via exploiting the Schatten p-norm (0 < p ≤ 1) and the Lq (0 < q ≤ 1) seminorm. Two numerical algorithms are provided based on the augmented Lagrange multiplier (ALM) and accelerated proximal gradient (APG) methods as well as efficient root-finder strategies. Experimental results demonstrate that the proposed generalized approach is more inclusive and effective compared with state-of-the-art methods, either convex or nonconvex.

  3. Minimal families of curves on surfaces

    KAUST Repository

    Lubbes, Niels

    2014-11-01

    A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal families of a given surface. The classification of minimal families of curves can be reduced to the classification of minimal families which cover weak Del Pezzo surfaces. We classify the minimal families of weak Del Pezzo surfaces and present a table with the number of minimal families of each weak Del Pezzo surface up to Weyl equivalence. As an application of this classification we generalize some results of Schicho. We classify algebraic surfaces that carry a family of conics. We determine the minimal lexicographic degree for the parametrization of a surface that carries at least 2 minimal families. © 2014 Elsevier B.V.

  4. Hexavalent Chromium Minimization Strategy

    Science.gov (United States)

    2011-05-01

    DoD Logistics Hexavalent Chromium Minimization initiative: non-chrome primer alternatives. Office of the Secretary of Defense, Hexavalent Chromium Minimization Strategy (report documentation, 2011).

  5. Permanent magnet design for magnetic heat pumps using total cost minimization

    Science.gov (United States)

    Teyber, R.; Trevizoli, P. V.; Christiaanse, T. V.; Govindappa, P.; Niknia, I.; Rowe, A.

    2017-11-01

    The active magnetic regenerator (AMR) is an attractive technology for efficient heat pumps and cooling systems. The costs associated with a permanent magnet for near room temperature applications are a central issue which must be solved for broad market implementation. To address this problem, we present a permanent magnet topology optimization to minimize the total cost of cooling using a thermoeconomic cost-rate balance coupled with an AMR model. A genetic algorithm identifies cost-minimizing magnet topologies. For a fixed temperature span of 15 K and 4.2 kg of gadolinium, the optimal magnet configuration provides 3.3 kW of cooling power with a second law efficiency (ηII) of 0.33 using 16.3 kg of permanent magnet material.

  6. Inelastic scattering with Chebyshev polynomials and preconditioned conjugate gradient minimization.

    Science.gov (United States)

    Temel, Burcin; Mills, Greg; Metiu, Horia

    2008-03-27

    We describe and test an implementation, using a basis set of Chebyshev polynomials, of a variational method for solving scattering problems in quantum mechanics. This minimum error method (MEM) determines the wave function Psi by minimizing the least-squares error in the function (H Psi - E Psi), where E is the desired scattering energy. We compare the MEM to an alternative, the Kohn variational principle (KVP), by solving the Secrest-Johnson model of two-dimensional inelastic scattering, which has been studied previously using the KVP and for which other numerical solutions are available. We use a conjugate gradient (CG) method to minimize the error, and by preconditioning the CG search, we are able to greatly reduce the number of iterations necessary; the method is thus faster and more stable than a matrix inversion, as is required in the KVP. Also, we avoid errors due to scattering off of the boundaries, which presents substantial problems for other methods, by matching the wave function in the interaction region to the correct asymptotic states at the specified energy; the use of Chebyshev polynomials allows this boundary condition to be implemented accurately. The use of Chebyshev polynomials allows for a rapid and accurate evaluation of the kinetic energy. This basis set is as efficient as plane waves but does not impose an artificial periodicity on the system. There are problems in surface science and molecular electronics which cannot be solved if periodicity is imposed, and the Chebyshev basis set is a good alternative in such situations.
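
    As a generic illustration of the preconditioned conjugate gradient step used to minimize a least-squares error, the sketch below applies Jacobi-preconditioned CG to the normal equations of an overdetermined linear system; the actual MEM works with a Chebyshev representation of (H Psi - E Psi) and scattering boundary conditions, which are not modeled here.

    ```python
    # Generic preconditioned conjugate-gradient sketch: minimize ||A x - b||_2^2 by
    # solving the normal equations (A^T A) x = A^T b with a diagonal (Jacobi)
    # preconditioner. The operator and data below are illustrative placeholders.
    import numpy as np

    def pcg_normal_equations(A, b, tol=1e-10, max_iter=500):
        AtA, Atb = A.T @ A, A.T @ b
        M_inv = 1.0 / np.diag(AtA)            # Jacobi preconditioner
        x = np.zeros_like(Atb)
        r = Atb - AtA @ x
        z = M_inv * r
        p = z.copy()
        for _ in range(max_iter):
            Ap = AtA @ p
            alpha = (r @ z) / (p @ Ap)
            x += alpha * p
            r_new = r - alpha * Ap
            if np.linalg.norm(r_new) < tol:
                break
            z_new = M_inv * r_new
            beta = (r_new @ z_new) / (r @ z)
            p = z_new + beta * p
            r, z = r_new, z_new
        return x

    A = np.random.rand(50, 20); b = np.random.rand(50)
    x = pcg_normal_equations(A, b)
    ```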

  7. Finite element meshing approached as a global minimization process

    Energy Technology Data Exchange (ETDEWEB)

    WITKOWSKI,WALTER R.; JUNG,JOSEPH; DOHRMANN,CLARK R.; LEUNG,VITUS J.

    2000-03-01

    The ability to generate a suitable finite element mesh in an automatic fashion is becoming the key to being able to automate the entire engineering analysis process. However, placing an all-hexahedron mesh in a general three-dimensional body continues to be an elusive goal. The approach investigated in this research is fundamentally different from any other known to the authors. A physical analogy viewpoint is used to formulate the actual meshing problem, which constructs a global mathematical description of the problem. The analogy used was that of minimizing the electrical potential of a system of charged particles within a charged domain. The particles in the presented analogy represent duals to mesh elements (i.e., quads or hexes). Particle movement is governed by a mathematical functional which accounts for inter-particle repulsive, attractive and alignment forces. This functional is minimized to find the optimal location and orientation of each particle. After the particles are connected, a mesh can be easily resolved. The mathematical description for this problem is as easy to formulate in three dimensions as it is in two or one. The meshing algorithm was developed within CoMeT. It can solve the two-dimensional meshing problem for convex and concave geometries in a purely automated fashion. Investigation of the robustness of the technique has shown a success rate of approximately 99% for the two-dimensional geometries tested. Run times to mesh a 100-element complex geometry were typically in the 10-minute range. Efficiency of the technique is still an issue that needs to be addressed. Performance is an issue that is critical for most engineers generating meshes. It was not for this project. The primary focus of this work was to investigate and evaluate a meshing algorithm/philosophy with efficiency issues being secondary. The algorithm was also extended to mesh three-dimensional geometries. Unfortunately, only simple geometries were tested

  8. Discontinuity minimization for omnidirectional video projections

    Science.gov (United States)

    Alshina, Elena; Zakharchenko, Vladyslav

    2017-09-01

    Advances in display technologies, both for head-mounted devices and television panels, demand a resolution increase beyond 4K for the source signal in virtual reality video streaming applications. This poses a problem of content delivery through bandwidth-limited distribution networks. Considering the fact that the source signal covers the entire surrounding space, investigation revealed that compression efficiency may fluctuate by 40% on average depending on origin selection at the conversion stage from 3D space to 2D projection. Based on this knowledge, an origin selection algorithm for video compression applications has been proposed. Using a discontinuity entropy minimization function, the projection origin rotation may be defined to provide optimal compression results. The outcome of this research may be applied across various video compression solutions for omnidirectional content.

  9. NSGA-II algorithm for multi-objective generation expansion planning problem

    Energy Technology Data Exchange (ETDEWEB)

    Murugan, P.; Kannan, S. [Electronics and Communication Engineering Department, Arulmigu Kalasalingam College of Engineering, Krishnankoil 626190, Tamilnadu (India); Baskar, S. [Electrical Engineering Department, Thiagarajar College of Engineering, Madurai 625015, Tamilnadu (India)

    2009-04-15

    This paper presents an application of Elitist Non-dominated Sorting Genetic Algorithm version II (NSGA-II), to multi-objective generation expansion planning (GEP) problem. The GEP problem is considered as a two-objective problem. The first objective is the minimization of investment cost and the second objective is the minimization of outage cost (or maximization of reliability). To improve the performance of NSGA-II, two modifications are proposed. One modification is incorporation of Virtual Mapping Procedure (VMP), and the other is introduction of controlled elitism in NSGA-II. A synthetic test system having 5 types of candidate units is considered here for GEP for a 6-year planning horizon. The effectiveness of the proposed modifications is illustrated in detail. (author)

  10. On some fundamental properties of structural topology optimization problems

    DEFF Research Database (Denmark)

    Stolpe, Mathias

    2010-01-01

    We study some fundamental mathematical properties of discretized structural topology optimization problems. Either compliance is minimized with an upper bound on the volume of the structure, or volume is minimized with an upper bound on the compliance. The design variables are either continuous o....... The presented examples can be used as teaching material in graduate and undergraduate courses on structural topology optimization....

  11. An Approximate Proximal Bundle Method to Minimize a Class of Maximum Eigenvalue Functions

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2014-01-01

    Full Text Available We present an approximate nonsmooth algorithm to solve a minimization problem in which the objective function is the sum of a maximum eigenvalue function of matrices and a convex function. The essential idea for solving the optimization problem in this paper is similar to that of the proximal bundle method, but the difference is that we choose an approximate subgradient and function value to construct an approximate cutting-plane model for the above-mentioned problem. An important advantage of the approximate cutting-plane model for the objective function is that it is more stable than the exact cutting-plane model. In addition, an approximate proximal bundle algorithm is given. Furthermore, the sequences generated by the algorithm converge to the optimal solution of the original problem.

  12. Free energy minimization to predict RNA secondary structures and computational RNA design.

    Science.gov (United States)

    Churkin, Alexander; Weinbrand, Lina; Barash, Danny

    2015-01-01

    Determining the RNA secondary structure from sequence data by computational predictions is a long-standing problem. Its solution has been approached in two distinctive ways. If a multiple sequence alignment of a collection of homologous sequences is available, the comparative method uses phylogeny to determine conserved base pairs that are more likely to form as a result of billions of years of evolution than by chance. In the case of single sequences, recursive algorithms that compute free energy structures by using empirically derived energy parameters have been developed. This latter approach of RNA folding prediction by energy minimization is widely used to predict RNA secondary structure from sequence. For a significant number of RNA molecules, the secondary structure of the RNA molecule is indicative of its function and its computational prediction by minimizing its free energy is important for its functional analysis. A general method for free energy minimization to predict RNA secondary structures is dynamic programming, although other optimization methods have been developed as well along with empirically derived energy parameters. In this chapter, we introduce and illustrate by examples the approach of free energy minimization to predict RNA secondary structures.
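
    To make the dynamic programming idea concrete, the sketch below implements the simpler Nussinov-style recursion that maximizes the number of base pairs over subsequence intervals; real free-energy minimization (e.g. Zuker-style algorithms) uses the same interval recursion but with empirically derived nearest-neighbor energy terms, which are not modeled here.

    ```python
    # Nussinov-style dynamic programming over subsequences: N[i][j] is the maximum
    # number of base pairs in s[i..j]. Free-energy minimization replaces this count
    # with empirical energy terms, but the recursion over (i, j) intervals is similar.
    def nussinov(s, min_loop=3):
        pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
        n = len(s)
        N = [[0] * n for _ in range(n)]
        for span in range(min_loop + 1, n):
            for i in range(n - span):
                j = i + span
                best = N[i][j - 1]                       # j left unpaired
                for k in range(i, j - min_loop):
                    if (s[k], s[j]) in pairs:            # j paired with k
                        left = N[i][k - 1] if k > i else 0
                        best = max(best, left + 1 + N[k + 1][j - 1])
                N[i][j] = best
        return N[0][n - 1]

    print(nussinov("GGGAAAUCC"))   # illustrative sequence
    ```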

  13. Implementation of generalized measurements with minimal disturbance on a quantum computer

    International Nuclear Information System (INIS)

    Decker, T.; Grassl, M.

    2006-01-01

    We consider the problem of efficiently implementing a generalized measurement on a quantum computer. Using methods from representation theory, we exploit symmetries of the states we want to identify respectively symmetries of the measurement operators. In order to allow the information to be extracted sequentially, the disturbance of the quantum state due to the measurement should be minimal. (Abstract Copyright [2006], Wiley Periodicals, Inc.)

  14. Minimal and non-minimal standard models: Universality of radiative corrections

    International Nuclear Information System (INIS)

    Passarino, G.

    1991-01-01

    The possibility of describing electroweak processes by means of models with a non-minimal Higgs sector is analyzed. The renormalization procedure which leads to a set of fitting equations for the bare parameters of the Lagrangian is first reviewed for the minimal standard model. A solution of the fitting equations is obtained, which correctly includes large higher-order corrections. Predictions for physical observables, notably the W boson mass and the Z0 partial widths, are discussed in detail. Finally the extension to non-minimal models is described under the assumption that new physics will appear only inside the vector boson self-energies and the concept of universality of radiative corrections is introduced, showing that to a large extent they are insensitive to the details of the enlarged Higgs sector. Consequences for the bounds on the top quark mass are also discussed. (orig.)

  15. Minimal Adequate Model of Unemployment Duration in the Post-Crisis Czech Republic

    Directory of Open Access Journals (Sweden)

    Adam Čabla

    2016-03-01

    Full Text Available Unemployment is one of the leading economic problems in the developed world. The aim of this paper is to identify the differences in unemployment duration in different strata in the post-crisis Czech Republic by building a minimal adequate model, and to quantify the differences. Data from Labour Force Surveys are used, and since they are interval censored in nature, proper methodology must be used. The minimal adequate model is built through accelerated failure time modelling, maximum likelihood estimation and likelihood ratio tests. The initial variables are sex, marital status, age, education, municipality size and number of persons in a household, comprising altogether 29 model parameters. The minimal adequate model contains 5 parameters, and differences are found between men and women, the youngest category and the rest, and the university educated and the rest. The estimated expected values, variances, medians, modes and 90th percentiles are provided for all subgroups.

  16. Special cases of the quadratic shortest path problem

    NARCIS (Netherlands)

    Sotirov, Renata; Hu, Hao

    2017-01-01

    The quadratic shortest path problem (QSPP) is the problem of finding a path with prespecified start vertex s and end vertex t in a digraph such that the sum of weights of arcs and the sum of interaction costs over all pairs of arcs on the path is minimized. We first consider a variant of the QSPP

  17. Fluctuations in the site-disordered traveling salesman problem

    Energy Technology Data Exchange (ETDEWEB)

    Dean, David S [Laboratoire de Physique Theorique, UMR CNRS 5152, IRSAMC, Universite Paul Sabatier, 118 route de Narbonne, 31062 Toulouse Cedex 04 (France); Lancaster, David [Harrow School of Computer Science, University of Westminster, Harrow HA1 3TP (United Kingdom)

    2007-11-16

    We extend a previous statistical mechanical treatment of the traveling salesman problem by defining a discrete 'site-disordered' problem in which fluctuations about saddle points can be computed. The results clarify the basis of our original treatment, and illuminate but do not resolve the difficulties of taking the zero-temperature limit to obtain minimal path lengths.

  18. Solving the Little Hierarchy Problem with a Singlet and Explicit μ Terms

    International Nuclear Information System (INIS)

    Delgado, Antonio; Kolda, Christopher; Olson, J. Pocahontas; Puente, Alejandro de la

    2010-01-01

    We present a generalization of the next-to-minimal supersymmetric standard model, with an explicit μ term and a supersymmetric mass for the singlet superfield, as a route to alleviating the little hierarchy problem of the minimal supersymmetric standard model (MSSM). Though this model does not address the μ problem of the MSSM, we are able to generate masses for the lightest neutral Higgs boson up to 140 GeV with top squarks below the TeV scale, all couplings perturbative to the gauge unification scale, and with no need to fine-tune parameters in the scalar potential. This model more closely resembles the MSSM phenomenologically than the canonical next-to-minimal supersymmetric standard model.

  19. Claus sulphur recovery potential approaches 99% while minimizing cost

    Energy Technology Data Exchange (ETDEWEB)

    Berlie, E M

    1974-01-21

    In a summary of a paper presented to the fourth joint engineering conference of the American Institute of Chemical Engineers and the Canadian Society for Chemical Engineering, the Claus process is discussed in a modern setting. Some problems faced in the operation of sulfur recovery plants include (1) strict pollution control regulations; (2) design and operation of existing plants; (3) knowledge of process fundamentals; (4) performance testing; (5) specification of feed gas; (6) catalyst life; (7) instrumentation and process control; and (8) quality of feed gas. Some of the factors which must be considered in order to achieve the ultimate capability of the Claus process are listed. There is strong evidence to support the contention that plant operators are reluctant to accept new fundamental knowledge of the Claus sulfur recovery process and are not taking advantage of its inherent potential to achieve the emission standards required, to minimize cost of tail gas cleanup systems and to minimize operating costs.

  20. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...

  1. Solving the Container Stowage Problem (CSP) using Particle Swarm Optimization (PSO)

    Science.gov (United States)

    Matsaini; Santosa, Budi

    2018-04-01

    The Container Stowage Problem (CSP) is the problem of arranging containers in ships while considering rules such as total weight, weight of one stack, destination, equilibrium, and placement of containers on the vessel. The container stowage problem is combinatorial and hard to solve by enumeration; it is NP-hard. Therefore, to find a solution, metaheuristics are preferred. The objective is to minimize the amount of shifting so that the unloading time is minimized. Particle Swarm Optimization (PSO) is proposed to solve the problem. The implementation of PSO is combined with some steps, which are stack position change rules, stack changes based on destination, and stack changes based on the weight type of the stacks (light, medium, and heavy). The proposed method was applied to five different cases. The results were compared to Bee Swarm Optimization (BSO) and a heuristic method. PSO provided a mean gap of 0.87% and a time gap of 60 seconds, while BSO provided a mean gap of 2.98% and a time gap of 459.6 seconds relative to the heuristics.

  2. Minimal Gromov-Witten rings

    International Nuclear Information System (INIS)

    Przyjalkowski, V V

    2008-01-01

    We construct an abstract theory of Gromov-Witten invariants of genus 0 for quantum minimal Fano varieties (a minimal class of varieties which is natural from the quantum cohomological viewpoint). Namely, we consider the minimal Gromov-Witten ring: a commutative algebra whose generators and relations are of the form used in the Gromov-Witten theory of Fano varieties (of unspecified dimension). The Gromov-Witten theory of any quantum minimal variety is a homomorphism from this ring to C. We prove an abstract reconstruction theorem which says that this ring is isomorphic to the free commutative ring generated by 'prime two-pointed invariants'. We also find solutions of the differential equation of type DN for a Fano variety of dimension N in terms of the generating series of one-pointed Gromov-Witten invariants

  3. Very Large-Scale Neighborhoods with Performance Guarantees for Minimizing Makespan on Parallel Machines

    NARCIS (Netherlands)

    Brueggemann, T.; Hurink, Johann L.; Vredeveld, T.; Woeginger, Gerhard

    2006-01-01

    We study the problem of minimizing the makespan on m parallel machines. We introduce a very large-scale neighborhood of exponential size (in the number of machines) that is based on a matching in a complete graph. The idea is to partition the jobs assigned to the same machine into two sets. This

  4. Regularization method for solving the inverse scattering problem

    International Nuclear Information System (INIS)

    Denisov, A.M.; Krylov, A.S.

    1985-01-01

    The inverse scattering problem for the radial Schroedinger equation, which consists in determining the potential from the scattering phase, is considered. The problem of potential restoration from a phase specified with a fixed error in a finite range is solved by the regularization method based on minimization of Tikhonov's smoothing functional. The regularization method is used for solving the problem of neutron-proton potential restoration from the scattering phases. The determined potentials are given in the table
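
    As a generic illustration of Tikhonov regularization for a discretized linear ill-posed problem (the actual phase-to-potential map is nonlinear and is not modeled here), the sketch below minimizes ||K f - g||^2 + alpha ||f||^2 by solving the regularized normal equations; the operator and noise level are illustrative placeholders.

    ```python
    # Tikhonov regularization sketch: the minimizer of ||K f - g||^2 + alpha ||f||^2
    # solves (K^T K + alpha I) f = K^T g, which damps the noise amplification of the
    # ill-conditioned operator K.
    import numpy as np

    def tikhonov(K, g, alpha):
        n = K.shape[1]
        return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ g)

    # Illustrative ill-conditioned operator and noisy data.
    K = np.vander(np.linspace(0, 1, 40), 12, increasing=True)
    f_true = np.random.rand(12)
    g = K @ f_true + 1e-3 * np.random.randn(40)
    f_alpha = tikhonov(K, g, alpha=1e-6)
    ```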

  5. Strategies to Minimize Antibiotic Resistance

    Directory of Open Access Journals (Sweden)

    Sang Hee Lee

    2013-09-01

    Full Text Available Antibiotic resistance can be reduced by using antibiotics prudently based on guidelines of antimicrobial stewardship programs (ASPs) and various data such as pharmacokinetic (PK) and pharmacodynamic (PD) properties of antibiotics, diagnostic testing, antimicrobial susceptibility testing (AST), clinical response, and effects on the microbiota, as well as by new antibiotic developments. The controlled use of antibiotics in food animals is another cornerstone among efforts to reduce antibiotic resistance. All major resistance-control strategies recommend education for patients, children (e.g., through schools and day care), the public, and relevant healthcare professionals (e.g., primary-care physicians, pharmacists, and medical students) regarding unique features of bacterial infections and antibiotics, prudent antibiotic prescribing as a positive construct, and personal hygiene (e.g., handwashing). The problem of antibiotic resistance can be minimized only by concerted efforts of all members of society for ensuring the continued efficiency of antibiotics.

  6. ℓ0 Gradient Minimization Based Image Reconstruction for Limited-Angle Computed Tomography.

    Directory of Open Access Journals (Sweden)

    Wei Yu

    Full Text Available In medical and industrial applications of computed tomography (CT) imaging, limited by the scanning environment and the risk of excessive X-ray radiation exposure imposed on the patients, reconstructing high-quality CT images from limited projection data has become a hot topic. X-ray imaging over a limited scanning angular range is an effective imaging modality to reduce the radiation dose to the patients. As the projection data available in this modality are incomplete, limited-angle CT image reconstruction is actually an ill-posed inverse problem. Images reconstructed by the conventional filtered back projection (FBP) algorithm frequently exhibit conspicuous streak artifacts and gradually changing artifacts near edges. Image reconstruction based on total variation minimization (TVM) can significantly reduce streak artifacts in few-view CT, but it suffers from the gradually changing artifacts near edges in limited-angle CT. To suppress this kind of artifact, we develop an image reconstruction algorithm based on ℓ0 gradient minimization for limited-angle CT in this paper. The ℓ0-norm of the image gradient is taken as the regularization function in the framework of the developed reconstruction model. We transform the optimization problem into a few optimization sub-problems and then solve these sub-problems by alternating iteration. Numerical experiments are performed to validate the efficiency and the feasibility of the developed algorithm. Statistical analysis of the performance evaluations, peak signal-to-noise ratio (PSNR) and normalized root mean square distance (NRMSD), shows that there are significant statistical differences between different algorithms for different scanning angular ranges (p<0.0001). The experimental results also indicate that the developed algorithm outperforms classical reconstruction algorithms in suppressing the streak artifacts and the gradual changed

  7. Discretized energy minimization in a wave guide with point sources

    Science.gov (United States)

    Propst, G.

    1994-01-01

    An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Wellposedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.

  8. Minimal Marking: A Success Story

    Science.gov (United States)

    McNeilly, Anne

    2014-01-01

    The minimal-marking project conducted in Ryerson's School of Journalism throughout 2012 and early 2013 resulted in significantly higher grammar scores in two first-year classes of minimally marked university students when compared to two traditionally marked classes. The "minimal-marking" concept (Haswell, 1983), which requires…

  9. Multi-period multi-objective electricity generation expansion planning problem with Monte-Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Tekiner, Hatice [Industrial Engineering, College of Engineering and Natural Sciences, Istanbul Sehir University, 2 Ahmet Bayman Rd, Istanbul (Turkey); Coit, David W. [Department of Industrial and Systems Engineering, Rutgers University, 96 Frelinghuysen Rd., Piscataway, NJ (United States); Felder, Frank A. [Edward J. Bloustein School of Planning and Public Policy, Rutgers University, Piscataway, NJ (United States)

    2010-12-15

    A new approach to the electricity generation expansion problem is proposed to minimize simultaneously multiple objectives, such as cost and air emissions, including CO2 and NOx, over a long-term planning horizon. In this problem, system expansion decisions are made to select the type of power generation, such as coal, nuclear, wind, etc., where the new generation asset should be located, and at which time period expansion should take place. We are able to find a Pareto front for the multi-objective generation expansion planning problem that explicitly considers availability of the system components over the planning horizon and operational dispatching decisions. Monte-Carlo simulation is used to generate numerous scenarios based on the component availabilities and anticipated demand for energy. The problem is then formulated as a mixed integer linear program, and optimal solutions are found based on the simulated scenarios with a combined objective function considering the multiple problem objectives. The different objectives are combined using dimensionless weights and a Pareto front can be determined by varying these weights. The mathematical model is demonstrated on an example problem with interesting results indicating how expansion decisions vary depending on whether minimizing cost or minimizing greenhouse gas emissions or pollutants is given higher priority. (author)
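
    For illustration of the weighted-sum idea in the last sentences, the sketch below sweeps a dimensionless weight over precomputed (cost, emissions) pairs of hypothetical expansion plans and collects the minimizers; the actual work solves a mixed integer linear program over simulated scenarios, which is not reproduced here.

    ```python
    # Weighted-sum scalarization sketch: for each weight w the combined objective
    # w * cost + (1 - w) * emissions is minimized over a set of candidate plans
    # (here just precomputed objective pairs); sweeping w traces Pareto-front points.
    import numpy as np

    candidates = np.array([   # (cost, CO2 emissions) of hypothetical expansion plans
        [100.0, 90.0], [120.0, 60.0], [150.0, 40.0], [200.0, 35.0], [110.0, 85.0]])

    norm = candidates / candidates.max(axis=0)     # make objectives dimensionless
    front = set()
    for w in np.linspace(0.0, 1.0, 21):
        scores = w * norm[:, 0] + (1.0 - w) * norm[:, 1]
        front.add(int(np.argmin(scores)))

    pareto_plans = candidates[sorted(front)]
    ```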

  10. Minimal families of curves on surfaces

    KAUST Repository

    Lubbes, Niels

    2014-01-01

    A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal

  11. Unity in the problem of reducing carbon dioxide emission

    International Nuclear Information System (INIS)

    Byurzhe, R.

    1992-01-01

    Political and economic aspects of the problem of reducing atmospheric emissions of greenhouse gases are discussed. Canadian government policy on the power production problem is considered, as well as methods of minimizing gaseous emissions through regulation of energy consumption and the use of safer and cleaner energy sources

  12. Analysis of convergence for control problems governed by evolution ...

    African Journals Online (AJOL)

    The convergence of a scheme to minimize a class of a system of continuous optimal control problems characterized by a system of evolution equations and a system of linear inequality and equality constraints with multiplier imbedding is considered. The result is applied to some problems and the scheme is found to exhibit ...

  13. Parameter-free method for the shape optimization of stiffeners on thin-walled structures to minimize stress concentration

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yang; Shibutan, Yoji [Osaka University, Osaka (Japan); Shimoda, Masatoshi [Toyota Technological Institute, Nagoya (Japan)

    2015-04-15

    This paper presents a parameter-free shape optimization method for the strength design of stiffeners on thin-walled structures. The maximum von Mises stress is minimized subject to a volume constraint. The optimum design problem is formulated as a distributed-parameter shape optimization problem under the assumptions that a stiffener is varied in the in-plane direction and that the thickness is constant. The issue of nondifferentiability, which is inherent in this min-max problem, is avoided by transforming the local measure into a smooth, differentiable integral functional by using the Kreisselmeier-Steinhauser function. The shape gradient functions are derived by using the material derivative method and the adjoint variable method and are applied to the H^1 gradient method for shells to determine the optimal free-boundary shapes. By using this method, a smooth optimal stiffener shape can be obtained without any shape design parameterization while minimizing the maximum stress. The validity of this method is verified through two practical design examples.
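
    The Kreisselmeier-Steinhauser function referred to above aggregates local values g_i into the smooth, differentiable upper bound KS(g; rho) = (1/rho) ln Σ_i exp(rho g_i) on max_i g_i, which tightens as rho grows and is what makes the min-max problem differentiable. The sketch below evaluates it; rho and the stress values are assumptions, not values from the paper.

    ```python
    # Kreisselmeier-Steinhauser aggregation: a smooth, differentiable upper bound on
    # the maximum of a set of local responses (e.g. element stresses).
    import numpy as np

    def ks_aggregate(g, rho=50.0):
        g = np.asarray(g, dtype=float)
        g_max = g.max()                       # shift for numerical stability
        return g_max + np.log(np.exp(rho * (g - g_max)).sum()) / rho

    stresses = [120.0, 180.0, 240.0, 235.0]   # illustrative von Mises values
    print(ks_aggregate(stresses), max(stresses))
    ```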

  14. Variations on minimal gauge-mediated supersymmetry breaking

    International Nuclear Information System (INIS)

    Dine, M.; Nir, Y.; Shirman, Y.

    1997-01-01

    We study various modifications to the minimal models of gauge-mediated supersymmetry breaking. We argue that, under reasonable assumptions, the structure of the messenger sector is rather restricted. We investigate the effects of possible mixing between messenger and ordinary squark and slepton fields and, in particular, violation of universality. We show that acceptable values for the μ and B parameters can naturally arise from discrete, possibly horizontal, symmetries. We claim that in models where the supersymmetry-breaking parameters A and B vanish at the tree level, tanβ could be large without fine-tuning. We explain how the supersymmetric CP problem is solved in such models. copyright 1997 The American Physical Society

  15. Developing and solving a bi-objective joint replenishment problem under storing space constraint

    Directory of Open Access Journals (Sweden)

    ommolbanin yousefi

    2011-03-01

    Full Text Available In this research, a bi-objective joint replenishment problem has been developed and solved with the assumption of one restricted resource. The proposed model has a storing space constraint and tries to optimize two objective functions simultaneously. They include minimizing annual holding and setup costs and minimizing annual inventory investment. Then, for solving this problem, a multi-objective genetic algorithm (MOGA) has been developed. In order to analyze the algorithm efficiency, its performance has been examined in solving 1600 randomly produced problems using parameters extracted from the literature. The findings imply that the proposed algorithm is capable of producing a good set of Pareto optimal solutions. Finally, the application of the problem-solving approach and the findings of the proposed algorithm have been illustrated for a special problem, which has been randomly produced.

  16. A comparison of particle swarm optimizations for uncapacitated multilevel lot-sizing problems

    NARCIS (Netherlands)

    Han, Y.; Kaku, I.; Tang, J.; Dellaert, N.P.; Cai, J.; Li, Y.

    2010-01-01

    The multilevel lot-sizing (MLLS) problem is a key production planning problem in the material requirement planning (MRP) system. The MLLS problem deals with determining the production lot sizes of various items appearing in the product structure over a given finite planning horizon to minimize the

  17. Waste minimization assessment procedure

    International Nuclear Information System (INIS)

    Kellythorne, L.L.

    1993-01-01

    Perry Nuclear Power Plant began developing a waste minimization plan early in 1991. In March of 1991 the plan was documented following a similar format to that described in the EPA Waste Minimization Opportunity Assessment Manual. Initial implementation involved obtaining management's commitment to support a waste minimization effort. The primary assessment goal was to identify all hazardous waste streams and to evaluate those streams for minimization opportunities. As implementation of the plan proceeded, non-hazardous waste streams routinely generated in large volumes were also evaluated for minimization opportunities. The next step included collection of process and facility data which would be useful in helping the facility accomplish its assessment goals. This paper describes the resources that were used and which were most valuable in identifying both the hazardous and non-hazardous waste streams that existed on site. For each material identified as a waste stream, additional information regarding the material's use, manufacturer, EPA hazardous waste number and DOT hazard class was also gathered. Waste streams were then evaluated for potential source reduction, recycling, re-use, re-sale, or burning for heat recovery, with disposal considered the last viable alternative

  18. Westinghouse Hanford Company waste minimization actions

    International Nuclear Information System (INIS)

    Greenhalgh, W.O.

    1988-09-01

    Companies that generate hazardous waste materials are now required by national regulations to establish a waste minimization program. Accordingly, in FY88 the Westinghouse Hanford Company formed a waste minimization team organization. The purpose of the team is to assist the company in its efforts to minimize the generation of waste, train personnel on waste minimization techniques, document successful waste minimization effects, track dollar savings realized, and to publicize and administer an employee incentive program. A number of significant actions have been successful, resulting in the savings of materials and dollars. The team itself has been successful in establishing some worthwhile minimization projects. This document briefly describes the waste minimization actions that have been successful to date. 2 refs., 26 figs., 3 tabs

  19. Minimizing Banking Risk in a Lévy Process Setting

    Directory of Open Access Journals (Sweden)

    F. Gideon

    2007-01-01

    Full Text Available The primary functions of a bank are to obtain funds through deposits from external sources and to use the said funds to issue loans. Moreover, risk management practices related to the withdrawal of these bank deposits have always been of considerable interest. In this spirit, we construct Lévy process-driven models of banking reserves in order to address the problem of hedging deposit withdrawals from such institutions by means of reserves. Here reserves are related to outstanding debt and acts as a proxy for the assets held by the bank. The aforementioned modeling enables us to formulate a stochastic optimal control problem related to the minimization of reserve, depository, and intrinsic risk that are associated with the reserve process, the net cash flows from depository activity, and cumulative costs of the bank's provisioning strategy, respectively. A discussion of the main risk management issues arising from the optimization problem mentioned earlier forms an integral part of our paper. This includes the presentation of a numerical example involving a simulation of the provisions made for deposit withdrawals via treasuries and reserves.

  20. Hidden conic quadratic representation of some nonconvex quadratic optimization problems

    NARCIS (Netherlands)

    Ben-Tal, A.; den Hertog, D.

    The problem of minimizing a quadratic objective function subject to one or two quadratic constraints is known to have a hidden convexity property, even when the quadratic forms are indefinite. The equivalent convex problem is a semidefinite one, and the equivalence is based on the celebrated

  1. IMPORTANCE, Minimal Cut Sets and System Availability from Fault Tree Analysis

    International Nuclear Information System (INIS)

    Lambert, H. W.

    1987-01-01

    1 - Description of problem or function: IMPORTANCE computes various measures of probabilistic importance of basic events and minimal cut sets to a fault tree or reliability network diagram. The minimal cut sets, the failure rates and the fault duration times (i.e., the repair times) of all basic events contained in the minimal cut sets are supplied as input data. The failure and repair distributions are assumed to be exponential. IMPORTANCE, a quantitative evaluation code, then determines the probability of the top event and computes the importance of minimal cut sets and basic events by a numerical ranking. Two measures are computed. The first describes system behavior at one point in time; the second describes sequences of failures that cause the system to fail in time. All measures are computed assuming statistical independence of basic events. In addition, system unavailability and expected number of system failures are computed by the code. 2 - Method of solution: Seven measures of basic event importance and two measures of cut set importance can be computed. Birnbaum's measure of importance (i.e., the partial derivative) and the probability of the top event are computed using the min cut upper bound. If there are no replicated events in the minimal cut sets, then the min cut upper bound is exact. If basic events are replicated in the minimal cut sets, then based on experience the min cut upper bound is accurate if the probability of the top event is less than 0.1. Simpson's rule is used in computing the time-integrated measures of importance. Newton's method for approximating the roots of an equation is employed in the options where the importance measures are computed as a function of the probability of the top event, and a shell sort puts the output in descending order of importance
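
    As a small illustration of the min cut upper bound and Birnbaum's measure mentioned above, the sketch below evaluates the bound P(top) <= 1 - prod_k (1 - prod_{i in C_k} p_i) for given basic-event probabilities and approximates the partial derivative numerically; the probabilities and cut sets are illustrative, and the actual code additionally handles exponential failure and repair distributions and time-dependent measures not shown here.

    ```python
    # Min cut upper bound for the top-event probability of a fault tree, plus a
    # finite-difference estimate of Birnbaum's importance (the partial derivative
    # of the bound with respect to one basic-event probability).
    from math import prod

    def min_cut_upper_bound(p, cut_sets):
        return 1.0 - prod(1.0 - prod(p[i] for i in cut) for cut in cut_sets)

    def birnbaum_importance(p, cut_sets, event, h=1e-7):
        hi, lo = dict(p), dict(p)
        hi[event] += h
        lo[event] -= h
        return (min_cut_upper_bound(hi, cut_sets) - min_cut_upper_bound(lo, cut_sets)) / (2 * h)

    p = {"a": 1e-3, "b": 5e-3, "c": 2e-2}          # illustrative unavailabilities
    cuts = [{"a", "b"}, {"a", "c"}, {"b", "c"}]    # illustrative minimal cut sets
    print(min_cut_upper_bound(p, cuts), birnbaum_importance(p, cuts, "a"))
    ```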

  2. DC Control Effort Minimized for Magnetic-Bearing-Supported Shaft

    Science.gov (United States)

    Brown, Gerald V.

    2001-01-01

    A magnetic-bearing-supported shaft may have a number of concentricity and alignment problems. One of these involves the relationship of the position sensors, the centerline of the backup bearings, and the magnetic center of the magnetic bearings. For magnetic bearings with permanent magnet biasing, the average control current for a given control axis that is not bearing the shaft weight will be minimized if the shaft is centered, on average over a revolution, at the magnetic center of the bearings. That position may not yield zero sensor output or center the shaft in the backup bearing clearance. The desired shaft position that gives zero average current can be achieved if a simple additional term is added to the control law. Suppose that the instantaneous control currents from each bearing are available from measurements and can be input into the control computer. If each control current is integrated with a very small rate of accumulation and the result is added to the control output, the shaft will gradually move to a position where the control current averages to zero over many revolutions. This will occur regardless of any offsets of the position sensor inputs. At that position, the average control effort is minimized in comparison to other possible locations of the shaft. Nonlinearities of the magnetic bearing are minimized at that location as well.
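
    The bias-nulling idea described above can be sketched as follows: the measured control current is integrated with a very small gain and added to the control output, so the shaft slowly settles where the average control current is zero. The controller structure and gains are illustrative assumptions, not the actual rig's control law.

    ```python
    # Sketch of the described bias-nulling term for one magnetic-bearing axis: a
    # slow integral of the measured control current is added to a simple PD command,
    # driving the shaft toward the position of zero average control current.
    class MagneticBearingAxisController:
        def __init__(self, kp, kd, k_null, dt):
            self.kp, self.kd, self.k_null, self.dt = kp, kd, k_null, dt
            self.current_integral = 0.0

        def update(self, position_error, velocity, measured_current):
            # very slow accumulation of the measured control current
            self.current_integral += measured_current * self.dt
            # PD position control plus the slowly accumulated current-nulling offset
            return (self.kp * position_error
                    - self.kd * velocity
                    - self.k_null * self.current_integral)
    ```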

  3. Microbiological and radiobiological studies on the hygienic quality of minimally processed food

    Energy Technology Data Exchange (ETDEWEB)

    Abu El-Nour, S. A. M. [National Center for Radiation Research and Technology, Atomic Energy Authority, Cairo (Egypt)

    2007-07-01

    In the past, there have been three traditional forms of food trading: fresh, canned and frozen foods. In recent years, a fourth form, called "minimally processed" food, has been developed to respond to an emerging consumer demand for convenient, high-quality and preservative-free products with the appearance of fresh characteristics, while being less severely processed (Saracino et al., 1991). Minimally processed food can be used as ready-to-eat, ready-to-use, or ready-to-cook products. They are stored and marketed under refrigeration conditions (Dignan, 1994). Minimally processed food products were developed in the 1980s and now they are produced in many advanced and some developing countries. In Egypt, great amounts of minimally processed vegetables are now produced and commercially sold in certain supermarkets. They include fresh-cut lettuce, packaged mixed vegetable salad, shredded carrots, sliced carrots, shredded cabbage (white and red), fresh-cut green beans, mixed peas with diced carrots, mafa spanish, okra, watermelon, pumpkin, garlic, artichoke, celery, parsley, etc. However, there is an increasing interest in offering some other minimally processed vegetables and some types of fresh-cut fruits that can be used as ready-to-eat or ready-to-use. Preparation steps of minimally processed fruit and vegetable products, which may include peeling, slicing, shredding, etc., save labor and time for the purchasers, while removal of waste material during processing reduces transport costs. In addition, the production of such products will make year-round availability of almost all vegetables and fruits possible in fresh form around the world (Baldwin et al., 1995). However, preparation steps of such products increase the native enzymatic activity and the possibility of microbial contamination. Therefore, these products have a short shelf-life, and this is considered one of the most challenging problems in the commercialization of minimally processed foods particularly

  4. Minimal but non-minimal inflation and electroweak symmetry breaking

    Energy Technology Data Exchange (ETDEWEB)

    Marzola, Luca [National Institute of Chemical Physics and Biophysics,Rävala 10, 10143 Tallinn (Estonia); Institute of Physics, University of Tartu,Ravila 14c, 50411 Tartu (Estonia); Racioppi, Antonio [National Institute of Chemical Physics and Biophysics,Rävala 10, 10143 Tallinn (Estonia)

    2016-10-07

    We consider the most minimal scale-invariant extension of the standard model that allows for successful radiative electroweak symmetry breaking and inflation. The framework involves an extra scalar singlet that plays the rôle of the inflaton, and is compatible with current experimental bounds owing to the non-minimal coupling of the latter to gravity. This inflationary scenario predicts a very low tensor-to-scalar ratio r ≈ 10^-3, typical of Higgs-inflation models, but in contrast yields a scalar spectral index n_s ≃ 0.97 which departs from the Starobinsky limit. We briefly discuss the collider phenomenology of the framework.

  5. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    Science.gov (United States)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a case of constrained optimization in which we want to minimize cost subject to the balance between supply and demand. Exact methods such as the northwest corner, Vogel, Russell and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), for solving the linear transportation problem with any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the solutions produced by PSO.
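
    A sketch of one PSOGA-style iteration is given below: the standard PSO velocity and position updates followed by a GA-style random mutation of a few components. The decision vector, bounds and parameter values are illustrative assumptions, not those of the paper.

    ```python
    # One PSO step with a GA-style mutation operator: velocity/position updates,
    # then random perturbation of a small fraction of components. Rows of x are
    # particles; columns would encode transportation amounts in an LTP application.
    import numpy as np

    def psoga_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, pm=0.05, rng=np.random):
        r1, r2 = rng.rand(*x.shape), rng.rand(*x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # PSO velocity update
        x = x + v                                                   # position update
        mutate = rng.rand(*x.shape) < pm                            # GA mutation operator
        x[mutate] += rng.normal(scale=0.1, size=mutate.sum())
        return np.clip(x, 0.0, None), v                             # keep shipments nonnegative
    ```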

  6. Application of lean six sigma to waste minimization in cigarette paper industry

    Science.gov (United States)

    Syahputri, K.; Sari, R. M.; Anizar; Tarigan, I. R.; Siregar, I.

    2018-02-01

    The cigarette paper industry is one of the industries that is always experiencing increasing demand from consumers. Consumer expectations for the products have also increased, both in terms of quality and quantity. The company continuously improves the quality of its products by trying to minimize nonconformity and waste and to improve the efficiency of the whole production process. In this cigarette industry, there is a defect rate above the company's defect tolerance of 10% of the production amount per month. Another problem is that the production time is too long due to the many non-value-added activities on the production floor. To overcome this problem, it is necessary to improve the production process of cigarette paper and minimize production time by reducing non-value-added activities. Improvements are made with Lean Six Sigma, a combination of the Lean and Six Sigma concepts with the DMAIC method (Define, Measure, Analyze, Improve, Control). With this Lean approach, a proposed total production time of 1479.13 minutes is obtained, with process cycle efficiency increased by 12.64%.

  7. Ruled Laguerre minimal surfaces

    KAUST Repository

    Skopenkov, Mikhail

    2011-10-30

    A Laguerre minimal surface is an immersed surface in ℝ³ being an extremal of the functional ∫ (H²/K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces r(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ) + λ(sin φ, cos φ, 0), where A, B, C, D ∈ ℝ are fixed. To achieve invariance under Laguerre transformations, we also derive all Laguerre minimal surfaces that are enveloped by a family of cones. The methodology is based on the isotropic model of Laguerre geometry. In this model a Laguerre minimal surface enveloped by a family of cones corresponds to a graph of a biharmonic function carrying a family of isotropic circles. We classify such functions by showing that the top view of the family of circles is a pencil. © 2011 Springer-Verlag.

  8. An analogue of Morse theory for planar linear networks and the generalized Steiner problem

    International Nuclear Information System (INIS)

    Karpunin, G A

    2000-01-01

    A study is made of the generalized Steiner problem: the problem of finding all the locally minimal networks spanning a given boundary set (terminal set). It is proposed to solve this problem by using an analogue of Morse theory developed here for planar linear networks. The space K of all planar linear networks spanning a given boundary set is constructed. The concept of a critical point and its index is defined for the length function l of a planar linear network. It is shown that locally minimal networks are local minima of l on K and are critical points of index 1. The theorem is proved that the sum of the indices of all the critical points is equal to χ(K)=1. This theorem is used to find estimates for the number of locally minimal networks spanning a given boundary set

  9. The regular indefinite linear-quadratic problem with linear endpoint constraints

    NARCIS (Netherlands)

    Soethoudt, J.M.; Trentelman, H.L.

    1989-01-01

    This paper deals with the infinite horizon linear-quadratic problem with indefinite cost. Given a linear system, a quadratic cost functional and a subspace of the state space, we consider the problem of minimizing the cost functional over all inputs for which the state trajectory converges to that

  10. Distributed Solutions for Loosely Coupled Feasibility Problems Using Proximal Splitting Methods

    DEFF Research Database (Denmark)

    Pakazad, Sina Khoshfetrat; Andersen, Martin Skovgaard; Hansson, Anders

    2014-01-01

    In this paper, we consider convex feasibility problems (CFPs) where the underlying sets are loosely coupled, and we propose several algorithms to solve such problems in a distributed manner. These algorithms are obtained by applying proximal splitting methods to convex minimization reformulations ...

  11. Global Analysis of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J

    2010-01-01

    Many properties of minimal surfaces are of a global nature, and this is already true for the results treated in the first two volumes of the treatise. Part I of the present book can be viewed as an extension of these results. For instance, the first two chapters deal with existence, regularity and uniqueness theorems for minimal surfaces with partially free boundaries. Here one of the main features is the possibility of 'edge-crawling' along free parts of the boundary. The third chapter deals with a priori estimates for minimal surfaces in higher dimensions and for minimizers of singular integ

  12. Minimal Surfaces for Hitchin Representations

    DEFF Research Database (Denmark)

    Li, Qiongling; Dai, Song

    2018-01-01

    . In this paper, we investigate the properties of immersed minimal surfaces inside symmetric space associated to a subloci of Hitchin component: $q_n$ and $q_{n-1}$ case. First, we show that the pullback metric of the minimal surface dominates a constant multiple of the hyperbolic metric in the same conformal...... class and has a strong rigidity property. Secondly, we show that the immersed minimal surface is never tangential to any flat inside the symmetric space. As a direct corollary, the pullback metric of the minimal surface is always strictly negatively curved. In the end, we find a fully decoupled system...

  13. Robust dynamical pattern formation from a multifunctional minimal genetic circuit

    Directory of Open Access Journals (Sweden)

    Carrera Javier

    2010-04-01

    Full Text Available Abstract Background A practical problem during the analysis of natural networks is their complexity, thus the use of synthetic circuits would allow the natural mechanisms of operation to be unveiled. Autocatalytic gene regulatory networks play an important role in shaping the development of multicellular organisms, whereas oscillatory circuits are used to control gene expression under variable environments such as the light-dark cycle. Results We propose a new mechanism to generate developmental patterns and oscillations using a minimal number of genes. For this, we design a synthetic gene circuit with an antagonistic self-regulation to study the spatio-temporal control of protein expression. Here, we show that our minimal system can behave as a biological clock or memory, and it exhibits an inherent robustness due to a quorum sensing mechanism. We analyze this property by accounting for molecular noise in a heterogeneous population. We also show how the period of the oscillations is tunable by environmental signals, and we study the bifurcations of the system by constructing different phase diagrams. Conclusions As this minimal circuit is based on a single transcriptional unit, it provides a new mechanism based on post-translational interactions to generate targeted spatio-temporal behavior.

  14. A Hybrid Multiobjective Evolutionary Approach for Flexible Job-Shop Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Jian Xiong

    2012-01-01

    Full Text Available This paper addresses the multiobjective flexible job-shop scheduling problem (FJSP) with three simultaneously considered objectives: minimizing makespan, minimizing total workload, and minimizing maximal workload. A hybrid multiobjective evolutionary approach (H-MOEA) is developed to solve the problem. According to the characteristics of the FJSP, a modified crowding distance measure is introduced to maintain the diversity of individuals. In the proposed H-MOEA, a well-designed chromosome representation and genetic operators are developed for the FJSP. Moreover, a local search procedure based on critical path theory is incorporated in H-MOEA to improve the convergence ability of the algorithm. Experimental results on several well-known benchmark instances demonstrate the efficiency and stability of the proposed algorithm. The comparison with other recently published approaches validates that H-MOEA can obtain Pareto-optimal solutions with better quality and/or diversity.
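
    The modified crowding distance measure itself is not specified in the abstract; as background, the sketch below shows the standard NSGA-II-style crowding distance on which such measures are typically built, evaluated on invented objective vectors (makespan, total workload, maximal workload).

        # Standard crowding-distance computation (illustrative; the paper's modified
        # measure for the FJSP builds on this idea but differs in detail).
        def crowding_distance(objectives):
            """objectives: list of tuples, one tuple of objective values per solution."""
            n = len(objectives)
            if n == 0:
                return []
            m = len(objectives[0])
            dist = [0.0] * n
            for k in range(m):
                order = sorted(range(n), key=lambda i: objectives[i][k])
                lo, hi = objectives[order[0]][k], objectives[order[-1]][k]
                dist[order[0]] = dist[order[-1]] = float("inf")   # boundary solutions
                if hi == lo:
                    continue
                for j in range(1, n - 1):
                    prev_v = objectives[order[j - 1]][k]
                    next_v = objectives[order[j + 1]][k]
                    dist[order[j]] += (next_v - prev_v) / (hi - lo)
            return dist

        # Example with four invented solutions: (makespan, total workload, maximal workload).
        print(crowding_distance([(40, 180, 52), (42, 175, 50), (45, 170, 49), (50, 168, 55)]))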

  15. Minimal Webs in Riemannian Manifolds

    DEFF Research Database (Denmark)

    Markvorsen, Steen

    2008-01-01

    For a given combinatorial graph $G$ a {\\it geometrization} $(G, g)$ of the graph is obtained by considering each edge of the graph as a $1-$dimensional manifold with an associated metric $g$. In this paper we are concerned with {\\it minimal isometric immersions} of geometrized graphs $(G, g......)$ into Riemannian manifolds $(N^{n}, h)$. Such immersions we call {\\em{minimal webs}}. They admit a natural 'geometric' extension of the intrinsic combinatorial discrete Laplacian. The geometric Laplacian on minimal webs enjoys standard properties such as the maximum principle and the divergence theorems, which...... are of instrumental importance for the applications. We apply these properties to show that minimal webs in ambient Riemannian spaces share several analytic and geometric properties with their smooth (minimal submanifold) counterparts in such spaces. In particular we use appropriate versions of the divergence...

  16. Waste minimization handbook, Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Boing, L.E.; Coffey, M.J.

    1995-12-01

    This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996.

  17. Waste minimization handbook, Volume 1

    International Nuclear Information System (INIS)

    Boing, L.E.; Coffey, M.J.

    1995-12-01

    This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996

  18. Proper project planning helps minimize overruns and delays

    International Nuclear Information System (INIS)

    Donnelly, G.; Cooney, D.J.

    1994-01-01

    This paper describes planning methods to help minimize cost overruns during the construction of oil and gas pipelines. These steps include background data collection methods, field surveys, determining preliminary pipeline routes, regulatory agency pre-application meetings, and preliminary engineering. Methods for planning also include preliminary aerial mapping, biological assessments, cultural resources investigations, wetlands delineation, geotechnical investigations, and environmental audits. Identification of potential problems can allow for rerouting of the pipeline or remediation processes before they are raised during the permitting process. By coordinating these events from the very beginning, significant cost savings will result that prevent having to rebudget for them after the permitting process starts

  19. Topology optimization of flow problems

    DEFF Research Database (Denmark)

    Gersborg, Allan Roulund

    2007-01-01

    This thesis investigates how to apply topology optimization using the material distribution technique to steady-state viscous incompressible flow problems. The target design applications are fluid devices that are optimized with respect to minimizing the energy loss, characteristic properties...... transport in 2D Stokes flow. Using Stokes flow limits the range of applications; nonetheless, the thesis gives a proof-of-concept for the application of the method within fluid dynamic problems and it remains of interest for the design of microfluidic devices. Furthermore, the thesis contributes...... at the Technical University of Denmark. Large topology optimization problems with 2D and 3D Stokes flow modeling are solved with direct and iterative strategies employing the parallelized Sun Performance Library and the OpenMP parallelization technique, respectively....

  20. Asymptotically optimum multialternative sequential procedures for discernment of processes minimizing average length of observations

    Science.gov (United States)

    Fishman, M. M.

    1985-01-01

    The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.

  1. A column generation approach to a carpentry cutting stock problem ...

    African Journals Online (AJOL)

    The carpentry sector, like any other industry, is faced with a cutting stock problem requiring the minimization of incurred waste. The main purpose of this project was to develop a mathematical model which will solve the cutting stock problem using a column generation approach for Ashtons Company in Chinhoyi. The interview method was ...
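
    For orientation, the pricing step that drives a textbook column generation approach to the one-dimensional cutting stock problem can be stated as follows (this is the generic formulation, not necessarily the exact model built for the Chinhoyi case): given dual prices $\pi_i$ for the demand constraints of the restricted master problem, a new cutting pattern $(a_1,\dots,a_m)$ is found by solving the knapsack problem $\max \sum_i \pi_i a_i$ subject to $\sum_i \ell_i a_i \le L$ and $a_i \in \mathbb{Z}_{\ge 0}$, where $\ell_i$ are the piece lengths and $L$ the stock length; the pattern is added to the master problem whenever its reduced cost $1 - \sum_i \pi_i a_i$ is negative.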

  2. The Liner Shipping Routing and Scheduling Problem Under Environmental Considerations

    DEFF Research Database (Denmark)

    Dithmer, Philip; Reinhardt, Line Blander; Kontovas, Christos

    2017-01-01

    This paper deals with the Liner Shipping Routing and Scheduling Problem (LSRSP), which consists of designing the time schedule for a vessel to visit a fixed set of ports while minimizing costs. We extend the classical problem to include the external cost of ship air emissions and we present some...

  3. Algorithm for the Stochastic Generalized Transportation Problem

    Directory of Open Access Journals (Sweden)

    Marcin Anholcer

    2012-01-01

    Full Text Available The equalization method for the stochastic generalized transportation problem has been presented. The algorithm allows us to find the optimal solution to the problem of minimizing the expected total cost in the generalized transportation problem with random demand. After a short introduction and literature review, the algorithm is presented. It is a version of the method proposed by the author for the nonlinear generalized transportation problem. It is shown that this version of the method generates a sequence of solutions convergent to the KKT point. This guarantees the global optimality of the obtained solution, as the expected cost functions are convex and twice differentiable. The computational experiments performed for test problems of reasonable size show that the method is fast. (original abstract)

  4. The worldwide "wildfire" problem.

    Science.gov (United States)

    Gill, A Malcolm; Stephens, Scott L; Cary, Geoffrey J

    2013-03-01

    The worldwide "wildfire" problem is headlined by the loss of human lives and homes, but it applies generally to any adverse effects of unplanned fires, as events or regimes, on a wide range of environmental, social, and economic assets. The problem is complex and contingent, requiring continual attention to the changing circumstances of stakeholders, landscapes, and ecosystems; it occurs at a variety of temporal and spatial scales. Minimizing adverse outcomes involves controlling fires and fire regimes, increasing the resistance of assets to fires, locating or relocating assets away from the path of fires, and, as a probability of adverse impacts often remains, assisting recovery in the short-term while promoting the adaptation of societies in the long-term. There are short- and long-term aspects to each aspect of minimization. Controlling fires and fire regimes may involve fire suppression and fuel treatments such as prescribed burning or non-fire treatments but also addresses issues associated with unwanted fire starts like arson. Increasing the resistance of assets can mean addressing the design and construction materials of a house or the use of personal protective equipment. Locating or relocating assets can mean leaving an area about to be impacted by fire or choosing a suitable place to live; it can also mean the planning of land use. Assisting recovery and promoting adaptation can involve insuring assets and sharing responsibility for preparedness for an event. There is no single, simple, solution. Perverse outcomes can occur. The number of minimizing techniques used, and the breadth and depth of their application, depends on the geographic mix of asset types. Premises for policy consideration are presented.

  5. Management problems of this restricted zone around Chernobyl

    International Nuclear Information System (INIS)

    Kholosha, V.; Sobotovich, E.; Proscura, N.; Kozakov, S.; Korchagin, P.

    1996-01-01

    This brief report considers the main problems of minimizing the consequences of the accident and the management of actions currently provided in the Chernobyl zone on the territory of Ukraine, a decade in retrospect

  6. Mixed-waste minimization activities in the nuclear weapons complex

    International Nuclear Information System (INIS)

    Marchetti, J.A.; Suffern, J.S.

    1991-01-01

    Over the past 40 years, the US Department of Energy (DOE) and the nuclear weapons complex have successfully executed their mission of providing the country with a strong nuclear deterrent. Now, however, they must attain another mission at the same time: to eliminate or greatly reduce the environmental, safety, and health problems in the complex. Mixed-waste minimization activities have taken place in 11 of the complex production plants and laboratories: the Pinellas plant, the Mound plant, the Kansas City plant, the Y-12 plant, the Rocky Flats plant, the Savannah River Site (SRS), the Pantex plant, the Nevada Test Site, Sandia National Laboratories, Los Alamos National Laboratory, and the Lawrence Livermore National Laboratory. The mixed-waste minimization opportunities that have been implemented to date by the production facilities are different from those that have been implemented by the laboratories. Areas of opportunity at the plants involve the following activities: (1) process design or improvement; (2) substitution of materials; (3) waste segregation; (4) recycling; and (5) administrative controls

  7. A Simulated Annealing-Based Heuristic Algorithm for Job Shop Scheduling to Minimize Lateness

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2013-04-01

    Full Text Available A decomposition-based optimization algorithm is proposed for solving large job shop scheduling problems with the objective of minimizing the maximum lateness. First, we use the constraint propagation theory to derive the orientation of a portion of disjunctive arcs. Then we use a simulated annealing algorithm to find a decomposition policy which satisfies the maximum number of oriented disjunctive arcs. Subsequently, each subproblem (corresponding to a subset of operations as determined by the decomposition policy) is successively solved with a simulated annealing algorithm, which leads to a feasible solution to the original job shop scheduling problem. Computational experiments are carried out for adapted benchmark problems, and the results show the proposed algorithm is effective and efficient in terms of solution quality and time performance.
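
    The acceptance rule used by simulated annealing in approaches of this kind can be sketched as follows. The toy below minimizes maximum lateness on a single machine with a simple swap neighbourhood; the processing times, due dates and cooling parameters are invented, and the paper's decomposition of the job shop problem is not reproduced here.

        # Illustrative simulated annealing loop minimizing maximum lateness on a
        # single machine (a simplification of the paper's decomposition scheme).
        import math, random

        random.seed(1)
        proc = [4, 3, 7, 2, 5]        # assumed processing times
        due  = [6, 5, 18, 9, 12]      # assumed due dates

        def max_lateness(seq):
            t, worst = 0, float("-inf")
            for j in seq:
                t += proc[j]
                worst = max(worst, t - due[j])
            return worst

        current = list(range(len(proc)))
        best = current[:]
        temp = 10.0
        while temp > 1e-3:
            cand = current[:]
            i, j = random.sample(range(len(cand)), 2)   # swap two jobs
            cand[i], cand[j] = cand[j], cand[i]
            delta = max_lateness(cand) - max_lateness(current)
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                current = cand
                if max_lateness(current) < max_lateness(best):
                    best = current[:]
            temp *= 0.995                               # geometric cooling
        print(best, max_lateness(best))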

  8. Optimal blood glucose level control using dynamic programming based on minimal Bergman model

    Science.gov (United States)

    Rettian Anggita Sari, Maria; Hartono

    2018-03-01

    The purpose of this article is to simulate the glucose dynamics and insulin kinetics of a diabetic patient. The model used in this research is the non-linear Bergman minimal model. Optimal control theory is then applied to formulate the problem in order to determine the optimal dose of insulin in the treatment of diabetes mellitus such that the glucose level stays in the normal range for some specific time range. The optimization problem is solved using dynamic programming. The result shows that dynamic programming is quite reliable for representing the interaction between glucose and insulin levels in a diabetes mellitus patient.
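
    The non-linear Bergman minimal model referred to above is commonly written with a glucose compartment G, a remote insulin action X and plasma insulin I. The simulation sketch below uses that standard form with an insulin infusion input u(t); all parameter values and the constant infusion are illustrative assumptions rather than the article's data.

        # Forward-Euler simulation of the Bergman minimal model with an insulin
        # infusion input u(t); parameter values are illustrative assumptions.
        p1, p2, p3, n = 0.03, 0.02, 1.3e-5, 0.1   # rate constants (1/min)
        Gb, Ib = 90.0, 7.0                        # basal glucose (mg/dl) and insulin (uU/ml)

        def simulate(u, G0=250.0, X0=0.0, I0=7.0, dt=0.1, T=300.0):
            G, X, I = G0, X0, I0
            t, traj = 0.0, []
            while t <= T:
                dG = -(p1 + X) * G + p1 * Gb      # glucose compartment
                dX = -p2 * X + p3 * (I - Ib)      # remote insulin action
                dI = -n * (I - Ib) + u(t)         # plasma insulin with infusion input
                G, X, I = G + dt * dG, X + dt * dX, I + dt * dI
                t += dt
                traj.append((t, G))
            return traj

        # Constant infusion as a stand-in for the dose profile found by dynamic programming.
        final_t, final_G = simulate(lambda t: 0.5)[-1]
        print(f"glucose after {final_t:.0f} min: {final_G:.1f} mg/dl")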

  9. Annealed Demon Algorithms Solving the Environmental / Economic Dispatch Problem

    Directory of Open Access Journals (Sweden)

    Aristidis VLACHOS

    2013-06-01

    Full Text Available This paper presents an efficient and reliable Annealed Demon (AD) algorithm for the Environmental/Economic Dispatch (EED) problem. The EED problem is a multi-objective non-linear optimization problem with constraints. This problem is one of the fundamental issues in power system operation. The generation system comprises thermal generators whose emissions involve sulphur oxides (SO2) and nitrogen oxides (NOx). The aim is to minimize the total fuel cost of the system and to control emissions. The proposed AD algorithm is applied to the EED of a simple power system.

  10. Two-stage optimization in a transportation problem

    CSIR Research Space (South Africa)

    Stewart, TJ

    1979-01-01

    Full Text Available A study of the economic distribution of maize throughout South Africa is reported. Although the problem of minimizing total transportation costs in such a situation is a classical one, and its solution is well known, there was in this case a high...

  11. Decoding Problem Gamblers' Signals: A Decision Model for Casino Enterprises.

    Science.gov (United States)

    Ifrim, Sandra

    2015-12-01

    The aim of the present study is to offer a validated decision model for casino enterprises. The model enables its users to perform early detection of problem gamblers and fulfill their ethical duty of social cost minimization. To this end, the interpretation of casino customers' nonverbal communication is understood as a signal-processing problem. Indicators of problem gambling recommended by Delfabbro et al. (Identifying problem gamblers in gambling venues: final report, 2007) are combined with the Viterbi algorithm into an interdisciplinary model that helps decode signals emitted by casino customers. Model output consists of a historical path of mental states and cumulated social costs associated with a particular client. Groups of problem and non-problem gamblers were simulated to investigate the model's diagnostic capability and its cost minimization ability. Each group consisted of 26 subjects and was subsequently enlarged to 100 subjects. In approximately 95% of the cases, mental states were correctly decoded for problem gamblers. Statistical analysis using planned contrasts revealed that the model is relatively robust to the suppression of signals performed by casino clientele facing gambling problems as well as to misjudgments made by staff regarding the clients' mental states. Only if the last-mentioned source of error occurs in a very pronounced manner, i.e., if judgment is extremely faulty, might cumulated social costs be distorted.
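
    As an illustration of the decoding step mentioned above, a generic Viterbi recursion over two hypothetical mental states ("non-problem" and "problem" gambling) is sketched below; the transition and emission probabilities are invented for the example and carry no empirical meaning.

        # Generic Viterbi decoding of a hidden state path from observed indicator
        # signals (illustrative probabilities only; not the validated model).
        import math

        states = ["non-problem", "problem"]
        start = {"non-problem": 0.9, "problem": 0.1}
        trans = {"non-problem": {"non-problem": 0.95, "problem": 0.05},
                 "problem":     {"non-problem": 0.10, "problem": 0.90}}
        emit  = {"non-problem": {"no_signal": 0.8, "signal": 0.2},
                 "problem":     {"no_signal": 0.3, "signal": 0.7}}

        def viterbi(observations):
            V = [{s: math.log(start[s]) + math.log(emit[s][observations[0]]) for s in states}]
            back = []
            for obs in observations[1:]:
                col, ptr = {}, {}
                for s in states:
                    # Best predecessor state for s at this step.
                    prev = max(states, key=lambda p: V[-1][p] + math.log(trans[p][s]))
                    col[s] = V[-1][prev] + math.log(trans[prev][s]) + math.log(emit[s][obs])
                    ptr[s] = prev
                V.append(col)
                back.append(ptr)
            last = max(states, key=lambda s: V[-1][s])
            path = [last]
            for ptr in reversed(back):        # backtrack the most probable path
                path.append(ptr[path[-1]])
            return list(reversed(path))

        print(viterbi(["no_signal", "signal", "signal", "no_signal", "signal"]))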

  12. Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness

    Science.gov (United States)

    Kusuma, K. K.; Maruf, A.

    2016-02-01

    Scheduling problems with non-identical machines, low utilization and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is categorized as an integer linear programming model and uses a branch and bound algorithm as the solution method. Fixed delivery times are used as the main constraint, with different processing times for each job. The results of the proposed model show that the utilization of production machines can be increased with minimal tardiness when fixed delivery times are used as a constraint.
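
    A generic way to write the tardiness objective of such an integer linear programming model, with $C_j$ the completion time of job $j$ and $d_j$ its fixed delivery (due) time, is $\min \sum_j T_j$ subject to $T_j \ge C_j - d_j$ and $T_j \ge 0$ for every job $j$, together with the usual machine-assignment and non-overlap constraints; the exact constraint set of the paper's model is not reproduced here.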

  13. Ruled Laguerre minimal surfaces

    KAUST Repository

    Skopenkov, Mikhail; Pottmann, Helmut; Grohs, Philipp

    2011-01-01

    A Laguerre minimal surface is an immersed surface in ℝ³ being an extremal of the functional ∫ (H²/K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces r(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ

  14. An analytical guidance law of planetary landing mission by minimizing the control effort expenditure

    International Nuclear Information System (INIS)

    Afshari, Hamed Hossein; Novinzadeh, Alireza Basohbat; Roshanian, Jafar

    2009-01-01

    An optimal trajectory design of a module for the planetary landing problem is achieved by minimizing the control effort expenditure. Using the calculus of variations theorem, the control variable is expressed as a function of costate variables, and the problem is converted into a two-point boundary-value problem. To solve this problem, the performance measure is approximated by employing a trigonometric series and subsequently, the optimal control and state trajectories are determined. To validate the accuracy of the proposed solution, a numerical method of the steepest descent is utilized. The main objective of this paper is to present a novel analytic guidance law of the planetary landing mission by optimizing the control effort expenditure. Finally, an example of a lunar landing mission is demonstrated to examine the results of this solution in practical situations

  15. Bed with Integrated Personalized Ventilation for Minimizing Cross Infection

    DEFF Research Database (Denmark)

    Nielsen, Peter V.; Jiang, Hao; Polak, Marcin

    2007-01-01

    of air to the whole room to ensure a dilution of airborne infection. Personalized ventilation has proven to be a very efficient system to protect people from cross infection because clean air is supplied direct to the breathing zone. Most designs of personalized ventilation are based on a supply jet....... The problem with those systems is the fact that the jet entrains air from the surroundings and, therefore, reduces the amount of fresh air which reaches the breathing zone. The entrainment is minimized in the system discussed here, especially when the source of clean air is located in the boundary layer close...

  16. Minimizing transient influence in WHPA delineation: An optimization approach for optimal pumping rate schemes

    Science.gov (United States)

    Rodriguez-Pretelin, A.; Nowak, W.

    2017-12-01

    For most groundwater protection management programs, Wellhead Protection Areas (WHPAs) have served as the primary protection measure. In their delineation, the influence of time-varying groundwater flow conditions is often underestimated because steady-state assumptions are commonly made. However, it has been demonstrated that temporary variations lead to significant changes in the required size and shape of WHPAs. Apart from natural transient groundwater drivers (e.g., changes in the regional angle of flow direction and seasonal natural groundwater recharge), anthropogenic causes such as transient pumping rates are among the most influential factors that require larger WHPAs. We hypothesize that WHPA programs that integrate adaptive and optimized pumping-injection management schemes can counter transient effects and thus reduce the additional areal demand in well protection under transient conditions. The main goal of this study is to present a novel management framework that optimizes pumping schemes dynamically, in order to minimize the impact triggered by transient conditions in WHPA delineation. For optimizing pumping schemes, we consider three objectives: 1) to minimize the risk of pumping water from outside a given WHPA, 2) to maximize the groundwater supply and 3) to minimize the involved operating costs. We solve transient groundwater flow using an available transient groundwater and Lagrangian particle tracking model. The optimization problem is formulated as a dynamic programming problem. Two different optimization approaches are explored: the first aims for single-objective optimization under objective (1) only; the second performs multiobjective optimization under all three objectives, where compromise pumping rates are selected from the current Pareto front. Finally, we look for WHPA outlines that are as small as possible, yet allow the optimization problem to find the most suitable solutions.

  17. Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags

    Science.gov (United States)

    ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu

    2017-05-01

    The flow shop scheduling problem with time lags is a practical scheduling problem that has attracted many studies. The permutation problem (PFSP with time lags) has received much attention, but the non-permutation problem (non-PFSP with time lags) seems to be neglected. With the aim of minimizing the makespan and satisfying the time lag constraints, efficient algorithms corresponding to the PFSP and non-PFSP problems are proposed: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified using well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within nearly 11% of the computational time of the traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution within less than 1% of the computational time compared with the traditional GA approach. The proposed research combines the PFSP and non-PFSP together with minimal and maximal time lag considerations, which provides an interesting viewpoint for industrial implementation.
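
    The iterated greedy algorithms named above (IGTLP/IGTLNP) follow the usual destruction-construction template. The sketch below shows that template on a plain permutation flow shop with a toy makespan function; the time-lag constraints and the paper's parameter settings are deliberately omitted, so it is only a structural illustration.

        # Iterated greedy template: destroy part of the permutation, rebuild it
        # greedily, accept if the makespan does not worsen (time lags omitted).
        import random

        random.seed(2)
        proc = [[3, 6, 2], [5, 1, 4], [2, 4, 6], [4, 3, 3]]   # assumed job x machine times

        def makespan(perm):
            m = len(proc[0])
            finish = [0] * m
            for j in perm:
                for k in range(m):
                    start = max(finish[k], finish[k - 1] if k else 0)
                    finish[k] = start + proc[j][k]
            return finish[-1]

        def iterated_greedy(iters=200, d=2):
            seq = list(range(len(proc)))
            best = seq[:]
            for _ in range(iters):
                removed = random.sample(seq, d)               # destruction
                partial = [j for j in seq if j not in removed]
                for j in removed:                             # greedy reinsertion
                    pos = min(range(len(partial) + 1),
                              key=lambda p: makespan(partial[:p] + [j] + partial[p:]))
                    partial.insert(pos, j)
                if makespan(partial) <= makespan(seq):
                    seq = partial
                    if makespan(seq) < makespan(best):
                        best = seq[:]
            return best, makespan(best)

        print(iterated_greedy())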

  18. Null polygonal Wilson loops and minimal surfaces in Anti-de-Sitter space

    International Nuclear Information System (INIS)

    Alday, Luis F.; Maldacena, Juan

    2009-01-01

    We consider minimal surfaces in three dimensional anti-de-Sitter space that end at the AdS boundary on a polygon given by a sequence of null segments. The problem can be reduced to a certain generalized Sinh-Gordon equation and to SU(2) Hitchin equations. We describe in detail the mathematical problem that needs to be solved. This problem is mathematically the same as the one studied by Gaiotto, Moore and Neitzke in the context of the moduli space of certain supersymmetric theories. Using their results we can find the explicit answer for the area of a surface that ends on an eight-sided polygon. Via the gauge/gravity duality this can also be interpreted as a certain eight-gluon scattering amplitude at strong coupling. In addition, we give fairly explicit solutions for regular polygons.

  19. Minimal access surgery of pediatric inguinal hernias: a review.

    Science.gov (United States)

    Saranga Bharathi, Ramanathan; Arora, Manu; Baskaran, Vasudevan

    2008-08-01

    Inguinal hernia is a common problem among children, and herniotomy has been its standard of care. Laparoscopy, which gained a toehold initially in the management of pediatric inguinal hernia (PIH), has managed to steer world opinion against routine contralateral groin exploration by precise detection of contralateral patencies. Besides detection, its ability to repair simultaneously all forms of inguinal hernias (indirect, direct, combined, recurrent, and incarcerated) together with contralateral patencies has cemented its role as a viable alternative to conventional repair. Numerous minimally invasive techniques for addressing PIH have mushroomed in the past two decades. These techniques vary considerably in their approaches to the internal ring (intraperitoneal, extraperitoneal), use of ports (three, two, one), endoscopic instruments (two, one, or none), sutures (absorbable, nonabsorbable), and techniques of knotting (intracorporeal, extracorporeal). In addition to the surgeons' experience and the merits/limitations of individual techniques, it is the nature of the defect that should govern the choice of technique. The emerging techniques show a trend toward increasing use of extracorporeal knotting and diminishing use of working ports and endoscopic instruments. These favor wider adoption of minimal access surgery in addressing PIH by surgeons, irrespective of their laparoscopic skills and experience. Growing experience, wider adoption, decreasing complications, and increasing advantages favor emergence of minimal access surgery as the gold standard for the treatment of PIH in the future. This article comprehensively reviews the laparoscopic techniques of addressing PIH.

  20. Y-12 Plant waste minimization strategy

    International Nuclear Information System (INIS)

    Kane, M.A.

    1987-01-01

    The 1984 Amendments to the Resource Conservation and Recovery Act (RCRA) mandate that waste minimization be a major element of hazardous waste management. In response to this mandate and the increasing costs for waste treatment, storage, and disposal, the Oak Ridge Y-12 Plant developed a waste minimization program to encompass all types of wastes. Thus, waste minimization has become an integral part of the overall waste management program. Unlike traditional approaches, waste minimization focuses on controlling waste at the beginning of production instead of the end. This approach includes: (1) substituting nonhazardous process materials for hazardous ones, (2) recycling or reusing waste effluents, (3) segregating nonhazardous waste from hazardous and radioactive waste, and (4) modifying processes to generate less waste or less toxic waste. An effective waste minimization program must provide the appropriate incentives for generators to reduce their waste and provide the necessary support mechanisms to identify opportunities for waste minimization. This presentation focuses on the Y-12 Plant's strategy to implement a comprehensive waste minimization program. This approach consists of four major program elements: (1) promotional campaign, (2) process evaluation for waste minimization opportunities, (3) waste generation tracking system, and (4) information exchange network. The presentation also examines some of the accomplishments of the program and issues which need to be resolved

  1. Evolutionary heuristic for makespan minimization in no-idle flow shop production systems - doi: 10.4025/actascitechnol.v35i2.12534

    Directory of Open Access Journals (Sweden)

    Marcelo Seido Nagano

    2013-04-01

    Full Text Available This paper deals with the no-idle flow shop scheduling problem with the objective of minimizing makespan. A new hybrid metaheuristic is proposed to solve the scheduling problem. The proposed method is compared with the best method reported in the literature. Experimental results show that the new method provides better solutions, in terms of solution quality, for the set of problems evaluated.

  2. Minimal open strings

    International Nuclear Information System (INIS)

    Hosomichi, Kazuo

    2008-01-01

    We study FZZT-branes and open string amplitudes in (p, q) minimal string theory. We focus on the simplest boundary changing operators in two-matrix models, and identify the corresponding operators in worldsheet theory through the comparison of amplitudes. Along the way, we find a novel linear relation among FZZT boundary states in minimal string theory. We also show that the boundary ground ring is realized on physical open string operators in a very simple manner, and discuss its use for perturbative computation of higher open string amplitudes.

  3. Minimal Composite Inflation

    DEFF Research Database (Denmark)

    Channuie, Phongpichit; Jark Joergensen, Jakob; Sannino, Francesco

    2011-01-01

    We investigate models in which the inflaton emerges as a composite field of a four dimensional, strongly interacting and nonsupersymmetric gauge theory featuring purely fermionic matter. We show that it is possible to obtain successful inflation via non-minimal coupling to gravity, and that the u......

  4. The Effects of the Tractor and Semitrailer Routing Problem on Mitigation of Carbon Dioxide Emissions

    Directory of Open Access Journals (Sweden)

    Hongqi Li

    2013-01-01

    Full Text Available The incorporation of CO2 emissions minimization in the vehicle routing problem (VRP) is of critical importance to enterprise practice. Focusing on the tractor and semitrailer routing problem with full truckloads between any two terminals of the network, this paper proposes a mathematical programming model with the objective of minimizing CO2 emissions per ton-kilometer. A simulated annealing (SA) algorithm is given to solve practical-scale problems. To evaluate the performance of the proposed algorithm, a lower bound is developed. Computational experiments on various problems generated randomly and a realistic instance are conducted. The results show that the proposed methods are effective and the algorithm can provide reasonable solutions within an acceptable computational time.
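
    The ratio objective mentioned above can be stated generically as minimizing $\sum_{(i,j)} e_{ij} x_{ij} \,/\, \sum_{(i,j)} q_{ij} d_{ij} x_{ij}$ over feasible routing plans $x$, where $e_{ij}$ is the CO2 emitted, $q_{ij}$ the tons carried and $d_{ij}$ the kilometres travelled on arc $(i,j)$; the symbols are chosen here for illustration only, and the paper's full model contains additional routing constraints.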

  5. ITEM-QM solutions for EM problems in image reconstruction exemplary for the Compton Camera

    CERN Document Server

    Pauli, Josef; Anton, G

    2002-01-01

    Imaginary time expectation maximization (ITEM), a new algorithm for expectation maximization problems based on quantum mechanical energy minimization via imaginary (Euclidean) time evolution, is presented. Both the algorithm and the implementation (http://www.johannes-pauli.de/item/index.html) are published under the terms of the GNU General Public License (http://www.gnu.org/copyleft/gpl.html). Due to its generality, ITEM is applicable to various image reconstruction problems like CT, PET, SPECT, NMR, Compton Camera and tomosynthesis, as well as any other energy minimization problem. The choice of the optimal ITEM Hamiltonian is discussed and numerical results are presented for the Compton Camera.

  6. Microbiological and radiobiological studies on the hygienic quality of minimally processed food

    International Nuclear Information System (INIS)

    Abu El-Nour, S. A. M.

    2007-01-01

    In the past, there have been three traditional forms of food trading: fresh, canned and frozen foods. In recent years, a fourth form called "minimally processed" food has been developed to respond to an emerging consumer demand for convenient, high-quality and preservative-free products with the appearance of fresh characteristics, while being less severely processed (Saracino et al., 1991). Minimally processed foods can be used as ready-to-eat, ready-to-use, or ready-to-cook products. They are stored and marketed under refrigeration conditions (Dignan, 1994). Minimally processed food products were developed in the 1980s and are now produced in many advanced and some developing countries. In Egypt, great amounts of minimally processed vegetables are now produced and commercially sold in certain supermarkets. They include fresh-cut lettuce, packaged mixed vegetable salad, shredded carrots, sliced carrots, shredded cabbage (white and red), fresh-cut green beans, mixed peas with diced carrots, mafa spanish, okra, watermelon, pumpkin, garlic, artichoke, celery, parsley, etc. However, there is an increasing interest in offering some other minimally processed vegetables and some types of fresh-cut fruits that can be used as ready-to-eat or ready-to-use. Preparation steps of minimally processed fruit and vegetable products, which may include peeling, slicing, shredding, etc., save labor and time for the purchasers, while the removal of waste material during processing reduces transport costs. In addition, the production of such products will make year-round availability of almost all vegetables and fruits possible in fresh form around the world (Baldwin et al., 1995). However, the preparation steps of such products increase the native enzymatic activity and the possibility of microbial contamination. Therefore, these products have a short shelf-life, and this is considered one of the foremost challenging problems in the commercialization of minimally processed foods particularly fresh

  7. Minimal abdominal incisions

    Directory of Open Access Journals (Sweden)

    João Carlos Magi

    2017-04-01

    Full Text Available Minimally invasive procedures aim to resolve the disease with minimal trauma to the body, resulting in a rapid return to activities and reductions in infection, complications, costs and pain. Minimally incised laparotomy, sometimes referred to as minilaparotomy, is an example of such minimally invasive procedures. The aim of this study is to demonstrate the feasibility and utility of laparotomy with a minimal incision, based on the literature and exemplified with a case. The case in question describes reconstruction of the intestinal transit through such an incision: a young, male, HIV-positive patient in the late postoperative period of an ileotyphlectomy, terminal ileostomy and closure of the ascending colon for an acute perforated abdomen due to ileocolonic tuberculosis. The barium enema showed a proximal stump of the right colon near the ileostomy. Access to the cavity was gained through the orifice resulting from the release of the stoma, with a side-to-side ileocolonic anastomosis performed with a 25 mm circular stapler and manual closure of the ileal stump. These surgeries require their own tactics, such as rigor in the lysis of adhesions, tissue traction and hemostasis, in addition to requiring surgeon dexterity, but without the need for investments in technology; moreover, the learning curve is reported as being lower than that for videolaparoscopy. Laparotomy with a minimal incision should be considered a valid and viable option in the treatment of surgical conditions. Resumo: Minimally invasive procedures aim to resolve the disease with minimal trauma to the body, resulting in a rapid return to activities and reductions in infections, complications, costs and pain. Laparotomy with a minimal incision, sometimes referred to as minilaparotomy, is an example of these minimally invasive procedures. The objective of this work is to demonstrate the feasibility and utility of laparotomies with a minimal incision based on the literature and

  8. A comparison of lower bounds for the symmetric circulant traveling salesman problem

    NARCIS (Netherlands)

    de Klerk, E.; Dobre, C.

    2011-01-01

    When the matrix of distances between cities is symmetric and circulant, the traveling salesman problem (TSP) reduces to the so-called symmetric circulant traveling salesman problem (SCTSP), that has applications in the design of reconfigurable networks, and in minimizing wallpaper waste. The

  9. Homogenization of variational inequalities for obstacle problems

    International Nuclear Information System (INIS)

    Sandrakov, G V

    2005-01-01

    Results on the convergence of solutions of variational inequalities for obstacle problems are proved. The variational inequalities are defined by a non-linear monotone operator of the second order with periodic rapidly oscillating coefficients and a sequence of functions characterizing the obstacles. Two-scale and macroscale (homogenized) limiting variational inequalities are obtained. Derivation methods for such inequalities are presented. Connections between the limiting variational inequalities and two-scale and macroscale minimization problems are established in the case of potential operators.

  10. Applications of functional analysis to optimal control problems

    International Nuclear Information System (INIS)

    Mizukami, K.

    1976-01-01

    Some basic concepts in functional analysis, a general norm, the Hoelder inequality, functionals and the Hahn-Banach theorem are described; a mathematical formulation of two optimal control problems is introduced by the method of functional analysis. The problem of time-optimal control systems with both norm constraints on control inputs and on state variables at discrete intermediate times is formulated as an L-problem in the theory of moments. The simplex method is used for solving a non-linear minimizing problem inherent in the functional analysis solution to this problem. Numerical results are presented for a train operation. The second problem is that of optimal control of discrete linear systems with quadratic cost functionals. The problem is concerned with the case of unconstrained control and fixed endpoints. This problem is formulated in terms of norms of functionals on suitable Banach spaces. (author)

  11. Stability of Einstein static universe in gravity theory with a non-minimal derivative coupling

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Qihong [Hunan Normal University, Department of Physics and Synergetic Innovation Center for Quantum Effects and Applications, Changsha, Hunan (China); Zunyi Normal College, School of Physics and Electronic Science, Zunyi (China); Wu, Puxun [Hunan Normal University, Department of Physics and Synergetic Innovation Center for Quantum Effects and Applications, Changsha, Hunan (China); Peking University, Center for High Energy Physics, Beijing (China); Yu, Hongwei [Hunan Normal University, Department of Physics and Synergetic Innovation Center for Quantum Effects and Applications, Changsha, Hunan (China)

    2018-01-15

    The emergent mechanism provides a possible way to resolve the big-bang singularity problem by assuming that our universe originates from the Einstein static (ES) state. Thus, the existence of a stable ES solution becomes a very crucial prerequisite for the emergent scenario. In this paper, we study the stability of an ES universe in gravity theory with a non-minimal coupling between the kinetic term of a scalar field and the Einstein tensor. We find that the ES solution is stable under both scalar and tensor perturbations when the model parameters satisfy certain conditions, which indicates that the big-bang singularity can be avoided successfully by the emergent mechanism in the non-minimally kinetic coupled gravity. (orig.)

  12. Stability of Einstein static universe in gravity theory with a non-minimal derivative coupling

    Science.gov (United States)

    Huang, Qihong; Wu, Puxun; Yu, Hongwei

    2018-01-01

    The emergent mechanism provides a possible way to resolve the big-bang singularity problem by assuming that our universe originates from the Einstein static (ES) state. Thus, the existence of a stable ES solution becomes a very crucial prerequisite for the emergent scenario. In this paper, we study the stability of an ES universe in gravity theory with a non-minimal coupling between the kinetic term of a scalar field and the Einstein tensor. We find that the ES solution is stable under both scalar and tensor perturbations when the model parameters satisfy certain conditions, which indicates that the big-bang singularity can be avoided successfully by the emergent mechanism in the non-minimally kinetic coupled gravity.

  13. Low dose CT reconstruction via L1 norm dictionary learning using alternating minimization algorithm and balancing principle.

    Science.gov (United States)

    Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin

    2018-04-18

    Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from the low dose and few-view dataset in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is l0-norm, which cannot guarantee the global convergence of the proposed algorithm. To address this problem, in this study we introduced the l1-norm dictionary learning penalty into SIR framework for low dose CT image reconstruction, and developed an alternating minimization algorithm to minimize the associated objective function, which transforms CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulation based on sheep lung CT image and chest image. Both visual assessment and quantitative comparison using terms of root mean square error (RMSE) and structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded similar performance with l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
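
    The sparse coding half of the alternating minimization described above reduces, for an l1 penalty with a fixed orthogonal dictionary, to soft-thresholding. The toy sketch below shows one such sweep with a small random matrix standing in for the CT forward model; it is a simplified illustration, not the authors' reconstruction code, and the dictionary learning and parameter selection by the balancing principle are omitted.

        # Simplified alternating minimization: an l1 sparse-coding step via
        # soft-thresholding, then a gradient step on the data-fidelity term.
        # The tiny system matrix, identity dictionary and weights are toy placeholders.
        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.random((20, 16))        # toy projection operator (stand-in for the CT forward model)
        x_true = rng.random(16)
        y = A @ x_true                  # toy "sinogram"
        D = np.eye(16)                  # toy orthogonal dictionary

        def soft_threshold(z, tau):
            # Proximal operator of tau * ||.||_1 (the l1 sparse-coding step).
            return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

        x = np.zeros(16)
        tau, weight, step = 0.01, 1.0, 1e-2
        for _ in range(500):
            alpha = soft_threshold(D.T @ x, tau)          # sparse coding subproblem
            prior = D @ alpha                             # image implied by the sparse code
            grad = A.T @ (A @ x - y) + weight * (x - prior)
            x = x - step * grad                           # image updating subproblem (gradient step)

        print("data misfit:", np.linalg.norm(A @ x - y))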

  14. The shape gradient of the least-squares objective functional in optimal shape design problems of radiative heat transfer

    International Nuclear Information System (INIS)

    Rukolaine, Sergey A.

    2010-01-01

    Optimal shape design problems of steady-state radiative heat transfer are considered. The optimal shape design problem (in the three-dimensional space) is formulated as an inverse one, i.e., in the form of an operator equation of the first kind with respect to a surface to be optimized. The operator equation is reduced to a minimization problem via a least-squares objective functional. The minimization problem has to be solved numerically. Gradient minimization methods need the gradient of a functional to be minimized. In this paper the shape gradient of the least-squares objective functional is derived with the help of the shape sensitivity analysis and adjoint problem method. In practice a surface to be optimized may be (or, most likely, is to be) given in a parametric form by a finite number of parameters. In this case the objective functional is, in fact, a function in a finite-dimensional space and the shape gradient becomes an ordinary gradient. The gradient of the objective functional, in the case that the surface to be optimized is given in a finite-parametric form, is derived from the shape gradient. A particular case, that a surface to be optimized is a 'two-dimensional' polyhedral one, is considered. The technique, developed in the paper, is applied to a synthetic problem of designing a 'two-dimensional' radiant enclosure.
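
    For orientation, a least-squares objective functional of the kind referred to above typically takes the form $J(\Gamma) = \tfrac{1}{2}\,\|q(\Gamma) - q^{*}\|^{2} = \tfrac{1}{2}\int_{S} (q(\Gamma)(x) - q^{*}(x))^{2}\,\mathrm{d}A(x)$, where $\Gamma$ is the surface to be optimized, $q(\Gamma)$ the quantity obtained by solving the direct radiative transfer problem on $\Gamma$, and $q^{*}$ the prescribed target; the exact norm and weighting used in the paper may differ, and gradient minimization methods then require the shape gradient of $J$ derived there.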

  15. Minimal intervention dentistry II: part 6. Microscope and microsurgical techniques in periodontics.

    Science.gov (United States)

    Sitbon, Y; Attathom, T

    2014-05-01

    Different aspects of treatment for periodontal diseases or gingival problems require rigorous diagnostics. Magnification tools and microsurgical instruments, combined with minimally invasive techniques can provide the best solutions in such cases. Relevance of treatments, duration of healing, reduction of pain and post-operative scarring have the potential to be improved for patients through such techniques. This article presents an overview of the use of microscopy in periodontics, still in the early stages of development.

  16. On bi-criteria two-stage transportation problem: a case study

    Directory of Open Access Journals (Sweden)

    Ahmad MURAD

    2010-01-01

    Full Text Available The study of the optimum distribution of goods between sources and destinations is one of the important topics in project economics. This importance comes as a result of minimizing the transportation cost, deterioration, time, etc. The classical transportation problem constitutes one of the major areas of application for linear programming. The aim of this problem is to obtain the optimum distribution of goods from different sources to different destinations which minimizes the total transportation cost. From the practical point of view, transportation problems may differ from the classical form. They may contain one or more objective functions, one or more transportation stages, and one or more types of commodity with one or more means of transport. The aim of this paper is to construct an optimization model for the transportation problem of one of the mill-stones companies. The model is formulated as a bi-criteria two-stage transportation problem with a special structure depending on the capacities of suppliers and warehouses and the requirements of the destinations. A solution algorithm is introduced to solve this class of bi-criteria two-stage transportation problems, obtaining the set of non-dominated extreme points and the efficient solutions accompanying each one, which enables the decision maker to choose the best one. The solution algorithm is mainly based on the fruitful application of the methods for treating transportation problems, the theory of duality of linear programming and the methods of solving bi-criteria linear programming problems.

  17. Incorporating Workflow Interference in Facility Layout Design: The Quartic Assignment Problem

    OpenAIRE

    Wen-Chyuan Chiang; Panagiotis Kouvelis; Timothy L. Urban

    2002-01-01

    Although many authors have noted the importance of minimizing workflow interference in facility layout design, traditional layout research tends to focus on minimizing the distance-based transportation cost. This paper formalizes the concept of workflow interference from a facility layout perspective. A model, formulated as a quartic assignment problem, is developed that explicitly considers the interference of workflow. Optimal and heuristic solution methodologies are developed and evaluated.

  18. Experimental Investigation into Vibration Characteristics for Damage Minimization in a Lapping Process

    Directory of Open Access Journals (Sweden)

    J. Suwatthikul

    2016-01-01

    Full Text Available Lapping machines are used in a hard disk rough lapping process where a workpiece (a wafer row bar) is locked with a robot arm and rubbed on a lap plate. In this process, the lap plate’s condition and lifetime are among the important factors of concern. The lifetime can be too short due to the plate being accidentally scratched by the workpiece during lapping. This problem leads to undesired consequences such as machine downtime and excessive plate material usage. This paper presents an experimental investigation into the vibration characteristics of passed and failed lapping scenarios and discusses a potential solution to minimize the serious damage, the so-called “plate scratch”, which intermittently occurs in such a process. The experimental results show that, by monitoring vibration in situ and utilizing artificial intelligence, damage minimization is possible.

  19. Running non-minimal inflation with stabilized inflaton potential

    Energy Technology Data Exchange (ETDEWEB)

    Okada, Nobuchika; Raut, Digesh [University of Alabama, Department of Physics and Astronomy, Alabama (United States)

    2017-04-15

    In the context of the Higgs model involving gauge and Yukawa interactions with the spontaneous gauge symmetry breaking, we consider λφ⁴ inflation with non-minimal gravitational coupling, where the Higgs field is identified as the inflaton. Since the inflaton quartic coupling is very small, once quantum corrections through the gauge and Yukawa interactions are taken into account, the inflaton effective potential most likely becomes unstable. In order to avoid this problem, we need to impose stability conditions on the effective inflaton potential, which lead to not only non-trivial relations amongst the particle mass spectrum of the model, but also correlations between the inflationary predictions and the mass spectrum. For concrete discussion, we investigate the minimal B-L extension of the standard model with identification of the B-L Higgs field as the inflaton. The stability conditions for the inflaton effective potential fix the mass ratio amongst the B-L gauge boson, the right-handed neutrinos and the inflaton. This mass ratio also correlates with the inflationary predictions. In other words, if the B-L gauge boson and the right-handed neutrinos are discovered in the future, their observed mass ratio provides constraints on the inflationary predictions. (orig.)

  20. Minimal Flavour Violation and Beyond

    CERN Document Server

    Isidori, Gino

    2012-01-01

    We review the formulation of the Minimal Flavour Violation (MFV) hypothesis in the quark sector, as well as some "variations on a theme" based on smaller flavour symmetry groups and/or less minimal breaking terms. We also review how these hypotheses can be tested in B decays and by means of other flavour-physics observables. The phenomenological consequences of MFV are discussed both in general terms, employing a general effective theory approach, and in the specific context of the Minimal Supersymmetric extension of the SM.

  1. 3D overlapped grouping GA for optimum 2D guillotine cutting stock problem

    Directory of Open Access Journals (Sweden)

    Maged R. Rostom

    2014-09-01

    Full Text Available The cutting stock problem (CSP) is one of the significant optimization problems in operations research and has gained a lot of attention for increasing efficiency in industrial engineering, logistics and manufacturing. In this paper, new methodologies for optimally solving the cutting stock problem are presented. A modification of existing heuristic methods is proposed, together with a new hybrid 3-D overlapped grouping Genetic Algorithm (GA) for nesting two-dimensional rectangular shapes. The objectives are the minimization of the wastage of the sheet material, which leads to maximized material utilization, and the minimization of the setup time. The model and its results are compared with a real-life case study from a steel workshop in a bus manufacturing factory. The effectiveness of the proposed approach is shown by comparison and shop testing of the optimized cutting schedules. The results reveal its superiority in terms of waste minimization compared to the current cutting schedules. The whole procedure can be completed in a reasonable amount of time by the developed optimization program.

  2. On the statistics of the minimal solution of a linear Diophantine equation and uniform distribution of the real part of orbits in hyperbolic spaces

    DEFF Research Database (Denmark)

    Risager, Morten S.; Rudnick, Zeev

    We study a variant of a problem considered by Dinaburg and Sinai on the statistics of the minimal solution to a linear Diophantine equation. We show that the signed ratio between the Euclidean norms of the minimal solution and the coefficient vector is uniformly distributed modulo one. We reduce ...

  3. The dynamic multi-period vehicle routing problem

    DEFF Research Database (Denmark)

    Wen, Min; Cordeau, Jean-Francois; Laporte, Gilbert

    2010-01-01

    are to minimize total travel costs and customer waiting, and to balance the daily workload over the planning horizon. This problem originates from a large distributor operating in Sweden. It is modeled as a mixed integer linear program, and solved by means of a three-phase heuristic that works over a rolling...... planning horizon. The multi-objective aspect of the problem is handled through a scalar technique approach. Computational results show that the proposed approach can yield high quality solutions within reasonable running times....

  4. Mixed waste and waste minimization: The effect of regulations and waste minimization on the laboratory

    International Nuclear Information System (INIS)

    Dagan, E.B.; Selby, K.B.

    1993-08-01

    The Hanford Site is located in the State of Washington and is subject to state and federal environmental regulations that hamper waste minimization efforts. This paper addresses the negative effect of these regulations on waste minimization and mixed waste issues related to the Hanford Site. Also, issues are addressed concerning the regulations becoming more lenient. In addition to field operations, the Hanford Site is home to the Pacific Northwest Laboratory which has many ongoing waste minimization activities of particular interest to laboratories

  5. An efficient method for minimizing a convex separable logarithmic function subject to a convex inequality constraint or linear equality constraint

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available We consider the problem of minimizing a convex separable logarithmic function over a region defined by a convex inequality constraint or linear equality constraint, and two-sided bounds on the variables (box constraints). Such problems are interesting from both a theoretical and a practical point of view because they arise in some mathematical programming problems as well as in various practical problems such as problems of production planning and scheduling, allocation of resources, decision making, facility location problems, and so forth. Polynomial algorithms are proposed for solving problems of this form and their convergence is proved. Some examples and results of numerical experiments are also presented.
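
    For the special case of a single linear equality constraint, the kind of polynomial procedure described above reduces to a one-dimensional search for the Lagrange multiplier. The sketch below minimizes -sum_i a_i ln(x_i) subject to sum_i x_i = b and box constraints by bisection on the multiplier; the coefficients, bounds and right-hand side are made up, and the paper's general algorithms are not reproduced.

      # Minimize  -sum_i a_i*ln(x_i)  subject to  sum_i x_i = b  and  l_i <= x_i <= u_i.
      # The KKT conditions give x_i(lam) = clip(a_i/lam, l_i, u_i); the constraint sum is
      # monotone in lam, so the multiplier can be found by bisection.
      # Assumes a_i > 0 and sum(l) <= b <= sum(u) (feasibility).

      def solve(a, l, u, b, tol=1e-10):
          def x_of(lam):
              return [min(max(ai / lam, li), ui) for ai, li, ui in zip(a, l, u)]
          lo, hi = 1e-12, 1.0
          while sum(x_of(hi)) > b:          # grow lam until the constraint sum drops below b
              hi *= 2.0
          while hi - lo > tol:              # bisection on the Lagrange multiplier
              mid = 0.5 * (lo + hi)
              if sum(x_of(mid)) > b:
                  lo = mid
              else:
                  hi = mid
          return x_of(0.5 * (lo + hi))

      a = [2.0, 1.0, 4.0]                   # made-up objective coefficients
      l = [0.1, 0.1, 0.1]                   # lower bounds
      u = [3.0, 3.0, 3.0]                   # upper bounds
      x = solve(a, l, u, b=5.0)
      print(x, sum(x))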

  6. Optimal Wafer Cutting in Shuttle Layout Problems

    DEFF Research Database (Denmark)

    Nisted, Lasse; Pisinger, David; Altman, Avri

    2011-01-01

    The shuttle layout problem is frequently solved in two phases: first, a floorplan of the shuttle is generated. Then, a cutting plan is found which minimizes the overall number of wafers needed to satisfy the demand of each die type. Since some die types require special production technologies, only compatible

  7. Flaxion: a minimal extension to solve puzzles in the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Ema, Yohei [Department of Physics,The University of Tokyo, Tokyo 133-0033 (Japan); Hamaguchi, Koichi; Moroi, Takeo; Nakayama, Kazunori [Department of Physics,The University of Tokyo, Tokyo 133-0033 (Japan); Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU),University of Tokyo, Kashiwa 277-8583 (Japan)

    2017-01-23

    We propose a minimal extension of the standard model which includes only one additional complex scalar field, flavon, with flavor-dependent global U(1) symmetry. It not only explains the hierarchical flavor structure in the quark and lepton sector (including neutrino sector), but also solves the strong CP problem by identifying the CP-odd component of the flavon as the QCD axion, which we call flaxion. Furthermore, the flaxion model solves the cosmological puzzles in the standard model, i.e., origin of dark matter, baryon asymmetry of the universe, and inflation. We show that the radial component of the flavon can play the role of inflaton without isocurvature or domain wall problems. The dark matter abundance can be explained by the flaxion coherent oscillation, while the baryon asymmetry of the universe is generated through leptogenesis.

  8. A dynamic programming algorithm for the space allocation and aisle positioning problem

    DEFF Research Database (Denmark)

    Bodnar, Peter; Lysgaard, Jens

    2014-01-01

    The space allocation and aisle positioning problem (SAAPP) in a material handling system with gravity flow racks is the problem of minimizing the total number of replenishments over a period subject to practical constraints related to the need for aisles granting safe and easy access to storage...

  9. SOLVING FLOWSHOP SCHEDULING PROBLEMS USING A DISCRETE AFRICAN WILD DOG ALGORITHM

    Directory of Open Access Journals (Sweden)

    M. K. Marichelvam

    2013-04-01

    Full Text Available The problem of m-machine permutation flowshop scheduling is considered in this paper. The objective is to minimize the makespan. The flowshop scheduling problem is a typical combinatorial optimization problem and has been proved to be strongly NP-hard. Hence, several heuristics and meta-heuristics have been proposed by researchers. In this paper, a discrete African wild dog algorithm is applied to solve flowshop scheduling problems. Computational results using benchmark problems show that the proposed algorithm outperforms many other algorithms reported in the literature.
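
    For context, the makespan of a permutation flowshop sequence, together with the classical NEH insertion heuristic often used as a constructive baseline for such meta-heuristics, can be sketched as follows; the processing times are made up and the discrete African wild dog algorithm itself is not reproduced.

      # Permutation flowshop: makespan of a job sequence, plus the classical NEH
      # insertion heuristic as a constructive baseline.
      def makespan(seq, p):                  # p[job][machine] = processing time
          m = len(p[0])
          c = [0.0] * m                      # completion times on each machine
          for j in seq:
              c[0] += p[j][0]
              for k in range(1, m):
                  c[k] = max(c[k], c[k - 1]) + p[j][k]
          return c[-1]

      def neh(p):
          jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))   # total work, descending
          seq = [jobs[0]]
          for j in jobs[1:]:                 # insert each job at its best position so far
              best = min((makespan(seq[:i] + [j] + seq[i:], p), i)
                         for i in range(len(seq) + 1))
              seq.insert(best[1], j)
          return seq

      p = [[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8], [3, 5, 6]]   # 5 jobs x 3 machines (made up)
      s = neh(p)
      print(s, makespan(s, p))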

  10. Taxonomic minimalism.

    Science.gov (United States)

    Beattle, A J; Oliver, I

    1994-12-01

    Biological surveys are in increasing demand while taxonomic resources continue to decline. How much formal taxonomy is required to get the job done? The answer depends on the kind of job but it is possible that taxonomic minimalism, especially (1) the use of higher taxonomic ranks, (2) the use of morphospecies rather than species (as identified by Latin binomials), and (3) the involvement of taxonomic specialists only for training and verification, may offer advantages for biodiversity assessment, environmental monitoring and ecological research. As such, formal taxonomy remains central to the process of biological inventory and survey but resources may be allocated more efficiently. For example, if formal identification is not required, resources may be concentrated on replication and increasing sample sizes. Taxonomic minimalism may also facilitate the inclusion in these activities of important but neglected groups, especially among the invertebrates, and perhaps even microorganisms. Copyright © 1994. Published by Elsevier Ltd.

  11. Minimizing waste in environmental restoration

    International Nuclear Information System (INIS)

    Thuot, J.R.; Moos, L.

    1996-01-01

    Environmental restoration, decontamination and decommissioning, and facility dismantlement projects are not typically known for their waste minimization and pollution prevention efforts. Typical projects are driven by schedules and milestones with little attention given to cost or waste minimization. Conventional wisdom in these projects is that the waste already exists and cannot be reduced or minimized; however, there are significant areas where waste and cost can be reduced by careful planning and execution. Waste reduction can occur in three ways: beneficial reuse or recycling, segregation of waste types, and reducing generation of secondary waste

  12. Cooperative Content Distribution over Wireless Networks for Energy and Delay Minimization

    KAUST Repository

    Atat, Rachad

    2012-06-01

    Content distribution with mobile-to-mobile cooperation is studied. Data is sent to mobile terminals on a long range link, and then the terminals exchange the content using an appropriate short range wireless technology. Unicasting and multicasting are investigated, both on the long range and short range links. Energy minimization is formulated as an optimization problem for each scenario, and the optimal solutions are determined in closed form. Moreover, the schemes are applied in public safety vehicular networks, where the Long Term Evolution (LTE) network is used for the long range link, while IEEE 802.11p is considered for inter-vehicle collaboration on the short range links. Finally, relay-based multicasting is applied in high-speed trains for energy and delay minimization. Results show that cooperative schemes outperform non-cooperative ones and other previous related work in terms of energy and delay savings. Furthermore, practical implementation aspects of the proposed methods are also discussed.

  13. Data-Driven Problems in Elasticity

    Science.gov (United States)

    Conti, S.; Müller, S.; Ortiz, M.

    2018-01-01

    We consider a new class of problems in elasticity, referred to as Data-Driven problems, defined on the space of strain-stress field pairs, or phase space. The problem consists of minimizing the distance between a given material data set and the subspace of compatible strain fields and stress fields in equilibrium. We find that the classical solutions are recovered in the case of linear elasticity. We identify conditions for convergence of Data-Driven solutions corresponding to sequences of approximating material data sets. Specialization to constant material data set sequences in turn establishes an appropriate notion of relaxation. We find that relaxation within this Data-Driven framework is fundamentally different from the classical relaxation of energy functions. For instance, we show that in the Data-Driven framework the relaxation of a bistable material leads to material data sets that are not graphs.

  14. The scalar curvature problem on the four dimensional half sphere

    CERN Document Server

    Ben-Ayed, M; El-Mehdi, K

    2003-01-01

    In this paper, we consider the problem of prescribing the scalar curvature under minimal boundary conditions on the standard four dimensional half sphere. We provide an Euler-Hopf type criterion for a given function to be a scalar curvature for some metric conformal to the standard one. Our proof involves the study of critical points at infinity of the associated variational problem.

  15. A Harmonic Algorithm for the 3D Strip Packing Problem

    NARCIS (Netherlands)

    N. Bansal (Nikhil); X. Han; K. Iwama; M. Sviridenko; G. Zhang (Guochuan)

    2013-01-01

    In the three-dimensional (3D) strip packing problem, we are given a set of 3D rectangular items and a 3D box $B$. The goal is to pack all the items in $B$ such that the height of the packing is minimized. We consider the most basic version of the problem, where the items must be packed

  16. Solving project scheduling problems by minimum cut computations

    NARCIS (Netherlands)

    Möhring, R.H.; Schulz, A.S.; Stork, F.; Uetz, Marc Jochen

    In project scheduling, a set of precedence-constrained jobs has to be scheduled so as to minimize a given objective. In resource-constrained project scheduling, the jobs additionally compete for scarce resources. Due to its universality, the latter problem has a variety of applications in

  17. Quantization of the minimal and non-minimal vector field in curved space

    OpenAIRE

    Toms, David J.

    2015-01-01

    The local momentum space method is used to study the quantized massive vector field (the Proca field) with the possible addition of non-minimal terms. Heat kernel coefficients are calculated and used to evaluate the divergent part of the one-loop effective action. It is shown that the naive expression for the effective action that one would write down based on the minimal coupling case needs modification. We adopt a Faddeev-Jackiw method of quantization and consider the case of an ultrastatic...

  18. Solving the Dial-a-Ride Problem using Genetic algorithms

    DEFF Research Database (Denmark)

    Bergvinsdottir, Kristin Berg; Larsen, Jesper; Jørgensen, Rene Munk

    In the Dial-a-Ride problem (DARP) customers send transportation requests to an operator. A request consists of a specified pickup location and destination location along with a desired departure or arrival time and demand. The aim of DARP is to minimize transportation cost while satisfying custom...... routing problems for the vehicles using a routing heuristic. The algorithm is implemented in Java and tested on publicly available data sets....

  19. Optimal recombination in genetic algorithms for combinatorial optimization problems: Part II

    Directory of Open Access Journals (Sweden)

    Eremeev Anton V.

    2014-01-01

    Full Text Available This paper surveys results on complexity of the optimal recombination problem (ORP, which consists in finding the best possible offspring as a result of a recombination operator in a genetic algorithm, given two parent solutions. In Part II, we consider the computational complexity of ORPs arising in genetic algorithms for problems on permutations: the Travelling Salesman Problem, the Shortest Hamilton Path Problem and the Makespan Minimization on Single Machine and some other related problems. The analysis indicates that the corresponding ORPs are NP-hard, but solvable by faster algorithms, compared to the problems they are derived from.

  20. Minimal residual cone-beam reconstruction with attenuation correction in SPECT

    International Nuclear Information System (INIS)

    La, Valerie; Grangeat, Pierre

    1998-01-01

    This paper presents an iterative method based on the minimal residual algorithm for tomographic attenuation compensated reconstruction from attenuated cone-beam projections given the attenuation distribution. Unlike conjugate-gradient based reconstruction techniques, the proposed minimal residual based algorithm solves directly a quasisymmetric linear system, which is a preconditioned system. Thus it avoids the use of normal equations, which improves the convergence rate. Two main contributions are introduced. First, a regularization method is derived for quasisymmetric problems, based on a Tikhonov-Phillips regularization applied to the factorization of the symmetric part of the system matrix. This regularization is made spatially adaptive to avoid smoothing the region of interest. Second, our existing reconstruction algorithm for attenuation correction in parallel-beam geometry is extended to cone-beam geometry. A circular orbit is considered. Two preconditioning operators are proposed: the first one is Grangeat's inversion formula and the second one is Feldkamp's inversion formula. Experimental results obtained on simulated data are presented and the shadow zone effect on attenuated data is illustrated. (author)
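
    As a generic illustration of the minimal residual idea on a symmetric system (not the SPECT reconstruction operator or the preconditioners discussed above), one can use SciPy's MINRES implementation on a small made-up test problem:

      # Minimal-residual (MINRES) solution of a small symmetric system A x = b,
      # standing in for the much larger quasisymmetric reconstruction system.
      import numpy as np
      from scipy.sparse.linalg import minres

      rng = np.random.default_rng(0)
      M = rng.standard_normal((50, 50))
      A = M @ M.T + 50.0 * np.eye(50)        # symmetric positive definite test matrix
      x_true = rng.standard_normal(50)
      b = A @ x_true

      x, info = minres(A, b)                 # info == 0 signals convergence
      print(info, np.linalg.norm(x - x_true))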

  1. A canned food scheduling problem with batch due date

    Science.gov (United States)

    Chung, Tsui-Ping; Liao, Ching-Jong; Smith, Milton

    2014-09-01

    This article considers a canned food scheduling problem where jobs are grouped into several batches. Jobs can be sent to the next operation only when all the jobs in the same batch have finished their processing, i.e., jobs in a batch have a common due date. This batch due date problem is quite common in canned food factories, but there is no efficient heuristic to solve the problem. The problem can be formulated as an identical parallel machine problem with batch due dates to minimize the total tardiness. Since the problem is NP-hard, two heuristics are proposed to find near-optimal solutions. Computational results comparing the effectiveness and efficiency of the two proposed heuristics with an existing heuristic are reported and discussed.
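
    A simple baseline for this setting can be sketched as follows: batches are sequenced by earliest due date, jobs go to the earliest-available machine, and a batch counts as complete only when its last job finishes. The data below are made up, and the two heuristics proposed in the paper are not reproduced.

      # Illustrative greedy for identical parallel machines with batch due dates.
      def total_tardiness(batches, n_machines):
          # batches: list of (due_date, [processing times of the jobs in the batch])
          machines = [0.0] * n_machines
          tardiness = 0.0
          for due, jobs in sorted(batches, key=lambda b: b[0]):      # EDD batch order
              completion = 0.0
              for p in sorted(jobs, reverse=True):                   # longest jobs first
                  k = min(range(n_machines), key=lambda i: machines[i])
                  machines[k] += p
                  completion = max(completion, machines[k])
              tardiness += max(0.0, completion - due)
          return tardiness

      batches = [(6.0, [4, 3, 5]), (10.0, [6, 2]), (16.0, [7, 7, 3, 2])]   # made-up data
      print(total_tardiness(batches, n_machines=2))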

  2. Sludge minimization technologies - an overview

    Energy Technology Data Exchange (ETDEWEB)

    Oedegaard, Hallvard

    2003-07-01

    The management of wastewater sludge from wastewater treatment plants represents one of the major challenges in wastewater treatment today. The cost of sludge treatment amounts to more than the cost of the liquid treatment in many cases. Therefore the focus on, and interest in, sludge minimization is steadily increasing. In the paper, an overview is given of sludge minimization (sludge mass reduction) options. It is demonstrated that sludge minimization may be a result of reduced production of sludge and/or of disintegration processes that may take place both in the wastewater treatment stage and in the sludge stage. Various sludge disintegration technologies for sludge minimization are discussed, including mechanical methods (focusing on the stirred ball-mill, high-pressure homogenizer and ultrasonic disintegrator), chemical methods (focusing on the use of ozone), physical methods (focusing on thermal and thermal/chemical hydrolysis) and biological methods (focusing on enzymatic processes). (author)

  3. Relativized problems with abelian phase group in topological dynamics.

    Science.gov (United States)

    McMahon, D

    1976-04-01

    Let (X, T) be the equicontinuous minimal transformation group with X = ∏₁^∞ Z₂, the Cantor group, and S = ⊕₁^∞ Z₂ endowed with the discrete topology acting on X by right multiplication. For any countable group T we construct a function F: X × S → T such that if (Y, T) is a minimal transformation group, then (X × Y, S) is a minimal transformation group with the action defined by (x, y)s = [xs, yF(x, s)]. If (W, T) is a minimal transformation group and φ: (Y, T) → (W, T) is a homomorphism, then identity × φ: (X × Y, S) → (X × W, S) is a homomorphism and has many of the same properties that φ has. For this reason, one may assume that the phase group is abelian (or S) without loss of generality for many relativized problems in topological dynamics.

  4. Minimal and careful processing

    OpenAIRE

    Nielsen, Thorkild

    2004-01-01

    In several standards, guidelines and publications, organic food processing is strongly associated with "minimal processing" and "careful processing". The term "minimal processing" is nowadays often used in the general food processing industry and described in literature. The term "careful processing" is used more specifically within organic food processing but is not yet clearly defined. The concept of carefulness seems to fit very well with the processing of organic foods, especially if it i...

  5. Analysis of the matrix structure of the Nuclear Weapons Complex waste minimization and hazard reduction program

    International Nuclear Information System (INIS)

    Churnetski, S.R.

    1991-01-01

    Two of the primary goals of this waste minimization program are to ensure that the major waste problems facing the Nuclear Weapons Complex (NWC) are addressed systematically and to prevent duplication of effort by forming an integrated approach across the complex. Production, disposal, and the hazards of both the wastes and the in-process chemicals used were to be studied. The eight waste streams chosen (electroplating, miscellaneous, mixed, plutonium, polymers, solvents, tritium, and uranium) were deemed to be the most serious problems facing the Nuclear Weapons Complex

  6. Non-minimal Wu-Yang monopole

    International Nuclear Information System (INIS)

    Balakin, A.B.; Zayats, A.E.

    2007-01-01

    We discuss new exact spherically symmetric static solutions to non-minimally extended Einstein-Yang-Mills equations. The obtained solution to the Yang-Mills subsystem is interpreted as a non-minimal Wu-Yang monopole solution. We focus on the analysis of two classes of the exact solutions to the gravitational field equations. Solutions of the first class belong to the Reissner-Nordstroem type, i.e., they are characterized by horizons and by the singularity at the point of origin. The solutions of the second class are regular ones. The horizons and singularities of a new type, the non-minimal ones, are indicated

  7. The Simultaneous Vehicle Scheduling and Passenger Service Problem with Flexible Dwell Times

    DEFF Research Database (Denmark)

    Fonseca, Joao Filipe Paiva; Larsen, Allan; van der Hurk, Evelien

    In this talk, we deal with a generalization of the well-known Vehicle Scheduling Problem (VSP) that we call Simultaneous Vehicle Scheduling and Passenger Service Problem with Flexible Dwell Times (SVSPSP-FDT). The SVSPSP-FDT generalizes the VSP because the original timetables of the trips can...... be changed (i.e., shifted and stretched) in order to minimize a new objective function that aims at minimizing the operational costs plus the waiting times of the passengers at transfer points. Contrary to most generalizations of the VSP, the SVSPSP-FDT establishes the possibility of changing trips' dwell...... times at important transfer points based on expected passenger flows. We introduce a compact mixed integer linear formulation of the SVSPSP-FDT able to address small instances. We also present a meta-heuristic approach to solve medium/large instances of the problem. The effectiveness of the proposed...

  8. A hybrid algorithm for solving inverse problems in elasticity

    Directory of Open Access Journals (Sweden)

    Barabasz Barbara

    2014-12-01

    Full Text Available The paper offers a new approach to handling difficult parametric inverse problems in elasticity and thermo-elasticity, formulated as global optimization ones. The proposed strategy is composed of two phases. In the first, global phase, the stochastic hp-HGS algorithm recognizes the basins of attraction of various objective minima. In the second phase, the local objective minimizers are closer approached by steepest descent processes executed singly in each basin of attraction. The proposed complex strategy is especially dedicated to ill-posed problems with multimodal objective functionals. The strategy offers comparatively low computational and memory costs resulting from a double-adaptive technique in both forward and inverse problem domains. We provide a result on the Lipschitz continuity of the objective functional composed of the elastic energy and the boundary displacement misfits with respect to the unknown constitutive parameters. It allows common scaling of the accuracy of solving forward and inverse problems, which is the core of the introduced double-adaptive technique. The capability of the proposed method of finding multiple solutions is illustrated by a computational example which consists in restoring all feasible Young modulus distributions minimizing an objective functional in a 3D domain of a photo polymer template obtained during step and flash imprint lithography.
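
    The two-phase structure described above (global exploration of basins of attraction followed by local refinement) can be illustrated very schematically on a one-dimensional multimodal objective; the stochastic hp-HGS algorithm and the elasticity forward problem are not reproduced.

      # Two-phase search on a toy multimodal objective: random sampling locates promising
      # basins of attraction, then a steepest-descent refinement is run from each of them.
      import numpy as np

      def f(x):                                    # made-up multimodal objective
          return np.sin(3 * x) + 0.1 * (x - 2.0) ** 2

      def grad(x, h=1e-6):                         # central-difference gradient
          return (f(x + h) - f(x - h)) / (2 * h)

      def local_descent(x, step=0.01, iters=2000):
          for _ in range(iters):
              x -= step * grad(x)
          return x

      rng = np.random.default_rng(1)
      samples = rng.uniform(-4.0, 8.0, size=200)                   # phase 1: global exploration
      starts = sorted(samples, key=f)[:5]                          # keep the best samples
      minima = sorted(set(round(local_descent(x), 3) for x in starts))   # phase 2: local refinement
      print(minima)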

  9. Waste Minimization and Pollution Prevention Awareness Plan

    International Nuclear Information System (INIS)

    1994-04-01

    The purpose of this plan is to document Lawrence Livermore National Laboratory (LLNL) projections for present and future waste minimization and pollution prevention. The plan specifies those activities and methods that are or will be used to reduce the quantity and toxicity of wastes generated at the site. It is intended to satisfy Department of Energy (DOE) requirements. This Plan provides an overview of projected activities from FY 1994 through FY 1999. The plans are broken into site-wide and problem-specific activities. All directorates at LLNL have had an opportunity to contribute input, to estimate budget, and to review the plan. In addition to the above, this plan records LLNL's goals for pollution prevention, regulatory drivers for those activities, assumptions on which the cost estimates are based, analyses of the strengths of the projects, and the barriers to increasing pollution prevention activities

  10. Error minimizing algorithms for nearest neighbor classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Reid B [Los Alamos National Laboratory; Hush, Don [Los Alamos National Laboratory; Zimmer, G. Beate [Texas A&M

    2011-01-03

    Stack Filters define a large class of discrete nonlinear filters first introduced in image and signal processing for noise removal. In recent years we have suggested their application to classification problems, and investigated their relationship to other types of discrete classifiers such as Decision Trees. In this paper we focus on a continuous domain version of Stack Filter Classifiers which we call Ordered Hypothesis Machines (OHM), and investigate their relationship to Nearest Neighbor classifiers. We show that OHM classifiers provide a novel framework in which to train Nearest Neighbor type classifiers by minimizing empirical error based loss functions. We use the framework to investigate a new cost sensitive loss function that allows us to train a Nearest Neighbor type classifier for low false alarm rate applications. We report results on both synthetic data and real-world image data.
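
    The flavour of a cost-sensitive empirical loss for nearest-neighbour classification can be conveyed by a leave-one-out evaluation in which false alarms are charged more heavily than misses; the data and costs below are made up, and the Ordered Hypothesis Machine training procedure itself is not reproduced.

      # Leave-one-out evaluation of a 1-nearest-neighbour rule under an asymmetric
      # empirical loss in which false alarms cost more than missed detections.
      def one_nn(x, X, y):
          d, label = min((sum((a - b) ** 2 for a, b in zip(x, xi)), yi)
                         for xi, yi in zip(X, y))
          return label

      def cost_sensitive_loss(X, y, false_alarm_cost=5.0, miss_cost=1.0):
          loss = 0.0
          for i in range(len(X)):
              pred = one_nn(X[i], X[:i] + X[i + 1:], y[:i] + y[i + 1:])
              if pred == 1 and y[i] == 0:
                  loss += false_alarm_cost        # false alarm
              elif pred == 0 and y[i] == 1:
                  loss += miss_cost               # missed detection
          return loss / len(X)

      X = [(0.0, 0.1), (0.2, 0.0), (1.0, 1.1), (0.9, 1.0), (0.1, 1.0), (1.0, 0.2)]   # made-up data
      y = [0, 0, 1, 1, 0, 1]
      print(cost_sensitive_loss(X, y))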

  11. Optimal experiment design for quantum state tomography: Fair, precise, and minimal tomography

    International Nuclear Information System (INIS)

    Nunn, J.; Smith, B. J.; Puentes, G.; Walmsley, I. A.; Lundeen, J. S.

    2010-01-01

    Given an experimental setup and a fixed number of measurements, how should one take data to optimally reconstruct the state of a quantum system? The problem of optimal experiment design (OED) for quantum state tomography was first broached by Kosut et al.[R. Kosut, I. Walmsley, and H. Rabitz, e-print arXiv:quant-ph/0411093 (2004)]. Here we provide efficient numerical algorithms for finding the optimal design, and analytic results for the case of 'minimal tomography'. We also introduce the average OED, which is independent of the state to be reconstructed, and the optimal design for tomography (ODT), which minimizes tomographic bias. Monte Carlo simulations confirm the utility of our results for qubits. Finally, we adapt our approach to deal with constrained techniques such as maximum-likelihood estimation. We find that these are less amenable to optimization than cruder reconstruction methods, such as linear inversion.

  12. UPGMA and the normalized equidistant minimum evolution problem

    OpenAIRE

    Moulton, Vincent; Spillner, Andreas; Wu, Taoyang

    2017-01-01

    UPGMA (Unweighted Pair Group Method with Arithmetic Mean) is a widely used clustering method. Here we show that UPGMA is a greedy heuristic for the normalized equidistant minimum evolution (NEME) problem, that is, finding a rooted tree that minimizes the minimum evolution score relative to the dissimilarity matrix among all rooted trees with the same leaf-set in which all leaves have the same distance to the root. We prove that the NEME problem is NP-hard. In addition, we present some heurist...
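
    A compact sketch of the UPGMA merging rule discussed above (merge topology only, branch lengths omitted; the dissimilarity matrix in the example is made up):

      # UPGMA: repeatedly merge the two closest clusters, averaging distances weighted
      # by cluster sizes (only the merge topology is returned here).
      def upgma(names, d):
          # d: dissimilarity matrix as a dict {(i, j): distance} with i < j
          clusters = {i: (names[i], 1) for i in range(len(names))}   # id -> (subtree, size)
          dist = dict(d)
          next_id = len(names)
          while len(clusters) > 1:
              i, j = min(dist, key=dist.get)                         # closest pair of clusters
              (ti, ni), (tj, nj) = clusters.pop(i), clusters.pop(j)
              for k in list(clusters):
                  dik = dist.pop((min(i, k), max(i, k)))
                  djk = dist.pop((min(j, k), max(j, k)))
                  dist[(min(next_id, k), max(next_id, k))] = (ni * dik + nj * djk) / (ni + nj)
              del dist[(i, j)]
              clusters[next_id] = ((ti, tj), ni + nj)
              next_id += 1
          return next(iter(clusters.values()))[0]

      names = ["a", "b", "c", "d"]                                   # made-up example
      d = {(0, 1): 2.0, (0, 2): 6.0, (0, 3): 10.0, (1, 2): 6.0, (1, 3): 10.0, (2, 3): 8.0}
      print(upgma(names, d))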

  13. Wilson loops in minimal surfaces

    International Nuclear Information System (INIS)

    Drukker, Nadav; Gross, David J.; Ooguri, Hirosi

    1999-01-01

    The AdS/CFT correspondence suggests that the Wilson loop of the large N gauge theory with N = 4 supersymmetry in 4 dimensions is described by a minimal surface in AdS₅ × S⁵. The authors examine various aspects of this proposal, comparing gauge theory expectations with computations of minimal surfaces. There is a distinguished class of loops, which the authors call BPS loops, whose expectation values are free from ultra-violet divergence. They formulate the loop equation for such loops. To the extent that they have checked, the minimal surface in AdS₅ × S⁵ gives a solution of the equation. The authors also discuss the zig-zag symmetry of the loop operator. In the N = 4 gauge theory, they expect the zig-zag symmetry to hold when the loop does not couple the scalar fields in the supermultiplet. They will show how this is realized for the minimal surface.

  14. Wilson loops and minimal surfaces

    International Nuclear Information System (INIS)

    Drukker, Nadav; Gross, David J.; Ooguri, Hirosi

    1999-01-01

    The AdS-CFT correspondence suggests that the Wilson loop of the large N gauge theory with N=4 supersymmetry in four dimensions is described by a minimal surface in AdS₅ × S⁵. We examine various aspects of this proposal, comparing gauge theory expectations with computations of minimal surfaces. There is a distinguished class of loops, which we call BPS loops, whose expectation values are free from ultraviolet divergence. We formulate the loop equation for such loops. To the extent that we have checked, the minimal surface in AdS₅ × S⁵ gives a solution of the equation. We also discuss the zigzag symmetry of the loop operator. In the N=4 gauge theory, we expect the zigzag symmetry to hold when the loop does not couple the scalar fields in the supermultiplet. We will show how this is realized for the minimal surface. (c) 1999 The American Physical Society

  15. A trim-loss minimization in a produce-handling vehicle production plant

    Directory of Open Access Journals (Sweden)

    Apichai Ritvirool

    2007-01-01

    Full Text Available How to cut out the required pieces from raw materials by minimizing waste is a trim-loss problem. The integer linear programming (ILP) model was developed to solve this problem. In addition, this ILP model could be used for planning an order over some future time period. Time horizon of ordering raw material including weekly, monthly, quarterly, and annually could be planned to reduce the trim loss. The numerical examples using an industrial case study of a produce-handling vehicle production plant were presented to illustrate how the proposed ILP model could be applied to actual systems and the types of information that was obtained relative to implementation. The results showed that the proposed ILP model can be used as a decision support tool for selecting time horizon of order planning and cutting patterns to decrease material cost and waste from cutting raw material.

  16. Minimizing Broadcast Expenses in Clustered Ad-hoc Networks

    Directory of Open Access Journals (Sweden)

    S. Zeeshan Hussain

    2018-01-01

    Full Text Available One way to minimize the broadcast expenses of routing protocols is to cluster the network. In clustered ad-hoc networks, all resources can be managed easily by resolving scalability issues. However, blind query broadcast is a major issue that leads to the broadcast storm problem in clustered ad-hoc networks. This query broadcast is done to carry out the route-search task, and it leads to the unnecessary propagation of route queries even after the route has been found. Hence, this query propagation poses the problem of congestion in the network. In particular, this motivates us to propose a query-control technique for such networks based on broadcast repealing. A huge amount of work has been devoted to proposing query-control broadcasting techniques. However, such techniques, used in traditional broadcasting mechanisms, need to be properly extended for use in cluster-based routing architectures. In this paper, a query-control technique is proposed for cluster-based routing to reduce the broadcast expenses. Finally, we report some experiments which compare the proposed technique to other commonly used techniques, including standard one-class AODV, which follows a TTL-sequence-based broadcasting technique.

  17. An Exploration of Healthcare Inventory and Lean Management in Minimizing Medical Supply Waste in Healthcare Organizations

    Science.gov (United States)

    Hicks, Rodney

    2013-01-01

    The purpose of this study was to understand how lean thinking and inventory management technology minimize expired medical supply waste in healthcare organizations. This study was guided by Toyota's theory of lean and Mintzberg's theory of management development to explain why the problem of medical supply waste exists. Government…

  18. Topological gravity with minimal matter

    International Nuclear Information System (INIS)

    Li Keke

    1991-01-01

    Topological minimal matter, obtained by twisting the minimal N = 2 superconformal field theory, is coupled to two-dimensional topological gravity. The free field formulation of the coupled system allows explicit representations of the BRST charge, physical operators and their correlation functions. The contact terms of the physical operators may be evaluated by extending the argument used in a recent solution of topological gravity without matter. The consistency of the contact terms in correlation functions implies recursion relations which coincide with the Virasoro constraints derived from the multi-matrix models. Topological gravity with minimal matter thus provides the field theoretic description for the multi-matrix models of two-dimensional quantum gravity. (orig.)

  19. Minimizing waste in environmental restoration

    International Nuclear Information System (INIS)

    Moos, L.; Thuot, J.R.

    1996-01-01

    Environmental restoration, decontamination and decommissioning, and facility dismantlement projects are not typically known for their waste minimization and pollution prevention efforts. Typical projects are driven by schedules and milestones with little attention given to cost or waste minimization. Conventional wisdom in these projects is that the waste already exists and cannot be reduced or minimized. In fact, however, there are three significant areas where waste and cost can be reduced. Waste reduction can occur in three ways: beneficial reuse or recycling; segregation of waste types; and reducing generation of secondary waste. This paper will discuss several examples of reuse, recycle, segregation, and secondary waste reduction at ANL restoration programs

  20. A branch-and-cut algorithm for the capacitated profitable tour problem

    DEFF Research Database (Denmark)

    Jepsen, Mads Kehlet; Petersen, Bjørn; Spoorendonk, Simon

    2014-01-01

    This paper considers the Capacitated Profitable Tour Problem (CPTP) which is a special case of the Elementary Shortest Path Problem with Resource Constraints (ESPPRC). The CPTP belongs to the group of problems known as traveling salesman problems with profits. In CPTP each customer is associated...... with a profit and a demand and the objective is to find a capacitated tour (rooted in a depot node) that minimizes the total travel distance minus the profit of the visited customers. The CPTP can be recognized as the sub-problem in many column generation applications, where it is traditionally solved through...

  1. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    Science.gov (United States)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
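
    The alternating structure sketched above can be illustrated, in caricature, with a scalar consensus problem in which two quadratic "component" misfits are minimized separately and scaled multipliers steer the sub-solutions toward a common model; the weights and targets are made up, and the seismic forward problems are not reproduced.

      # Consensus decomposition in caricature: two quadratic "component" misfits are
      # minimized separately while scaled multipliers pull the sub-solutions together.
      import numpy as np

      def component_step(a, w, z, u, rho):
          # argmin_x  w*(x - a)^2 + (rho/2)*(x - z + u)^2   (closed form for quadratics)
          return (2 * w * a + rho * (z - u)) / (2 * w + rho)

      a = np.array([1.0, 3.0])      # each data subset "prefers" a different model value
      w = np.array([1.0, 2.0])      # relative weights of the two misfit terms
      rho, z = 1.0, 0.0
      u = np.zeros(2)               # scaled Lagrange multipliers

      for _ in range(100):
          x = component_step(a, w, z, u, rho)   # solve the component problems separately
          z = np.mean(x + u)                    # update the common (consensus) model
          u += x - z                            # update the multipliers

      print(x, z)   # both component solutions agree with the minimizer of the weighted sum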

  2. A Practical and Robust Execution Time-Frame Procedure for the Multi-Mode Resource-Constrained Project Scheduling Problem with Minimal and Maximal Time Lags

    Directory of Open Access Journals (Sweden)

    Angela Hsiang-Ling Chen

    2016-09-01

    Full Text Available Modeling and optimizing organizational processes, such as the one represented by the Resource-Constrained Project Scheduling Problem (RCPSP), improve outcomes. Based on assumptions and simplification, this model tackles the allocation of resources so that organizations can continue to generate profits and reinvest in future growth. Nonetheless, despite all of the research dedicated to solving the RCPSP and its multi-mode variations, there is no standardized procedure that can guide project management practitioners in their scheduling tasks. This is mainly because many of the proposed approaches are either based on unrealistic/oversimplified scenarios or they propose solution procedures not easily applicable or even feasible in real-life situations. In this study, we solve a more true-to-life and complex model, the Multimode RCPSP with minimal and maximal time lags (MRCPSP/max). The complexity of the model solved is presented, and the practicality of the proposed approach is justified using only information that is available for every project regardless of its industrial context. The results confirm that it is possible to determine a robust makespan and to calculate an execution time-frame with gaps lower than 11% between their lower and upper bounds. In addition, in many instances, the lower bound obtained was equal to the best-known optimum.

  3. Method for solving the periodic problem for integro-differential equations

    Directory of Open Access Journals (Sweden)

    Snezhana G. Hristova

    1989-05-01

    Full Text Available In the paper a monotone-iterative method for approximate finding a couple of minimal and maximal quasisolutions of the periodic problem for a system of integro-differential equations of Volterra type is justified.

  4. Periodic Heterogeneous Vehicle Routing Problem With Driver Scheduling

    Science.gov (United States)

    Mardiana Panggabean, Ellis; Mawengkang, Herman; Azis, Zainal; Filia Sari, Rina

    2018-01-01

    The paper develops a model for the optimal management of logistic delivery of a given commodity. The company has different type of vehicles with different capacity to deliver the commodity for customers. The problem is then called Periodic Heterogeneous Vehicle Routing Problem (PHVRP). The goal is to schedule the deliveries according to feasible combinations of delivery days and to determine the scheduling of fleet and driver and routing policies of the vehicles. The objective is to minimize the sum of the costs of all routes over the planning horizon. We propose a combined approach of heuristic algorithm and exact method to solve the problem.

  5. Approximation Algorithm for a Heterogeneous Vehicle Routing Problem

    Directory of Open Access Journals (Sweden)

    Jungyun Bae

    2015-08-01

    Full Text Available This article addresses a fundamental path planning problem which aims to route a collection of heterogeneous vehicles such that each target location is visited by some vehicle and the sum of the travel costs of the vehicles is minimal. Vehicles are heterogeneous as the cost of traveling between any two locations depends on the type of the vehicle. Algorithms are developed for this path planning problem with bounds on the quality of the solutions produced by the algorithms. Computational results show that high quality solutions can be obtained for the path planning problem involving four vehicles and 40 targets using the proposed approach.

  6. [Minimally invasive approach for cervical spondylotic radiculopathy].

    Science.gov (United States)

    Ding, Liang; Sun, Taicun; Huang, Yonghui

    2010-01-01

    To summarize recent minimally invasive approaches for cervical spondylotic radiculopathy (CSR), the recent literature at home and abroad concerning minimally invasive approaches for CSR was reviewed and summarized. At present there are two kinds of minimally invasive techniques for CSR: percutaneous puncture techniques and endoscopic techniques. The degenerated intervertebral disc is resected or dissolved (nucleolysis) by percutaneous puncture techniques when CSR is caused by mild or moderate intervertebral disc herniation. Cervical microendoscopic discectomy and foraminotomy is an effective minimally invasive approach which provides a clear view. Endoscopic techniques are suitable for treating CSR caused by foraminal osteophytes, lateral disc herniations, local ligamentum flavum thickening and spondylotic foraminal stenosis. These minimally invasive procedures have the advantages of simple handling, minimal invasiveness and a low incidence of complications, but the scope of indications is relatively narrow at present.

  7. The delivery dispatching problem with time windows for urban consolidation centers

    OpenAIRE

    van Heeswijk, W.J.A.; Mes, Martijn R.K.; Schutten, Johannes M.J.

    2015-01-01

    This paper addresses the dispatch decision problem faced by an urban consolidation center. The center receives orders according to a stochastic arrival process, and dispatches them for the last-mile distribution in batches. The operator of the center aims to find the cost-minimizing consolidation policy, depending on the orders at hand, pre-announced orders, and stochastic arrivals. We present this problem as a variant of the Delivery Dispatching Problem that includes dispatch windows, and m...

  8. Guidelines for mixed waste minimization

    International Nuclear Information System (INIS)

    Owens, C.

    1992-02-01

    Currently, there is no commercial mixed waste disposal available in the United States. Storage and treatment for commercial mixed waste is limited. Host state and compact region officials are encouraging their mixed waste generators to minimize their mixed wastes because of management limitations. This document provides a guide to mixed waste minimization

  9. Minimal rates for lepton flavour violation from supersymmetric leptogenesis

    International Nuclear Information System (INIS)

    Ibarra, A; Simonetto, C

    2010-01-01

    The see-saw is a very attractive model for neutrino mass generation in particular in association with supersymmetry as a solution to the hierarchy problem. Under the plausible assumptions of hierarchical neutrino Yukawa eigenvalues and the absence of cancellations, we derive an upper bound on the lightest right-handed neutrino mass from the non-observation of μ → eγ and μ-e conversion in nuclei. The ongoing experiment MEG as well as the planned experiments Mu2e, COMET and PRISM/PRIME will improve this bound if no evidence of lepton flavour violation is found. We lastly comment on the possibility of ruling out minimal leptogenesis if these experiments find no signal.

  10. ITOUGH2 sample problems

    International Nuclear Information System (INIS)

    Finsterle, S.

    1997-11-01

    This report contains a collection of ITOUGH2 sample problems. It complements the ITOUGH2 User's Guide [Finsterle, 1997a], and the ITOUGH2 Command Reference [Finsterle, 1997b]. ITOUGH2 is a program for parameter estimation, sensitivity analysis, and uncertainty propagation analysis. It is based on the TOUGH2 simulator for non-isothermal multiphase flow in fractured and porous media [Preuss, 1987, 1991a]. The report ITOUGH2 User's Guide [Finsterle, 1997a] describes the inverse modeling framework and provides the theoretical background. The report ITOUGH2 Command Reference [Finsterle, 1997b] contains the syntax of all ITOUGH2 commands. This report describes a variety of sample problems solved by ITOUGH2. Table 1.1 contains a short description of the seven sample problems discussed in this report. The TOUGH2 equation-of-state (EOS) module that needs to be linked to ITOUGH2 is also indicated. Each sample problem focuses on a few selected issues shown in Table 1.2. ITOUGH2 input features and the usage of program options are described. Furthermore, interpretations of selected inverse modeling results are given. Problem 1 is a multipart tutorial, describing basic ITOUGH2 input files for the main ITOUGH2 application modes; no interpretation of results is given. Problem 2 focuses on non-uniqueness, residual analysis, and correlation structure. Problem 3 illustrates a variety of parameter and observation types, and describes parameter selection strategies. Problem 4 compares the performance of minimization algorithms and discusses model identification. Problem 5 explains how to set up a combined inversion of steady-state and transient data. Problem 6 provides a detailed residual and error analysis. Finally, Problem 7 illustrates how the estimation of model-related parameters may help compensate for errors in that model

  11. Minimal string theories and integrable hierarchies

    Science.gov (United States)

    Iyer, Ramakrishnan

    Well-defined, non-perturbative formulations of the physics of string theories in specific minimal or superminimal model backgrounds can be obtained by solving matrix models in the double scaling limit. They provide us with the first examples of completely solvable string theories. Despite being relatively simple compared to higher dimensional critical string theories, they furnish non-perturbative descriptions of interesting physical phenomena such as geometrical transitions between D-branes and fluxes, tachyon condensation and holography. The physics of these theories in the minimal model backgrounds is succinctly encoded in a non-linear differential equation known as the string equation, along with an associated hierarchy of integrable partial differential equations (PDEs). The bosonic string in (2,2m-1) conformal minimal model backgrounds and the type 0A string in (2,4 m) superconformal minimal model backgrounds have the Korteweg-de Vries system, while type 0B in (2,4m) backgrounds has the Zakharov-Shabat system. The integrable PDE hierarchy governs flows between backgrounds with different m. In this thesis, we explore this interesting connection between minimal string theories and integrable hierarchies further. We uncover the remarkable role that an infinite hierarchy of non-linear differential equations plays in organizing and connecting certain minimal string theories non-perturbatively. We are able to embed the type 0A and 0B (A,A) minimal string theories into this single framework. The string theories arise as special limits of a rich system of equations underpinned by an integrable system known as the dispersive water wave hierarchy. We find that there are several other string-like limits of the system, and conjecture that some of them are type IIA and IIB (A,D) minimal string backgrounds. We explain how these and several other string-like special points arise and are connected. In some cases, the framework endows the theories with a non

  12. Waste minimization at Chalk River Laboratories

    Energy Technology Data Exchange (ETDEWEB)

    Kranz, P.; Wong, P.C.F. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2011-07-01

    Waste minimization supports Atomic Energy of Canada Limited (AECL) Environment Policy with regard to pollution prevention and has positive impacts on the environment, human health and safety, and economy. In accordance with the principle of pollution prevention, the quantities and degree of hazard of wastes requiring storage or disposition at facilities within or external to AECL sites shall be minimized, following the principles of Prevent, Reduce, Reuse, and Recycle, to the extent practical. Waste minimization is an important element in the Waste Management Program. The Waste Management Program has implemented various initiatives for waste minimization since 2007. The key initiatives have focused on waste reduction, segregation and recycling, and included: 1) developed waste minimization requirements and recycling procedure to establish the framework for applying the Waste Minimization Hierarchy; 2) performed waste minimization assessments for the facilities, which generate significant amounts of waste, to identify the opportunities for waste reduction and assist the waste generators to develop waste reduction targets and action plans to achieve the targets; 3) implemented the colour-coded, standardized waste and recycling containers to enhance waste segregation; 4) established partnership with external agents for recycling; 5) extended the likely clean waste and recyclables collection to selected active areas; 6) provided on-going communications to promote waste reduction and increase awareness for recycling; and 7) continually monitored performance, with respect to waste minimization, to identify opportunities for improvement and to communicate these improvements. After implementation of waste minimization initiatives at CRL, the solid waste volume generated from routine operations at CRL has significantly decreased, while the amount of recyclables diverted from the onsite landfill has significantly increased since 2007. The overall refuse volume generated at

  13. Waste minimization at Chalk River Laboratories

    International Nuclear Information System (INIS)

    Kranz, P.; Wong, P.C.F.

    2011-01-01

    Waste minimization supports Atomic Energy of Canada Limited (AECL) Environment Policy with regard to pollution prevention and has positive impacts on the environment, human health and safety, and economy. In accordance with the principle of pollution prevention, the quantities and degree of hazard of wastes requiring storage or disposition at facilities within or external to AECL sites shall be minimized, following the principles of Prevent, Reduce, Reuse, and Recycle, to the extent practical. Waste minimization is an important element in the Waste Management Program. The Waste Management Program has implemented various initiatives for waste minimization since 2007. The key initiatives have focused on waste reduction, segregation and recycling, and included: 1) developed waste minimization requirements and recycling procedure to establish the framework for applying the Waste Minimization Hierarchy; 2) performed waste minimization assessments for the facilities, which generate significant amounts of waste, to identify the opportunities for waste reduction and assist the waste generators to develop waste reduction targets and action plans to achieve the targets; 3) implemented the colour-coded, standardized waste and recycling containers to enhance waste segregation; 4) established partnership with external agents for recycling; 5) extended the likely clean waste and recyclables collection to selected active areas; 6) provided on-going communications to promote waste reduction and increase awareness for recycling; and 7) continually monitored performance, with respect to waste minimization, to identify opportunities for improvement and to communicate these improvements. After implementation of waste minimization initiatives at CRL, the solid waste volume generated from routine operations at CRL has significantly decreased, while the amount of recyclables diverted from the onsite landfill has significantly increased since 2007. The overall refuse volume generated at

  14. Minimal Brain Damage/Dysfunction (MBD) en de ontwikkeling van de wetenschappelijke kinderstudie in Nederland, ca. 1950-1990

    NARCIS (Netherlands)

    Bakker, Nelleke

    2014-01-01

    This paper discusses the reception in the Netherlands of Minimal Brain Damage/Dysfunction (MBD) and related labels for normally gifted children with learning disabilities and behavioural problems by child scientists of all sorts from the 1950s up to the late 1980s, when MBD was replaced with

  15. COLLAGE-BASED INVERSE PROBLEMS FOR IFSM WITH ENTROPY MAXIMIZATION AND SPARSITY CONSTRAINTS

    Directory of Open Access Journals (Sweden)

    Herb Kunze

    2013-11-01

    Full Text Available We consider the inverse problem associated with IFSM: Given a target function f, find an IFSM such that its invariant fixed point is sufficiently close to f in the Lp distance. In this paper, we extend the collage-based method developed by Forte and Vrscay (1995) along two different directions. We first search for a set of mappings that not only minimizes the collage error but also maximizes the entropy of the dynamical system. We then include an extra term in the minimization process which takes into account the sparsity of the set of mappings. In this new formulation, the minimization of collage error is treated as a multi-criteria problem: we consider three different and conflicting criteria, i.e., collage error, entropy, and sparsity. To solve this multi-criteria program, we proceed by scalarization and reduce the model to a single-criterion program by combining all objective functions with different trade-off weights. The results of some numerical computations are presented. Numerical studies indicate that a maximum entropy principle exists for this approximation problem, i.e., that the suboptimal solutions produced by collage coding can be improved at least slightly by adding a maximum entropy criterion.

  16. Non-minimal inflation revisited

    International Nuclear Information System (INIS)

    Nozari, Kourosh; Shafizadeh, Somayeh

    2010-01-01

    We reconsider an inflationary model in which the inflaton field is non-minimally coupled to gravity. We study the parameter space of the model up to the second (and in some cases third) order of the slow-roll parameters. We calculate the inflation parameters in both the Jordan and Einstein frames, and the results are compared between the two frames and with observations. Using the recent observational data from the combined WMAP5+SDSS+SNIa datasets, we study the constraints imposed on our model parameters, especially the non-minimal coupling ξ.

  17. Identification problems in linear transformation system

    International Nuclear Information System (INIS)

    Delforge, Jacques.

    1975-01-01

    An attempt was made to solve the theoretical and numerical difficulties involved in the identification problem relative to the linear part of P. Delattre's theory of transformation systems. The theoretical difficulties are due to the very important problem of the uniqueness of the solution, which must be demonstrated in order to justify the value of the solution found. Simple criteria have been found when measurements are possible on all the equivalence classes, but the problem remains imperfectly solved when certain evolution curves are unknown. The numerical difficulties are of two kinds: a slow convergence of iterative methods and a strong repercussion of numerical and experimental errors on the solution. In the former case a fast convergence was obtained by transformation of the parametric space, while in the latter it was possible, from sensitivity functions, to estimate the errors, to define and measure the conditioning of the identification problem, and then to minimize this conditioning as a function of the experimental conditions [fr]

  18. Optimizing investment fund allocation using vehicle routing problem framework

    Science.gov (United States)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita

    2014-07-01

    The objective of investment is to maximize total returns or minimize total risks. To determine the optimum order of investment, the vehicle routing problem method is used. The method, which is widely used in the field of resource distribution, shares almost similar characteristics with the problem of investment fund allocation. In this paper we describe and elucidate the concept of using the vehicle routing problem framework in optimizing the allocation of investment funds. To better illustrate these similarities, sectorial data from FTSE Bursa Malaysia is used. Results show that different values of utility for risk-averse investors generate the same investment routes.

  19. Perturbation-Based Regularization for Signal Estimation in Linear Discrete Ill-posed Problems

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Al-Naffouri, Tareq Y.

    2016-01-01

    Estimating the values of unknown parameters from corrupted measured data faces a lot of challenges in ill-posed problems. In such problems, many fundamental estimation methods fail to provide a meaningful stabilized solution. In this work, we propose a new regularization approach and a new regularization parameter selection approach for linear least-squares discrete ill-posed problems. The proposed approach is based on enhancing the singular-value structure of the ill-posed model matrix to acquire a better solution. Unlike many other regularization algorithms that seek to minimize the estimated data error, the proposed approach is developed to minimize the mean-squared error of the estimator which is the objective in many typical estimation scenarios. The performance of the proposed approach is demonstrated by applying it to a large set of real-world discrete ill-posed problems. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods in most cases. In addition, the approach also enjoys the lowest runtime and offers the highest level of robustness amongst all the tested benchmark regularization methods.
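
    As a rough illustration of the kind of regularization discussed above (not the perturbation-based approach proposed in this record), the following sketch applies classical Tikhonov regularization to a discrete ill-posed least-squares problem via the SVD; the matrix, noise level, and regularization weights are arbitrary choices for demonstration only.

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Tikhonov-regularized least squares via the SVD of A.

    Solves min_x ||A x - b||^2 + lam * ||x||^2 using the filtered
    singular-value expansion, which is numerically convenient for
    ill-posed problems where small singular values amplify noise.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s**2 + lam)        # filter factors damp small singular values
    return Vt.T @ (filt * (U.T @ b))

# Toy ill-posed example: an ill-conditioned matrix with noisy data.
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 50), 12, increasing=True)
x_true = rng.normal(size=12)
b = A @ x_true + 1e-3 * rng.normal(size=50)

for lam in (0.0, 1e-8, 1e-4):
    x_hat = tikhonov_svd(A, b, lam) if lam > 0 else np.linalg.lstsq(A, b, rcond=None)[0]
    print(lam, np.linalg.norm(x_hat - x_true))
```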

  20. Perturbation-Based Regularization for Signal Estimation in Linear Discrete Ill-posed Problems

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-11-29

    Estimating the values of unknown parameters from corrupted measured data faces a lot of challenges in ill-posed problems. In such problems, many fundamental estimation methods fail to provide a meaningful stabilized solution. In this work, we propose a new regularization approach and a new regularization parameter selection approach for linear least-squares discrete ill-posed problems. The proposed approach is based on enhancing the singular-value structure of the ill-posed model matrix to acquire a better solution. Unlike many other regularization algorithms that seek to minimize the estimated data error, the proposed approach is developed to minimize the mean-squared error of the estimator which is the objective in many typical estimation scenarios. The performance of the proposed approach is demonstrated by applying it to a large set of real-world discrete ill-posed problems. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods in most cases. In addition, the approach also enjoys the lowest runtime and offers the highest level of robustness amongst all the tested benchmark regularization methods.

  1. On the Application of Iterative Methods of Nondifferentiable Optimization to Some Problems of Approximation Theory

    Directory of Open Access Journals (Sweden)

    Stefan M. Stefanov

    2014-01-01

    Full Text Available We consider the data fitting problem, that is, the problem of approximating a function of several variables, given by tabulated data, and the corresponding problem for inconsistent (overdetermined) systems of linear algebraic equations. Such problems, connected with measurement of physical quantities, arise, for example, in physics, engineering, and so forth. A traditional approach for solving these two problems is the discrete least squares data fitting method, which is based on the discrete l2-norm. In this paper, an alternative approach is proposed: with each of these problems, we associate a nondifferentiable (nonsmooth) unconstrained minimization problem with an objective function based on the discrete l1- and/or l∞-norm, respectively; that is, these two norms are used as proximity criteria. In other words, the problems under consideration are solved by minimizing the residual using these two norms. Respective subgradients are calculated, and a subgradient method is used for solving these two problems. The emphasis is on implementation of the proposed approach. Some computational results, obtained by an appropriate iterative method, are given at the end of the paper. These results are compared with the results obtained by the iterative gradient method for the corresponding “differentiable” discrete least squares problems, that is, approximation problems based on the discrete l2-norm.
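
    The following is a minimal sketch of the nonsmooth approach described above: fitting an overdetermined linear system by minimizing the discrete l1-norm of the residual with a plain subgradient method. The step-size rule, iteration count, and test data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def l1_fit_subgradient(A, b, steps=2000, step0=1.0):
    """Minimize ||A x - b||_1 by a plain subgradient method.

    A subgradient of the objective at x is A^T sign(A x - b); a
    diminishing step size step0/sqrt(k) is a standard choice.
    """
    x = np.zeros(A.shape[1])
    best_x, best_f = x.copy(), np.inf
    for k in range(1, steps + 1):
        r = A @ x - b
        f = np.abs(r).sum()
        if f < best_f:
            best_f, best_x = f, x.copy()
        g = A.T @ np.sign(r)              # subgradient of the l1 residual
        x = x - (step0 / np.sqrt(k)) * g
    return best_x, best_f

# Overdetermined system with a few gross outliers: the l1 fit is robust.
rng = np.random.default_rng(1)
A = rng.normal(size=(100, 3))
x_true = np.array([2.0, -1.0, 0.5])
b = A @ x_true
b[:5] += 10.0                             # outliers
x_l1, _ = l1_fit_subgradient(A, b)
x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]
print("l1 fit:", np.round(x_l1, 2), " l2 fit:", np.round(x_l2, 2))
```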

  2. The Gribov problem in the frame of stochastic quantization

    Energy Technology Data Exchange (ETDEWEB)

    Parrinello, C. (Rome-1 Univ. (Italy). Dipt. di Fisica)

    1990-09-01

    We review the Gribov problem in the Landau gauge, from the point of view of stochastic quantization, and briefly sketch a numerical investigation based on a minimization algorithm, with the purpose of collecting wide information about Gribov copies within the first Gribov horizon. (orig.).

  3. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    Science.gov (United States)

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
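
    To make the two sampling schemes concrete, the sketch below contrasts fixed sampling (keep a k-mer every w positions) with minimizer sampling (keep the smallest k-mer in each window of w consecutive k-mers) on a toy DNA string; the sequence, k, and w are arbitrary and the code is not taken from the study.

```python
def fixed_sample(seq, k, w):
    """Keep the k-mer starting at every w-th position (fixed sampling)."""
    return {(i, seq[i:i + k]) for i in range(0, len(seq) - k + 1, w)}

def minimizer_sample(seq, k, w):
    """Keep the lexicographically smallest k-mer in every window of w
    consecutive k-mers (minimizer sampling). Adjacent windows often share
    their minimizer, which is what makes the sample sparse."""
    chosen = set()
    n_kmers = len(seq) - k + 1
    for start in range(0, n_kmers - w + 1):
        window = [(seq[i:i + k], i) for i in range(start, start + w)]
        kmer, pos = min(window)
        chosen.add((pos, kmer))
    return chosen

seq = "ACGTACGTTGCAACGTTACG"
print(len(fixed_sample(seq, k=5, w=4)), "fixed k-mers")
print(len(minimizer_sample(seq, k=5, w=4)), "minimizers")
```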

  4. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    Directory of Open Access Journals (Sweden)

    Meznah Almutairy

    Full Text Available Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.

  5. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches

    Science.gov (United States)

    Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method. PMID:29389989

  6. Point source reconstruction principle of linear inverse problems

    International Nuclear Information System (INIS)

    Terazono, Yasushi; Matani, Ayumu; Fujimaki, Norio; Murata, Tsutomu

    2010-01-01

    Exact point source reconstruction for underdetermined linear inverse problems with a block-wise structure was studied. In a block-wise problem, elements of a source vector are partitioned into blocks. Accordingly, a leadfield matrix, which represents the forward observation process, is also partitioned into blocks. A point source is a source having only one nonzero block. An example of such a problem is current distribution estimation in electroencephalography and magnetoencephalography, where a source vector represents a vector field and a point source represents a single current dipole. In this study, the block-wise norm, a block-wise extension of the l_p-norm, was defined as the family of cost functions of the inverse method. The main result is that a set of three conditions was found to be necessary and sufficient for block-wise norm minimization to ensure exact point source reconstruction for any leadfield matrix that admits such reconstruction. The block-wise norm that satisfies the conditions is the sum of the cost of all the observations of source blocks, or in other words, the block-wisely extended leadfield-weighted l_1-norm. Additional results are that minimization of such a norm always provides block-wisely sparse solutions and that its solutions form cones in source space.
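
    A hedged sketch of block-wise norm minimization in the spirit described above: it minimizes a sum of per-block observation norms subject to the measurement constraint using cvxpy, and recovers a single-block ("point") source in a toy underdetermined problem. The block structure, leadfield matrix, and exact cost are illustrative and do not reproduce the paper's leadfield-weighted norm.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n_blocks, block_size, n_obs = 8, 3, 12
L = rng.normal(size=(n_obs, n_blocks * block_size))   # toy leadfield matrix

# Ground truth: a single active block ("point source").
x_true = np.zeros(n_blocks * block_size)
x_true[6:9] = rng.normal(size=block_size)
y = L @ x_true

x = cp.Variable(n_blocks * block_size)
blocks = [slice(b * block_size, (b + 1) * block_size) for b in range(n_blocks)]
# Block-wise cost: sum over blocks of the norm of that block's observation.
cost = sum(cp.norm(L[:, s] @ x[s], 2) for s in blocks)
prob = cp.Problem(cp.Minimize(cost), [L @ x == y])
prob.solve()

x_hat = x.value.reshape(n_blocks, block_size)
print("block norms:", np.round(np.linalg.norm(x_hat, axis=1), 3))
```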

  7. Null-polygonal minimal surfaces in AdS4 from perturbed W minimal models

    International Nuclear Information System (INIS)

    Hatsuda, Yasuyuki; Ito, Katsushi; Satoh, Yuji

    2012-11-01

    We study the null-polygonal minimal surfaces in AdS_4, which correspond to the gluon scattering amplitudes/Wilson loops in N=4 super Yang-Mills theory at strong coupling. The area of the minimal surfaces with n cusps is characterized by the thermodynamic Bethe ansatz (TBA) integral equations or the Y-system of the homogeneous sine-Gordon model, which is regarded as the SU(n-4)_4/U(1)^(n-5) generalized parafermion theory perturbed by the weight-zero adjoint operators. Based on the relation to the TBA systems of the perturbed W minimal models, we solve the TBA equations by using the conformal perturbation theory, and obtain the analytic expansion of the remainder function around the UV/regular-polygonal limit for n = 6 and 7. We compare the rescaled remainder function for n=6 with the two-loop one, to observe that they are close to each other similarly to the AdS_3 case.

  8. Solar system tests for realistic f(T) models with non-minimal torsion-matter coupling

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Rui-Hui; Zhai, Xiang-Hua; Li, Xin-Zhou [Shanghai Normal University, Shanghai United Center for Astrophysics (SUCA), Shanghai (China)

    2017-08-15

    In the previous paper, we constructed two f(T) models with a non-minimal torsion-matter coupling extension, which are successful in describing the evolution history of the Universe, including the radiation-dominated era, the matter-dominated era, and the present accelerating expansion. Meanwhile, the significant advantage of these models is that they avoid the cosmological constant problem of ΛCDM. However, the non-minimal coupling between matter and torsion will affect the Solar system tests. In this paper, we study the Solar system effects in these models, including the gravitational redshift, the geodetic effect and the perihelion precession. We find that Model I can pass all three of the Solar system tests. For Model II, the parameter is constrained by the uncertainties of the planets' estimated perihelion precessions. (orig.)

  9. 3D motion analysis via energy minimization

    Energy Technology Data Exchange (ETDEWEB)

    Wedel, Andreas

    2009-10-16

    This work deals with 3D motion analysis from stereo image sequences for driver assistance systems. It consists of two parts: the estimation of motion from the image data and the segmentation of moving objects in the input images. The content can be summarized with the technical term machine visual kinesthesia, the sensation or perception and cognition of motion. In the first three chapters, the importance of motion information is discussed for driver assistance systems, for machine vision in general, and for the estimation of ego motion. The next two chapters delineate on motion perception, analyzing the apparent movement of pixels in image sequences for both a monocular and binocular camera setup. Then, the obtained motion information is used to segment moving objects in the input video. Thus, one can clearly identify the thread from analyzing the input images to describing the input images by means of stationary and moving objects. Finally, I present possibilities for future applications based on the contents of this thesis. Previous work in each case is presented in the respective chapters. Although the overarching issue of motion estimation from image sequences is related to practice, there is nothing as practical as a good theory (Kurt Lewin). Several problems in computer vision are formulated as intricate energy minimization problems. In this thesis, motion analysis in image sequences is thoroughly investigated, showing that splitting an original complex problem into simplified sub-problems yields improved accuracy, increased robustness, and a clear and accessible approach to state-of-the-art motion estimation techniques. In Chapter 4, optical flow is considered. Optical flow is commonly estimated by minimizing the combined energy, consisting of a data term and a smoothness term. These two parts are decoupled, yielding a novel and iterative approach to optical flow. The derived Refinement Optical Flow framework is a clear and straight-forward approach to

  10. Non-minimally coupled tachyon field in teleparallel gravity

    Energy Technology Data Exchange (ETDEWEB)

    Fazlpour, Behnaz [Department of Physics, Babol Branch, Islamic Azad University, Shariati Street, Babol (Iran, Islamic Republic of); Banijamali, Ali, E-mail: b.fazlpour@umz.ac.ir, E-mail: a.banijamali@nit.ac.ir [Department of Basic Sciences, Babol University of Technology, Shariati Street, Babol (Iran, Islamic Republic of)

    2015-04-01

    We perform a full investigation on dynamics of a new dark energy model in which the four-derivative of a non-canonical scalar field (tachyon) is non-minimally coupled to the vector torsion. Our analysis is done in the framework of teleparallel equivalent of general relativity which is based on torsion instead of curvature. We show that in our model there exists a late-time scaling attractor (point P_4), corresponding to an accelerating universe with the property that dark energy and dark matter densities are of the same order. Such a point can help to alleviate the cosmological coincidence problem. Existence of this point is the most significant difference between our model and another model in which a canonical scalar field (quintessence) is used instead of tachyon field.

  11. Non-minimally coupled tachyon field in teleparallel gravity

    International Nuclear Information System (INIS)

    Fazlpour, Behnaz; Banijamali, Ali

    2015-01-01

    We perform a full investigation on dynamics of a new dark energy model in which the four-derivative of a non-canonical scalar field (tachyon) is non-minimally coupled to the vector torsion. Our analysis is done in the framework of teleparallel equivalent of general relativity which is based on torsion instead of curvature. We show that in our model there exists a late-time scaling attractor (point P_4), corresponding to an accelerating universe with the property that dark energy and dark matter densities are of the same order. Such a point can help to alleviate the cosmological coincidence problem. Existence of this point is the most significant difference between our model and another model in which a canonical scalar field (quintessence) is used instead of tachyon field.

  12. Heuristics for no-wait flow shop scheduling problem

    Directory of Open Access Journals (Sweden)

    Kewal Krishan Nailwal

    2016-09-01

    Full Text Available No-wait flow shop scheduling refers to the continuous flow of jobs through different machines: once started, a job must be processed on consecutive machines without waiting. This situation occurs when there is no intermediate storage between the processing of jobs on two consecutive machines. The no-wait problem with the objective of minimizing makespan in flow shop scheduling is NP-hard; therefore heuristic algorithms are the key to obtaining optimal or near-optimal solutions in a simple manner. The paper describes two heuristics, a constructive one and an improvement heuristic obtained by modifying the constructive one, for sequencing n jobs through m machines in a flow shop under the no-wait constraint with the objective of minimizing makespan. The efficiency of the proposed heuristic algorithms is tested on 120 of Taillard's benchmark problems from the literature against the NEH heuristic under no-wait and the MNEH heuristic for the no-wait flow shop problem. The improvement heuristic outperforms all heuristics on the Taillard instances, improving the results of NEH by 27.85%, MNEH by 22.56%, and those of the proposed constructive heuristic algorithm by 24.68%. To explain the computational process of the proposed algorithm, numerical illustrations are also given in the paper. Statistical tests of significance are done in order to draw the conclusions.
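
    The sketch below illustrates the two ingredients such heuristics rely on: the no-wait makespan of a sequence (computed from pairwise start-time delays) and an NEH-style insertion procedure. It is a generic illustration, not the constructive or improvement heuristic proposed in the paper; the job data are made up.

```python
def nowait_makespan(seq, p):
    """Makespan of a job sequence in a no-wait flow shop.

    p[j][k] is the processing time of job j on machine k. Under the
    no-wait constraint the start-time delay between consecutive jobs i, j is
    d(i, j) = max_k ( sum_{h<=k} p[i][h] - sum_{h<k} p[j][h] ),
    and the makespan is the sum of delays plus the last job's total time.
    """
    m = len(p[0])
    def delay(i, j):
        return max(sum(p[i][:k + 1]) - sum(p[j][:k]) for k in range(m))
    total = sum(delay(a, b) for a, b in zip(seq, seq[1:]))
    return total + sum(p[seq[-1]])

def neh_nowait(p):
    """NEH-style constructive heuristic adapted to the no-wait makespan:
    order jobs by decreasing total work, then insert each job at the
    position of the partial sequence that gives the smallest makespan."""
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [order[0]]
    for j in order[1:]:
        candidates = [seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: nowait_makespan(s, p))
    return seq

p = [[5, 3, 6], [2, 7, 4], [6, 2, 3], [4, 4, 5]]   # 4 jobs, 3 machines
seq = neh_nowait(p)
print(seq, nowait_makespan(seq, p))
```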

  13. Measures for simultaneous minimization of alkali related operating problems, Phase 2; Aatgaerder foer samtidig minimering av alkalirelaterade driftproblem, Etapp 2. Ramprogram

    Energy Technology Data Exchange (ETDEWEB)

    Gyllenhammar, Marianne; Herstad Svaerd, Solvie; Davidsson, Kent; Aamand, Lars-Erik; Steenari, Britt-Marie; Folkeson, Nicklas; Pettersson, Jesper; Svensson, Jan-Erik; Boss, Anna; Johansson, Linda; Kassman, Haakan

    2007-12-15

    Combustion of an increasing amount of biofuel and waste wood has resulted in certain environmental advantages, including decreased emissions of fossil CO2, SO2 and metals. On the other hand, a number of chloride- and alkali-related operational problems have occurred which are related to combustion of these fuels. Alkali-related operational problems have been studied in a project consisting of two parts. The overall scope has been to characterise the operational problems and to study measures to minimise them. The first part was reported in Vaermeforsk report 997. In part two, additional measures have been included in the test plan and initial corrosion has been studied linked to the different measures. The tests in part two have also been carried out at the 12 MW CFB boiler at Chalmers. The effect of the selected measures has been investigated concerning both deposit formation and bed agglomeration, and at the same time emissions and other operational conditions were characterised. The second part of the project has, among other things, focused on: investigating measures which decrease the content of alkali and chloride in the deposits, and consequently decrease the risk for corrosion (by investigating the initial corrosion), with a focus also on trying to explain the favourable effects; investigating whether it is possible to combine a rather low dosage of kaolin with injection of ammonium sulphate, in order to reduce both bed agglomeration and problems from deposits during combustion of fuels rich in chlorine; investigating whether co-combustion with sewage sludge, de-inking sludge or peat with high ash content could give similar advantages as conventional additives; and investigating whether ash from PFBC (coal ash and dolomite) can be used as an alternative bed material. In the reference case, straw pellets were co-combusted together with wood pellets. This fuel mixture gave high alkali and chlorine contents. Alkali was in surplus of chlorine. The

  14. Minimalism and Speakers’ Intuitions

    Directory of Open Access Journals (Sweden)

    Matías Gariazzo

    2011-08-01

    Full Text Available Minimalism proposes a semantics that does not account for speakers’ intuitions about the truth conditions of a range of sentences or utterances. Thus, a challenge for this view is to offer an explanation of how its assignment of semantic contents to these sentences is grounded in their use. Such an account was mainly offered by Soames, but also suggested by Cappelen and Lepore. The article criticizes this explanation by presenting four kinds of counterexamples to it, and arrives at the conclusion that minimalism has not successfully answered the above-mentioned challenge.

  15. Financial strategies for minimizing corporate income taxes under Brazil's new global tax system

    OpenAIRE

    Limberg, Stephen T.; Robison, John R.; Schadewald, Michael S.

    1997-01-01

    In 1996, Brazil adopted a worldwide income tax system for corporations. This system represents a fundamental change in how the Brazilian government treats multinational transactions and the tax minimizing strategies relevant to businesses. In this article, we describe the conceptual basis for worldwide tax systems and the problem of double taxation that they create. Responses to double taxation by both the governments and the private sector are considered. Namely, the imperfect mechanisms de...

  16. Scheduling with Learning Effects and/or Time-Dependent Processing Times to Minimize the Weighted Number of Tardy Jobs on a Single Machine

    Directory of Open Access Journals (Sweden)

    Jianbo Qian

    2013-01-01

    Full Text Available We consider single machine scheduling problems with learning/deterioration effects and time-dependent processing times, with due date assignment consideration, and our objective is to minimize the weighted number of tardy jobs. By reducing all versions of the problem to an assignment problem, we solve them in O(n^4) time. For some important special cases, the time complexity can be improved to O(n^2) using dynamic programming techniques.
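
    The reduction to an assignment problem can be illustrated with a standard solver: once a cost matrix giving the weighted-tardiness contribution of placing each job in each sequence position has been built (which is where the learning/deterioration model enters), the optimal placement follows from an assignment solver. The cost values below are arbitrary placeholders, not derived from the paper's model.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative cost matrix: cost[j, r] is the (hypothetical) weighted-tardiness
# contribution of scheduling job j in sequence position r.
cost = np.array([
    [4.0, 2.0, 8.0, 6.0],
    [5.0, 3.0, 7.0, 4.0],
    [9.0, 6.0, 2.0, 3.0],
    [3.0, 5.0, 6.0, 1.0],
])

rows, cols = linear_sum_assignment(cost)   # optimal job-to-position assignment
print("job -> position:", dict(zip(rows, cols)))
print("total cost:", cost[rows, cols].sum())
```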

  17. The re-emergence of the minimal running shoe.

    Science.gov (United States)

    Davis, Irene S

    2014-10-01

    The running shoe has gone through significant changes since its inception. The purpose of this paper is to review these changes, the majority of which have occurred over the past 50 years. Running footwear began as very minimal, then evolved to become highly cushioned and supportive. However, over the past 5 years, there has been a reversal of this trend, with runners seeking more minimal shoes that allow their feet more natural motion. This abrupt shift toward footwear without cushioning and support has led to reports of injuries associated with minimal footwear. In response to this, the running footwear industry shifted again toward the development of lightweight, partial minimal shoes that offer some support and cushioning. In this paper, studies comparing the mechanics between running in minimal, partial minimal, and traditional shoes are reviewed. The implications for injuries in all 3 conditions are examined. The use of minimal footwear in other populations besides runners is discussed. Finally, areas for future research into minimal footwear are suggested.

  18. Assessing and minimizing contamination in time of flight based validation data

    Science.gov (United States)

    Lennox, Kristin P.; Rosenfield, Paul; Blair, Brenton; Kaplan, Alan; Ruz, Jaime; Glenn, Andrew; Wurtz, Ronald

    2017-10-01

    Time of flight experiments are the gold standard method for generating labeled training and testing data for the neutron/gamma pulse shape discrimination problem. As the popularity of supervised classification methods increases in this field, there will also be increasing reliance on time of flight data for algorithm development and evaluation. However, time of flight experiments are subject to various sources of contamination that lead to neutron and gamma pulses being mislabeled. Such labeling errors have a detrimental effect on classification algorithm training and testing, and should therefore be minimized. This paper presents a method for identifying minimally contaminated data sets from time of flight experiments and estimating the residual contamination rate. This method leverages statistical models describing neutron and gamma travel time distributions and is easily implemented using existing statistical software. The method produces a set of optimal intervals that balance the trade-off between interval size and nuisance particle contamination, and its use is demonstrated on a time of flight data set for Cf-252. The particular properties of the optimal intervals for the demonstration data are explored in detail.
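
    A simplified sketch of the interval trade-off described above, assuming (hypothetically) Gaussian travel-time models for gammas and neutrons and equal event counts: widening the accepted neutron window keeps more neutron events but admits more gamma contamination. The numerical parameters are invented for illustration and are not from the Cf-252 data set.

```python
from scipy.stats import norm

# Hypothetical travel-time models (nanoseconds); the real analysis would fit
# these distributions to the measured time-of-flight spectrum.
gamma_tof = norm(loc=3.0, scale=0.8)      # prompt gammas arrive early
neutron_tof = norm(loc=45.0, scale=6.0)   # slower neutrons arrive later

def contamination(lo, hi):
    """Fraction of events inside [lo, hi] expected to be gammas, assuming
    equal numbers of gamma and neutron events for simplicity."""
    g = gamma_tof.cdf(hi) - gamma_tof.cdf(lo)
    n = neutron_tof.cdf(hi) - neutron_tof.cdf(lo)
    return g / (g + n)

# Trade-off: a wider neutron window keeps more events but admits more gammas.
for lo in (5.0, 10.0, 20.0, 30.0):
    hi = 70.0
    kept = neutron_tof.cdf(hi) - neutron_tof.cdf(lo)
    print(f"window [{lo:4.1f}, {hi}] ns: keep {kept:.2%} of neutrons, "
          f"contamination {contamination(lo, hi):.2e}")
```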

  19. Grading Homework to Emphasize Problem-Solving Process Skills

    Science.gov (United States)

    Harper, Kathleen A.

    2012-01-01

    This article describes a grading approach that encourages students to employ particular problem-solving skills. Some strengths of this method, called "process-based grading," are that it is easy to implement, requires minimal time to grade, and can be used in conjunction with either an online homework delivery system or paper-based homework.

  20. Inference with minimal Gibbs free energy in information field theory

    International Nuclear Information System (INIS)

    Ensslin, Torsten A.; Weig, Cornelius

    2010-01-01

    Non-linear and non-Gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT, and show that previous renormalization results emerge naturally. They can be understood as being the Gaussian approximation to the full posterior probability, which has maximal cross information with it. We derive optimized estimators for three applications, to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from Poissonian data with background counts and point spread function, as it is needed for gamma ray astronomy and for cosmography using photometric galaxy redshifts, (ii) inference of a Gaussian signal with unknown spectrum, and (iii) inference of a Poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally we explain how Gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-Gaussian posterior.

  1. 10 CFR 20.1406 - Minimization of contamination.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Minimization of contamination. 20.1406 Section 20.1406... License Termination § 20.1406 Minimization of contamination. (a) Applicants for licenses, other than early... procedures for operation will minimize, to the extent practicable, contamination of the facility and the...

  2. Periodic Solutions for Circular Restricted -Body Problems

    Directory of Open Access Journals (Sweden)

    Xiaoxiao Zhao

    2013-01-01

    Full Text Available For circular restricted -body problems, we study the motion of a sufficiently small mass point (called the zero mass point) in the plane of equal masses located at the vertices of a regular polygon. By using variational minimizing methods, for some , we prove the existence of a noncollision periodic solution for the zero mass point with some fixed winding number.

  3. The gravitino-overproduction problem in inflaton decay

    International Nuclear Information System (INIS)

    Kawasaki, M.; Yanagida, T.T.; Tokyo Univ.

    2006-11-01

    We show that the gravitino-overproduction problem is prevalent among inflation models in supergravity. An inflaton field generically acquires an (effective) non-vanishing auxiliary field if the Kaehler potential is non-minimal. The inflaton field then decays into a pair of gravitinos, thereby severely constraining many of the inflation models, especially in the case of gravity-mediated SUSY breaking. (orig.)

  4. A minimal architecture for joint action

    DEFF Research Database (Denmark)

    Vesper, Cordula; Butterfill, Stephen; Knoblich, Günther

    2010-01-01

    What kinds of processes and representations make joint action possible? In this paper we suggest a minimal architecture for joint action that focuses on representations, action monitoring and action prediction processes, as well as ways of simplifying coordination. The architecture spells out minimal requirements for an individual agent to engage in a joint action. We discuss existing evidence in support of the architecture as well as open questions that remain to be empirically addressed. In addition, we suggest possible interfaces between the minimal architecture and other approaches to joint action. The minimal architecture has implications for theorizing about the emergence of joint action, for human-machine interaction, and for understanding how coordination can be facilitated by exploiting relations between multiple agents’ actions and between actions and the environment.

  5. Minimal Walking Technicolor

    DEFF Research Database (Denmark)

    Foadi, Roshan; Frandsen, Mads Toudal; A. Ryttov, T.

    2007-01-01

    Different theoretical and phenomenological aspects of the Minimal and Nonminimal Walking Technicolor theories have recently been studied. The goal here is to make the models ready for collider phenomenology. We do this by constructing the low energy effective theory containing scalars, pseudoscalars, vector mesons and other fields predicted by the minimal walking theory. We construct their self-interactions and interactions with standard model fields. Using the Weinberg sum rules, opportunely modified to take into account the walking behavior of the underlying gauge theory, we find interesting relations for the spin-one spectrum. We derive the electroweak parameters using the newly constructed effective theory and compare the results with the underlying gauge theory. Our analysis is sufficiently general such that the resulting model can be used to represent a generic walking technicolor...

  6. Minimal Marking: A Success Story

    Directory of Open Access Journals (Sweden)

    Anne McNeilly

    2014-11-01

    Full Text Available The minimal-marking project conducted in Ryerson’s School of Journalism throughout 2012 and early 2013 resulted in significantly higher grammar scores in two first-year classes of minimally marked university students when compared to two traditionally marked classes. The “minimal-marking” concept (Haswell, 1983), which requires dramatically more student engagement, resulted in more successful learning outcomes for surface-level knowledge acquisition than the more traditional approach of “teacher-corrects-all.” Results suggest it would be effective, not just for grammar, punctuation, and word usage, the objective here, but for any material that requires rote-memory learning, such as the Associated Press or Canadian Press style rules used by news publications across North America.

  7. Investigation of Free Particle Propagator with Generalized Uncertainty Problem

    International Nuclear Information System (INIS)

    Hassanabadi, H.; Ghobakhloo, F.

    2016-01-01

    We consider the Schrödinger equation with a generalized uncertainty principle for a free particle. We then transform the problem into a second-order ordinary differential equation and thereby obtain the corresponding propagator. The result of ordinary quantum mechanics is recovered for vanishing minimal length parameter.

  8. Sexual Problems of Nigerian Adolescents and their Implications for ...

    African Journals Online (AJOL)

    It was discussed that school counselors should introduce sex counselling where discussions would centre on how to minimize the effects of unhealthy sexual behaviour among youths. Keywords: Sexuality; Problems; Adolescents; Nigeria. The Nigerian Journal of Guidance and Counselling Vol. 11 (1) 2006: pp. 70-75 ...

  9. Ant colony optimization techniques for the hamiltonian p-median problem

    Directory of Open Access Journals (Sweden)

    M. Zohrehbandian

    2010-12-01

    Full Text Available Location-routing problems involve locating a number of facilities among candidate sites and establishing delivery routes to a set of users in such a way that the total system cost is minimized. A special case of these problems is the Hamiltonian p-median problem (HpMP). This research applies the metaheuristic method of ant colony optimization (ACO) to solve the HpMP. Modifications are made to the ACO algorithm used to solve the traditional vehicle routing problem (VRP) in order to allow the search of the optimal solution of the HpMP. A computational experiment with this metaheuristic algorithm is reported as well.

  10. Integrating packing and distribution problems and optimization through mathematical programming

    Directory of Open Access Journals (Sweden)

    Fabio Miguel

    2016-06-01

    Full Text Available This paper analyzes the integration of two combinatorial problems that frequently arise in production and distribution systems. One is the Bin Packing Problem (BPP), which involves finding an ordering of some objects of different volumes to be packed into the minimal number of containers of the same or different size. An optimal solution to this NP-hard problem can be approximated by means of meta-heuristic methods. On the other hand, we consider the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW), which is a variant of the Travelling Salesman Problem (again an NP-hard problem) with extra constraints. Here we model those two problems in a single framework and use an evolutionary meta-heuristic to solve them jointly. Furthermore, we use data from a real-world company as a test-bed for the method introduced here.
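
    As a small, self-contained example of the bin-packing side of the framework, the sketch below uses the classical First-Fit Decreasing heuristic; it is a textbook approximation shown only for illustration, not the evolutionary meta-heuristic used in the paper, and the item volumes are made up.

```python
def first_fit_decreasing(volumes, capacity):
    """First-Fit Decreasing heuristic for the Bin Packing Problem: sort items
    by decreasing volume and place each one into the first bin that still has
    room, opening a new bin when none fits."""
    bins = []          # each bin is a list of item volumes
    free = []          # remaining capacity of each bin
    for v in sorted(volumes, reverse=True):
        for i, room in enumerate(free):
            if v <= room:
                bins[i].append(v)
                free[i] -= v
                break
        else:
            bins.append([v])
            free.append(capacity - v)
    return bins

items = [4, 8, 1, 4, 2, 1, 7, 3, 6, 5]
for i, b in enumerate(first_fit_decreasing(items, capacity=10)):
    print(f"bin {i}: {b} (used {sum(b)}/10)")
```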

  11. On the complexity of container stowage planning problems

    DEFF Research Database (Denmark)

    Tierney, Kevin; Pacino, Dario; Jensen, Rune Møller

    2014-01-01

    The optimization of container ship and depot operations embeds the k-shift problem, in which containers must be stowed in stacks such that at most k containers must be removed in order to reach containers below them. We first solve an open problem introduced by Avriel et al. (2000) by showing that changing from uncapacitated to capacitated stacks reduces the complexity of this problem from NP-complete to polynomial. We then examine the complexity of the current state-of-the-art abstraction of container ship stowage planning, wherein containers and slots are grouped together. To do this, we define the hatch overstow problem, in which a set of containers are placed on top of the hatches of a container ship such that the number of containers that are stowed on hatches that must be accessed is minimized. We show that this problem is NP-complete by a reduction from the set-covering problem, which means

  12. The problem 7 forming triangular geometric line field

    Directory of Open Access Journals (Sweden)

    Travush Vladimir Iljich

    2016-01-01

    Full Text Available A method for forming triangular networks in the field is investigated, and the conditions of the problem of locating a triangular network in a given area are stated. The criteria for assessing the effectiveness of a solution are the minimum number of distinct sizes of the dome elements and the possibility of pre-assembly and pre-stressing. The problem is solved for one arrangement of a triangular network within a compatible spherical triangle and, accordingly, on the sphere. Optimization of a triangular geometric network on a sphere with respect to the criterion of a minimum number of element sizes can be achieved by placing the system in an irregular hexagon inscribed in a circle of minimal size, with maximum regular hexagons.

  13. THE EFFECT OF EDIBLE COATING, CALCIUM CHLORIDE, AND PLASTIC PACKAGING ON THE QUALITY OF MINIMALLY PROCESSED PINEAPPLE (Ananas comosus Merr.)

    Directory of Open Access Journals (Sweden)

    Indera Sakti Nasution

    2012-06-01

    Full Text Available The problem that often occurs before consuming fresh pineapple is that it takes a long time to peel the fruit. Minimal processing of pineapple is one solution for practical use by consumers who would like to consume it fresh. However, minimally processed pineapple is easily damaged and has a short shelf life. The aims of this study are to determine the quality of minimally processed pineapple coated with an edible coating, the effect of calcium chloride dipping, and the effect of plastic packaging during low-temperature storage. A combination of cassava starch and glycerol was used as the edible coating, and the pineapple was dipped in CaCl2 for 1 minute or 2 minutes. Products were packaged using polyethylene, polypropylene, or left without packaging. It was found that dipping the product in CaCl2 for 2 minutes and packaging it in polypropylene (plastic) can prolong the shelf life of minimally processed pineapple stored at 5°C up to 8 days.

  14. Solving the gas transmission problem with consideration of the compressors

    OpenAIRE

    Bakhouya, Bouchra; de Wolf, Daniel

    2008-01-01

    In [7], De Wolf and Smeers consider the problem of gas distribution through a network of pipelines. The problem was formulated as a cost minimization subject to nonlinear flow-pressure relations, material balances and pressure bounds. This model no longer reflects the current situation on the gas market. Today, the transportation and gas buying functions are separated. This work considers the new situation for the transportation company. The objective for the tran...

  15. The Adjoint Method for the Inverse Problem of Option Pricing

    Directory of Open Access Journals (Sweden)

    Shou-Lei Wang

    2014-01-01

    Full Text Available The estimation of implied volatility is a typical PDE inverse problem. In this paper, we propose the TV-L1 model for identifying the implied volatility. The optimal volatility function is found by minimizing the cost functional measuring the discrepancy. The gradient is computed via the adjoint method, which provides us with an exact value of the gradient needed for the minimization procedure. We use the limited memory quasi-Newton algorithm (L-BFGS) to find the optimal volatility, and numerical examples show the effectiveness of the presented method.
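
    A minimal sketch of the optimization step, assuming a toy linear forward operator in place of the option-pricing PDE: the cost functional and its exact gradient (the role played by the adjoint method in the paper) are fed to SciPy's L-BFGS-B routine. All data here are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

# Toy linear "forward operator"; in the option-pricing problem this role is
# played by the PDE solver and the gradient comes from the adjoint equation.
rng = np.random.default_rng(3)
F = rng.normal(size=(40, 10))
sigma_true = np.abs(rng.normal(size=10))
data = F @ sigma_true

def cost_and_grad(sigma):
    r = F @ sigma - data
    cost = 0.5 * r @ r                 # discrepancy functional
    grad = F.T @ r                     # exact gradient (adjoint in the toy case)
    return cost, grad

res = minimize(cost_and_grad, x0=np.zeros(10), jac=True, method="L-BFGS-B")
print("converged:", res.success, " misfit:", res.fun)
print("max error:", np.abs(res.x - sigma_true).max())
```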

  16. On a variational principle for shape optimization and elliptic free boundary problems

    Directory of Open Access Journals (Sweden)

    Raúl B. González De Paz

    2009-02-01

    Full Text Available A variational principle for several free boundary value problems using a relaxation approach is presented. The relaxed energy functional is concave and it is defined on a convex set, so that the minimizing points are characteristic functions of sets. As a consequence of the first order optimality conditions, it is shown that the corresponding sets are domains bounded by free boundaries, so that the equivalence of the solution of the relaxed problem with the solutions of several free boundary value problems is proved. Keywords: Calculus of variations, optimization, free boundary problems.

  17. Technology applications for radioactive waste minimization

    International Nuclear Information System (INIS)

    Devgun, J.S.

    1994-01-01

    The nuclear power industry has achieved one of the most successful examples of waste minimization. The annual volume of low-level radioactive waste shipped for disposal per reactor has decreased to approximately one-fifth the volume about a decade ago. In addition, the curie content of the total waste shipped for disposal has decreased. This paper will discuss the regulatory drivers and economic factors for waste minimization and describe the application of technologies for achieving waste minimization for low-level radioactive waste with examples from the nuclear power industry

  18. Transience and capacity of minimal submanifolds

    DEFF Research Database (Denmark)

    Markvorsen, Steen; Palmer, V.

    2003-01-01

    We prove explicit lower bounds for the capacity of annular domains of minimal submanifolds P^m in ambient Riemannian spaces N^n with sectional curvatures bounded from above. We characterize the situations in which the lower bounds for the capacity are actually attained. Furthermore we apply these bounds to prove that Brownian motion defined on a complete minimal submanifold is transient when the ambient space is a negatively curved Hadamard-Cartan manifold. The proof stems directly from the capacity bounds and also covers the case of minimal submanifolds of dimension m > 2 in Euclidean spaces.

  19. On a Minimum Problem in Smectic Elastomers

    International Nuclear Information System (INIS)

    Buonsanti, Michele; Giovine, Pasquale

    2008-01-01

    Smectic elastomers are layered materials exhibiting a solid-like elastic response along the layer normal and a rubbery one in the plane. Balance equations for smectic elastomers are derived from the general theory of continua with constrained microstructure. In this work we investigate a very simple minimum problem based on multi-well potentials where the microstructure is taken into account. The set of polymeric strains minimizing the elastic energy contains a one-parameter family of simple strains associated with a micro-variation of the degree of freedom. We develop the energy functional through two terms, the first one nematic and the second one accounting for the tilting phenomenon; afterwards, working in the rubber elasticity framework, we minimize over the tilt rotation angle and extract the engineering stress.

  20. On the isoperimetric rigidity of extrinsic minimal balls

    DEFF Research Database (Denmark)

    Markvorsen, Steen; Palmer, V.

    2003-01-01

    We consider an m-dimensional minimal submanifold P and a metric R-sphere in the Euclidean space R^n. If the sphere has its center p on P, then it will cut out a well defined connected component of P which contains this center point. We call this connected component an extrinsic minimal R-ball of P. The quotient of the volume of the extrinsic ball and the volume of its boundary is not larger than the corresponding quotient obtained in the space form standard situation, where the minimal submanifold is the totally geodesic linear subspace R^m. Here we show that if the minimal submanifold has dimension larger than 3, if P is not too curved along the boundary of an extrinsic minimal R-ball, and if the inequality alluded to above is an equality for the extrinsic minimal ball, then the minimal submanifold is totally geodesic.

  1. A New Approach to the Container Positioning Problem

    DEFF Research Database (Denmark)

    Ahmt, Jonas; Sigtenbjerggaard, Jonas Skott; Lusby, Richard Martin

    In this paper the Container Positioning Problem is revisited. This problem arises at busy container terminals and requires one to minimize the use of block cranes in handling the containers that must wait at the terminal until their next means of transportation. We propose a new Mixed Integer Programming model that not only improves on earlier attempts at this problem, but also better reflects reality. In particular, the proposed model adopts a preference to reshuffle containers in line with a just-in-time concept, as it is assumed that data is more accurate the closer to a container’s scheduled ... for solving larger instances of the problem. We show that this new formulation drastically outperforms previous attempts at the problem through a direct comparison on instances available in the literature. Furthermore, we also show that the rolling horizon based heuristic can further reduce the solution time...

  2. Safety control and minimization of radioactive wastes

    International Nuclear Information System (INIS)

    Wang Jinming; Rong Feng; Li Jinyan; Wang Xin

    2010-01-01

    Compared with the developed countries, the safety control and minimization of radwastes in China are under-developed. Research on measures for the safety control and minimization of radwastes is very important for the safe control of radwastes and for reducing treatment and disposal costs and environmental radiation hazards. This paper systematically discusses the safety control and minimization of the radwastes produced in the nuclear fuel cycle, nuclear technology applications and the decommissioning of nuclear facilities, and provides some measures and methods for the safety control and minimization of radwastes. (authors)

  3. The OTD Robotics Waste Minimization Program

    International Nuclear Information System (INIS)

    Couture, S.A.

    1992-04-01

    The danger to human health and safety posed by exposure to transuranic (TRU) and Pu contaminated materials necessitates remote processing in confined environments. Currently these operations are carried out in gloveboxes and hot-cells by human operators using lead- lined gloves or teleoperated manipulators. Protective clothing worn by operators during gloved operations has contributed significantly to the waste problems currently facing site remediators. The DOE Environmental Restoration and Waste Management (ER/WM) Program is in the process of developing and demonstrating technologies to assist in the remediation of sites that have accumulated wastes generated using these processes over the past five decades. Recognizing that continued use of existing production, recovery and waste treatment systems will compound the remediation problem, DOE has made a commitment to waste minimization. To reduce waste generation during weapons production and waste processing operations, automated processes are being developed and demonstrated for use in future DOE processing facilities as part of OTD's Robotics Technology Development Program. These technologies are currently being applied to pyrochemical processing systems to demonstrate conversion of plutonium oxide to metal. However, these technologies are expected to have applications in a variety of waste processing systems including those used to treat high-level tank wastes, buried wastes requiring remote processing, mixed wastes, and unknown hazardous materials. In addition to reducing the future waste burden of DOE, automated processes are an effective way to comply with existing and anticipated federal, state, and local regulations related to personal health and safety and the health of the environment

  4. Order Batching in Warehouses by Minimizing Total Tardiness: A Hybrid Approach of Weighted Association Rule Mining and Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Amir Hossein Azadnia

    2013-01-01

    Full Text Available One of the cost-intensive issues in managing warehouses is the order picking problem, which deals with the retrieval of items from their storage locations in order to meet customer requests. Many solution approaches have been proposed in order to minimize traveling distance in the process of order picking. However, in practice, customer orders have to be completed by certain due dates in order to avoid tardiness, which is neglected in most of the related scientific papers. Consequently, we propose a novel solution approach, consisting of four phases, in order to minimize tardiness. First of all, weighted association rule mining is used to calculate associations between orders with respect to their due dates. Next, a batching model based on binary integer programming is formulated to maximize the associations between orders within each batch. Subsequently, the order picking phase uses a Genetic Algorithm integrated with the Traveling Salesman Problem in order to identify the most suitable travel path. Finally, a Genetic Algorithm is applied for sequencing the constructed batches in order to minimize tardiness. Illustrative examples and comparisons are presented to demonstrate the proficiency and solution quality of the proposed approach.
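
    To make the sequencing objective concrete, the sketch below computes the total tardiness of a batch sequence processed back to back and compares an arbitrary order with a simple earliest-due-date baseline; the batch data are hypothetical and the Genetic Algorithm search used in the paper is not reproduced.

```python
def total_tardiness(sequence, proc_time, due_date):
    """Total tardiness of batches processed back to back in the given order."""
    t, tardiness = 0, 0
    for b in sequence:
        t += proc_time[b]
        tardiness += max(0, t - due_date[b])
    return tardiness

proc_time = {"B1": 4, "B2": 2, "B3": 6, "B4": 3}   # hypothetical batch data
due_date  = {"B1": 5, "B2": 4, "B3": 16, "B4": 7}

arbitrary = ["B3", "B1", "B4", "B2"]
edd = sorted(proc_time, key=lambda b: due_date[b])  # earliest-due-date baseline
print("arbitrary order:", total_tardiness(arbitrary, proc_time, due_date))
print("EDD order:      ", total_tardiness(edd, proc_time, due_date))
```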

  5. Segmentation of Synchrotron Radiation micro-Computed Tomography Images using Energy Minimization via Graph Cuts

    International Nuclear Information System (INIS)

    Meneses, Anderson A.M.; Giusti, Alessandro; Almeida, André P. de; Nogueira, Liebert; Braz, Delson; Almeida, Carlos E. de; Barroso, Regina C.

    2012-01-01

    The application of segmentation algorithms to Synchrotron Radiation X-Ray micro-Computed Tomography (SR-μCT) is an open research problem, due to the interesting and well-known characteristics of SR images, such as the phase contrast effect. The Energy Minimization via Graph Cuts (EMvGC) algorithm represents a state-of-the-art segmentation algorithm, presenting an enormous potential of application in SR-μCT imaging. We describe the application of the EMvGC algorithm with swap move for the segmentation of bone images acquired at the ELETTRA Laboratory (Trieste, Italy). - Highlights: ► Microstructures of Wistar rats' ribs are investigated with Synchrotron Radiation μCT imaging. ► The present work is part of a research on the effects of radiotherapy on the thoracic region. ► Application of the Energy Minimization via Graph Cuts algorithm for segmentation is described.

  6. Automated problem generation in Learning Management Systems: a tutorial

    Directory of Open Access Journals (Sweden)

    Jaime Romero

    2016-07-01

    Full Text Available The benefits of solving problems have been widely acknowledged in the literature. Implementing problem solving in e-learning platforms can ease its management and the learning process itself. However, it can also become a very time-consuming task, particularly when the number of problems to generate is high. In this tutorial we describe a methodology that we have developed to alleviate the workload of producing a large number of problems in Moodle for an undergraduate business course. This methodology follows a six-step process; it allows evaluating students' skills in problem solving, minimizes plagiarism, and provides immediate feedback. We expect this tutorial will encourage other educators to apply our six-step process, thus benefiting themselves and their students from its advantages.

  7. Simulation and Analysis of Converging Shock Wave Test Problems

    Energy Technology Data Exchange (ETDEWEB)

    Ramsey, Scott D. [Los Alamos National Laboratory; Shashkov, Mikhail J. [Los Alamos National Laboratory

    2012-06-21

    Results and analysis pertaining to the simulation of the Guderley converging shock wave test problem (and associated code verification hydrodynamics test problems involving converging shock waves) in the LANL ASC radiation-hydrodynamics code xRAGE are presented. One-dimensional (1D) spherical and two-dimensional (2D) axi-symmetric geometric setups are utilized and evaluated in this study, as is an instantiation of the xRAGE adaptive mesh refinement capability. For the 2D simulations, a 'Surrogate Guderley' test problem is developed and used to obviate subtleties inherent to the true Guderley solution's initialization on a square grid, while still maintaining a high degree of fidelity to the original problem, and minimally straining the general credibility of associated analysis and conclusions.

  8. Approximate k-NN delta test minimization method using genetic algorithms: Application to time series

    CERN Document Server

    Mateo, F; Gadea, Rafael; Sovilj, Dusan

    2010-01-01

    In many real-world problems, the existence of irrelevant input variables (features) hinders the predictive quality of the models used to estimate the output variables. In particular, time series prediction often involves building large regressors of artificial variables that can contain irrelevant or misleading information. Many techniques have arisen to confront the problem of accurate variable selection, including both local and global search strategies. This paper presents a method based on genetic algorithms that intends to find a globally optimal set of input variables minimizing the Delta Test criterion. The execution speed has been enhanced by substituting the exact nearest neighbor computation with its approximate version. The problems of scaling and projection of variables have been addressed. The developed method works in conjunction with MATLAB's Genetic Algorithm and Direct Search Toolbox. The goodness of the proposed methodology has been evaluated on several popular time series examples, and also ...
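
    For reference, a compact sketch of the idea follows: a genetic algorithm searches binary feature masks and scores each mask with the Delta Test computed from nearest neighbours in the selected subspace. For simplicity it uses exact (scikit-learn) rather than approximate nearest neighbours and synthetic data; it is an illustration of the criterion, not the MATLAB toolbox implementation described in the paper.

import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic regression problem: only the first two of six candidate inputs matter.
X = rng.normal(size=(300, 6))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=300)

def delta_test(mask):
    """Delta Test: half the mean squared output difference between each point
    and its nearest neighbour in the selected input subspace."""
    if not mask.any():
        return np.inf
    Xs = X[:, mask]
    # n_neighbors=2 because the closest neighbour of a point is the point itself.
    _, idx = NearestNeighbors(n_neighbors=2).fit(Xs).kneighbors(Xs)
    return 0.5 * np.mean((y - y[idx[:, 1]]) ** 2)

def ga_select(pop_size=20, generations=40, p_mut=0.1):
    n_feat = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_feat)).astype(bool)
    for _ in range(generations):
        scores = np.array([delta_test(ind) for ind in pop])
        elite = pop[np.argsort(scores)[: pop_size // 2]]
        # Uniform crossover between random elite parents, then bit-flip mutation.
        parents = elite[rng.integers(0, len(elite), size=(pop_size - len(elite), 2))]
        cross = rng.random((pop_size - len(elite), n_feat)) < 0.5
        children = np.where(cross, parents[:, 0], parents[:, 1])
        children ^= rng.random(children.shape) < p_mut
        pop = np.vstack([elite, children])
    scores = np.array([delta_test(ind) for ind in pop])
    return pop[np.argmin(scores)], scores.min()

mask, score = ga_select()
print("selected inputs:", np.flatnonzero(mask), "delta test:", round(score, 4))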

  9. Minimal string theory is logarithmic

    International Nuclear Information System (INIS)

    Ishimoto, Yukitaka; Yamaguchi, Shun-ichi

    2005-01-01

    We study the simplest examples of minimal string theory, whose worldsheet description is the unitary (p,q) minimal model coupled to two-dimensional gravity (Liouville field theory). In the Liouville sector, we show that four-point correlation functions of 'tachyons' exhibit logarithmic singularities, and that the theory turns out to be logarithmic. The relation with Zamolodchikov's logarithmic degenerate fields is also discussed. Our result holds for generic values of (p,q).

  10. TOKMINA, Toroidal Magnetic Field Minimization for Tokamak Fusion Reactor. TOKMINA-2, Total Power for Tokamak Fusion Reactor

    International Nuclear Information System (INIS)

    Hatch, A.J.

    1975-01-01

    1 - Description of problem or function: TOKMINA finds the minimum magnetic field, Bm, required at the toroidal coil of a Tokamak-type fusion reactor when the input is beta (ratio of plasma pressure to magnetic pressure), q (Kruskal-Shafranov plasma stability factor), and y (ratio of plasma radius to vacuum wall radius, rp/rw), and arrays of PT (total thermal power from both d-t and tritium breeding reactions), Pw (wall loading or power flux) and TB (thickness of blanket), following the method of Golovin et al. TOKMINA2 finds the total power, PT, of such a fusion reactor, given a specified magnetic field, Bm, at the toroidal coil. 2 - Method of solution: TOKMINA: the aspect ratio (a) is minimized, giving a minimum value for Bm. TOKMINA2: a search is made for PT; the value of PT which minimizes Bm to the required value within 50 Gauss is chosen. 3 - Restrictions on the complexity of the problem: Input arrays are presently dimensioned at 20. This restriction can be overcome by changing a dimension card.
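
    The TOKMINA2 search can be pictured as a one-dimensional root search on the total power. The sketch below shows such a search assuming a monotone relation between PT and the required coil field; the function bm_required is a placeholder (the actual Golovin-style relations are not reproduced here) and the numbers are invented.

def find_total_power(bm_target, bm_required, pt_low=100.0, pt_high=20000.0,
                     tol_gauss=50.0, max_iter=100):
    """Bisection search for the total thermal power PT whose minimum toroidal-coil
    field matches bm_target (tesla) to within tol_gauss.

    bm_required(PT) is a placeholder for the relation between total power and the
    minimum coil field; it is assumed to be monotone in PT.
    """
    tol_tesla = tol_gauss * 1.0e-4  # 1 gauss = 1e-4 tesla
    for _ in range(max_iter):
        pt_mid = 0.5 * (pt_low + pt_high)
        bm = bm_required(pt_mid)
        if abs(bm - bm_target) <= tol_tesla:
            return pt_mid
        if bm < bm_target:
            pt_low = pt_mid
        else:
            pt_high = pt_mid
    return pt_mid

# Example with a made-up monotone relation, purely to exercise the search.
print(find_total_power(4.0, lambda pt: 2.0 + 1.5e-3 * pt ** 0.75))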

  11. Fast multi-dimensional NMR by minimal sampling

    Science.gov (United States)

    Kupče, Ēriks; Freeman, Ray

    2008-03-01

    A new scheme is proposed for very fast acquisition of three-dimensional NMR spectra based on minimal sampling, instead of the customary step-wise exploration of all of evolution space. The method relies on prior experiments to determine accurate values for the evolving frequencies and intensities from the two-dimensional 'first planes' recorded by setting t1 = 0 or t2 = 0. With this prior knowledge, the entire three-dimensional spectrum can be reconstructed by an additional measurement of the response at a single location (t1∗,t2∗) where t1∗ and t2∗ are fixed values of the evolution times. A key feature is the ability to resolve problems of overlap in the acquisition dimension. Applied to a small protein, agitoxin, the three-dimensional HNCO spectrum is obtained 35 times faster than systematic Cartesian sampling of the evolution domain. The extension to multi-dimensional spectroscopy is outlined.
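
    A toy illustration of how a single extra sample can resolve an overlap ambiguity: when two peaks share the same acquisition frequency, the first planes give their F1 and F2 frequencies and intensities but not the pairing, and the measured response at one extra point (t1*, t2*) selects the consistent pairing. All frequencies, intensities, and the sampling point below are invented; this is not the authors' reconstruction code.

import numpy as np
from itertools import permutations

# Two cross peaks share the same acquisition (F3) frequency, so the first planes
# reveal their F1 and F2 frequencies but not which F1 goes with which F2.
f1 = np.array([200.0, 350.0])   # Hz, known from the t2 = 0 plane
f2 = np.array([1200.0, 900.0])  # Hz, known from the t1 = 0 plane
amps = np.array([1.0, 0.7])     # intensities, also known from the first planes

t1s, t2s = 3.1e-3, 2.3e-3       # the single extra sampling point, in seconds

def response(pairing):
    """Complex response at (t1*, t2*) if F1[k] is paired with F2[pairing[k]]."""
    return sum(a * np.exp(2j * np.pi * (w1 * t1s + f2[p] * t2s))
               for a, w1, p in zip(amps, f1, pairing))

true_pairing = (1, 0)                 # ground truth used to simulate the measurement
measured = response(true_pairing)

# Pick the pairing whose predicted response best matches the measured point.
best = min(permutations(range(2)), key=lambda p: abs(response(p) - measured))
print("recovered pairing:", best)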

  12. Modifying gummy smile: a minimally invasive approach.

    Science.gov (United States)

    Abdullah, Walid Ahmed; Khalil, Hesham S; Alhindi, Maryam M; Marzook, Hamdy

    2014-11-01

    Excessive gingival display is a problem that can be managed by a variety of procedures, both non-surgical and surgical. The underlying cause of the gummy smile can affect the type of procedure selected, and most patients prefer minimally invasive procedures with outstanding results. The authors describe a minimally invasive lip repositioning technique for the management of gummy smile. Twelve patients (10 females, 2 males) with a gingival display of 4 mm or more were operated on under local anesthesia using a modified lip repositioning technique. Patients were followed up at 1, 3, 6 and 12 months, and gingival display was measured at each follow-up visit. The gingival mucosa was dissected and the levator labii superioris and depressor septi muscles were freed and repositioned in a lower position; the levator labii superioris muscles were held in the lower position with circumdental sutures for 10 days. Both the surgeon's and the patient's satisfaction with the surgical outcome was recorded at each follow-up visit. At the early stage of follow-up the main complaints were a feeling of tension in the upper lip and circumoral area and mild pain, which was managed with analgesics. One month postoperatively, the gingival display in all patients was between 2 and 4 mm, with a mean of 2.6 mm; 10 patients were satisfied with the results. Three months postoperatively, the gingival display was between 2 and 5 mm, with a mean of 3 mm; 8 patients were satisfied with the results as they gave scores between. The surgeons were satisfied with the outcome in 8 patients at the three-month follow-up. The same results were found at the 6- and 12-month follow-up periods without any changes. Complete relapse was recorded in only one case at the third postoperative month. This study showed that the proposed lip

  13. Solving the Dial-a-Ride Problem using Genetic Algorithms

    DEFF Research Database (Denmark)

    Jørgensen, Rene Munk; Larsen, Jesper; Bergvinsdottir, Kristin Berg

    2007-01-01

    In the Dial-a-Ride problem (DARP), customers request transportation from an operator. A request consists of a specified pickup location and destination location along with a desired departure or arrival time and capacity demand. The aim of DARP is to minimize transportation cost while satisfying ...... routing problems for the vehicles using a routing heuristic. The algorithm is implemented in Java and tested on publicly available data sets. The new solution method has achieved solutions comparable with the current state-of-the-art methods....
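
    A simplified sketch of the cluster-first, route-second idea: a genetic algorithm assigns each request to a vehicle, and a greedy routing heuristic orders pickups and deliveries within each vehicle. Time windows and capacities are ignored and all data are invented, so this is only an illustration of the decomposition, not the algorithm of the paper.

import math
import random

random.seed(1)

# Toy requests: (pickup_xy, delivery_xy); a single depot at the origin; 2 vehicles.
requests = [((2, 1), (8, 3)), ((1, 7), (5, 9)), ((9, 8), (3, 2)), ((6, 6), (2, 8))]
N_VEHICLES = 2
DEPOT = (0, 0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_length(assigned):
    """Greedy route for one vehicle: visit the nearest feasible stop next,
    never delivering a request before picking it up."""
    pending = {i: "pickup" for i in assigned}
    pos, length = DEPOT, 0.0
    while pending:
        stops = [(i, requests[i][0] if s == "pickup" else requests[i][1])
                 for i, s in pending.items()]
        i, point = min(stops, key=lambda t: dist(pos, t[1]))
        length += dist(pos, point)
        pos = point
        if pending[i] == "pickup":
            pending[i] = "delivery"
        else:
            del pending[i]
    return length + dist(pos, DEPOT)

def fitness(chrom):
    """Total distance over all vehicles; chrom[i] is the vehicle serving request i."""
    return sum(route_length([i for i, v in enumerate(chrom) if v == veh])
               for veh in range(N_VEHICLES))

def ga(pop_size=20, generations=100):
    pop = [[random.randrange(N_VEHICLES) for _ in requests] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(requests))
            child = a[:cut] + b[cut:]                       # one-point crossover
            if random.random() < 0.2:                       # reassignment mutation
                child[random.randrange(len(requests))] = random.randrange(N_VEHICLES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = ga()
print("assignment:", best, "total distance:", round(fitness(best), 2))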

  14. A Fast and Accurate Algorithm for l1 Minimization Problems in Compressive Sampling (Preprint)

    Science.gov (United States)

    2013-01-22

    However, updating u_{k+1} via the formulation of Step 2 in Algorithm 1 can be implemented through the use of the component-wise Gauss-Seidel iteration which...may accelerate the rate of convergence of the algorithm and therefore reduce the total CPU-time consumed. The efficiency of component-wise Gauss-Seidel ...Micchelli, L. Shen, and Y. Xu, A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models, Inverse Problems, 28 (2012), p
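
    The record above is only a snippet. As generic background on the l1 minimization problem it refers to, the sketch below implements plain ISTA (proximal gradient with soft thresholding) on a synthetic compressive sampling instance; it is not the Gauss-Seidel-accelerated proximity algorithm of the cited works.

import numpy as np

rng = np.random.default_rng(3)

# Compressive sampling toy problem: recover a sparse x from y = A x with m < n.
n, m, k = 200, 60, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, lam=0.01, n_iter=500):
    """Iterative shrinkage-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))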

  15. Pengaruh Pelapis Bionanokomposit terhadap Mutu Mangga Terolah Minimal

    Directory of Open Access Journals (Sweden)

    Ata Aditya Wardana

    2017-04-01

    Full Text Available Minimally-processed mango is a perishable product due to high respiration, transpiration, and microbial decay. Edible coating is one alternative method to maintain the quality of minimally-processed mango. The objective of this study was to evaluate the effects of a bionanocomposite edible coating made from tapioca and ZnO nanoparticles (NP-ZnO) on the quality of minimally-processed mango cv. Arumanis stored for 12 days at 8°C. Combinations of tapioca and NP-ZnO (0, 1, 2% by weight of tapioca) were used to coat the minimally-processed mango. The results showed that the bionanocomposite edible coatings were able to maintain the quality of minimally-processed mango during the storage period. The bionanocomposite from tapioca + NP-ZnO (2% by weight of tapioca) was the most effective at limiting weight loss and changes in firmness, browning index, total acidity, total soluble solids, respiration, and microbial counts. Thus, bionanocomposite edible coatings may provide an alternative method to maintain the storage quality of minimally-processed mango.

  16. Hazardous waste minimization tracking system

    International Nuclear Information System (INIS)

    Railan, R.

    1994-01-01

    Under RCRA sections 3002(b) and 3005(h), hazardous waste generators and owners/operators of treatment, storage, and disposal facilities (TSDFs) are required to certify that they have a program in place to reduce the volume or quantity and toxicity of hazardous waste to the degree determined to be economically practicable. In many cases there are environmental as well as economic benefits for agencies that pursue pollution prevention options. Several state governments have already enacted waste minimization legislation (e.g., the Massachusetts Toxics Use Reduction Act of 1989, and the Oregon Toxics Use Reduction and Hazardous Waste Reduction Act of July 2, 1989). About twenty-six other states have established legislation that mandates some type of waste minimization program and/or facility planning. The need to address the HAZMIN (Hazardous Waste Minimization) Program at government agencies and private industries has prompted us to identify the importance of managing the HAZMIN Program and tracking various aspects of the program, as well as the progress made in this area. "WASTE" is a tracking system which can be used and modified to maintain information related to a Hazardous Waste Minimization Program in a manageable fashion. The program maintains, modifies, and retrieves information related to hazardous waste minimization and recycling, and provides automated report-generating capabilities. It has a built-in menu, which can be printed either in part or in full. There are instructions on preparing the Annual Waste Report and the Annual Recycling Report. The program is very user friendly. The program is available on 3.5-inch or 5 1/4-inch floppy disks; a computer with 640K of memory is required.

  17. Supersymmetric hybrid inflation with non-minimal Kahler potential

    International Nuclear Information System (INIS)

    Bastero-Gil, M.; King, S.F.; Shafi, Q.

    2007-01-01

    Minimal supersymmetric hybrid inflation based on a minimal Kahler potential predicts a spectral index n_s ≳ 0.98. On the other hand, WMAP three-year data prefer a central value n_s ≈ 0.95. We propose a class of supersymmetric hybrid inflation models based on the same minimal superpotential but with a non-minimal Kahler potential. Including radiative corrections using the one-loop effective potential, we show that the prediction for the spectral index is sensitive to the small non-minimal corrections and can lead to a significantly red-tilted spectrum, in agreement with WMAP.

  18. Dual and primal mixed Petrov-Galerkin finite element methods in heat transfer problems

    International Nuclear Information System (INIS)

    Loula, A.F.D.; Toledo, E.M.

    1988-12-01

    New mixed finite element formulations for the steady-state heat transfer problem are presented, with no limitation in the choice of conforming finite element spaces. By adding least-squares residual forms of the governing equations to the classical Galerkin formulation, the original saddle point problem is transformed into a minimization problem. Stability analysis, error estimates and numerical results are presented, confirming the error estimates and the good performance of this new formulation. (author) [pt]
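
    In outline (a generic sketch of the idea, with an illustrative stabilization weight delta; the exact residual terms and weights used in the report may differ): for steady conduction with flux q = -k grad u, the classical dual mixed Galerkin problem is a saddle point problem, and adding a weighted least-squares residual of the flux law renders it coercive.

% Generic sketch of a least-squares-stabilized mixed formulation for steady heat
% conduction (u = 0 on the boundary); \delta is an illustrative stabilization weight,
% not necessarily the one chosen in the report.
\[
  -\nabla\!\cdot(k\nabla u) = f \quad\text{in } \Omega,
  \qquad
  \mathbf{q} + k\nabla u = \mathbf{0}.
\]
Classical mixed Galerkin: find $(\mathbf{q}_h, u_h)$ such that, for all $(\mathbf{p}_h, v_h)$,
\[
  \int_\Omega k^{-1}\mathbf{q}_h\cdot\mathbf{p}_h \,d\Omega
  - \int_\Omega u_h\,\nabla\!\cdot\mathbf{p}_h \,d\Omega = 0,
  \qquad
  \int_\Omega v_h\,\nabla\!\cdot\mathbf{q}_h \,d\Omega
  = \int_\Omega f\,v_h \,d\Omega,
\]
a saddle point problem. Adding the weighted least-squares residual of the flux law,
\[
  \delta \int_\Omega \bigl(\mathbf{q}_h + k\nabla u_h\bigr)\cdot
                     \bigl(\mathbf{p}_h + k\nabla v_h\bigr)\,d\Omega,
\]
turns it into a coercive, minimization-type problem and relaxes the usual
compatibility restriction on the choice of finite element spaces.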

  19. Null-polygonal minimal surfaces in AdS_4 from perturbed W minimal models

    Energy Technology Data Exchange (ETDEWEB)

    Hatsuda, Yasuyuki [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Ito, Katsushi [Tokyo Institute of Technology (Japan). Dept. of Physics; Satoh, Yuji [Tsukuba Univ., Sakura, Ibaraki (Japan). Inst. of Physics

    2012-11-15

    We study the null-polygonal minimal surfaces in AdS_4, which correspond to the gluon scattering amplitudes/Wilson loops in N=4 super Yang-Mills theory at strong coupling. The area of the minimal surfaces with n cusps is characterized by the thermodynamic Bethe ansatz (TBA) integral equations or the Y-system of the homogeneous sine-Gordon model, which is regarded as the SU(n-4)_4/U(1)^(n-5) generalized parafermion theory perturbed by the weight-zero adjoint operators. Based on the relation to the TBA systems of the perturbed W minimal models, we solve the TBA equations by using conformal perturbation theory, and obtain the analytic expansion of the remainder function around the UV/regular-polygonal limit for n = 6 and 7. We compare the rescaled remainder function for n = 6 with the two-loop one, to observe that they are close to each other, similarly to the AdS_3 case.

  20. Assessment of LANL waste minimization plan

    International Nuclear Information System (INIS)

    Davis, K.D.; McNair, D.A.; Jennrich, E.A.; Lund, D.M.

    1991-04-01

    The objective of this report is to evaluate the Los Alamos National Laboratory (LANL) Waste Minimization Plan to determine whether it meets applicable internal (DOE) and regulatory requirements. The intent of the effort is to assess the higher-level elements of the documentation, rather than the detailed mechanics of the program's implementation, to determine whether they have been addressed. The requirement for a Waste Minimization Plan is based on several DOE Orders as well as environmental laws and regulations. Table 2-1 provides a list of the major documents and regulations that require waste minimization efforts; the table also summarizes the applicable requirements.