Adaptive Alternating Minimization Algorithms
Niesen, Urs; Wornell, Gregory
2007-01-01
The classical alternating minimization (or projection) algorithm has been successful in the context of solving optimization problems over two variables or equivalently of finding a point in the intersection of two sets. The iterative nature and simplicity of the algorithm has led to its application to many areas such as signal processing, information theory, control, and finance. A general set of sufficient conditions for the convergence and correctness of the algorithm is quite well-known when the underlying problem parameters are fixed. In many practical situations, however, the underlying problem parameters are changing over time, and the use of an adaptive algorithm is more appropriate. In this paper, we study such an adaptive version of the alternating minimization algorithm. As a main result of this paper, we provide a general set of sufficient conditions for the convergence and correctness of the adaptive algorithm. Perhaps surprisingly, these conditions seem to be the minimal ones one would expect in ...
On the Hopcroft's minimization algorithm
Paun, Andrei
2007-01-01
We show that the absolute worst-case time complexity of Hopcroft's minimization algorithm applied to unary languages is reached only for de Bruijn words. A previous paper by Berstel and Carton gave de Bruijn words as an example of a language requiring O(n log n) steps when the splitting sets are carefully chosen and processed in FIFO order. We refine that result by showing that the Berstel/Carton example is in fact the absolute worst case for unary languages. We also show that a LIFO implementation does not achieve the same worst-case time complexity for unary languages. Lastly, we show that the same result also holds for cover automata and for the modification of Hopcroft's algorithm used in the minimization of cover automata.
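The FIFO-versus-LIFO distinction in the abstract refers to the order in which pending splitter sets are processed. The sketch below is a simplified partition-refinement minimizer in the spirit of Hopcroft's algorithm (it queues both halves of every split, omitting the "smaller half" bookkeeping that yields the O(n log n) bound), with the worklist discipline exposed as a flag; all names and the simplification are ours, not the paper's.

```python
from collections import defaultdict, deque

def minimize_dfa(states, alphabet, delta, finals, fifo=True):
    """Partition-refinement DFA minimization (simplified Hopcroft sketch).
    delta maps (state, symbol) -> state. Returns the set of equivalence
    classes as frozensets. `fifo` toggles FIFO vs LIFO splitter order."""
    finals = frozenset(finals)
    blocks = {b for b in (finals, frozenset(states) - finals) if b}
    work = deque((b, a) for b in blocks for a in alphabet)
    inv = defaultdict(set)                       # inverse transitions
    for (q, a), r in delta.items():
        inv[(r, a)].add(q)
    while work:
        splitter, a = work.popleft() if fifo else work.pop()
        pre = set().union(*(inv[(r, a)] for r in splitter))
        refined = set()
        for block in blocks:
            inside, outside = block & pre, block - pre
            if inside and outside:               # splitter separates block
                inside, outside = frozenset(inside), frozenset(outside)
                refined |= {inside, outside}
                work.extend((half, b) for half in (inside, outside)
                            for b in alphabet)
            else:
                refined.add(block)
        blocks = refined
    return blocks
```

For example, a DFA with a state C that duplicates A collapses to two classes regardless of the worklist order; the order only affects how much splitting work is done along the way.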
An algorithm for reduct cardinality minimization
AbouEisha, Hassan M.
2013-12-01
This paper is devoted to a new algorithm for reduct cardinality minimization. The algorithm transforms the initial table into a decision table of a special kind, simplifies this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. Results of computer experiments with decision tables from the UCI ML Repository are discussed. © 2013 IEEE.
An algorithm for minimization of quantum cost
Banerjee, Anindita; Pathak, Anirban
2009-01-01
A new algorithm for minimization of the quantum cost of quantum circuits has been designed. The quantum costs of different quantum circuits of particular interest (e.g., circuits for EPR, quantum teleportation, the Shor code, and different quantum arithmetic operations) are computed using the proposed algorithm. The quantum costs obtained with the proposed algorithm are compared with existing results, and the algorithm is found to produce the minimum quantum cost in all cases.
An algorithm for constructing minimal order inverses
Patel, R. V.
1976-01-01
In this paper an algorithm is presented for constructing minimal order inverses of linear, time invariant, controllable and observable, multivariable systems. By means of simple matrix operations, a 'state-overdescribed' system is first constructed which is an inverse of the given multivariable system. A simple Gauss-Jordan type reduction procedure is then used to remove the redundancy in the state vector of the inverse system to obtain a minimal order inverse. When the given multivariable system is not invertible, the algorithm enables a minimal order inverse of an invertible subsystem to be constructed. Numerical examples are given to illustrate the use of the algorithm.
Genetic algorithms for minimal source reconstructions
Energy Technology Data Exchange (ETDEWEB)
Lewis, P.S.; Mosher, J.C.
1993-12-01
Under-determined linear inverse problems arise in applications in which signals must be estimated from insufficient data. In these problems the number of potentially active sources is greater than the number of observations. In many situations, it is desirable to find a minimal source solution. This can be accomplished by minimizing a cost function that accounts both for the compatibility of the solution with the observations and for its "sparseness". Minimizing functions of this form can be a difficult optimization problem. Genetic algorithms are a relatively new and robust approach to the solution of difficult optimization problems, providing a global framework that is not dependent on local continuity or on explicit starting values. In this paper, the authors describe the use of genetic algorithms to find minimal source solutions, using as an example a simulation inspired by the reconstruction of neural currents in the human brain from magnetoencephalographic (MEG) measurements.
MULTIOBJECTIVE PARALLEL GENETIC ALGORITHM FOR WASTE MINIMIZATION
In this research we have developed an efficient multiobjective parallel genetic algorithm (MOPGA) for waste minimization problems. This MOPGA integrates PGAPack (Levine, 1996) and NSGA-II (Deb, 2000) with novel modifications. PGAPack is a master-slave parallel implementation of a...
Quadratic Interpolation Algorithm for Minimizing Tabulated Function
Directory of Open Access Journals (Sweden)
E. A. Youness
2008-01-01
Problem statement: The problem of finding the minimum value of an objective function when only some of its values are known arises in many practical fields. Quadratic interpolation algorithms are well-known tools for this kind of problem; they are concerned with the polynomial space in which the objective function is approximated. Approach: In this study we approximated the objective function by a one-dimensional quadratic polynomial. This approach saves the time and effort needed to find the point at which the objective is minimized. Results: The quadratic polynomial built in each step of the proposed algorithm accelerates convergence to the best value of the objective function without taking all points of the interpolation set into account. Conclusion: Any n-dimensional problem of finding the minimal value of a function given by some of its values can be converted into an easier one-dimensional problem.
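The core step of one-dimensional quadratic interpolation is fitting a parabola through three sample points and jumping to its vertex. The following is a generic successive-parabolic-interpolation sketch of that idea, not the authors' exact scheme; the stopping rule and point-replacement policy are our illustrative choices.

```python
def quad_interp_min(f, x0, x1, x2, tol=1e-8, max_iter=50):
    """Successive parabolic interpolation: fit a parabola through the
    current three points and move to its vertex, dropping the worst
    point each round. Returns an approximate minimizer of f."""
    pts = sorted([x0, x1, x2])
    for _ in range(max_iter):
        a, b, c = pts
        fa, fb, fc = f(a), f(b), f(c)
        # vertex of the parabola through (a,fa), (b,fb), (c,fc)
        num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
        den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if abs(den) < 1e-15:                     # degenerate (collinear) fit
            break
        x = b - 0.5 * num / den
        if min(abs(x - p) for p in pts) < tol:
            return x
        pts = sorted(sorted(pts + [x], key=f)[:3])   # keep the three best
    return min(pts, key=f)
```

On an exactly quadratic objective the very first vertex is the true minimizer, which is the acceleration the abstract alludes to.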
Sequential unconstrained minimization algorithms for constrained optimization
Byrne, Charles
2008-02-01
The problem of minimizing a function $f(x)\colon R^J \to R$, subject to constraints on the vector variable $x$, occurs frequently in inverse problems. Even without constraints, finding a minimizer of $f(x)$ may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the $k$th step we minimize the function $G_k(x)=f(x)+g_k(x)$ to obtain $x^k$. The auxiliary functions $g_k(x)\colon D \subseteq R^J \to R_+$ are nonnegative on the set $D$, each $x^k$ is assumed to lie within $D$, and the objective is to minimize the continuous function $f\colon R^J \to R$ over $x$ in the set $C=\overline{D}$, the closure of $D$. We assume that such minimizers exist, and denote one such by $\hat{x}$. We assume that the functions $g_k(x)$ satisfy the inequalities $0\leq g_k(x)\leq G_{k-1}(x)-G_{k-1}(x^{k-1})$ for $k = 2, 3, \ldots$. Using this assumption, we show that the sequence $\{f(x^k)\}$ is decreasing and converges to $f(\hat{x})$. If the restriction of $f(x)$ to $D$ has bounded level sets, which happens if $\hat{x}$ is unique and $f(x)$ is closed, proper and convex, then the sequence $\{x^k\}$ is bounded, and $f(x^*)=f(\hat{x})$ for any cluster point $x^*$. Therefore, if $\hat{x}$ is unique, $x^*=\hat{x}$ and $\{x^k\}\rightarrow \hat{x}$. When $\hat{x}$ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton-Raphson method. The proof techniques used for SUMMA can be extended to obtain related results for the induced proximal
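The barrier method named above is the easiest SUMMA instance to see concretely: take $g_k(x) = \mu_k b(x)$ with a barrier $b$ and $\mu_k \downarrow 0$. The one-dimensional sketch below assumes the subproblem derivative $G_k'$ is increasing on the search interval so bisection can solve each inner problem; the function names and constants are illustrative, not Byrne's general algorithm.

```python
def summa_barrier(f_grad, barrier_grad, mu0=1.0, shrink=0.5, outer=30,
                  lo=1e-12, hi=1e6):
    """Barrier method as a SUMMA instance: at step k, minimize
    G_k(x) = f(x) + mu_k * b(x) with mu_k -> 0.  The inner solve is
    bisection on G_k'(x) = 0, assuming G_k' is increasing on (lo, hi)."""
    mu, x = mu0, None
    for _ in range(outer):
        gprime = lambda t, mu=mu: f_grad(t) + mu * barrier_grad(t)
        a, b = lo, hi
        for _ in range(200):              # bisection for G_k'(x) = 0
            m = 0.5 * (a + b)
            a, b = (a, m) if gprime(m) > 0 else (m, b)
        x = 0.5 * (a + b)
        mu *= shrink
    return x

# minimize f(x) = (x + 1)^2 over C = [0, inf) with log barrier b(x) = -log x;
# the unconstrained minimizer -1 is infeasible, so iterates approach 0 from inside
x_star = summa_barrier(lambda t: 2.0 * (t + 1.0), lambda t: -1.0 / t)
```

Each outer iterate stays in the interior $D = (0, \infty)$, and $f(x^k)$ decreases toward $f(\hat{x}) = f(0) = 1$, matching the monotonicity result in the abstract.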
A Global Minimization Algorithm for Empirical Contact Potential Functions
Institute of Scientific and Technical Information of China (English)
(no author listed)
2006-01-01
A global minimization algorithm is indispensable for solving protein folding problems based on the thermodynamic hypothesis. A contact-difference (CD) pseudo potential function was proposed for simulating empirical contact potential functions and testing global minimization algorithms. The present article describes a conformational sampling and global minimization algorithm, called WL, based on Monte Carlo simulation and simulated annealing. It can be used to locate the CD's global minimum and to refold extended protein structures back to conformations whose root mean square distance (RMSD) from the native structures is as small as 0.03 nm. These results demonstrate that global minimization problems for empirical contact potential functions may be solvable.
Local Community Detection Algorithm Based on Minimal Cluster
Directory of Open Access Journals (Sweden)
Yong Zhou
2016-01-01
In order to discover local community structure more effectively, this paper puts forward a new local community detection algorithm based on a minimal cluster. Most local community detection algorithms begin from a single node, but the agglomeration ability of a single node is less than that of multiple nodes, so community extension in this algorithm no longer starts from the initial node alone but from a node cluster that contains the initial node and whose nodes are relatively densely connected with each other. The algorithm has two phases: it first detects the minimal cluster and then finds the local community extended from the minimal cluster. Experimental results show that the quality of the local community detected by our algorithm is much better than that of other algorithms, in both real and simulated networks.
Edge Crossing Minimization Algorithm for Hierarchical Graphs Based on Genetic Algorithms
Institute of Scientific and Technical Information of China (English)
(no author listed)
2001-01-01
We present an edge crossing minimization algorithm for hierarchical graphs based on genetic algorithms and compare it with some heuristic algorithms. The proposed algorithm is more efficient and has the following advantages: the framework of the algorithms is unified, the method is simple, and its implementation and revision are easy.
Constrained minimization of smooth functions using a genetic algorithm
Moerder, Daniel D.; Pamadi, Bandu N.
1994-01-01
The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
Algorithms for degree-constrained Euclidean Steiner minimal tree
Institute of Scientific and Technical Information of China (English)
Zhang Jin; Ma Liang; Zhang Liantang
2008-01-01
A new problem, the degree-constrained Euclidean Steiner minimal tree, is discussed, which is quite useful in several fields. Although it is slightly different from the traditional degree-constrained minimal spanning tree, it is also NP-hard. Two intelligent algorithms are proposed in an attempt to solve this difficult problem. A series of numerical examples is tested, demonstrating that the algorithms also work well in practice.
A complete algorithm to find exact minimal polynomial by approximations
Qin, Xiaolin; Chen, Jingwei; Zhang, Jingzhong
2010-01-01
We present a complete algorithm for finding an exact minimal polynomial from its approximate value by using an improved parameterized integer relation construction method. Our result improves on existing error control for obtaining an exact rational number from its approximation. The algorithm is applicable to finding the exact minimal polynomial of an algebraic number from an approximate root. This also enables us to provide an efficient method for converting a rational approximation representation to the minimal polynomial representation, and to devise a simple algorithm for factoring multivariate polynomials with rational coefficients. Compared with existing methods, our method combines the advantage of high efficiency in numerical computation with exact, stable results in symbolic computation. We also discuss some applications to transcendental numbers given by approximations. Moreover, the working precision (Digits) required by our algorithm is far less than that of the LLL lattice basis reduction technique in theory. In this paper, we...
PROXIMAL POINT ALGORITHM FOR MINIMIZATION OF DC FUNCTION
Institute of Scientific and Technical Information of China (English)
Wen-yu Sun; Raimundo. J.B. Sampaio; M.A.B. Candido
2003-01-01
In this paper we present some algorithms for minimization of DC functions (differences of two convex functions). They are descent methods of proximal type which use the convex properties of the two convex functions separately. We also consider an approximate proximal point algorithm. Some properties of the ε-subdifferential and the ε-directional derivative are discussed. The convergence properties of the algorithms are established in both exact and approximate forms. Finally, we give some applications to concave programming and maximum eigenvalue problems.
Wirelength Minimization in Partitioning and Floorplanning Using Evolutionary Algorithms
Directory of Open Access Journals (Sweden)
I. Hameem Shanavas
2011-01-01
Minimizing the wirelength plays an important role in physical design automation of very large-scale integration (VLSI) chips. The objective of wirelength minimization can be achieved by finding an optimal solution for VLSI physical design components like partitioning and floorplanning. In VLSI circuit partitioning, the problem of obtaining a minimum delay has prime importance. In VLSI circuit floorplanning, the problem of minimizing silicon area is equally important. Reducing the delay in partitioning and the area in floorplanning helps to minimize the wirelength. Enhancements in partitioning and floorplanning also influence other criteria like power, cost, clock speed, and so forth. A Memetic Algorithm (MA) is an evolutionary algorithm that includes one or more local search phases within its evolutionary cycle; here it obtains the minimum wirelength by reducing delay in partitioning and by reducing area in floorplanning. The algorithm combines a hierarchical design technique, the genetic algorithm, with a constructive technique, simulated annealing, for local search to solve the VLSI partitioning and floorplanning problems. MA can quickly produce optimal solutions for the popular benchmarks.
Minimal-Length Interoperability Test Sequences Generation via Genetic Algorithm
Institute of Scientific and Technical Information of China (English)
ZHONG Ning; KUANG Jing-ming; HE Zun-wen
2008-01-01
A novel interoperability test sequence optimization scheme is proposed in which a genetic algorithm (GA) is used to obtain minimal-length interoperability test sequences. In our work, the basic interoperability test sequences are generated based on the minimal-complete-coverage criterion, which removes redundancy from conformance test sequences. The interoperability sequence minimization problem can then be considered an instance of the set covering problem, and the GA is applied to remove redundancy in interoperability transitions. The results show that, compared to the conventional algorithm, the proposed algorithm is more practical for avoiding the state space explosion problem, as it can reduce the length of the test sequences while maintaining the same transition coverage.
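The reduction above maps sequence minimization onto set covering: each candidate sequence covers a set of transitions, and the goal is to cover all transitions with as few sequences as possible. Below is a toy GA for set covering in that spirit; the bit-per-subset encoding, greedy repair, and all parameter values are our illustrative choices, not the paper's.

```python
import random

def ga_set_cover(universe, subsets, pop_size=40, gens=200, pm=0.05, seed=1):
    """Toy genetic algorithm for set covering. Chromosome = one bit per
    subset; infeasible individuals are repaired greedily. Assumes the
    subsets jointly cover the universe. Returns the best bit vector."""
    rng = random.Random(seed)
    n = len(subsets)

    def repair(bits):
        covered = set()
        for i, b in enumerate(bits):
            if b:
                covered |= subsets[i]
        while covered != universe:            # greedily add best subset
            i = max(range(n), key=lambda j: len(subsets[j] - covered))
            bits[i] = 1
            covered |= subsets[i]
        return bits

    def cost(bits):
        return sum(bits)                      # number of chosen subsets

    pop = [repair([rng.randint(0, 1) for _ in range(n)])
           for _ in range(pop_size)]
    for _ in range(gens):
        new_pop = []
        for _ in range(pop_size):
            p1 = min(rng.sample(pop, 2), key=cost)   # binary tournament
            p2 = min(rng.sample(pop, 2), key=cost)
            cut = rng.randrange(1, n)                # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < pm) for b in child]  # mutation
            new_pop.append(repair(child))
        pop = new_pop
    return min(pop, key=cost)
```

Redundant "test sequences" (subsets already covered by others) are priced out by the cost function, which is exactly the redundancy-removal role the GA plays in the abstract.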
Geometry-Experiment Algorithm for Steiner Minimal Tree Problem
Directory of Open Access Journals (Sweden)
Zong-Xiao Yang
2013-01-01
It is well known that the Steiner minimal tree problem is one of the classical nonlinear combinatorial optimization problems. A visualization experiment approach (VEA) succeeds in generating Steiner points automatically and showing the system shortest path, the Steiner minimal tree, physically and intuitively. However, it is difficult to form a stabilized system shortest path when the given points become numerous and irregularly distributed. Two algorithms, a geometry algorithm and a geometry-experiment algorithm (GEA), are constructed to solve for the system shortest path, using properties of the Delaunay diagram and the basic philosophy of the GeoSteiner algorithm, matched with the visualization experiment approach as the number of given points increases. Approximate optimizing results are obtained by GEA and VEA for two examples. The validity of GEA was proved by solving practical problems in engineering and experiment and by comparative analysis, and the global shortest path can be obtained successfully by GEA in several actual calculations.
Attitude-Control Algorithm for Minimizing Maneuver Execution Errors
Acikmese, Behcet
2008-01-01
A G-RAC attitude-control algorithm is used to minimize maneuver execution error in a spacecraft with a flexible appendage when said spacecraft must induce translational momentum by firing (in open loop) large thrusters along a desired direction for a given period of time. The controller is dynamic with two integrators and requires measurement of only the angular position and velocity of the spacecraft. The global stability of the closed-loop system is guaranteed without having access to the states describing the dynamics of the appendage and with severe saturation in the available torque. Spacecraft apply open-loop thruster firings to induce a desired translational momentum with an extended appendage. This control algorithm will assist this maneuver by stabilizing the attitude dynamics around a desired orientation, and consequently minimize the maneuver execution errors.
Linearly convergent inexact proximal point algorithm for minimization. Revision 1
Energy Technology Data Exchange (ETDEWEB)
Zhu, C.
1993-08-01
In this paper, we propose a linearly convergent inexact PPA for minimization, where the inner loop stops when the relative reduction of the residue (defined as the objective value minus the optimal value) of the inner loop subproblem meets some preassigned constant. This inner loop stopping criterion can be achieved in a fixed number of iterations if the inner loop algorithm has a linear rate on the regularized subproblems. Therefore the algorithm is able to avoid the computationally expensive process of solving the inner loop subproblems exactly or asymptotically accurately, a process required by most other linearly convergent PPAs. As applications of this inexact PPA, we develop linearly convergent iteration schemes for minimizing functions with singular Hessian matrices, and for solving hemiquadratic extended linear-quadratic programming problems. We also prove that Correa-Lemaréchal's "implementable form" of PPA converges linearly under mild conditions.
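The structure described above — an outer proximal step whose subproblem is solved only until a fixed relative reduction is reached — can be sketched in a few lines. The inner stopping test below uses the subproblem gradient norm as a stand-in for the paper's objective-value residue (the optimal value is unknown in general), and the step size and constants are illustrative.

```python
import numpy as np

def inexact_ppa(grad, x0, c=1.0, outer=30, rho=0.5, step=0.1):
    """Inexact proximal point sketch: each outer step approximately
    minimizes F_k(x) = f(x) + (c/2)||x - x_k||^2 by gradient descent,
    stopping once the subproblem gradient norm has dropped by the fixed
    factor rho (a fixed relative-reduction criterion, as in the paper's
    spirit). Assumes grad is Lipschitz with constant small enough that
    `step` is stable for the regularized subproblem."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        xk = x.copy()
        Fgrad = lambda z, xk=xk: grad(z) + c * (z - xk)   # subproblem grad
        g0 = np.linalg.norm(Fgrad(x))
        while g0 > 1e-12 and np.linalg.norm(Fgrad(x)) > rho * g0:
            x = x - step * Fgrad(x)
    return x
```

Because the inner loop only halves the subproblem residual, each outer pass takes a bounded number of gradient steps, which is exactly the "fixed number of inner iterations" property the abstract emphasizes.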
Majorization-minimization algorithms for wavelet-based image restoration.
Figueiredo, Mário A T; Bioucas-Dias, José M; Nowak, Robert D
2007-12-01
Standard formulations of image/signal deconvolution under wavelet-based priors/regularizers lead to very high-dimensional optimization problems involving the following difficulties: the non-Gaussian (heavy-tailed) wavelet priors lead to objective functions which are nonquadratic, usually nondifferentiable, and sometimes even nonconvex; the presence of the convolution operator destroys the separability which underlies the simplicity of wavelet-based denoising. This paper presents a unified view of several recently proposed algorithms for handling this class of optimization problems, placing them in a common majorization-minimization (MM) framework. One of the classes of algorithms considered (when using quadratic bounds on nondifferentiable log-priors) shares the infamous "singularity issue" (SI) of "iteratively reweighted least squares" (IRLS) algorithms: the possibility of having to handle infinite weights, which may cause both numerical and convergence issues. In this paper, we prove several new results which strongly support the claim that the SI does not compromise the usefulness of this class of algorithms. Exploiting the unified MM perspective, we introduce a new algorithm, resulting from using l1 bounds for nonconvex regularizers; the experiments confirm the superior performance of this method, when compared to the one based on quadratic majorization. Finally, an experimental comparison of the several algorithms reveals their relative merits for different standard types of scenarios.
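The quadratic-majorization class discussed above can be illustrated on the model problem min_x 0.5||y - Hx||² + λ||x||₁, using the standard bound |t| ≤ t²/(2|tₖ|) + |tₖ|/2, which turns each MM step into a reweighted least-squares solve. The sketch below is a generic IRLS/MM loop, not the paper's algorithm; the `eps` clamp is the usual practical workaround for the "singularity issue" (weights 1/|xᵢ| blowing up as a coefficient reaches zero).

```python
import numpy as np

def irls_l1(H, y, lam, iters=100, eps=1e-8):
    """MM with quadratic majorization of the l1 term (IRLS): each step
    solves (H^T H + lam * diag(1/|x_i|)) x = H^T y, with |x_i| clamped
    below by eps to keep the weights finite."""
    x = np.linalg.lstsq(H, y, rcond=None)[0]   # least-squares start
    HtH, Hty = H.T @ H, H.T @ y
    for _ in range(iters):
        W = np.diag(lam / np.maximum(np.abs(x), eps))
        x = np.linalg.solve(HtH + W, Hty)      # weighted LS subproblem
    return x
```

For the denoising case H = I the iteration converges to the soft-threshold solution, which is a convenient sanity check on the majorization.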
Genetic algorithm for network cost minimization using threshold based discounting
Directory of Open Access Journals (Sweden)
Hrvoje Podnar
2003-01-01
We present a genetic algorithm for heuristically solving a cost minimization problem applied to communication networks with threshold-based discounting. The network model assumes that every two nodes can communicate and offers incentives to combine flow from different sources. Namely, there is a prescribed threshold on every link, and if the total flow on a link is greater than the threshold, the cost of this flow is discounted by a factor α. A heuristic algorithm based on a genetic strategy is developed and applied to a benchmark set of problems. The results are compared with earlier branch-and-bound results using the CPLEX® solver. For larger data instances we were able to obtain improved solutions using less CPU time, confirming the effectiveness of our heuristic approach.
Iterative minimization algorithm for efficient calculations of transition states
Gao, Weiguo; Leng, Jing; Zhou, Xiang
2016-03-01
This paper presents an efficient algorithmic implementation of the iterative minimization formulation (IMF) for fast local search of transition state on potential energy surface. The IMF is a second order iterative scheme providing a general and rigorous description for the eigenvector-following (min-mode following) methodology. We offer a unified interpretation in numerics via the IMF for existing eigenvector-following methods, such as the gentlest ascent dynamics, the dimer method and many other variants. We then propose our new algorithm based on the IMF. The main feature of our algorithm is that the translation step is replaced by solving an optimization subproblem associated with an auxiliary objective function which is constructed from the min-mode information. We show that using an efficient scheme for the inexact solver and enforcing an adaptive stopping criterion for this subproblem, the overall computational cost will be effectively reduced and a super-linear rate between the accuracy and the computational cost can be achieved. A series of numerical tests demonstrate the significant improvement in the computational efficiency for the new algorithm.
Modified Binary Exponential Backoff Algorithm to Minimize Mobiles Communication Time
Directory of Open Access Journals (Sweden)
Ibrahim Sayed Ahmad
2014-02-01
The field of wireless local area networks (WLANs) is expanding rapidly as a result of advances in digital communications, portable computers, and semiconductor technology. The early adopters of this technology have primarily been vertical applications that place a premium on the mobility offered by such systems. Binary exponential backoff (BEB) refers to a collision resolution mechanism used in random access MAC protocols. This algorithm is used in Ethernet (IEEE 802.3) wired LANs, where it is commonly used to schedule retransmissions after collisions. The paper's goal is to minimize the transmission time cycle of information between mobiles moving in a Wi-Fi cell by changing the BEB algorithm. The CSMA/CA protocol manages access to the radio channel by performing an arbitration based on time. This causes many problems for transmission time between mobiles moving in an 802.11 cell. Our results show that with CSMA/CA the access time grows rapidly when the number of stations and/or the network load increases, or when other circumstances affect the network.
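For reference, the baseline rule this paper modifies is simple: after each consecutive collision the contention window doubles (up to a cap), and the station waits a uniformly random number of slots drawn from it. The sketch below shows that baseline; the cw_min/cw_max values are illustrative defaults, not taken from the paper.

```python
import random

def beb_backoff_slots(attempt, cw_min=16, cw_max=1024, rng=random):
    """Baseline binary exponential backoff (as in IEEE 802.3/802.11):
    after the n-th consecutive collision, wait a uniformly random number
    of slots from a contention window that doubles per attempt, capped
    at cw_max."""
    cw = min(cw_min * (2 ** attempt), cw_max)
    return rng.randrange(cw)     # slots to wait before retransmitting
```

The growth of the expected wait (roughly cw/2 slots) with the collision count is why access time climbs quickly as more stations contend, which is the behavior the paper's modification targets.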
Minimizing Mobiles Communication Time Using Modified Binary Exponential Backoff Algorithm
Directory of Open Access Journals (Sweden)
Ibrahim Sayed Ahmad
2013-11-01
The domain of wireless local area networks (WLANs) is growing speedily as a consequence of developments in digital communications technology. The early adopters of this technology have mainly been vertical applications that place a premium on the mobility offered by such systems. Examples of these types of applications consist of stock control in depot environments, point-of-sale terminals, and rental car check-in. In addition to the mobility that becomes possible with wireless LANs, these systems have also been used in environments where cable installation is expensive or impractical. Such environments include manufacturing floors, trading floors on stock exchanges, conventions and trade shows, and historic buildings. With the increasing propagation of wireless LANs comes the need for standardization so as to allow interoperability for an increasingly mobile workforce. Despite all the advantages and facilities that Wi-Fi offers, there is still the delay problem, which is due to many reasons that are introduced in detail in our case study, which also presents the solutions and simulations that can reduce this delay for better performance of wireless networks. Binary exponential backoff (BEB) refers to a collision resolution mechanism used in random access MAC protocols. This algorithm is used in Ethernet (IEEE 802.3) wired LANs, where it is commonly used to schedule retransmissions after collisions. The paper's goal is to minimize the transmission time cycle of information between mobiles moving in a Wi-Fi cell by changing the BEB algorithm. The CSMA/CA protocol manages access to the radio channel by performing an arbitration based on time. This causes many problems for transmission time between mobiles moving in an 802.11 cell. Our results show that with CSMA/CA the access time grows rapidly when the number of stations and/or the network load increases, or when other circumstances affect the network.
MINIMAL INVERSION AND ITS ALGORITHMS OF DISCRETE-TIME NONLINEAR SYSTEMS
Institute of Scientific and Technical Information of China (English)
ZHENG Yufan
2005-01-01
The left-inverse system with minimal order and its algorithms for discrete-time nonlinear systems are studied in a linear algebraic framework. The general structure of the left-inverse system is described and computed by a symbolic algorithm. Two algorithms are given for constructing left-inverse systems with minimal order.
Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.
Directory of Open Access Journals (Sweden)
Gonglin Yuan
Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method incorporates two kinds of information: function values and gradient values. Both methods possess some good properties: (1) β_k ≥ 0; (2) the search direction has the trust region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
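Property (1), β_k ≥ 0, is shared by the classical PRP+ variant, where the PRP coefficient is truncated at zero. The sketch below is that textbook PRP+ scheme with an Armijo backtracking line search and a steepest-descent restart safeguard; it is a generic baseline, not either of the paper's two modified methods.

```python
import numpy as np

def prp_plus_cg(f, grad, x0, iters=500, tol=1e-10):
    """PRP+ nonlinear conjugate gradient: beta_k = max(0, beta_PRP),
    so beta_k >= 0 by construction. Uses Armijo backtracking and falls
    back to steepest descent whenever conjugacy yields a non-descent
    direction."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        t = 1.0                                   # Armijo backtracking
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+ truncation
        d = -g_new + beta * d
        if g_new @ d >= 0:                        # restart: keep d a descent dir
            d = -g_new
        x, g = x_new, g_new
    return x
```

The restart safeguard is what a line-search-free sufficient-descent property (properties (2) and (3) above) would make unnecessary, which is one motivation for the paper's modified directions.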
Algorithm for Delay-Constrained Minimal Cost Group Multicasting
Institute of Scientific and Technical Information of China (English)
SUN Yugeng; WANG Yanlin; YAN Xinfang
2005-01-01
Group multicast routing algorithms satisfying the quality-of-service requirements of real-time applications are essential for high-speed networks. A heuristic algorithm is presented for group multicast routing with bandwidth and delay constraints. A new metric is designed as a function of the available bandwidth and delay of a link, and source-specific routing trees for each member are generated using this metric, satisfying each member's bandwidth and end-to-end delay requirements. Simulations over random networks were carried out to compare the performance of the proposed algorithm with that from the literature. Experimental results show that the algorithm performs better in terms of network cost and its ability to construct feasible multicast trees for group members. Moreover, the algorithm can avoid link blocking and enhance network behavior efficiently.
AN INTERIOR TRUST REGION ALGORITHM FOR NONLINEAR MINIMIZATION WITH LINEAR CONSTRAINTS
Institute of Scientific and Technical Information of China (English)
Jian-guo Liu
2002-01-01
An interior trust-region-based algorithm for linearly constrained minimization problems is proposed and analyzed. This algorithm is similar to trust region algorithms for unconstrained minimization: a trust region subproblem on a subspace is solved in each iteration. We establish that the proposed algorithm has convergence properties analogous to those of the trust region algorithms for unconstrained minimization. Namely, every limit point of the generated sequence satisfies the Karush-Kuhn-Tucker (KKT) conditions and at least one limit point satisfies second order necessary optimality conditions. In addition, if one limit point is a strong local minimizer and the Hessian is Lipschitz continuous in a neighborhood of that point, then the generated sequence converges globally to that point at a rate of at least 2-step quadratic. We are mainly concerned with the theoretical properties of the algorithm in this paper. Implementation issues and adaptation to large-scale problems will be addressed in a future report.
Loss-minimal Algorithmic Trading Based on Levy Processes
Directory of Open Access Journals (Sweden)
Farhad Kia
2014-08-01
In this paper we optimize portfolios assuming that the value of the portfolio follows a Lévy process. First we identify the parameters of the underlying Lévy process, and then portfolio optimization is performed by maximizing the probability of positive return. The method has been tested by extensive performance analysis on Forex and S&P 500 historical time series. The proposed trading algorithm has achieved a 4.9% yearly return on average without leverage, which proves its applicability to algorithmic trading.
New Compressed Sensing ISAR Imaging Algorithm Based on Log-Sum Minimization
Ping, Cheng; Jiaqun, Zhao
2016-12-01
To improve the performance of inverse synthetic aperture radar (ISAR) imaging based on compressed sensing (CS), a new algorithm based on log-sum minimization is proposed. A new interpretation of the algorithm is also provided. Compared with the conventional algorithm, the new algorithm can recover signals from fewer measurements under a looser sparsity condition and with smaller recovery error, and it obtains a better sinusoidal signal spectrum and imaging result for real ISAR data. Therefore, the proposed algorithm is a promising imaging algorithm for CS ISAR.
Localized density matrix minimization and linear scaling algorithms
Lai, Rongjie
2015-01-01
We propose a convex variational approach to compute localized density matrices for both zero-temperature and finite-temperature cases, by adding an entry-wise $\ell_1$ regularization to the free energy of the quantum system. Based on the fact that the density matrix decays exponentially away from the diagonal for insulating systems or systems at finite temperature, the proposed $\ell_1$-regularized variational method provides a nice way to approximate the original quantum system. We provide theoretical analysis of the approximation behavior and also design convergence-guaranteed numerical algorithms based on Bregman iteration. More importantly, the $\ell_1$-regularized system naturally leads to localized density matrices with banded structure, which enables us to develop approximating algorithms to find the localized density matrices with computation cost depending linearly on the problem size.
Modified Binary Exponential Backoff Algorithm to Minimize Mobiles Communication Time
Ibrahim Sayed Ahmad; Ali Kalakech; Seifedine Kadry
2014-01-01
The field of Wireless Local Area Networks (LANs) is expanding rapidly as a result of advances in digital communications, portable computers, and semiconductor technology. The early adopters of this technology have primarily been vertical applications that place a premium on the mobility offered by such systems. Binary Exponential Backoff (BEB) refers to a collision resolution mechanism used in random access MAC protocols. This algorithm is used in Ethernet (IEEE 802.3) wired LANs. In Ethe...
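For context, the standard (unmodified) BEB rule can be sketched in a few lines; the slot time and cap below follow classic IEEE 802.3 conventions, and the function names are illustrative:

```python
import random

def beb_window(collisions, max_exp=10):
    """Largest backoff slot index after `collisions` consecutive collisions.
    The window doubles per collision; the exponent is capped (10 in 802.3)."""
    return 2 ** min(collisions, max_exp) - 1

def beb_delay(collisions, slot_time=51.2e-6):
    """Random backoff delay in seconds: uniform slot count times slot time."""
    return random.randint(0, beb_window(collisions)) * slot_time
```

The paper's modification changes how this window evolves; the sketch above is only the baseline it starts from.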
Minimal K-Covering Set Algorithm based on Particle Swarm Optimizer
National Research Council Canada - National Science Library
Yong Hu
2013-01-01
.... In order to maximize the cost savings of network resources for wireless sensor networks and extend the network lifetime, this paper proposes an algorithm for the minimal k-covering set based on particle swarm optimizer...
New Algorithm to Evaluate the Unreliability of Flow Networks Based on Minimal Cutsets
Institute of Scientific and Technical Information of China (English)
王芳; 候朝桢
2004-01-01
Several conclusions on minimal cutsets are proposed, from which a new algorithm is deduced to evaluate the unreliability of flow networks. Beginning with one unreliability product of the network, disjoint unreliability products are branched out one by one, each of which is selected from the network's minimal cutsets. Finally, the unreliability of the network is obtained by adding all these unreliability products up.
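The quantity being computed can be illustrated without the paper's disjoint-products machinery: unreliability is the probability that at least one minimal cutset has all of its edges failed. A brute-force inclusion-exclusion sketch (exponential in the number of cutsets, which is exactly what disjoint-products methods avoid):

```python
from itertools import combinations

def unreliability(cutsets, p_fail):
    """Flow-network unreliability from minimal cutsets by inclusion-exclusion:
    P(some cutset has all its edges failed), edges failing independently
    with probabilities p_fail[e]. Only suitable for tiny examples."""
    total = 0.0
    for r in range(1, len(cutsets) + 1):
        for combo in combinations(cutsets, r):
            union = set().union(*combo)          # edges in all chosen cutsets
            prob = 1.0
            for e in union:
                prob *= p_fail[e]                # all these edges must fail
            total += (-1) ** (r + 1) * prob      # inclusion-exclusion sign
    return total
```

For two edges in series (cutsets {e1} and {e2}, each failing with probability 0.1) this gives 0.19, matching 1 - 0.9².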
A Cost-Minimizing Algorithm for School Choice
Aksoy, Sinan; Coppersmith, Chaya; Glass, Julie; Karaali, Gizem; Zhao, Xueying; Zhu, Xinjing
2010-01-01
The school choice problem concerns the design and implementation of matching mechanisms that produce school assignments for students within a given public school district. Previously considered criteria for evaluating proposed mechanisms such as stability, strategyproofness and Pareto efficiency do not always translate into desirable student assignments. In this note we propose methods to expand upon the notion of desirability for a given assignment mechanism by focusing on honoring student preferences. In particular we define two new student-optimal criteria that are not met by any previously employed mechanism in the school choice literature. We then use these criteria to adapt a well-known combinatorial optimization technique (Hungarian algorithm) to the school choice problem. In particular we create two mechanisms, each geared specifically to perform optimally with respect to one of the new criteria. Both mechanisms yield "student-optimal" outcomes. We discuss the practical implications and limitations of...
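The optimization core here is a minimum-cost assignment over students' preference ranks. As a hedged stand-in for the Hungarian algorithm the authors adapt, a brute-force version makes the objective explicit (fine for tiny instances; the Hungarian algorithm finds the same optimum in O(n³)):

```python
from itertools import permutations

def best_assignment(rank):
    """Minimum-total-rank assignment of n students to n school seats.
    rank[i][j] is student i's preference rank of school j (1 = first choice).
    Exhaustive search over all permutations of seats."""
    n = len(rank)
    best, best_cost = None, float('inf')
    for perm in permutations(range(n)):
        cost = sum(rank[i][perm[i]] for i in range(n))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost
```

A "student-optimal" criterion in the note's sense would then be phrased as a property of this minimizer, e.g. minimizing the sum (or the worst) of the assigned ranks.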
AN IMPLEMENTABLE ALGORITHM AND ITS CONVERGENCE FOR GLOBAL MINIMIZATION WITH CONSTRAINS
Institute of Scientific and Technical Information of China (English)
李善良; 邬冬华; 田蔚文; 张连生
2003-01-01
With the integral-level approach to global optimization, a class of discontinuous penalty functions is proposed to solve constrained minimization problems. In this paper we propose an implementable algorithm by means of the good point set of uniform distribution, which overcomes the shortcomings of the Monte Carlo method. Finally, we prove the convergence of the implementable algorithm.
An optimal L1-minimization algorithm for stationary Hamilton-Jacobi equations
Guermond, Jean-Luc
2009-01-01
We describe an algorithm for solving steady one-dimensional convex-like Hamilton-Jacobi equations using an L1-minimization technique on piecewise linear approximations. For a large class of convex Hamiltonians, the algorithm is proven to be convergent and of optimal complexity whenever the viscosity solution is q-semiconcave. Numerical results are presented to illustrate the performance of the method.
An Efficient Algorithm for the Split K-Layer Circular Topological Via Minimization Problem
Directory of Open Access Journals (Sweden)
J. S. Huang
1996-01-01
Full Text Available The split k-layer (k ≥ 2) circular topological via minimization (k-CTVM) problem is reconsidered here. The problem is to find a topological routing of the n nets, using k available layers, such that the total number of vias is minimized. The optimal solution of this problem can be found in O(n^(2k+1)) time. However, such an algorithm is inefficient even for n ≥ 8 and k ≥ 2. A heuristic algorithm with complexity O(kn^4) is presented. When the experimental results of this algorithm and those of an exhaustive algorithm are compared, the heuristic obtains the same number of optimal solutions for all permutations of (1) n = 8 with k = 2 or 3, and (2) n = 10 with k = 3. For other cases, the number of optimal solutions from this algorithm depends on the permutations selected; this number, in general, increases as k increases.
A constrained, total-variation minimization algorithm for low-intensity X-ray CT
Sidky, Emil Y; Ullberg, Christer; Pan, Xiaochuan
2010-01-01
Purpose: We develop an iterative image-reconstruction algorithm for application to low-intensity computed tomography (CT) projection data, which is based on constrained, total-variation (TV) minimization. The algorithm design focuses on recovering structure on length scales comparable to a detector-bin width. Method: Recovering resolution on the scale of a detector bin requires that the pixel size be much smaller than the bin width. The resulting image array contains many more pixels than data, and this undersampling is overcome with a combination of Fourier upsampling of each projection and the use of constrained TV-minimization, as suggested by compressive sensing. The presented pseudo-code for solving constrained TV-minimization is designed to yield an accurate solution to this optimization problem within 100 iterations. Results: The proposed image-reconstruction algorithm is applied to a low-intensity scan of a rabbit with a thin wire, to test resolution. The proposed algorithm is compared with filtere...
Minimizing Compositions of Functions Using Proximity Algorithms with Application in Image Deblurring
Directory of Open Access Journals (Sweden)
Feishe Chen
2016-09-01
Full Text Available We consider minimization of functions that are compositions of functions having closed-form proximity operators with linear transforms. A wide range of image processing problems including image deblurring can be formulated in this way. We develop proximity algorithms based on the fixed point characterization of the solution to the minimization problems. We further refine the proposed algorithms when the outer functions of the composed objective functions are separable. The convergence analysis of the developed algorithms is established. Numerical experiments in comparison with the well-known Chambolle-Pock algorithm and Zhang-Burger-Osher scheme for image deblurring are given to demonstrate that the proposed algorithms are efficient and robust.
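The building block these algorithms compose is a closed-form proximity operator. The canonical example, for the l1 norm, is entrywise soft-thresholding (a standard fact, not specific to this paper):

```python
def prox_l1(v, lam):
    """Proximity operator of lam * ||.||_1, i.e. the minimizer of
    lam*||x||_1 + 0.5*||x - v||^2: entrywise soft-thresholding."""
    return [max(abs(x) - lam, 0.0) * (1.0 if x > 0 else -1.0) for x in v]
```

Entries smaller than the threshold are set exactly to zero, which is why prox-based iterations produce sparse solutions.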
Obendorf, Hartmut
2009-01-01
The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.
Zero-Temperature Limit of a Convergent Algorithm to Minimize the Bethe Free Energy
Werner, Tomas
2011-01-01
After the discovery that fixed points of loopy belief propagation coincide with stationary points of the Bethe free energy, several researchers proposed provably convergent algorithms to directly minimize the Bethe free energy. These algorithms were formulated only for non-zero temperature (thus finding fixed points of the sum-product algorithm) and their possible extension to zero temperature is not obvious. We present the zero-temperature limit of the double-loop algorithm by Heskes, which converges to a max-product fixed point. The inner loop of this algorithm is max-sum diffusion. Under certain conditions, the algorithm combines the complementary advantages of the max-product belief propagation and max-sum diffusion (LP relaxation): it yields good approximation of both ground states and max-marginals.
Directory of Open Access Journals (Sweden)
Chein-Shan Liu
2014-01-01
Full Text Available To solve an unconstrained nonlinear minimization problem, we propose an optimal algorithm (OA) as well as a globally optimal algorithm (GOA), by deflecting the gradient direction to the best descent direction at each iteration step, with an optimal parameter being derived explicitly. An invariant manifold defined for the model problem in terms of a locally quadratic function is used to derive a purely iterative algorithm, and the convergence is proven. Then, the rank-two updating techniques of BFGS are employed, which result in several novel algorithms that are faster than the steepest descent method (SDM) and the variable metric method (DFP). Six numerical examples are examined and compared with exact solutions, revealing that the new algorithms OA, GOA, and the updated ones have superior computational efficiency and accuracy.
A Distributed Algorithm for Determining Minimal Covers of Acyclic Database Schemes
Institute of Scientific and Technical Information of China (English)
叶新铭
1994-01-01
Acyclic databases possess several desirable properties for their design and use. A distributed algorithm is proposed for determining a minimal cover of an alpha-, beta-, gamma-, or Berge-acyclic database scheme over a set of attributes in a distributed environment.
Directory of Open Access Journals (Sweden)
Marimuthu Murugesan
2011-01-01
Full Text Available Problem statement: Network-wide broadcasting is a fundamental operation in ad hoc networks. In broadcasting, a source node sends a message to all the other nodes in the network. Unlike in a wired network, a packet transmitted by a node in an ad hoc wireless network can reach all neighbors. Therefore, the total number of transmissions (forwarding nodes) is used as the cost criterion for broadcasting. Approach: This study proposes a reliable and efficient broadcasting algorithm using a minimized forward node list algorithm, which uses 2-hop neighborhood information more effectively to reduce redundant transmissions in asymmetric Mobile Ad hoc Networks and guarantees full delivery. Among the 1-hop neighbors of the sender, only selected forwarding nodes retransmit the broadcast message. Forwarding nodes are selected in such a way as to cover the uncovered 2-hop neighbors. Results: Simulation results show that the proposed broadcasting algorithm provides a high delivery ratio, low broadcast forward ratio, low overhead and minimized delay. Conclusion: In this study, a reliable and efficient broadcasting algorithm in asymmetric Mobile Ad Hoc Networks using a minimized forward node list algorithm has been proposed, which provides a low forward ratio and high delivery ratio while suppressing broadcast redundancy.
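The selection step ("pick 1-hop neighbors so every 2-hop neighbor is covered") is a set-cover-style problem. A greedy sketch of that idea, assuming a simple adjacency-set graph representation (node names and the greedy rule are illustrative, not the paper's exact selection algorithm):

```python
def select_forwarders(neighbors_of, sender):
    """Greedily choose forwarding nodes among the sender's 1-hop neighbors
    until every 2-hop neighbor is covered. neighbors_of maps each node to
    its set of neighbors."""
    one_hop = neighbors_of[sender]
    two_hop = set().union(*(neighbors_of[n] for n in one_hop)) - one_hop - {sender}
    forwarders, uncovered = set(), set(two_hop)
    while uncovered and one_hop - forwarders:
        # pick the 1-hop neighbor covering the most still-uncovered nodes
        best = max(one_hop - forwarders,
                   key=lambda n: len(neighbors_of[n] & uncovered))
        if not neighbors_of[best] & uncovered:
            break                # leftovers unreachable via 1-hop relays
        forwarders.add(best)
        uncovered -= neighbors_of[best]
    return forwarders
```

Fewer forwarders means fewer retransmissions, which is exactly the cost criterion the abstract names.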
A Review of Fast l1-Minimization Algorithms for Robust Face Recognition
Yang, Allen Y; Zhou, Zihan; Sastry, S Shankar; Ma, Yi
2010-01-01
l1-minimization refers to finding the minimum l1-norm solution to an underdetermined linear system b=Ax. It has recently received much attention, mainly motivated by the new compressive sensing theory that shows that under quite general conditions the minimum l1-norm solution is also the sparsest solution to the system of linear equations. Although the underlying problem is a linear program, conventional algorithms such as interior-point methods suffer from poor scalability for large-scale real world problems. A number of accelerated algorithms have been recently proposed that take advantage of the special structure of the l1-minimization problem. In this paper, we provide a comprehensive review of five representative approaches, namely, Gradient Projection, Homotopy, Iterative Shrinkage-Thresholding, Proximal Gradient, and Augmented Lagrange Multiplier. The work is intended to fill in a gap in the existing literature to systematically benchmark the performance of these algorithms using a consistent experimen...
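One of the five families reviewed, iterative shrinkage-thresholding (ISTA), is compact enough to sketch. This is the textbook form for the unconstrained lasso variant of the problem, not the review's benchmark implementation:

```python
import numpy as np

def ista(A, b, lam=0.1, iters=300):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1:
    a gradient step on the smooth term followed by soft-thresholding."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2            # step size <= 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - t * (A.T @ (A @ x - b))            # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)   # shrinkage
    return x

def lasso_objective(A, b, x, lam=0.1):
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
```

Each iteration costs only two matrix-vector products, which is the scalability advantage over interior-point methods that the abstract points to.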
Sochi, Taha
2014-01-01
Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton, and Global) are investigated in conjunction with energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all those algorithms for all these types of fluid agree very well with the analytically derived solutions as obtained from the traditional methods which are based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of the flow dynamics systems. The investigation also enriches the methods of Computational Fluid Dynamics for solving the flow fields in tubes and networks for various types of Newtoni...
In-network Sparsity-regularized Rank Minimization: Algorithms and Applications
Mardani, Morteza; Giannakis, Georgios B
2012-01-01
Given a limited number of entries from the superposition of a low-rank matrix plus the product of a known fat compression matrix times a sparse matrix, recovery of the low-rank and sparse components is a fundamental task subsuming compressed sensing, matrix completion, and principal components pursuit. This paper develops algorithms for distributed sparsity-regularized rank minimization over networks, when the nuclear- and $\ell_1$-norm are used as surrogates to the rank and nonzero entry counts of the sought matrices, respectively. While nuclear-norm minimization has well-documented merits when centralized processing is viable, non-separability of the singular-value sum challenges its distributed minimization. To overcome this limitation, an alternative characterization of the nuclear norm is adopted which leads to a separable, yet non-convex cost minimized via the alternating-direction method of multipliers. The novel distributed iterations entail reduced-complexity per-node tasks, and affordable message pa...
Improvements to the Levenberg-Marquardt algorithm for nonlinear least-squares minimization
Transtrum, Mark K
2012-01-01
When minimizing a nonlinear least-squares function, the Levenberg-Marquardt algorithm can suffer from slow convergence, particularly when it must navigate a narrow canyon en route to a best fit. On the other hand, when the least-squares function is very flat, the algorithm may easily become lost in parameter space. We introduce several improvements to the Levenberg-Marquardt algorithm in order to improve both its convergence speed and robustness to initial parameter guesses. We update the usual step to include a geodesic acceleration correction term, explore a systematic way of accepting uphill steps that may increase the residual sum of squares due to Umrigar and Nightingale, and employ the Broyden method to update the Jacobian matrix. We test these changes by comparing their performance on a number of test problems with standard implementations of the algorithm. We suggest that these two particular challenges, slow convergence and robustness to initial guesses, are complementary problems. Schemes that imp...
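For reference, the baseline algorithm being improved can be sketched as a damped Gauss-Newton loop. This is the plain LM update without the paper's geodesic acceleration, uphill-step acceptance, or Broyden Jacobian updates; the damping schedule (halve on success, double on failure) is one common convention:

```python
import numpy as np

def levenberg_marquardt(residual, jac, p0, iters=60, lam=1e-3):
    """Basic Levenberg-Marquardt: solve (J^T J + lam*I) step = -J^T r,
    lowering the damping lam when a step reduces the sum of squares and
    raising it otherwise."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r, J = residual(p), jac(p)
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5       # accept: trust the model more
        else:
            lam *= 2.0                         # reject: damp toward gradient descent

    return p

# Fit y = a*x + c to exact data; LM should recover a = 2, c = 1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
res = lambda p: p[0] * x + p[1] - y
jacobian = lambda p: np.column_stack([x, np.ones_like(x)])
p_hat = levenberg_marquardt(res, jacobian, [0.0, 0.0])
```

Small lam gives near-Gauss-Newton steps (fast in canyons), large lam gives short gradient-like steps (safe when the model is poor); the paper's improvements target exactly this trade-off.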
Ryu, Minsoo
Time-Triggered Controller Area Network (TTCAN) is widely accepted as a viable solution for real-time communication systems such as in-vehicle communications. However, although TTCAN has been designed to support both periodic and sporadic real-time messages, previous studies mostly focused on providing deterministic real-time guarantees for periodic messages while barely addressing the performance of sporadic messages. In this paper, we present an O(n^2) scheduling algorithm that can minimize the maximum duration of exclusive windows occupied by periodic messages, thereby minimizing the worst-case scheduling delays experienced by sporadic messages.
Felkel, Petr; Bruckwschwaiger, Mario; Wegenkittl, Rainer
2001-01-01
The watershed algorithm belongs to the classical algorithms of mathematical morphology. Lotufo et al. published a principle of watershed computation by means of an iterative forest transform (IFT), which computes a shortest-path forest from given markers. The algorithm itself was described for the 2D case (an image) without a detailed discussion of its computation and memory demands for real datasets. As the IFT cleverly solves the problem of plateaus and gives precise results when thin objects have to be segmented, it is natural to use this algorithm for 3D datasets, bearing in mind the need to minimize the higher memory consumption of the 3D case without losing the low asymptotic time complexity of O(m+C) (and also the real computation speed). The main goal of this paper is an implementation of the IFT algorithm with a priority queue with buckets, and careful tuning of this implementation to reach as low a memory consumption as possible. The paper presents five possible modifications and methods of implementation of...
An Algorithm for Determining Minimal Reduced—Coverings of Acyclic Database Schemes
Institute of Scientific and Technical Information of China (English)
刘铁英; 叶新铭
1996-01-01
This paper reports an algorithm (DTV) for determining the minimal reduced covering of an acyclic database scheme over a specified subset of attributes. The output of this algorithm contains not only the minimum number of attributes but also the minimum number of partial relation schemes. The algorithm has complexity O(|N|·|E|^2), where |N| is the number of attributes and |E| the number of relation schemes. It is also proved that for Berge-, γ-, or β-acyclic database schemes, the output of algorithm DTV maintains the acyclicity correspondence.
Institute of Scientific and Technical Information of China (English)
Zhu Detong
2004-01-01
This paper proposes a nonmonotonic backtracking trust region algorithm via bilevel linear programming for solving the general multicommodity minimal cost flow problems. Using the duality theory of the linear programming and convex theory, the generalized directional derivative of the general multicommodity minimal cost flow problems is derived. The global convergence and superlinear convergence rate of the proposed algorithm are established under some mild conditions.
Generalized phase-shifting algorithms: error analysis and minimization of noise propagation.
Ayubi, Gastón A; Perciante, César D; Di Martino, J Matías; Flores, Jorge L; Ferrari, José A
2016-02-20
Phase shifting is a technique for phase retrieval that requires a series of intensity measurements with certain phase steps. The purpose of the present work is threefold: first we present a new method for generating general phase-shifting algorithms with arbitrarily spaced phase steps. Second, we study the conditions for which the phase-retrieval error due to phase-shift miscalibration can be minimized. Third, we study the phase extraction from interferograms with additive random noise, and deduce the conditions to be satisfied for minimizing the phase-retrieval error. Algorithms with unevenly spaced phase steps are discussed under linear phase-shift errors and additive Gaussian noise, and simulations are presented.
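For the special case of evenly spaced phase steps, the estimator the general algorithms reduce to is the classic arctangent formula. A sketch under that assumption (the paper's contribution is the arbitrarily spaced and noise-optimized case, which this does not cover):

```python
import math

def retrieve_phase(intensities, steps):
    """Recover phi from I_k = A + B*cos(phi + d_k) for N >= 3 evenly
    spaced phase steps d_k, via phi = atan2(-sum I_k sin d_k, sum I_k cos d_k).
    For evenly spaced steps the DC term A cancels and the sums isolate
    B*sin(phi) and B*cos(phi)."""
    s = sum(I * math.sin(d) for I, d in zip(intensities, steps))
    c = sum(I * math.cos(d) for I, d in zip(intensities, steps))
    return math.atan2(-s, c)

# Synthetic 4-step example: background A = 2, modulation B = 1.
phi_true = 0.7
steps = [2 * math.pi * k / 4 for k in range(4)]
I = [2.0 + 1.0 * math.cos(phi_true + d) for d in steps]
phi = retrieve_phase(I, steps)
```

With N = 4 this collapses to the familiar four-step formula atan2(I4 - I2, I1 - I3).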
Dimensional optimization of a minimally invasive surgical robot system based on NSGA-II algorithm
Wei Wang; Weidong Wang; Wei Dong; Hongjian Yu; Zhiyuan Yan; Zhijiang Du
2015-01-01
Based on the proposed end-effector structure of a laparoscopic minimally invasive surgical manipulator, a dimensional optimization method is investigated to enlarge the motion range of the mechanical arm in the specific target area and reduce the collision among the mechanical arms simultaneously. Both the length of the kinematics links and the overall size of the integrated system are considered in the optimization process. The NSGA-II algorithm oriented to the multi-objective optimization i...
Overhead-Aware-Best-Fit (OABF) Resource Allocation Algorithm for Minimizing VM Launching Overhead
Energy Technology Data Exchange (ETDEWEB)
Wu, Hao [IIT; Garzoglio, Gabriele [Fermilab; Ren, Shangping [IIT, Chicago; Timm, Steven [Fermilab; Noh, Seo Young [KISTI, Daejeon
2014-11-11
FermiCloud is a private cloud developed at Fermi National Accelerator Laboratory to provide elastic and on-demand resources for different scientific research experiments. The design goal of FermiCloud is to automatically allocate resources for different scientific applications so that the QoS required by these applications is met and the operational cost of FermiCloud is minimized. Our earlier research shows that VM launching overhead has large variations. If such variations are not taken into consideration when making resource allocation decisions, they may lead to poor performance and resource waste. In this paper, we show how a VM launching overhead reference model may be used to minimize VM launching overhead. In particular, we first present a training algorithm that automatically tunes a given reference model to accurately reflect the FermiCloud environment. Based on the tuned reference model for virtual machine launching overhead, we develop an overhead-aware-best-fit resource allocation algorithm that decides where and when to allocate resources so that the average virtual machine launching overhead is minimized. The experimental results indicate that the developed overhead-aware-best-fit resource allocation algorithm can significantly improve the VM launching time when a large number of VMs are simultaneously launched.
Directory of Open Access Journals (Sweden)
Rubing Xi
2014-01-01
Full Text Available Variational models with nonlocal regularization offer superior image restoration quality over traditional methods, but the processing speed remains a bottleneck due to the computational load of recent iterative algorithms. In this paper, a fast algorithm is proposed to restore multichannel images in the presence of additive Gaussian noise by minimizing an energy function consisting of an l2-norm fidelity term and a nonlocal vectorial total variation regularization term. This algorithm is based on variable splitting and penalty techniques in optimization. Following our previous work on the proof of the existence and the uniqueness of the solution of the model, we establish and prove the convergence properties of this algorithm: finite convergence for some variables and q-linear convergence for the rest. Experiments show that this model has a fabulous texture-preserving property in restoring color images. Both the theoretical derivation of the computational complexity analysis and the experimental results show that the proposed algorithm performs favorably in comparison to the widely used fixed point algorithm.
Scheduling to minimize average completion time: Off-line and on-line algorithms
Energy Technology Data Exchange (ETDEWEB)
Hall, L.A. [Johns Hopkins Univ., Baltimore, MD (United States); Shmoys, D.B. [Cornell Univ., Ithaca, NY (United States); Wein, J. [Polytechnic Univ., Brooklyn, NY (United States)
1996-12-31
Time-indexed linear programming formulations have recently received a great deal of attention for their practical effectiveness in solving a number of single-machine scheduling problems. We show that these formulations are also an important tool in the design of approximation algorithms with good worst-case performance guarantees. We give simple new rounding techniques to convert an optimal fractional solution into a feasible schedule for which we can prove a constant-factor performance guarantee, thereby giving the first theoretical evidence of the strength of these relaxations. Specifically, we consider the problem of minimizing the total weighted job completion time on a single machine subject to precedence constraints, and give a polynomial-time (4 + {epsilon})-approximation algorithm, for any {epsilon} > 0; the best previously known guarantee for this problem was superlogarithmic. With somewhat larger constants, we also show how to extend this result to the case with release date constraints, and still more generally, to the case with m identical parallel machines. We give two other techniques for problems in which there are release dates, but no precedence constraints: the first is based on other new LP rounding algorithms, whereas the second is a general framework for designing on-line algorithms to minimize the total weighted completion time.
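The unconstrained baseline these approximation algorithms build on is exactly solvable: with no precedence or release-date constraints, total weighted completion time on one machine is minimized by Smith's WSPT rule (a standard result, not this paper's contribution). A sketch:

```python
def wspt_value(jobs):
    """Total weighted completion time sum w_j*C_j on one machine under
    Smith's WSPT rule: schedule jobs (processing_time, weight) in
    nondecreasing order of processing_time / weight. Optimal for
    1 || sum w_j C_j when there are no release dates or precedence."""
    t, total = 0, 0
    for p, w in sorted(jobs, key=lambda pw: pw[0] / pw[1]):
        t += p                      # this job's completion time
        total += w * t
    return total
```

The paper's LP-rounding machinery is what extends a guarantee of this flavor to the precedence-constrained, release-date, and parallel-machine cases, where the problem becomes NP-hard.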
Chatfield, David C.; Reeves, Melissa S.; Truhlar, Donald G.; Duneczky, Csilla; Schwenke, David W.
1992-01-01
Complex dense matrices corresponding to the D + H2 and O + HD reactions were solved using a complex generalized minimal residual (GMRes) algorithm described by Saad and Schultz (1986) and Saad (1990). To provide a test case with a different structure, the H + H2 system was also considered. It is shown that the computational effort for solutions with the GMRes algorithm depends on the dimension of the linear system, the total energy of the scattering problem, and the accuracy criterion. In several cases with dimensions in the range 1110-5632, the GMRes algorithm outperformed the LAPACK direct solver, with speedups for the linear equation solution as large as a factor of 23.
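The GMRes iteration itself is compact: build an orthonormal Krylov basis by Arnoldi, then minimize the residual over that basis with a small least-squares solve. A plain full-restart-free sketch for complex systems (no preconditioning, unlike production scattering codes):

```python
import numpy as np

def gmres_solve(A, b, tol=1e-12):
    """Full GMRES: Arnoldi with modified Gram-Schmidt builds Q and the
    Hessenberg H with A Q_k = Q_{k+1} H; the residual ||b - A x|| is then
    minimized by a (k+1) x k least-squares problem in the Krylov basis."""
    n = b.size
    Q = np.zeros((n, n + 1), dtype=complex)
    H = np.zeros((n + 1, n), dtype=complex)
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    k = n
    for j in range(n):
        v = A @ Q[:, j]
        for i in range(j + 1):                  # orthogonalize against basis
            H[i, j] = np.vdot(Q[:, i], v)
            v = v - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if abs(H[j + 1, j]) < tol:              # happy breakdown: exact solve
            k = j + 1
            break
        Q[:, j + 1] = v / H[j + 1, j]
    e1 = np.zeros(k + 1, dtype=complex)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
    return Q[:, :k] @ y
```

The cost per iteration is one matrix-vector product plus orthogonalization, which is why iteration counts (set by the system's dimension, energy, and accuracy criterion, as the abstract notes) determine whether GMRes beats a direct solver.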
Directory of Open Access Journals (Sweden)
Chunfeng Liu
2013-01-01
Full Text Available The paper presents a novel hybrid genetic algorithm (HGA) for a deterministic scheduling problem where multiple jobs with arbitrary precedence constraints are processed on multiple unrelated parallel machines. The objective is to minimize total tardiness, since delays of the jobs may lead to punishment costs or cancellation of orders by the clients in many situations. A priority rule-based heuristic algorithm, which schedules a prior job on a prior machine according to the priority rule at each iteration, is suggested and embedded in the HGA to produce initial feasible schedules that can be improved in further stages. Computational experiments are conducted to show that the proposed HGA performs well with respect to accuracy and efficiency of solution for small-sized problems and gets better results than the conventional genetic algorithm within the same runtime for large-sized problems.
Directory of Open Access Journals (Sweden)
Salazar-Hornig E.
2013-01-01
Full Text Available A genetic algorithm for the parallel shop with identical machines scheduling problem with sequence-dependent setup times and makespan (Cmax) minimization is presented. The genetic algorithm is compared with other heuristic methods using a randomly generated test problem set. A local improvement procedure is introduced into the evolutionary process of the genetic algorithm, which significantly improves its performance.
A surface-based DNA algorithm for the minimal vertex cover problem
Institute of Scientific and Technical Information of China (English)
无
2003-01-01
DNA computing was proposed for solving a class of intractable computational problems, of which the computing time will grow exponentially with the problem size. Up to now, many achievements have been made to improve its performance and increase its reliability. It has been shown many times that the surface-based DNA computing technique has very low error rate, but the technique has not been widely used in the DNA computing algorithms design. In this paper, a surface-based DNA computing algorithm for minimal vertex cover problem, a problem well-known for its exponential difficulty, is introduced. This work provides further evidence for the ability of surface-based DNA computing in solving NP-complete problems.
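The DNA approach works by generating all candidate vertex subsets in parallel and filtering out non-covers. A silicon analogue of that generate-and-test scheme makes the problem (and its exponential cost, which motivates DNA parallelism) concrete; this brute-force sketch is illustrative, not the paper's biochemical protocol:

```python
from itertools import combinations

def minimal_vertex_cover(vertices, edges):
    """Exhaustively try vertex subsets in order of increasing size and
    return the first one that touches every edge: a sequential mirror of
    the DNA generate-and-filter strategy."""
    for size in range(len(vertices) + 1):
        for subset in combinations(vertices, size):
            s = set(subset)
            if all(u in s or v in s for u, v in edges):
                return s
    return set(vertices)
```

On a triangle every cover needs two vertices, while on a path 0-1-2 the middle vertex alone suffices.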
A Compton scattering image reconstruction algorithm based on total variation minimization
Institute of Scientific and Technical Information of China (English)
Li Shou-Peng; Wang Lin-Yuan; Yan Bin; Li Lei; Liu Yong-Jun
2012-01-01
Compton scattering imaging is a novel radiation imaging method using scattered photons. Its main characteristic is that the detectors do not have to be on the opposite side of the source, thus avoiding the rotation process. The reconstruction problem of Compton scattering imaging is the inverse problem of solving for electron densities from nonlinear equations, which is ill-posed. This means the solution exhibits instability and sensitivity to noise or erroneous measurements. Using the theory for reconstruction of sparse images, a reconstruction algorithm based on total variation minimization is proposed. The reconstruction problem is described as an optimization problem with a nonlinear data-consistency constraint. The simulated results show that the proposed algorithm can reduce reconstruction error and improve image quality, especially when there are not enough measurements.
Approximate k-NN delta test minimization method using genetic algorithms: Application to time series
Mateo, F; Gadea, Rafael; Sovilj, Dusan
2010-01-01
In many real world problems, the existence of irrelevant input variables (features) hinders the predictive quality of the models used to estimate the output variables. In particular, time series prediction often involves building large regressors of artificial variables that can contain irrelevant or misleading information. Many techniques have arisen to confront the problem of accurate variable selection, including both local and global search strategies. This paper presents a method based on genetic algorithms that intends to find a global optimum set of input variables that minimize the Delta Test criterion. The execution speed has been enhanced by substituting the exact nearest neighbor computation by its approximate version. The problems of scaling and projection of variables have been addressed. The developed method works in conjunction with MATLAB's Genetic Algorithm and Direct Search Toolbox. The goodness of the proposed methodology has been evaluated on several popular time series examples, and also ...
Heuristic algorithm for RCPSP with the objective of minimizing activities' cost
Institute of Scientific and Technical Information of China (English)
Liu Zhenyuan; Wang Hongwei
2006-01-01
The resource-constrained project scheduling problem (RCPSP) is an important problem in research on project management. However, little attention has been paid to the objective of minimizing activities' cost under resource constraints, which is a critical sub-problem in partner selection for construction supply chain management, because the capacities of the renewable resources supplied by the partners affect the project scheduling. Its mathematical model is presented first, and analysis of the characteristics of the problem shows that the objective function is non-regular and the problem is NP-complete, after which the basic idea for the solution is clarified. Based on a definition of the preposing activity cost matrix, a heuristic algorithm is brought forward. Analyses of the complexity of the heuristic and the results of numerical studies show that the heuristic algorithm is feasible and relatively effective.
CONVERGENCE PROPERTIES OF MULTI-DIRECTIONAL PARALLEL ALGORITHMS FOR UNCONSTRAINED MINIMIZATION
Institute of Scientific and Technical Information of China (English)
Cheng-xian Xu; Yue-ting Yang
2005-01-01
Convergence properties of a class of multi-directional parallel quasi-Newton algorithms for the solution of unconstrained minimization problems are studied in this paper. At each iteration these algorithms generate several different quasi-Newton directions, and then apply line searches to determine step lengths along each direction simultaneously. The next iterate is obtained among these trial points by choosing the lowest point in the sense of function reduction. Different quasi-Newton updating formulas from the Broyden family are used to generate a main sequence of Hessian matrix approximations. Based on the BFGS and the modified BFGS updating formulas, the global and superlinear convergence results are proved. It is observed that all the quasi-Newton directions asymptotically approach the Newton direction in both direction and length when the iterate sequence converges to a local minimum of the objective function, and hence the result of superlinear convergence follows.
Sharper lower bounds on the performance of the empirical risk minimization algorithm
Lecué, Guillaume; 10.3150/09-BEJ225
2011-01-01
We present an argument based on the multidimensional and the uniform central limit theorems, proving that, under some geometrical assumptions between the target function $T$ and the learning class $F$, the excess risk of the empirical risk minimization algorithm is lower bounded by \[\frac{\mathbb{E}\sup_{q\in Q}G_q}{\sqrt{n}}\delta,\] where $(G_q)_{q\in Q}$ is a canonical Gaussian process associated with $Q$ (a well chosen subset of $F$) and $\delta$ is a parameter governing the oscillations of the empirical excess risk function over a small ball in $F$.
Minimizing makespan for a no-wait flowshop using genetic algorithm
Indian Academy of Sciences (India)
Imran Ali Chaudhry; Abdul Munem Khan
2012-12-01
This paper addresses the minimization of makespan, or total completion time, for the n-job, m-machine no-wait flowshop problem (NW-FSSP). A spreadsheet-based general-purpose genetic algorithm is proposed for the NW-FSSP. The example analysis shows that the proposed approach produces results comparable to previous approaches cited in the literature. Additionally, it is demonstrated that the current application is a general-purpose approach whereby the objective function can be tailored without any change in the logic of the GA routine.
A Modified PSO Algorithm for Minimizing the Total Costs of Resources in MRCPSP
Directory of Open Access Journals (Sweden)
Mohammad Khalilzadeh
2012-01-01
Full Text Available We introduce a multimode resource-constrained project scheduling problem with finish-to-start precedence relations among project activities, considering renewable and nonrenewable resource costs. We assume that renewable resources are rented and are not available in all periods of time of the project. In other words, there is a mandated ready date as well as a due date for each renewable resource type so that no resource is used before its ready date. However, the resources are permitted to be used after their due dates by paying penalty costs. The objective is to minimize the total costs of both renewable and nonrenewable resource usage. This problem is called multimode resource-constrained project scheduling problem with minimization of total weighted resource tardiness penalty cost (MRCPSP-TWRTPC), where, for each activity, both renewable and nonrenewable resource requirements depend on activity mode. For this problem, we present a metaheuristic algorithm based on a modified Particle Swarm Optimization (PSO) approach introduced by Tchomté and Gourgand, which uses a modified rule for the displacement of particles. We present a prioritization rule for activities and several improvement and local search methods. Experimental results reveal the effectiveness and efficiency of the proposed algorithm for the problem in question.
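The particle-displacement loop that PSO-based schedulers like the one above build on can be sketched as follows. This is a minimal, generic PSO on a toy continuous function, not the modified displacement rule of Tchomté and Gourgand or the MRCPSP encoding; all parameter values and names are illustrative.

```python
import random

def pso(f, dim, n_particles=20, iters=200, seed=0, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization sketch for continuous minimization."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: global minimum 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

A combinatorial variant for scheduling would replace the continuous positions with an encoding of activity priorities, which is where the modified displacement rule comes in.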
A Heuristic Scheduling Algorithm for Minimizing Makespan and Idle Time in a Nagare Cell
Directory of Open Access Journals (Sweden)
M. Muthukumaran
2012-01-01
Full Text Available Adopting a focused factory is a powerful approach for today's manufacturing enterprise. This paper introduces the basic manufacturing concept for a struggling manufacturer with limited conventional resources, providing an alternative solution to cell scheduling by implementing the technique of the Nagare cell. The Nagare cell is a Japanese concept with more objectives than the cellular manufacturing system. It is a combination of manual and semiautomatic machine layouts as cells, which gives maximum output flexibility for all kinds of low-to-medium- and medium-to-high-volume production. The solution adopted is to create a dedicated group of conventional machines, all but one of which are already available on the shop floor. This paper focuses on the development of a heuristic scheduling algorithm in a step-by-step manner. The algorithm states that the summation of the processing times of all products on each machine is calculated first, and then the sums of processing times are sorted by the shortest-processing-time rule to obtain the assignment schedule. Based on the assignment schedule, the Nagare cell layout is arranged for processing the products. In addition, the algorithm provides steps to determine the product ready time, machine idle time, and product idle time. The Gantt chart, the experimental analysis, and the comparative results are illustrated with five (1×8 to 5×8) scheduling problems. Finally, the objective of minimizing makespan and idle time with greater customer satisfaction is studied.
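The summation and SPT-sorting steps described in the abstract can be sketched as follows, under one literal reading of the rule (sum the processing times of all products on each machine, then sort the machines by the shortest-processing-time rule); the data and names are illustrative, not from the paper.

```python
# processing_time[product] = [time on M1, time on M2, time on M3]
# (illustrative numbers, not from the paper)
processing_time = {
    "P1": [2, 5, 4],
    "P2": [1, 3, 2],
    "P3": [3, 1, 2],
}
machines = ["M1", "M2", "M3"]

# Step 1: summation of the processing times of all products on each machine.
machine_totals = {
    m: sum(times[i] for times in processing_time.values())
    for i, m in enumerate(machines)
}

# Step 2: sort the sums by the shortest-processing-time rule to obtain
# the assignment schedule used to arrange the Nagare cell layout.
assignment_schedule = sorted(machines, key=machine_totals.get)
```

With these numbers the totals are M1: 6, M2: 9, M3: 8, so the assignment schedule orders the machines M1, M3, M2.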
Paksi, A. B. N.; Ma'ruf, A.
2016-02-01
In general, both machines and human resources are needed for processing a job on the production floor. However, most classical scheduling problems have ignored the possible constraint caused by the availability of workers and have considered only machines as a limited resource. In addition, along with production technology development, routing flexibility appears as a consequence of high product variety and medium demand for each product. Routing flexibility arises from the capability of machines to offer more than one machining process. This paper presents a method to address a scheduling problem constrained by both machines and workers, considering routing flexibility. Scheduling in a dual-resource constrained shop is categorized as an NP-hard problem that requires long computation times. A meta-heuristic approach based on a Genetic Algorithm is used due to its practical implementation in industry. The developed Genetic Algorithm uses an indirect chromosome representation and a procedure to transform a chromosome into a Gantt chart. Genetic operators, namely selection, elitism, crossover, and mutation, are developed to search for the best fitness value until a steady-state condition is achieved. A case study in a manufacturing SME is used to minimize tardiness as the objective function. The algorithm achieved a 25.6% reduction in tardiness, equal to 43.5 hours.
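A minimal GA of the kind described, with tournament selection, elitism, order crossover, and swap mutation minimizing total tardiness, can be sketched as follows. It uses a direct permutation chromosome on a toy single-machine instance rather than the paper's indirect representation and Gantt-chart decoding; the job data are invented for illustration.

```python
import random

def total_tardiness(seq, proc, due):
    """Total tardiness of a job sequence on a single machine."""
    t, tard = 0, 0
    for j in seq:
        t += proc[j]
        tard += max(0, t - due[j])
    return tard

def order_crossover(a, b, rng):
    """OX: copy a slice from parent a, fill the rest in parent b's order."""
    n = len(a)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j] = a[i:j]
    fill = [g for g in b if g not in child]
    for idx in range(n):
        if child[idx] is None:
            child[idx] = fill.pop(0)
    return child

def ga_schedule(proc, due, pop_size=20, gens=50, elite=2, pmut=0.3, seed=1):
    rng = random.Random(seed)
    jobs = list(proc)
    fit = lambda s: total_tardiness(s, proc, due)
    pop = [rng.sample(jobs, len(jobs)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fit)
        nxt = [p[:] for p in pop[:elite]]                   # elitism
        while len(nxt) < pop_size:
            a, b = (min(rng.sample(pop, 3), key=fit) for _ in range(2))  # tournament selection
            child = order_crossover(a, b, rng)              # crossover
            if rng.random() < pmut:                         # swap mutation
                i, j = rng.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
    best = min(pop, key=fit)
    return best, fit(best)

proc = {"A": 3, "B": 2, "C": 4, "D": 1}   # processing times (invented)
due = {"A": 4, "B": 3, "C": 12, "D": 2}   # due dates (invented)
best_seq, best_tardiness = ga_schedule(proc, due)
```

For this instance the optimal sequence is D, B, A, C with total tardiness 2, which the seeded GA reaches quickly on such a small search space.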
Ahmed, Qasim Zeeshan
2015-02-01
In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system. The detector is designed to minimize the symbol-error-rate (SER) of the system. The SER surface is non-linear and may have multiple minima; therefore, designing an SER detector for cooperative communications becomes an optimization problem. Evolutionary algorithms have the capability to find the global minimum, so evolutionary algorithms such as particle swarm optimization (PSO) and differential evolution (DE) are exploited to solve this optimization problem. The performance of the proposed detectors is compared with conventional detectors such as the maximum likelihood (ML) and minimum mean square error (MMSE) detectors. In the simulation results, it can be observed that the SER performance of the proposed detectors is less than 2 dB away from that of the ML detector. Significant improvement in SER performance is also observed when comparing with the MMSE detector. The computational complexity of the proposed detector is much lower than that of the ML and MMSE algorithms. Moreover, in contrast to the ML and MMSE detectors, the computational complexity of the proposed detectors increases linearly with the number of relays.
Dimensional optimization of a minimally invasive surgical robot system based on NSGA-II algorithm
Directory of Open Access Journals (Sweden)
Wei Wang
2015-02-01
Full Text Available Based on the proposed end-effector structure of a laparoscopic minimally invasive surgical manipulator, a dimensional optimization method is investigated to enlarge the motion range of the mechanical arm in the specific target area and reduce the collision among the mechanical arms simultaneously. Both the length of the kinematics links and the overall size of the integrated system are considered in the optimization process. The NSGA-II algorithm oriented to the multi-objective optimization is utilized to calculate the Pareto solution set of the objective function. Finally, the dependence of the evaluation indexes is analysed to filter the non-inferior set, which guarantees the selection of the optimization solution.
Parameter estimation for VLE calculation by global minimization: the genetic algorithm
Directory of Open Access Journals (Sweden)
V. H. Alvarez
2008-06-01
Full Text Available Vapor-liquid equilibrium calculations require global minimization of deviations in pressure and gas phase compositions. In this work, two versions of a stochastic global optimization technique, the genetic algorithm, the freeware MyGA program, and the modified mMyGA program, are evaluated and compared for vapor-liquid equilibrium problems. Reliable experimental data from the literature on vapor liquid equilibrium for water + formic acid, tert-butanol + 1-butanol and water + 1,2-ethanediol systems were correlated using the Wilson equation for activity coefficients, considering acid association in both liquid and vapor phases. The results show that the modified mMyGA is generally more accurate and reliable than the original MyGA. Next, the mMyGA program is applied to the CO2 + ethanol and CO2 + 1-n-butyl-3-methylimidazolium hexafluorophosphate systems, and the results show a good fit for the data.
The Surface Extraction from TIN based Search-space Minimization (SETSM) algorithm
Noh, Myoung-Jong; Howat, Ian M.
2017-07-01
Digital Elevation Models (DEMs) provide critical information for a wide range of scientific, navigational and engineering activities. Submeter resolution, stereoscopic satellite imagery with high geometric and radiometric quality, and wide spatial coverage are becoming increasingly accessible for generating stereo-photogrammetric DEMs. However, low contrast and repeatedly-textured surfaces, such as snow and glacial ice at high latitudes, and mountainous terrains challenge existing stereo-photogrammetric DEM generation techniques, particularly without a-priori information such as existing seed DEMs or the manual setting of terrain-specific parameters. To utilize these data for fully-automatic DEM extraction at a large scale, we developed the Surface Extraction from TIN-based Search-space Minimization (SETSM) algorithm. SETSM is fully automatic (i.e. no search parameter settings are needed) and uses only the sensor model Rational Polynomial Coefficients (RPCs). SETSM adopts a hierarchical, combined image- and object-space matching strategy utilizing weighted normalized cross-correlation with both original distorted and geometrically corrected images for overcoming ambiguities caused by foreshortening and occlusions. In addition, SETSM optimally minimizes search-spaces to extract optimal matches over problematic terrains by iteratively updating object surfaces within a Triangulated Irregular Network, and utilizes a geometric-constrained blunder and outlier detection in object space. We prove the ability of SETSM to mitigate typical stereo-photogrammetric matching problems over a range of challenging terrains. SETSM is the primary DEM generation software for the US National Science Foundation's ArcticDEM project.
A heuristic algorithm for scheduling in a flow shop environment to minimize makespan
Directory of Open Access Journals (Sweden)
Arun Gupta
2015-04-01
Full Text Available Scheduling ‘n’ jobs on ‘m’ machines in a flow shop is an NP-hard problem and occupies a prominent place in the area of production scheduling. The essence of any such scheduling algorithm is to minimize the makespan in a flowshop environment. In this paper an attempt has been made to develop a heuristic algorithm, based on the reduced weightage of machines at each stage, to generate different combinations of ‘m-1’ sequences. The proposed heuristic has been tested on several benchmark problems of Taillard (1993) [Taillard, E. (1993). Benchmarks for basic scheduling problems. European Journal of Operational Research, 64, 278-285.]. The performance of the proposed heuristic is compared with three well-known heuristics, namely Palmer’s heuristic, Campbell’s CDS heuristic, and Dannenbring’s rapid access heuristic. Results are evaluated against the best-known upper-bound solutions and found to be better than those of the above three.
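All such flow-shop heuristics score candidate sequences by their makespan, which for a permutation flow shop follows a simple recursion: a job starts on machine k once both the previous job on machine k and the same job on machine k-1 have finished. A sketch with invented data:

```python
def makespan(sequence, proc):
    """Makespan of a permutation flow shop.

    proc[j][k] = processing time of job j on machine k.
    C[k] holds the completion time of the most recent job on machine k:
    the current job starts on machine k at max(C[k], C[k-1]).
    """
    m = len(next(iter(proc.values())))
    C = [0] * m
    for j in sequence:
        for k in range(m):
            start = max(C[k], C[k - 1] if k else 0)
            C[k] = start + proc[j][k]
    return C[-1]

# Two jobs, two machines (illustrative data):
proc = {"J1": [3, 2], "J2": [1, 4]}
```

Here sequencing J2 first gives makespan 7 versus 9 for J1 first, which is also what Johnson's rule would choose on this two-machine instance.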
Directory of Open Access Journals (Sweden)
N.H. Shamsudin
2014-08-01
Full Text Available This study presents the implementation of an Improved Genetic Algorithm (IGA) to minimize power losses in the distribution network by improving the selection operator with respect to the least losses generated by the algorithm. The major part of the power losses in an electrical power network is contributed by the distribution system. Thus, restructuring the topology of the distribution network configuration from its primary feeders should be considered. Switch identification under different probability cases for reconfiguration purposes is comprehensively implemented through the proposed algorithm. The proposed algorithm was tested on the 33-bus radial system and found to give better results in minimizing power losses and improving the voltage profile.
A new extension algorithm for cubic B-splines based on minimal strain energy
Institute of Scientific and Technical Information of China (English)
MO Guo-liang; ZHAO Ya-nan
2006-01-01
Extension of a B-spline curve or surface is a useful function in a CAD system. This paper presents an algorithm for extending cubic B-spline curves or surfaces to one or more target points. To keep the extension curve segment GC2-continuous with the original one, a family of cubic polynomial interpolation curves can be constructed. One curve is chosen as the solution from a sub-class of such a family by setting one GC2 parameter to zero and determining the second GC2 parameter by minimizing the strain energy. To simplify the final curve representation, the extension segment is reparameterized to achieve C2-continuity with the given B-spline curve, and then knot removal is performed. As a result, a sub-optimized solution subject to the given constraints and criteria is obtained. Additionally, the new control points of the extension B-spline segment can be determined by solving lower triangular linear equations. Some computational examples comparing our method with other methods are given.
Novel torque ripple minimization algorithm for direct torque control of induction motor drive
Institute of Scientific and Technical Information of China (English)
LONG Bo; GUO Gui-fang; HAO Xiao-hong; LI Xiao-ning
2009-01-01
To elucidate the causes of the notable torque and flux ripple in the steady state of conventional direct torque control (DTC) of induction machines, the factors influencing torque variation are examined. A new torque ripple minimization algorithm is proposed. The novel method eliminates the torque ripple by imposing the required stator voltage vector in each control cycle. The M- and T-axis components of the stator voltage are obtained from the stator flux error and the expected incremental value of the torque at every sampling time. The maximum allowed angle of rotation is obtained. Experimental results showed that the proposed method, combined with space vector pulse width modulation (SVPWM), could be implemented in most existing digital drive controllers, offering high performance in both steady and transient states of induction drives over the full speed range. The results of the present work imply that torque fluctuation can be eliminated by imposing the proper stator voltage, and that the proposed scheme can not only maintain a constant switching frequency for the inverter, but also solve the heating problem and current harmonics of traditional induction motor drives.
The multi-motion-overlap algorithms for minimizing the time between successive scans of wafer stage
Institute of Scientific and Technical Information of China (English)
Pan Haihong; Chen Lin; Li Xiaoqing; Zhou Yunfei
2008-01-01
In order to optimize the transitional time between successive exposure scans for a step-and-scan lithography tool and improve productivity in wafer production, the motion trajectory planning of the wafer stage along the scanning direction was investigated. The motions of the wafer stage were divided into two logical moves (i.e., step-move and scan-move), and the multi-motion-overlap algorithms (MMOA) were presented for optimizing the transitional time between successive exposure scans. The conventional motion planning method, the Hazehon method, and the MMOA were analyzed theoretically and simulated using MATLAB under four different exposure field sizes. The results show that the total time between two successive scans consumed by the MMOA is reduced by 4.82%, 2.62%, 3.06% and 3.96% compared with the conventional motion planning method, and by 2.58%, 0.76%, 1.63% and 2.92% compared with the Hazehon method, respectively. The theoretical analyses and simulation results show that the MMOA can effectively minimize the transitional step time between successive exposure scans and therefore increase wafer fabrication productivity.
Indian Academy of Sciences (India)
Subhajit Nandy; Pinaki Chaudhury; S P Bhattacharyya
2004-08-01
A genetic algorithm-based recipe involving minimization of the Rayleigh quotient is proposed for the sequential extraction of eigenvalues and eigenvectors of a real symmetric matrix with and without basis optimization. Important features of the method are analysed, and possible directions of development suggested.
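The quantity being minimized here is the Rayleigh quotient R(x) = (x'Ax)/(x'x), whose minimum over nonzero x is the smallest eigenvalue of the symmetric matrix A. The sketch below uses a simple seeded random search as a stand-in for the paper's genetic-algorithm machinery, on a 2x2 matrix with known eigenvalues 1 and 3:

```python
import random

def rayleigh_quotient(A, x):
    """R(x) = (x' A x) / (x' x) for a symmetric matrix A (list of lists)."""
    Ax = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]
    num = sum(x[i] * Ax[i] for i in range(len(x)))
    den = sum(xi * xi for xi in x)
    return num / den

def minimize_rq(A, iters=5000, seed=0):
    """Seeded random search with a shrinking step, standing in for a GA."""
    rng = random.Random(seed)
    n = len(A)
    best_x = [rng.uniform(-1, 1) for _ in range(n)]
    best = rayleigh_quotient(A, best_x)
    step = 1.0
    for _ in range(iters):
        cand = [xi + step * rng.uniform(-1, 1) for xi in best_x]
        if any(cand):  # avoid the zero vector
            val = rayleigh_quotient(A, cand)
            if val < best:
                best, best_x = val, cand
        step *= 0.999
    return best, best_x

# Symmetric matrix with eigenvalues 1 and 3.
A = [[2.0, 1.0], [1.0, 2.0]]
lam, vec = minimize_rq(A)
```

Since R(x) is bounded below by the smallest eigenvalue, the search value can only approach 1 from above; subsequent eigenpairs would be extracted by restricting the search to the orthogonal complement of the eigenvectors found so far, as in the sequential recipe the abstract describes.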
Pallez, Denis; Baccino, Thierry; Dumercy, Laurent
2008-01-01
In this paper, we describe a new algorithm that combines an eye-tracker with Interactive Evolutionary Computation in order to minimize user fatigue during the evaluation process. The approach is then applied to the Interactive One-Max optimization problem.
Directory of Open Access Journals (Sweden)
Wei-Tzer Huang
2015-12-01
Full Text Available This study aimed to minimize energy losses in traditional distribution networks and microgrids through a network reconfiguration and phase balancing approach. To address this problem, an algorithm composed of a multi-objective function and operation constraints is proposed. Network connection matrices based on graph theory and the backward/forward sweep method are used to analyze power flow. An energy-loss minimization approach is developed for network reconfiguration and phase balancing, and the particle swarm optimization (PSO) algorithm is adopted to solve this optimal combination problem. The proposed approach is tested on the IEEE 37-bus test system and the first outdoor microgrid test bed established by the Institute of Nuclear Energy Research (INER) in Taiwan. Simulation results demonstrate that the proposed two-stage approach can be applied in network reconfiguration to minimize energy loss.
A Simplicial Algorithm for Concave Minimization and Its Performance as a Heuristic Tool
Kuno, Takahito; Shiguro, Yoshiyuki
2007-01-01
In this paper, we develop a kind of branch-and-bound algorithm for solving concave minimization problems. We show that the algorithm converges to an optimal solution of this multiextremal global optimization problem, and that it generates a high-quality heuristic solution even if it is forced to terminate. Therefore, the algorithm can be used in two ways, as an exact algorithm and as a heuristic tool. We also report some numerical results of a comparison with an existing algorithm, and show the per...
Institute of Scientific and Technical Information of China (English)
Li Kai; Yang Shanlin
2008-01-01
A class of nonidentical parallel machine scheduling problems is considered in which the goal is to minimize the total weighted completion time. Models and relaxations are collected. Most of these problems are NP-hard in the strong sense, or open problems; therefore approximation algorithms are studied. The review reveals that there exist some potential areas worthy of further research.
Dimensional optimization of a minimally invasive surgical robot system based on NSGA-II algorithm
National Research Council Canada - National Science Library
Wang, Wei; Wang, Weidong; Dong, Wei; Yu, Hongjian; Yan, Zhiyuan; Du, Zhijiang
2015-01-01
Based on the proposed end-effector structure of a laparoscopic minimally invasive surgical manipulator, a dimensional optimization method is investigated to enlarge the motion range of the mechanical...
A Review of Fast L1-Minimization Algorithms for Robust Face Recognition
2010-07-01
processing and optimization communities in the last five years or so. In CS theory ∗This work was partially supported by NSF IIS 08-49292, NSF ECCS 07... good approximate solutions. The estimation error of Homotopy is slightly higher than that of the other four algorithms. 3. In terms of speed, L1LS and... linearly with the sparsity ratio, while the other algorithms are relatively unaffected. Thus, Homotopy is more suitable for scenarios where the unknown
Maximal use of minimal libraries through the adaptive substituent reordering algorithm.
Liang, Fan; Feng, Xiao-jiang; Lowry, Michael; Rabitz, Herschel
2005-03-31
This paper describes an adaptive algorithm for interpolation over a library of molecules subjected to synthesis and property assaying. Starting with a coarse sampling of the library compounds, the algorithm finds the optimal substituent orderings on all of the functionalized scaffold sites to allow for accurate property interpolation over all remaining compounds in the full library space. A previous paper introduced the concept of substituent reordering and a smoothness-based criterion to search for optimal orderings (Shenvi, N.; Geremia, J. M.; Rabitz, H. J. Phys. Chem. A 2003, 107, 2066). Here, we propose a data-driven root-mean-squared (RMS) criterion and a combined RMS/smoothness criterion as alternative methods for the discovery of optimal substituent orderings. Error propagation from the property measurements of the sampled compounds is determined to provide confidence intervals on the interpolated molecular property values, and a substituent rescaling technique is introduced to manage poorly designed or sampled libraries. Finally, various factors that can influence the applicability and interpolation quality of the algorithm are explored. An adaptive methodology is proposed to iteratively and efficiently use laboratory experiments to optimize these algorithmic factors, so that the accuracy of property predictions is maximized. The enhanced algorithm is tested on copolymer and transition metal complex libraries, and the results demonstrate the capability of the algorithm to accurately interpolate various properties of both molecular libraries.
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low-data-rate scanners in Positron Emission Tomography (PET) [5, 7, 10], and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed, based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.
A new efficient algorithm generating all minimal S-T cut-sets in a graph-modeled network
Malinowski, Jacek
2016-06-01
A new algorithm finding all minimal s-t cut-sets in a graph-modeled network with failing links and nodes is presented. It is based on the analysis of the tree of acyclic s-t paths connecting a given pair of nodes in the considered structure. The construction of such a tree is required by many existing algorithms for s-t cut-set generation in order to eliminate "stub" edges or subgraphs through which no acyclic path passes. The algorithm operates on the acyclic paths tree alone, i.e. no other analysis of the network's topology is necessary. It can be applied to both directed and undirected graphs, as well as partly directed ones. It is worth noting that the cut-sets can be composed of both link and node failures, while many known algorithms do not take nodes into account, which is quite restrictive from the practical point of view. The developed cut-set generation technique makes the algorithm significantly faster than most of the previous methods, as shown by the experiments.
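As a point of reference for what a minimal s-t cut-set is, the sketch below enumerates them by brute force on a tiny undirected graph: every edge subset whose removal disconnects s from t and that contains no smaller disconnecting subset. This exponential baseline is only illustrative; it is not the paper's path-tree algorithm, and for simplicity it considers edge failures only.

```python
from itertools import combinations

def connected(nodes, edges, s, t):
    """Is t reachable from s using the given undirected edges?"""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def minimal_st_cutsets(nodes, edges, s, t):
    """Brute-force enumeration, smallest subsets first: a subset is kept
    only if it disconnects s from t and no previously found (smaller)
    cut-set is strictly contained in it."""
    cuts = []
    for r in range(1, len(edges) + 1):
        for sub in combinations(edges, r):
            rest = [e for e in edges if e not in sub]
            if not connected(nodes, rest, s, t):
                if not any(set(c) < set(sub) for c in cuts):
                    cuts.append(frozenset(sub))
    return cuts

# Diamond graph: two disjoint s-t paths, s-a-t and s-b-t.
nodes = ["s", "a", "b", "t"]
edges = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")]
cutsets = minimal_st_cutsets(nodes, edges, "s", "t")
```

For this diamond graph there are exactly four minimal cut-sets, each taking one edge from each of the two paths.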
Sriram, Vinay K; Montgomery, Doug
2017-07-01
The Internet is subject to attacks due to vulnerabilities in its routing protocols. One proposed approach to attain greater security is to cryptographically protect network reachability announcements exchanged between Border Gateway Protocol (BGP) routers. This study proposes and evaluates the performance and efficiency of various optimization algorithms for validation of digitally signed BGP updates. In particular, this investigation focuses on the BGPSEC (BGP with SECurity extensions) protocol, currently under consideration for standardization in the Internet Engineering Task Force. We analyze three basic BGPSEC update processing algorithms: Unoptimized, Cache Common Segments (CCS) optimization, and Best Path Only (BPO) optimization. We further propose and study cache management schemes to be used in conjunction with the CCS and BPO algorithms. The performance metrics used in the analyses are: (1) routing table convergence time after BGPSEC peering reset or router reboot events and (2) peak-second signature verification workload. Both analytical modeling and detailed trace-driven simulation were performed. Results show that the BPO algorithm is 330% to 628% faster than the unoptimized algorithm for routing table convergence in a typical Internet core-facing provider edge router.
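The idea behind the CCS optimization, verifying each signed path segment once and reusing the result across updates that share segments, can be sketched with a hypothetical memoizing wrapper. The class, method names, and stub verifier below are invented for illustration and do not reflect a real BGPSEC implementation:

```python
class SegmentVerifier:
    """Sketch of the Cache Common Segments (CCS) idea: memoize the result
    of verifying each signed path segment so that segments shared by many
    BGPSEC updates are cryptographically checked only once.
    `crypto_verify` is a stand-in for a real signature check."""

    def __init__(self, crypto_verify):
        self.crypto_verify = crypto_verify
        self.cache = {}
        self.crypto_calls = 0  # counts actual (expensive) verifications

    def verify_segment(self, segment):
        if segment not in self.cache:
            self.crypto_calls += 1
            self.cache[segment] = self.crypto_verify(segment)
        return self.cache[segment]

    def verify_update(self, signed_path):
        # An update is valid only if every segment's signature verifies.
        return all(self.verify_segment(seg) for seg in signed_path)

# Two updates sharing a common prefix of signed segments: the second
# update triggers only one new cryptographic verification.
v = SegmentVerifier(lambda seg: True)
v.verify_update(("AS65001", "AS65002", "AS65003"))
v.verify_update(("AS65001", "AS65002", "AS65004"))
```

After both updates, only four cryptographic verifications have run instead of six; the cache management schemes studied in the paper then bound how many such cached results are retained.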
Azadnia, Amir Hossein; Taheri, Shahrooz; Ghadimi, Pezhman; Saman, Muhamad Zameri Mat; Wong, Kuan Yew
2013-01-01
One of the cost-intensive issues in managing warehouses is the order picking problem which deals with the retrieval of items from their storage locations in order to meet customer requests. Many solution approaches have been proposed in order to minimize traveling distance in the process of order picking. However, in practice, customer orders have to be completed by certain due dates in order to avoid tardiness which is neglected in most of the related scientific papers. Consequently, we proposed a novel solution approach in order to minimize tardiness which consists of four phases. First of all, weighted association rule mining has been used to calculate associations between orders with respect to their due date. Next, a batching model based on binary integer programming has been formulated to maximize the associations between orders within each batch. Subsequently, the order picking phase will come up which used a Genetic Algorithm integrated with the Traveling Salesman Problem in order to identify the most suitable travel path. Finally, the Genetic Algorithm has been applied for sequencing the constructed batches in order to minimize tardiness. Illustrative examples and comparisons are presented to demonstrate the proficiency and solution quality of the proposed approach.
Directory of Open Access Journals (Sweden)
Hadi Mokhtari
2013-01-01
Full Text Available In this paper, the problem of scheduling orders received by a manufacturer, with the objective of minimizing the maximum completion time of the orders, is formulated and an analytical solution approach is devised. At the beginning of a planning period, the manufacturer receives a number of orders from customers, each of which requires two processing stages. To minimize work-in-process inventories, a no-wait condition between the two operations of each order is imposed. It is then proved that the schedules obtained by minimizing machine idle time are equivalent to those obtained by minimizing maximum completion time. A concept termed "order pairing" is defined, and an algorithm for obtaining optimal order pairs, based on the symmetric assignment problem, is presented. Using the established order pairs, an upper bound is developed based on the contribution of each order pair to the total machine idle time. Twelve potential situations of order-pair sequencing for improving the upper bound are evaluated, and the upper-bound improvement is proved for each situation separately. Finally, a heuristic algorithm is developed based on these pair-improvement results, and a case study in the printing industry is investigated and analyzed to confirm its applicability.
A Fast and Accurate Algorithm for l1 Minimization Problems in Compressive Sampling (Preprint)
2013-01-22
performance of algorithms in terms of various error metrics, speed, and robustness to noise. All the experiments are performed in Matlab 7.11 on...
Energy Technology Data Exchange (ETDEWEB)
Filho, Faete J [ORNL]; Tolbert, Leon M [ORNL]; Ozpineci, Burak [ORNL]
2012-01-01
The work developed here proposes a methodology for calculating switching angles for varying DC sources in a multilevel cascaded H-bridge converter. In this approach the required fundamental is achieved, the lower harmonics are minimized, and the system can be implemented in real time with low memory requirements. A genetic algorithm (GA) is used as the stochastic search method to solve the set of equations in which the input voltages are the known variables and the switching angles are the unknowns. With the dataset generated by the GA, an artificial neural network (ANN) is trained to store the solutions without excessive memory storage requirements. The trained ANN then senses the voltage of each cell and produces the switching angles to regulate the fundamental at 120 V and to eliminate or minimize the low-order harmonics while operating in real time.
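The GA stage described above can be sketched as follows. This is a minimal, hypothetical real-coded GA assuming three equal DC sources and a simple selective-harmonic objective (hit a target fundamental, suppress the 5th and 7th harmonics); the target value, population size, and mutation width are illustrative choices, and the authors' actual implementation handles varying source voltages and feeds the GA output to an ANN.

```python
import math
import random

def ga_minimize(fitness, n_angles, pop=40, gens=200, seed=0):
    """Minimal real-coded GA: evolves sorted angle vectors in (0, pi/2)."""
    rng = random.Random(seed)
    def rand_ind():
        return sorted(rng.uniform(0.01, math.pi / 2 - 0.01) for _ in range(n_angles))
    population = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        elite = population[: pop // 2]          # elitism: keep the better half
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            # arithmetic crossover plus small Gaussian mutation
            child = sorted((x + y) / 2 + rng.gauss(0, 0.02) for x, y in zip(a, b))
            children.append([min(max(t, 0.01), math.pi / 2 - 0.01) for t in child])
        population = elite + children
    return min(population, key=fitness)

def harmonic(angles, k):
    # k-th harmonic amplitude (up to a constant) for equal DC sources
    return sum(math.cos(k * t) for t in angles)

target = 2.0  # assumed fundamental target, arbitrary units
fit = lambda th: (harmonic(th, 1) - target) ** 2 + harmonic(th, 5) ** 2 + harmonic(th, 7) ** 2
best = ga_minimize(fit, n_angles=3)
```

Elitism makes the best fitness non-increasing over generations, which is what makes the stochastic search usable for offline dataset generation.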
Bredies, Kristian
2009-01-01
We consider the task of computing an approximate minimizer of the sum of a smooth and a non-smooth convex functional, respectively, in Banach space. Motivated by the classical forward-backward splitting method for the subgradients in Hilbert space, we propose a generalization which involves the iterative solution of simpler subproblems. Descent and convergence properties of this new algorithm are studied. Furthermore, the results are applied to the minimization of Tikhonov-functionals associated with linear inverse problems and semi-norm penalization in Banach spaces. With the help of Bregman-Taylor-distance estimates, rates of convergence for the forward-backward splitting procedure are obtained. Examples which demonstrate the applicability are given, in particular, a generalization of the iterative soft-thresholding method by Daubechies, Defrise and De Mol to Banach spaces as well as total-variation-based image restoration in higher dimensions are presented.
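The forward-backward splitting idea can be illustrated in the finite-dimensional Hilbert-space special case, where the backward step for an l1 penalty is soft-thresholding — the iterative soft-thresholding scheme that the abstract generalizes to Banach spaces. The dimensions and regularization weight below are toy assumptions:

```python
import numpy as np

def ista(A, b, lam, step, iters=500):
    """Forward-backward splitting for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Forward: gradient step on the smooth term.
    Backward: proximal map of lam*||.||_1, i.e. soft-thresholding."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - b))                        # forward step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward step
    return x

# Toy sparse-recovery instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2  # step <= 1/L guarantees monotone descent
x_hat = ista(A, b, lam=0.05, step=step)
```

With the step size bounded by the inverse Lipschitz constant of the gradient, each iteration decreases the composite objective, which is the descent property the paper extends beyond Hilbert spaces.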
Pollakis, Emmanuel; Stańczak, Slawomir
2012-01-01
In this paper, we study the problem of reducing the energy consumption in a mobile communication network; we select the smallest set of active base stations that can preserve the quality of service (the minimum data rate) required by the users. In more detail, we start by posing this problem as an integer programming problem, the solution of which shows the optimal assignment (in the sense of minimizing the total energy consumption) between base stations and users. In particular, this solution shows which base stations can then be switched off or put in idle mode to save energy. However, solving this problem optimally is intractable in general, so in this study we develop a suboptimal approach that builds upon recent techniques that have been successfully applied to, among other problems, sparse signal reconstruction, portfolio optimization, statistical estimation, and error correction. More precisely, we relax the original integer programming problem as a minimization problem where the objective function is ...
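As a rough illustration of the combinatorial problem being relaxed, a greedy set-cover heuristic — not the authors' relaxation-based approach — that activates base stations until every user is served at the required rate might look like this; the coverage sets are hypothetical:

```python
def greedy_active_bs(coverage, users):
    """Greedy set-cover heuristic: repeatedly activate the base station that
    serves the most still-unserved users, switching the rest off."""
    active, unserved = [], set(users)
    while unserved:
        bs = max(coverage, key=lambda b: len(coverage[b] & unserved))
        if not coverage[bs] & unserved:
            raise ValueError("some users cannot be served by any base station")
        active.append(bs)
        unserved -= coverage[bs]
    return active

# Which users each base station can serve at the minimum data rate (assumed).
coverage = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}}
active = greedy_active_bs(coverage, [1, 2, 3, 4, 5])  # stations to leave on
```

Stations not in the returned list can be switched off or put in idle mode; the paper's convex relaxation targets the same assignment decision but with energy-weighted objectives.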
Eom, Jae-Boo; Hwang, Sang-Moon; Kim, Tae-Jong; Jeong, Weui-Bong; Kang, Beom-Soo
2001-05-01
Cogging torque is often a principal source of vibration and acoustic noise in high-precision spindle motor applications. In this paper, cogging torque is calculated analytically using the energy method with a Fourier series expansion. It is shown that cogging torque is effectively minimized by controlling the airgap permeance function through teeth-pairing design, and by controlling the flux density function through magnet arc design. For optimization, a genetic algorithm is applied to handle the trade-off effects of the design parameters. Results show that the proposed method can reduce the cogging torque effectively.
Directory of Open Access Journals (Sweden)
Gilberto Herrera-Ruíz
2013-03-01
A New Adaptive Self-Tuning Fourier Coefficients Algorithm for Periodic Torque Ripple Minimization in Permanent Magnet Synchronous Motors (PMSM). Torque ripple occurs in Permanent Magnet Synchronous Motors (PMSMs) due to the non-sinusoidal flux density distribution around the air-gap and variable magnetic reluctance of the air-gap due to the stator slots distribution. These torque ripples change periodically with rotor position and are apparent as speed variations, which degrade the PMSM drive performance, particularly at low speeds, because of low inertial filtering. In this paper, a new self-tuning algorithm is developed for determining the Fourier Series Controller coefficients with the aim of reducing the torque ripple in a PMSM, thus allowing for a smoother operation. This algorithm adjusts the controller parameters based on the component's harmonic distortion in the time domain of the compensation signal. Experimental evaluation is performed on a DSP-controlled PMSM evaluation platform. Test results obtained validate the effectiveness of the proposed self-tuning algorithm, with the Fourier series expansion scheme, in reducing the torque ripple.
Gómez-Espinosa, Alfonso; Hernández-Guzmán, Víctor M; Bandala-Sánchez, Manuel; Jiménez-Hernández, Hugo; Rivas-Araiza, Edgar A; Rodríguez-Reséndiz, Juvenal; Herrera-Ruíz, Gilberto
2013-03-19
A New Adaptive Self-Tuning Fourier Coefficients Algorithm for Periodic Torque Ripple Minimization in Permanent Magnet Synchronous Motors (PMSM) Torque ripple occurs in Permanent Magnet Synchronous Motors (PMSMs) due to the non-sinusoidal flux density distribution around the air-gap and variable magnetic reluctance of the air-gap due to the stator slots distribution. These torque ripples change periodically with rotor position and are apparent as speed variations, which degrade the PMSM drive performance, particularly at low speeds, because of low inertial filtering. In this paper, a new self-tuning algorithm is developed for determining the Fourier Series Controller coefficients with the aim of reducing the torque ripple in a PMSM, thus allowing for a smoother operation. This algorithm adjusts the controller parameters based on the component's harmonic distortion in time domain of the compensation signal. Experimental evaluation is performed on a DSP-controlled PMSM evaluation platform. Test results obtained validate the effectiveness of the proposed self-tuning algorithm, with the Fourier series expansion scheme, in reducing the torque ripple.
A Static Control Algorithm for Adaptive Beam String Structures Based on Minimal Displacement
Directory of Open Access Journals (Sweden)
Yanbin Shen
2013-01-01
The beam string structure (BSS) is a type of prestressed structure that has been widely used in large-span structures. An adaptive BSS is a typical smart structure that can optimize its own working status by controlling the length of active struts via a control device, which commonly consists of actuators in the struts and sensors on the beam. The key point of the control process is to determine the length adjustments of the actuators according to the data obtained by the preinstalled sensors. In this paper, a static control algorithm for adaptive BSS is presented for computing this adjustment. To begin with, an optimization model of an adaptive BSS with multiple active struts is established using a sensitivity analysis method. Next, a linear displacement control process is presented, and the adjustment values of the struts are calculated by a simulated annealing algorithm. A nonlinear iteration procedure is then used to calibrate the results of the linear calculation. Finally, an example of an adaptive BSS under different external loads is carried out to verify the feasibility and accuracy of the algorithm. The results also show that the adaptive BSS has much better adaptivity and capability than the noncontrolled BSS.
Institute of Scientific and Technical Information of China (English)
李进华; 王雪生; 邢元; 王树新; 李建民; 梁科
2016-01-01
To evaluate operation comfortability in master-slave robotic minimally invasive surgery (MIS), an objective function was built from two indices that determine operation comfortability, i.e., the center distance and the volume contact ratio. Two verifying experiments on the Phantom Desktop and MicroHand S were conducted. Experimental results show that the operation effect at the optimal relative location is better than that at a random location, which means that the objective function constructed in this paper is effective in optimizing operation comfortability.
Hash Dijkstra Algorithm for Approximate Minimal Spanning Tree
Institute of Scientific and Technical Information of China (English)
李玉鑑; 李厚君
2011-01-01
In order to overcome the low efficiency of the Dijkstra (DK) algorithm in constructing Minimal Spanning Trees (MST) for large-scale datasets, this paper uses Locality Sensitive Hashing (LSH) to design a fast approximate algorithm, namely the LSHDK algorithm, to build an MST in Euclidean space. The LSHDK algorithm achieves a faster speed with small error by reducing the computations required to search for nearest points. Computational experiments show that it runs faster than the DK algorithm on datasets of more than 50,000 points, while the resulting approximate MST has a very small error (0.00-0.05%) in low dimensions, and generally between 0.1% and 3.0% in high dimensions.
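The idea of pruning candidate edges with hashing before building the tree can be sketched as follows. Grid bucketing stands in for the paper's LSH, and Kruskal's algorithm for the Dijkstra-style MST construction, so this is an illustrative sketch rather than the LSHDK algorithm itself; if the candidate graph is disconnected, the result is a spanning forest rather than a tree.

```python
import itertools
import math

def approx_mst(points, cell):
    """Approximate Euclidean MST: only point pairs in the same or an adjacent
    grid cell become candidate edges, then Kruskal's algorithm runs on the
    reduced edge set."""
    dim = len(points[0])
    buckets = {}
    for i, p in enumerate(points):
        key = tuple(int(math.floor(c / cell)) for c in p)
        buckets.setdefault(key, []).append(i)
    candidates = set()
    for key, idx in buckets.items():
        for d in itertools.product((-1, 0, 1), repeat=dim):
            for j in buckets.get(tuple(k + o for k, o in zip(key, d)), ()):
                for i in idx:
                    if i < j:
                        candidates.add((i, j))
    edges = sorted((math.dist(points[i], points[j]), i, j) for i, j in candidates)
    parent = list(range(len(points)))
    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((w, i, j))
    return tree

pts = [(0.1, 0.2), (0.4, 0.9), (0.8, 0.3), (0.2, 0.6), (0.7, 0.7)]
tree = approx_mst(pts, cell=1.0)  # coarse cells: candidate graph is complete here
```

Shrinking the cell size reduces the candidate edge count (and hence the work) at the price of a larger approximation error, mirroring the speed/error trade-off reported in the abstract.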
A General Algorithm for Robot Formations Using Local Sensing and Minimal Communication
DEFF Research Database (Denmark)
Fredslund, Jakob; Matarić, Maja J
2002-01-01
We study the problem of achieving global behavior in a group of distributed robots using only local sensing and minimal communication, in the context of formations. The goal is to have mobile robots establish and maintain some predetermined geometric shape. We report results from extensive simulation experiments, and 40+ experiments with four physical robots, showing the viability of our approach. The key idea is that each robot keeps a single friend at a desired angle, using some appropriate sensor. By panning the sensor by that angle, the goal for all formations becomes simply to center the friend in the sensor's field of view. We also present a general analytical measure for evaluating formations and apply it to the position data from both simulation and physical robot experiments. We used two lasers to track the physical robots to obtain ground truth validation data.
Algorithmic PON/P2P FTTH Access Network Design for CAPEX Minimization
DEFF Research Database (Denmark)
Papaefthimiou, Kostantinos; Tefera, Yonas; Mihylov, Dimitar
2013-01-01
Due to the emergence of high bandwidth-requiring services, telecommunication operators (telcos) are called to upgrade their fixed access network. In order to keep up with the competition, they must consider different optical access network solutions, with Fiber To The Home (FTTH) as the prevailing one. It provides an obvious advantage for the end users in terms of high achievable data rates. On the other hand, the high initial deployment cost required exists as the heaviest impediment. The main goal of this paper is to study different approaches when designing a fiber access network. More concretely, two different optimizations are alternatively evaluated, fiber and trenching minimization, over two of the most typical fiber access architectures, Point-to-Point (P2P) and Passive Optical Network (PON). These are applied to a real geographical scenario and the best returned output in terms...
Bordin, Fabiane; Gonzaga, Luiz, Jr.; Galhardo Muller, Fabricio; Veronez, Mauricio Roberto; Scaioni, Marco
2016-06-01
Laser scanning from airborne and land platforms has been largely used for collecting large volumes of 3D data in the field of geosciences. Furthermore, the laser pulse intensity has been widely exploited to analyze and classify rocks and biomass, and for carbon storage estimation. In general, a laser beam is emitted, collides with targets, and only a percentage of the emitted beam returns, according to the intrinsic properties of each target. Also, due to interference and partial collisions, the recorded return intensity can be incorrect, introducing serious errors in classification and/or estimation processes. To address this problem and avoid misclassification and estimation errors, we propose a new algorithm to correct the return intensity for laser scanning sensors. Different case studies have been used to evaluate and validate the proposed approach.
FSM State-Encoding for Area and Power Minimization Using Simulated Evolution Algorithm
Directory of Open Access Journals (Sweden)
Sadiq M. Sait
2012-11-01
In this paper we describe the engineering of a non-deterministic iterative heuristic [1] known as simulated evolution (SimE) to solve the well-known NP-hard state assignment problem (SAP). Each assignment of a code to a state is given a Goodness value derived from a matrix representation of the desired adjacency graph (DAG) proposed by Amaral et al. [2]. We use the DAGa proposed in previous studies to optimize the area, and propose a new DAGp and employ it to reduce the power dissipation. In the process of evolution, those states that have high Goodness have a smaller probability of getting perturbed, while those with lower Goodness can be easily reallocated. States are assigned to cells of a Karnaugh map, in a way that those states that have to be close in terms of Hamming distance are assigned adjacent cells. The ordered weighted average (OWA) operator proposed by Yager [3] is used to combine the two objectives. Results are compared with those published in previous studies, for circuits obtained from the MCNC benchmark suite. It was found that the SimE heuristic produces better quality results in most cases, and/or in lesser time, when compared to both deterministic heuristics and non-deterministic iterative heuristics such as the Genetic Algorithm.
Directory of Open Access Journals (Sweden)
M. Collaud Coen
2010-04-01
The aerosol light absorption coefficient is an essential parameter involved in atmospheric radiation budget calculations. The Aethalometer (AE) has the great advantage of measuring the aerosol light absorption coefficient at several wavelengths, but the derived absorption coefficients are systematically too high when compared to reference methods. Up to now, four different correction algorithms of the AE absorption coefficients have been proposed by several authors. A new correction scheme based on these previously published methods has been developed, which accounts for the optical properties of the aerosol particles embedded in the filter. All the corrections have been tested on six datasets representing different aerosol types and loadings and include multi-wavelength AE and white-light AE. All the corrections have also been evaluated through comparison with a Multi-Angle Absorption Photometer (MAAP) for four datasets lasting between 6 months and five years. The modification of the wavelength dependence by the different corrections is analyzed in detail. The performances and the limits of all AE corrections are determined and recommendations are given.
Directory of Open Access Journals (Sweden)
P. L. N. U. Cooray
2017-01-01
During the last decade, tremendous focus has been given to sustainable logistics practices to overcome the environmental concerns of business practices. Since transportation is a prominent area of logistics, a new area of literature known as Green Transportation and Green Vehicle Routing has emerged. The Vehicle Routing Problem (VRP) has been a very active area of the literature, with contributions from many researchers over the last three decades. Given the computational constraints of solving the VRP, which is NP-hard, metaheuristics have been applied successfully to solve VRPs in the recent past. This is a threefold study. First, it critically reviews the current literature on the EMVRP and the use of metaheuristics as a solution approach. Second, the study implements a genetic algorithm (GA) to solve the EMVRP formulation using the benchmark instances listed in the CVRPLib repository. Finally, the GA developed in Phase 2 was enhanced through machine learning techniques to tune its parameters. The study reveals that, by identifying the underlying characteristics of the data, a particular GA can be tuned significantly to outperform any generic GA with competitive computational times. The review identifies several knowledge gaps where new methodologies can be developed to solve EMVRPs and develops propositions for future research.
Collaud Coen, M.; Weingartner, E.; Apituley, A.; Ceburnis, D.; Fierz-Schmidhauser, R.; Flentje, H.; Henzing, J. S.; Jennings, S. G.; Moerman, M.; Petzold, A.; Schmid, O.; Baltensperger, U.
2010-04-01
The aerosol light absorption coefficient is an essential parameter involved in atmospheric radiation budget calculations. The Aethalometer (AE) has the great advantage of measuring the aerosol light absorption coefficient at several wavelengths, but the derived absorption coefficients are systematically too high when compared to reference methods. Up to now, four different correction algorithms of the AE absorption coefficients have been proposed by several authors. A new correction scheme based on these previously published methods has been developed, which accounts for the optical properties of the aerosol particles embedded in the filter. All the corrections have been tested on six datasets representing different aerosol types and loadings and include multi-wavelength AE and white-light AE. All the corrections have also been evaluated through comparison with a Multi-Angle Absorption Photometer (MAAP) for four datasets lasting between 6 months and five years. The modification of the wavelength dependence by the different corrections is analyzed in detail. The performances and the limits of all AE corrections are determined and recommendations are given.
Directory of Open Access Journals (Sweden)
M. Collaud Coen
2009-07-01
The aerosol light absorption coefficient is an essential parameter involved in atmospheric radiation budget calculations. The Aethalometer (AE) has the great advantage of measuring the aerosol light absorption coefficient at several wavelengths, but the derived absorption coefficients are systematically too high when compared to reference methods. Up to now, four different correction algorithms of the AE absorption coefficients have been proposed by several authors. A new correction scheme based on these previously published methods has been developed, which accounts for the optical properties of the aerosol particles embedded in the filter. All the corrections have been tested on six datasets representing different aerosol types and loadings and include multi-wavelength AE and white-light AE. All the corrections have also been evaluated through comparison with a Multi-Angle Absorption Photometer (MAAP) for four datasets lasting between 6 months and five years. The modification of the wavelength dependence by the different corrections is analyzed in detail. The performances and the limits of all AE corrections are determined and recommendations are given.
Directory of Open Access Journals (Sweden)
Maaz Bin Ahmad
2014-01-01
The insider threat detection problem has always been one of the most difficult challenges for organizations and the research community. Effective behavioral categorization of users plays a vital role in the success of any detection mechanism, and also helps to reduce false alarms in the case of insider threats. To achieve this, a fuzzy classifier has been implemented along with a genetic algorithm (GA) to enhance the efficiency of the fuzzy classifier. It also enhances the functionality of all other modules to achieve better results in terms of false alarms. A scenario-driven approach along with mathematical evaluation verifies the effectiveness of the modified framework. It has been tested for enterprises with a critical nature of business. Other organizations can adopt it in accordance with their specific nature of business, needs, and operational processes. The results prove that accurate classification and detection of users was achieved by adopting the modified framework, which in turn minimizes false alarms.
Directory of Open Access Journals (Sweden)
Guanlong Deng
2016-01-01
This paper presents an enhanced discrete artificial bee colony algorithm for minimizing the total flow time in the flow shop scheduling problem with limited buffer capacity. First, a solution in the algorithm is represented as a discrete job permutation that converts directly to an active schedule. Then, we present a simple and effective scheme called best insertion for the employed and onlooker bees, and introduce a combined local search exploring both the insertion and swap neighborhoods. To validate the performance of the presented algorithm, a computational campaign is carried out on the Taillard benchmark instances; the computations and comparisons show that the proposed algorithm not only solves the benchmark set better than the existing discrete differential evolution algorithm and iterated greedy algorithm, but also performs better than two recently proposed discrete artificial bee colony algorithms.
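The "best insertion" move used by the employed and onlooker bees can be sketched as below. For simplicity the objective here is the total flow time of an unconstrained permutation flow shop (the limited-buffer constraint of the paper is not modelled), and the instance is a toy one, so this illustrates the move rather than the paper's algorithm:

```python
import random

def total_flowtime(perm, p):
    """Total flow time of a permutation schedule; p[j][m] is the processing
    time of job j on machine m (no buffer/blocking constraints modelled)."""
    machines = len(p[0])
    comp = [0] * machines
    total = 0
    for j in perm:
        comp[0] += p[j][0]
        for m in range(1, machines):
            comp[m] = max(comp[m], comp[m - 1]) + p[j][m]
        total += comp[-1]
    return total

def best_insertion(perm, p, rng=random):
    """One 'best insertion' move: remove a random job and reinsert it at its
    best position. The original position is among the candidates, so the
    move never worsens the schedule."""
    perm = perm[:]
    job = perm.pop(rng.randrange(len(perm)))
    best_cost, best_pos = float("inf"), 0
    for pos in range(len(perm) + 1):
        cost = total_flowtime(perm[:pos] + [job] + perm[pos:], p)
        if cost < best_cost:
            best_cost, best_pos = cost, pos
    return perm[:best_pos] + [job] + perm[best_pos:], best_cost

random.seed(1)
p = [[3, 2], [1, 4], [2, 2], [5, 1]]   # 4 jobs, 2 machines (toy instance)
start = [0, 1, 2, 3]
new_perm, new_cost = best_insertion(start, p)
```

In the bee colony framework, such moves are applied repeatedly and combined with swap-neighborhood local search, with the food-source update rules steering which permutations get refined.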
Directory of Open Access Journals (Sweden)
Wei Fan
2014-01-01
Vibration signals captured from faulty mechanical components are often associated with transients which are significant for machinery fault diagnosis. However, the existence of strong background noise makes the detection of transients a basis pursuit denoising (BPD) problem, which is hard to solve in explicit form. Using sparse representation theory, this paper proposes a novel method for machinery fault diagnosis by combining a wavelet basis with the majorization-minimization (MM) algorithm. This method converts transients hidden in the noisy signal into sparse coefficients, so the transients can be detected sparsely. A simulated study concerning cyclic transient signals with different signal-to-noise ratios (SNR) shows the effectiveness of this method. The comparison in the simulated study shows that the proposed method outperforms the method based on the split augmented Lagrangian shrinkage algorithm (SALSA) in convergence and detection effect. Application to defective gearbox fault diagnosis shows that the fault feature of the gearbox can be sparsely and effectively detected. A further comparison between this method and the SALSA-based method shows the superiority of the proposed method in machinery fault diagnosis.
Xiao, Xingqing; Hung, Michelle E; Leonard, Joshua N; Hall, Carol K
2016-10-15
Our previously developed peptide-design algorithm was improved by adding an energy minimization strategy which allows the amino acid sidechains to move in a broad configuration space during sequence evolution. In this work, the new algorithm was used to generate a library of 21-mer peptides which could substitute for λ N peptide in binding to boxB RNA. Six potential peptides were obtained from the algorithm, all of which exhibited good binding capability with boxB RNA. Atomistic molecular dynamics simulations were then conducted to examine the ability of the λ N peptide and three best evolved peptides, viz. Pept01, Pept26, and Pept28, to bind to boxB RNA. Simulation results demonstrated that our evolved peptides are better at binding to boxB RNA than the λ N peptide. Sequence searches using the old (without energy minimization strategy) and new (with energy minimization strategy) algorithms confirm that the new algorithm is more effective at finding good RNA-binding peptides than the old algorithm. © 2016 Wiley Periodicals, Inc.
DEFF Research Database (Denmark)
Gribonval, Rémi; Nielsen, Morten
In a series of recent results, several authors have shown that both l¹-minimization (Basis Pursuit) and greedy algorithms (Matching Pursuit) can successfully recover a sparse representation of a signal provided that it is sparse enough, that is to say if its support (which indicates where are loc...
Directory of Open Access Journals (Sweden)
Anton A. Buzdin
2014-08-01
The diversity of installed sequencing and microarray equipment makes it increasingly difficult to compare and analyze gene expression datasets obtained using different methods. Many applications requiring high quality and low error rates cannot make use of available data using traditional analytical approaches. Recently, we proposed a new concept for signalome-wide analysis of functional changes in intracellular pathways termed OncoFinder, a bioinformatic tool for quantitative estimation of signaling pathway activation (SPA). We also developed methods to compare gene expression data obtained using multiple platforms, minimizing the error rates by mapping the gene expression data onto known and custom signaling pathways. This technique for the first time makes it possible to analyze the functional features of intracellular regulation on a mathematical basis. In this study we show that the OncoFinder method significantly reduces the errors introduced by transcriptome-wide experimental techniques. We compared the gene expression data for the same biological samples obtained by both next generation sequencing (NGS) and microarray methods. For these different techniques we demonstrate that there is virtually no correlation between the gene expression values for all datasets analyzed (R² < 0.1). In contrast, when the OncoFinder algorithm is applied to the data we observed clear-cut correlations between the NGS and microarray gene expression datasets. The signaling pathway activation profiles obtained using NGS and microarray techniques were almost identical for the same biological samples, allowing for platform-agnostic analytical applications. We conclude that this feature of OncoFinder enables characterization of the functional states of transcriptomes and interactomes more accurately than before, which makes OncoFinder a method of choice for many applications including genetics, physiology, biomedicine and
Directory of Open Access Journals (Sweden)
Rómulo Castillo Cárdenas
2013-06-01
In this work we consider the order value optimization (OVO) problem. The problem we address is to minimize f over the feasible set by means of a genetic algorithm which, by its very nature, has the advantage over existing continuous optimization methods of finding global minimizers. We illustrate the application of this algorithm on the examples considered, showing its effectiveness in solving them.
Institute of Scientific and Technical Information of China (English)
邓冠龙; 徐震浩; 顾幸生
2012-01-01
A discrete artificial bee colony algorithm is proposed for solving the blocking flow shop scheduling problem with total flow time criterion. Firstly, the solution in the algorithm is represented as job permutation. Secondly, an initialization scheme based on a variant of the NEH (Nawaz-Enscore-Ham) heuristic and a local search is designed to construct the initial population with both quality and diversity. Thirdly, based on the idea of iterated greedy algorithm, some newly designed schemes for employed bee, onlooker bee and scout bee are presented. The performance of the proposed algorithm is tested on the well-known Taillard benchmark set, and the computational results demonstrate the effectiveness of the discrete artificial bee colony algorithm. In addition, the best known solutions of the benchmark set are provided for the blocking flow shop scheduling problem with total flow time criterion.
Ring, Brian Z; Hout, David R; Morris, Stephan W; Lawrence, Kasey; Schweitzer, Brock L; Bailey, Daniel B; Lehmann, Brian D; Pietenpol, Jennifer A; Seitz, Robert S
2016-02-23
Recently, a gene expression algorithm, TNBCtype, was developed that can divide triple-negative breast cancer (TNBC) into molecularly defined subtypes. The algorithm has potential to provide predictive value for TNBC subtype-specific response to various treatments. TNBCtype used in a retrospective analysis of neoadjuvant clinical trial data of TNBC patients demonstrated that TNBC subtype and pathological complete response to neoadjuvant chemotherapy were significantly associated. Herein we describe an expression algorithm reduced to 101 genes with the power to subtype TNBC tumors similarly to the original 2188-gene expression algorithm and to predict patient outcomes. The new classification model was built using the same expression data sets used for the original TNBCtype algorithm. Gene set enrichment followed by shrunken centroid analysis was used for feature reduction, then elastic-net regularized linear modeling was used to identify genes for a centroid model classifying all subtypes, comprising 101 genes. The predictive capability of both this new "lean" algorithm and the original 2188-gene model was applied to an independent clinical trial cohort of 139 TNBC patients treated initially with neoadjuvant doxorubicin/cyclophosphamide and then randomized to receive either paclitaxel or ixabepilone, to determine the association of pathologic complete response within the subtypes. The new 101-gene expression model reproduced the classification provided by the 2188-gene algorithm and was highly concordant in the same set of seven TNBC cohorts used to generate the TNBCtype algorithm (87%), as well as in the independent clinical trial cohort (88%), when cases with significant correlations to multiple subtypes were excluded. In clinical responses to both neoadjuvant treatment arms, BL2 was found to be significantly associated with poor response (odds ratio (OR) = 0.12, p = 0.03 for the 2188-gene model; OR = 0.23, p ...); thus reduced gene sets can recapitulate the TNBC subtypes identified by the original 2188-gene algorithm.
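The centroid-model classification underlying such expression subtyping can be illustrated generically. This is plain nearest-centroid assignment on toy data — the actual TNBCtype gene sets, correlation scoring, and multiple-subtype exclusion rules are not reproduced here:

```python
import numpy as np

def centroid_classify(X_train, y_train, X_new):
    """Nearest-centroid assignment: each class is summarized by its mean
    expression vector; new samples get the label of the closest centroid."""
    y_train = np.asarray(y_train)
    classes = sorted(set(y_train))
    centroids = {c: X_train[y_train == c].mean(axis=0) for c in classes}
    return [min(classes, key=lambda c: np.linalg.norm(x - centroids[c]))
            for x in X_new]

# Toy 2-"gene" data standing in for expression profiles of two subtypes.
X_train = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
y_train = ["a", "a", "b", "b"]
preds = centroid_classify(X_train, y_train, np.array([[0.2, 0.3], [5.5, 5.2]]))
```

Feature reduction (as in the 2188-to-101-gene step) amounts to shrinking or dropping centroid coordinates so that classification depends only on the most discriminative genes.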
Institute of Scientific and Technical Information of China (English)
Belgacem BETTAYEB; Imed KACEM; Kondo H.ADJALLAH
2008-01-01
This article investigates identical parallel machines scheduling with family setup times. The objective function being the weighted sum of completion times, the problem is known to be strongly NP-hard. We propose a constructive heuristic algorithm and three complementary lower bounds. Two of these bounds proceed by elimination of setup times or by distributing each of them to jobs of the corresponding family, while the third one is based on a Lagrangian relaxation. The bounds and the heuristic are incorporated into a branch-and-bound algorithm. The experimental results obtained outperform those of the methods presented in previous works, in terms of the size of solved problems.
Competitive Decision Algorithm for the Steiner Minimal Tree Problem in Graphs
Institute of Scientific and Technical Information of China (English)
熊小华; 刘艳芳; 宁爱兵
2012-01-01
The Steiner minimal tree problem in graphs (GSTP) is a well-known NP-hard problem. Its applications can be found in many areas, such as telecommunication network design and VLSI design. A competitive decision algorithm was developed to solve the GSTP. The mathematical properties of GSTP were analysed, which can be used to scale down the size of the original problem and accelerate the algorithm. To assess the efficiency of the proposed competitive decision algorithm, it was applied to a set of benchmark problems in the OR-Library. In terms of computation time, the algorithm clearly outperforms other heuristics for the Steiner problem in graphs, while obtaining better or comparable solutions.
Institute of Scientific and Technical Information of China (English)
Chang-yin Zhou; Guo-ping He; Yong-li Wang
2006-01-01
In this paper, we propose a feasible QP-free method for solving nonlinear inequality constrained optimization problems. A new working set is proposed to estimate the active set. Specifically, to determine the working set, the new method makes use of multiplier information from the previous iteration, eliminating the need to compute a multiplier function. At each iteration, two or three reduced symmetric systems of linear equations with a common coefficient matrix involving only the constraints in the working set are solved; when the iterate is sufficiently close to a KKT point, only two of them are involved. Moreover, the new algorithm is proved to be globally convergent to a KKT point under mild conditions. Without assuming strict complementarity, the convergence rate is superlinear under a condition weaker than the strong second-order sufficiency condition. Numerical experiments illustrate the efficiency of the algorithm.
Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu
2014-12-01
High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide subfinite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.
Siddeq, M. M.; Rodrigues, M. A.
2015-09-01
Image compression techniques are widely used on 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques, among the most popular being JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps. (1) Transform an image by a two-level DWT followed by a DCT to produce two matrices: the DC- and AC-Matrix, or low- and high-frequency matrix, respectively; (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities by using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients in one matrix, followed by an inverse two-level DCT with two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
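Step 1 of the pipeline above (a wavelet split followed by a DCT of the low-frequency band) can be illustrated on a toy scale. The sketch below uses one decomposition level, a Haar wavelet, and an explicit orthonormal DCT matrix; these are all simplifications of the paper's two-level DWT/DCT pipeline, not its actual implementation:

```python
import numpy as np

def haar2d(x):
    """One level of a 2-D Haar DWT: returns the low-frequency band LL and
    the three high-frequency bands (LH, HL, HH)."""
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)    # transform rows
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)  # then columns
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, (lh, hl, hh)

def dct_matrix(n):
    """Orthonormal DCT-II matrix, so m @ m.T == I."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * k * (2 * i + 1) / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] /= np.sqrt(2.0)
    return m

img = np.random.rand(8, 8)       # stand-in for a high-resolution image
ll, highs = haar2d(img)          # step 1: DWT splits low/high bands
d = dct_matrix(ll.shape[0])
dc_matrix = d @ ll @ d.T         # step 1 (cont.): DCT of the LL band
```

Because both transforms are orthonormal here, the total signal energy is preserved across the split, which is what makes the low/high-frequency separation lossless before quantization.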
Directory of Open Access Journals (Sweden)
Maria Pia Francescato
Full Text Available Physical activity in patients with type 1 diabetes (T1DM) is hindered because of the high risk of glycemic imbalances. A recently proposed algorithm (named Ecres) estimates well enough the supplemental carbohydrates for exercises lasting one hour, but its performance for prolonged exercise requires validation. Nine T1DM patients (5M/4F; 35-65 years; HbA1c 54 ± 13 mmol · mol(-1)) performed, under free-life conditions, a 3-h walk at 30% heart rate reserve while insulin concentrations, whole-body carbohydrate oxidation rates (determined by indirect calorimetry) and supplemental carbohydrates (93% sucrose), together with glycemia, were measured every 30 min. Data were subsequently compared with the corresponding values estimated by the algorithm. No significant difference was found between the estimated insulin concentrations and the laboratory-measured values (p = NS). Carbohydrates oxidation rate decreased significantly with time (from 0.84 ± 0.31 to 0.53 ± 0.24 g · min(-1), respectively; p < 0.001), being estimated well enough by the algorithm (p = NS). Estimated carbohydrates requirements were practically equal to the corresponding measured values (p = NS), the difference between the two quantities amounting to -1.0 ± 6.1 g, independent of the elapsed exercise time (time effect, p = NS). Results confirm that Ecres provides a satisfactory estimate of the carbohydrates required to avoid glycemic imbalances during moderate intensity aerobic physical activity, opening the prospect of an intriguing method that could liberate patients from the fear of exercise-induced hypoglycemia.
Francescato, Maria Pia; Stel, Giuliana; Stenner, Elisabetta; Geat, Mario
2015-01-01
Physical activity in patients with type 1 diabetes (T1DM) is hindered because of the high risk of glycemic imbalances. A recently proposed algorithm (named Ecres) estimates well enough the supplemental carbohydrates for exercises lasting one hour, but its performance for prolonged exercise requires validation. Nine T1DM patients (5M/4F; 35-65 years; HbA1c 54 ± 13 mmol · mol(-1)) performed, under free-life conditions, a 3-h walk at 30% heart rate reserve while insulin concentrations, whole-body carbohydrate oxidation rates (determined by indirect calorimetry) and supplemental carbohydrates (93% sucrose), together with glycemia, were measured every 30 min. Data were subsequently compared with the corresponding values estimated by the algorithm. No significant difference was found between the estimated insulin concentrations and the laboratory-measured values (p = NS). Carbohydrates oxidation rate decreased significantly with time (from 0.84 ± 0.31 to 0.53 ± 0.24 g · min(-1), respectively; p < 0.001), being estimated well enough by the algorithm (p = NS). Estimated carbohydrates requirements were practically equal to the corresponding measured values (p = NS), the difference between the two quantities amounting to -1.0 ± 6.1 g, independent of the elapsed exercise time (time effect, p = NS). Results confirm that Ecres provides a satisfactory estimate of the carbohydrates required to avoid glycemic imbalances during moderate intensity aerobic physical activity, opening the prospect of an intriguing method that could liberate patients from the fear of exercise-induced hypoglycemia.
Directory of Open Access Journals (Sweden)
Supriya Aggarwal
2012-01-01
Full Text Available One of the most important steps in spectral analysis is filtering, where window functions are generally used to design filters. In this paper, we modify the existing architecture for realizing window functions using a CORDIC processor. Firstly, we modify the conventional CORDIC algorithm to reduce its latency and area. The proposed CORDIC algorithm is completely scale-free for a range of convergence that spans the entire coordinate space. Secondly, we realize the window functions using a single CORDIC processor, as against two serially connected CORDIC processors in the existing technique, thus optimizing it for area and latency. The linear CORDIC processor is replaced by a shift-add network which drastically reduces the number of pipelining stages required in the existing design. The proposed design on average requires approximately 64% fewer pipeline stages and saves up to 44.2% area. Currently, the processor is designed to implement the Blackman windowing architecture, which with slight modifications can be extended to other window functions as well. The details of the proposed architecture are discussed in the paper.
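For reference, the conventional rotation-mode CORDIC that the paper improves on (the baseline with the cumulative scale factor K that the scale-free variant eliminates) can be sketched as follows. This is the textbook algorithm, not the paper's modified design:

```python
import math

def cordic_cos_sin(theta, n=32):
    """Rotation-mode CORDIC: iteratively rotate (1, 0) by +/- atan(2^-i)
    so the residual angle z is driven to zero; returns (cos, sin) of theta.
    Valid for |theta| within the convergence range (about +/- 1.74 rad)."""
    K = 1.0
    for i in range(n):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # cumulative scale factor
    x, y, z = 1.0, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0                   # rotation direction
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return x * K, y * K
```

Each iteration uses only shifts and adds in hardware, which is why removing the scale-factor multiplication (as the paper's scale-free variant does) directly shortens the pipeline.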
Directory of Open Access Journals (Sweden)
M. Yousefi, M. Omid, Sh. Rafiee, S.F. Ghaderi
2013-01-01
Full Text Available Iran's primary energy consumption (PEC) was modeled as a linear function of five socioeconomic and meteorological explanatory variables using particle swarm optimization (PSO) and artificial neural networks (ANNs) techniques. Results revealed that ANN outperforms PSO model to predict test data. However, PSO technique is simple and provided us with a closed form expression to forecast PEC. Energy demand was forecasted by PSO and ANN using represented scenario. Finally, adapting about 10% renewable energy revealed that based on the developed linear programming (LP) model under minimum CO2 emissions, Iran will emit about 2520 million metric tons CO2 in 2025. The LP model indicated that maximum possible development of hydropower, geothermal and wind energy resources will satisfy the aim of minimization of CO2 emissions. Therefore, the main strategic policy in order to reduce CO2 emissions would be exploitation of these resources.
Energy Technology Data Exchange (ETDEWEB)
Yousefi, M.; Omid, M.; Rafiee, Sh. [Department of Agricultural Machinery Engineering, University of Tehran, Karaj (Iran, Islamic Republic of); Ghaderi, S.F. [Department of Industrial Engineering, University of Tehran, Tehran (Iran, Islamic Republic of)
2013-07-01
Iran's primary energy consumption (PEC) was modeled as a linear function of five socioeconomic and meteorological explanatory variables using particle swarm optimization (PSO) and artificial neural networks (ANNs) techniques. Results revealed that ANN outperforms PSO model to predict test data. However, PSO technique is simple and provided us with a closed form expression to forecast PEC. Energy demand was forecasted by PSO and ANN using represented scenario. Finally, adapting about 10% renewable energy revealed that based on the developed linear programming (LP) model under minimum CO2 emissions, Iran will emit about 2520 million metric tons CO2 in 2025. The LP model indicated that maximum possible development of hydropower, geothermal and wind energy resources will satisfy the aim of minimization of CO2 emissions. Therefore, the main strategic policy in order to reduce CO2 emissions would be exploitation of these resources.
Theofilatos, Konstantinos; Georgopoulos, Efstratios; Likothanassis, Spiridon
2009-09-01
In this paper, a variation of traditional Genetic Programming (GP) is used to model the magnetoencephalogram (MEG) of epileptic patients. This variation is Linear Genetic Programming (LGP), a subset of GP in which the computer programs in the population are represented as sequences of instructions from an imperative programming language or machine language. The derived models were simplified using genetic algorithms. The proposed method was used to model the MEG signal of epileptic patients using 6 different datasets. Each dataset uses a different number of previous MEG values to predict the next value. The models were tested on datasets different from the ones used to produce them, and the results were very promising.
Ozheredov, V. A.; Breus, T. K.
2016-03-01
Several problems confront investigators attempting a detailed restoration of a dependency. The key one is a mathematically rigorous formulation of the desired degree of detail. Second in importance is the reliability with which those details are reproduced. The third is estimating the data-collection effort that will ensure the required detail and reliability of the results. In this work, a strict concept of the spatial resolution of a locally linear algorithm for direct dependence recovery (DDR) is formulated mathematically. The approach approximates the system reaction (dependent variable) at an assigned value of the factors using only the data (precedents) from a spherical cluster surrounding that assigned value. The reliability of the details is formalized through a noise attenuation coefficient. We derive a relationship between the size of the minimum required database, the spatial resolution of the recovery algorithm, the number of influencing factors and the noise attenuation coefficient. The analytical findings are verified by numerical experiments. We estimate the maximum number of factors whose functional influence can be recovered from the databases figuring in heliobiological works published by many authors over several decades. It is shown that the minimum required size of the database depends on the number of influencing factors (the dimension of the space of the independent variable) as a power law. The analysis conducted in this study reveals that the dimensional potential of most heliobiological databases is significantly lower than the dimensions that appear in the approaches of the authors of these works.
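The locally linear recovery step described above can be sketched as follows: fit an affine model using only the precedents inside a spherical cluster around the query point. The cluster radius, data and design choices here are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def local_linear_predict(X, y, x0, radius):
    """Locally linear DDR sketch: fit an affine model using only the
    precedents inside a spherical cluster of the given radius around x0."""
    d = np.linalg.norm(X - x0, axis=1)
    mask = d <= radius
    if mask.sum() < X.shape[1] + 1:
        raise ValueError("not enough precedents in the cluster")
    A = np.hstack([X[mask], np.ones((mask.sum(), 1))])  # affine design matrix
    coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
    return np.append(x0, 1.0) @ coef

# synthetic "database": 500 precedents of a noisy linear dependence
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + 1 + 0.01 * rng.standard_normal(500)
```

Shrinking the radius sharpens the spatial resolution but leaves fewer precedents in the cluster, which is exactly the trade-off between resolution, noise attenuation and minimum database size that the paper quantifies.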
Directory of Open Access Journals (Sweden)
Kondapalli Siva Prasad
2013-06-01
Full Text Available Austenitic stainless steel sheets have gathered wide acceptance in the fabrication of components which require high temperature resistance and corrosion resistance, such as metal bellows used in expansion joints in the aircraft, aerospace and petroleum industries. In the case of single pass welding of thinner sections of this alloy, Pulsed Current Micro Plasma Arc Welding (PCMPAW) was found beneficial due to its advantages over the conventional continuous current process. This paper highlights the development of empirical mathematical equations using multiple regression analysis, correlating various process parameters to pitting corrosion rates in PCMPAW of AISI 304L sheets in 1 Normal HCl. The experiments were conducted based on a five factor, five level central composite rotatable design matrix. A Genetic Algorithm (GA) was developed to optimize the process parameters for minimizing the pitting corrosion rates.
Energy Technology Data Exchange (ETDEWEB)
Peyton, B.W.
1999-07-01
When minimum orderings proved too difficult to deal with, Rose, Tarjan, and Lueker instead studied minimal orderings and how to compute them (Algorithmic aspects of vertex elimination on graphs, SIAM J. Comput., 5:266-283, 1976). This paper introduces an algorithm that is capable of computing much better minimal orderings much more efficiently than the algorithm in Rose et al. The new insight is a way to use certain structures and concepts from modern sparse Cholesky solvers to re-express one of the basic results in Rose et al. The new algorithm begins with any initial ordering and then refines it until a minimal ordering is obtained. It is simple to obtain high-quality low-cost minimal orderings by using fill-reducing heuristic orderings as initial orderings for the algorithm. We examine several such initial orderings in some detail.
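The quantity that minimal orderings control, the fill produced by the elimination game, can be computed directly for any ordering. The sketch below shows this cost evaluation only; it is not the paper's refinement algorithm:

```python
def fill_in(adj, order):
    """Count fill edges produced by eliminating vertices in the given order
    (the 'elimination game' underlying minimal-ordering algorithms).
    adj maps each vertex to the set of its neighbours."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    fill = 0
    for v in order:
        nbrs = list(adj[v])
        # eliminating v makes its remaining neighbours a clique
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    fill += 1
        for u in nbrs:                                # remove v from the graph
            adj[u].discard(v)
        del adj[v]
    return fill
```

On a path graph 1-2-3-4, eliminating endpoints first gives zero fill (the natural order is already minimal), while eliminating interior vertices first creates fill edges, which is the kind of defect the refinement step repairs.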
Increasingly minimal bias routing
Energy Technology Data Exchange (ETDEWEB)
Bataineh, Abdulla; Court, Thomas; Roweth, Duncan
2017-02-21
A system and algorithm configured to generate diversity at the traffic source so that packets are uniformly distributed over all of the available paths, but to increase the likelihood of taking a minimal path with each hop the packet takes. This is achieved by configuring routing biases so as to prefer non-minimal paths at the injection point, but increasingly prefer minimal paths as the packet proceeds, referred to herein as Increasing Minimal Bias (IMB).
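As a rough illustration of the idea (the initial bias value and the linear ramp are assumptions for the sketch, not taken from the patent), the per-hop preference for minimal paths can be modeled as a probability that grows with the hop count:

```python
import random

def choose_minimal(hop, max_hops, bias0=0.25):
    """Return True when the router takes a minimal path on this hop.
    At injection (hop 0) the bias toward minimal paths is low, spreading
    traffic over all paths; it ramps up to 1.0 by the final hop."""
    p_min = bias0 + (1.0 - bias0) * hop / max_hops
    return random.random() < p_min
```

Early hops thus generate path diversity at the source, while later hops converge onto minimal routes, matching the stated intent of the IMB scheme.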
Directory of Open Access Journals (Sweden)
Sudhakaran .R,
2010-05-01
Full Text Available This paper presents a study on optimization of process parameters using a genetic algorithm to minimize angular distortion in 202 grade stainless steel gas tungsten arc welded plates. Angular distortion is a major problem and the most pronounced among the different types of distortion in butt welded plates. The extent of distortion depends on the welding process control parameters. The important process control parameters chosen for this study are gun angle (θ), welding speed (V), plate length (L), welding current (I) and gas flow rate (Q). The experiments were conducted based on a five factor, five level central composite rotatable design with full replication technique. A mathematical model was developed correlating the process parameters and the angular distortion. The developed model was checked for adequacy based on ANOVA analysis and for accuracy of prediction by a confirmatory test. The optimization of process parameters was done using genetic algorithms (GA). A source code was developed in C to perform the optimization. The optimal process parameters gave a value of 0.000379° for angular distortion, which demonstrates the accuracy and effectiveness of the model presented and the program developed. The obtained results indicate that the optimized parameters are capable of producing welds with minimum distortion.
Minimal Pairs: Minimal Importance?
Brown, Adam
1995-01-01
This article argues that minimal pairs do not merit as much attention as they receive in pronunciation instruction. There are other aspects of pronunciation that are of greater importance, and there are other ways of teaching vowel and consonant pronunciation. (13 references) (VWL)
Noh, M. J.; Howat, I. M.; Porter, C. C.; Willis, M. J.; Morin, P. J.
2016-12-01
The Arctic is undergoing rapid change associated with climate warming. Digital Elevation Models (DEMs) provide critical information for change measurement and infrastructure planning in this vulnerable region, yet the existing quality and coverage of DEMs in the Arctic is poor. Low-contrast and repeatedly-textured surfaces, such as snow, glacial ice and mountain shadows, all common in the Arctic, challenge existing stereo-photogrammetric techniques. Submeter-resolution stereoscopic satellite imagery with high geometric and radiometric quality and wide spatial coverage is becoming increasingly accessible to the scientific community. To utilize this imagery for extracting DEMs at a large scale over glaciated and high-latitude regions, we developed the Surface Extraction from TIN-based Searchspace Minimization (SETSM) algorithm. SETSM is fully automatic (i.e. no search parameter settings are needed) and uses only the satellite rational polynomial coefficients (RPCs). Using SETSM, we have generated a large number of DEMs (> 100,000 scene pairs) from WorldView, GeoEye and QuickBird stereo images collected by DigitalGlobe Inc. and archived by the Polar Geospatial Center (PGC) at the University of Minnesota through an academic licensing program maintained by the US National Geospatial-Intelligence Agency (NGA). SETSM is the primary DEM generation software for the US National Science Foundation's ArcticDEM program, with the objective of generating high-resolution (2-8 m) topography for the entire Arctic landmass, including seamless DEM mosaics and repeat DEM strips for change detection. ArcticDEM is a collaboration between multiple US universities, governmental agencies and private companies, as well as international partners assisting with quality control and registration. ArcticDEM is being produced using the petascale Blue Waters supercomputer at the National Center for Supercomputing Applications at the University of Illinois. In this paper, we introduce the SETSM algorithm.
Directory of Open Access Journals (Sweden)
Ambika Ramamoorthy
2016-01-01
Full Text Available Power grid becomes smarter nowadays along with technological development. The benefits of smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Amongst all the renewable sources, solar power takes the prominent position due to its availability in abundance. Proposed methodology presented in this paper is aimed at minimizing network power losses and at improving the voltage stability within the frame work of system operation and security constraints in a transmission system. Locations and capacities of DGs have a significant impact on the system losses in a transmission system. In this paper, combined nature inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In a first step, the best size of DG is determined through PSO metaheuristics and the results obtained through PSO is tested for reverse power flow by negative load approach to find possible bus locations. Then, optimal location is found by Loss Sensitivity Factor (LSF) and weak (WK) bus methods and the results are compared. In a second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with number of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and P and Q both) are also analyzed and the results are analyzed in this paper. A detailed performance analysis is carried out on IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology.
Ramamoorthy, Ambika; Ramachandran, Rajeswari
2016-01-01
Power grid becomes smarter nowadays along with technological development. The benefits of smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Amongst all the renewable sources, solar power takes the prominent position due to its availability in abundance. Proposed methodology presented in this paper is aimed at minimizing network power losses and at improving the voltage stability within the frame work of system operation and security constraints in a transmission system. Locations and capacities of DGs have a significant impact on the system losses in a transmission system. In this paper, combined nature inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In a first step, the best size of DG is determined through PSO metaheuristics and the results obtained through PSO is tested for reverse power flow by negative load approach to find possible bus locations. Then, optimal location is found by Loss Sensitivity Factor (LSF) and weak (WK) bus methods and the results are compared. In a second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with number of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and P and Q both) are also analyzed and the results are analyzed in this paper. A detailed performance analysis is carried out on IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology.
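The PSO sizing step described above can be illustrated with a minimal global-best PSO. The quadratic objective below is a toy stand-in for the network-loss evaluation, which in the paper comes from a power-flow computation; all parameter values are conventional defaults, not the paper's settings:

```python
import numpy as np

def pso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimization of f over [lo, hi]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()                               # personal bests
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()                # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# toy "loss" with minimum at sizes = 1.0 in every dimension
best, loss = pso(lambda z: ((z - 1.0) ** 2).sum(), dim=3)
```

In the paper's setting, each particle position would encode candidate DG sizes and the objective would be the network loss returned by a load-flow solver, with GSA and PSOGSA substituted for the update rule.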
Nachtigal, Noel M.
1991-01-01
The Lanczos algorithm can be used both for eigenvalue problems and to solve linear systems. However, when applied to non-Hermitian matrices, the classical Lanczos algorithm is susceptible to breakdowns and potential instabilities. In addition, the biconjugate gradient (BCG) algorithm, which is the natural generalization of the conjugate gradient algorithm to non-Hermitian linear systems, has a second source of breakdowns, independent of the Lanczos breakdowns. Here, we present two new results. We propose an implementation of a look-ahead variant of the Lanczos algorithm which overcomes the breakdowns by skipping over those steps where a breakdown or a near-breakdown would occur. The new algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products per step as the classical Lanczos algorithm without look-ahead. Based on the proposed look-ahead Lanczos algorithm, we then present a novel BCG-like approach, the quasi-minimal residual (QMR) method, which avoids the second source of breakdowns in the BCG algorithm. We present details of the new method and discuss some of its properties. In particular, we discuss the relationship between QMR and BCG, showing how one can recover the BCG iterates, when they exist, from the QMR iterates. We also present convergence results for QMR, showing the connection between QMR and the generalized minimal residual (GMRES) algorithm, the optimal method in this class of methods. Finally, we give some numerical examples, both for eigenvalue computations and for non-Hermitian linear systems.
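As background for the breakdown discussion, the classical Lanczos process in the Hermitian case, where no look-ahead is needed, can be sketched as follows. This is the textbook baseline that the look-ahead and QMR work generalizes, not the paper's non-Hermitian algorithm:

```python
import numpy as np

def lanczos(A, v0, m):
    """Classical symmetric Lanczos: build the m x m tridiagonal T whose
    eigenvalues approximate those of the Hermitian matrix A."""
    alpha = np.zeros(m)
    beta = np.zeros(max(m - 1, 0))
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros_like(v, dtype=float)
    b = 0.0
    for j in range(m):
        w = A @ v - b * v_prev        # three-term recurrence
        alpha[j] = v @ w
        w = w - alpha[j] * v
        if j < m - 1:
            b = np.linalg.norm(w)
            beta[j] = b               # b == 0 would signal an invariant subspace
            v_prev, v = v, w / b
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
```

In the non-Hermitian case the analogous recurrence uses two sequences of vectors whose inner products can vanish, which is precisely the breakdown that look-ahead steps skip over.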
Institute of Scientific and Technical Information of China (English)
朱德通
2004-01-01
This paper proposes a nonmonotonic backtracking trust region algorithm via bilevel linear programming for solving general multicommodity minimal cost flow problems. Using the duality theory of linear programming and convex theory, the generalized directional derivative of the general multicommodity minimal cost flow problem is derived. The global convergence and superlinear convergence rate of the proposed algorithm are established under some mild conditions.
Wang, Y; Guo, G D; Chen, L F
2013-01-01
Prediction of the three-dimensional structure of a protein from its amino acid sequence can be considered a global optimization problem. In this paper, the Chaotic Artificial Bee Colony (CABC) algorithm is introduced and applied to 3D protein structure prediction. Based on the 3D off-lattice AB model, the CABC algorithm combines the global and local search of the Artificial Bee Colony (ABC) algorithm with a chaotic search to avoid premature convergence and being trapped in local optima. Experiments carried out with the popular Fibonacci sequences demonstrate that the proposed algorithm provides an effective and high-performance method for protein structure prediction.
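The chaotic component can be sketched with a logistic-map perturbation of an incumbent solution on a toy objective. This is illustrative only; the paper couples the chaotic search with the full ABC machinery on the AB off-lattice energy, and the step-size schedule and seed values here are assumptions:

```python
import numpy as np

def chaotic_local_search(f, x0, lo, hi, steps=300):
    """Perturb the incumbent with logistic-map sequences (one per dimension)
    instead of uniform random noise; only improving moves are accepted."""
    c = np.linspace(0.11, 0.87, x0.size)           # per-dimension chaotic states
    best, fbest = x0.astype(float).copy(), f(x0)
    for t in range(steps):
        c = 4.0 * c * (1.0 - c)                    # logistic map at r = 4 (chaotic)
        radius = 0.1 * (hi - lo) * (1.0 - t / steps)   # shrinking step size
        cand = np.clip(best + radius * (2.0 * c - 1.0), lo, hi)
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
    return best, fbest
```

The ergodic, non-repeating character of the chaotic sequence is what helps the hybrid escape local optima that trap plain greedy perturbation.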
Institute of Scientific and Technical Information of China (English)
张建民; 沈胜宇; 李思昆
2009-01-01
A minimal unsatisfiable subformula can provide a succinct explanation of the infeasibility of formulae in satisfiability modulo theories (SMT) and can be used by automatic tools to rapidly locate errors. We present the definitions of a searching tree for a formula in SMT and of three kinds of nodes, and discuss the relationship between minimal unsatisfiable subformulae and the nodes. Based on this relationship, we propose a breadth-first-search algorithm to derive minimal unsatisfiable subformulae in SMT. We have evaluated the effectiveness of the algorithm on the SMT Competition 2007 industrial benchmarks. Experimental results show that the breadth-first-search algorithm can effectively compute minimal unsatisfiable subformulae.
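The breadth-first idea can be sketched on a propositional toy problem: enumerate subsets in order of increasing size and return the first unsatisfiable one. The brute-force oracle below is a stand-in for an SMT solver call, and this naive subset enumeration is only an illustration of the principle, not the paper's search-tree algorithm:

```python
from itertools import combinations

def brute_unsat(clauses):
    """Brute-force UNSAT oracle for propositional clauses (ints as literals,
    negative = negated); a stand-in for an SMT solver call."""
    vars_ = sorted({abs(l) for c in clauses for l in c})
    for bits in range(2 ** len(vars_)):
        assign = {v: bool(bits >> i & 1) for i, v in enumerate(vars_)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return False                 # found a satisfying assignment
    return True

def smallest_unsat_subset(clauses, is_unsat):
    """Breadth-first by subset size: the first unsatisfiable subset found is
    a minimum-cardinality (hence minimal) unsatisfiable subformula."""
    for k in range(1, len(clauses) + 1):
        for subset in combinations(clauses, k):
            if is_unsat(subset):
                return subset
    return None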
Institute of Scientific and Technical Information of China (English)
王静; 刘向阳; 王新梅
2009-01-01
This paper presents a new multicast routing algorithm based on network coding. In the process of searching the paths from the source nodes to each receiver, the algorithm considers link sharing between different path groups to decrease bandwidth resource consumption and improve network performance. Based on the multicast routing graphs obtained, a scheme for the search of minimal subtree graphs is presented. By using properties of minimal subtree graphs, the subtree graphs corresponding to the multicast routing graphs are reduced to obtain minimal subtree graphs. Finally, network coding can be constructed effectively on the minimal subtree graphs, and all the network coding problems can be reduced to searching for the minimal subtree graphs of the multicast network.
Possel, B.; Wismans, Luc Johannes Josephus; van Berkum, Eric C.; Bliemer, M.C.J.
2016-01-01
Incorporation of externalities in the Multi-Objective Network Design Problem (MO NDP) as objectives is an important step in designing sustainable networks. In this research the problem is defined as a bi-level optimization problem in which minimizing externalities are the objectives and link types
Minimization of Fuzzy Finite Generalized Automata
Institute of Scientific and Technical Information of China (English)
(author not listed)
2006-01-01
Some concepts in Fuzzy Generalized Automata (FGA) are established. An important new algorithm that computes the minimal FGA is then given. The algorithm is composed of two parts: the first, called E-reduction, contracts equivalent states, and the second, called RE-reduction, removes retrievable states. Finally, an example is given to illustrate the minimization algorithm.
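The E-reduction step (contracting equivalent states) has a classical crisp analogue: partition refinement for ordinary deterministic automata, sketched below. The fuzzy generalization in the paper refines this idea; this sketch covers only the crisp case:

```python
def minimize_dfa(states, alphabet, delta, accepting):
    """Moore-style partition refinement: start from the accepting /
    non-accepting split and keep splitting blocks until no input symbol
    distinguishes two states in the same block.  Returns a map from each
    state to its block id in the minimal DFA."""
    part = {s: (s in accepting) for s in states}
    while True:
        # signature = own block plus blocks reached on each input symbol
        sig = {s: (part[s], tuple(part[delta[s][a]] for a in alphabet))
               for s in states}
        blocks = {v: i for i, v in enumerate(sorted(set(sig.values()), key=repr))}
        new = {s: blocks[sig[s]] for s in states}
        if len(set(new.values())) == len(set(part.values())):
            return new                 # fixpoint: partition is stable
        part = new
```

States mapped to the same block id are exactly the states no input string can distinguish, which is the crisp counterpart of the equivalent states merged by E-reduction.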
Knox, C. E.; Cannon, D. G.
1979-01-01
A flight management algorithm designed to improve the accuracy of delivering the airplane fuel efficiently to a metering fix at a time designated by air traffic control is discussed. The algorithm provides a 3-D path with time control (4-D) for a test B 737 airplane to make an idle thrust, clean configured descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithms and the results of the flight tests are discussed.
Parallel scheduling algorithms
Energy Technology Data Exchange (ETDEWEB)
Dekel, E.; Sahni, S.
1983-01-01
Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.
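One of the problems listed above, minimizing the number of tardy jobs on a single machine, has a classical exact sequential solution, the Moore-Hodgson algorithm. The sketch below is that sequential baseline, not the shared-memory parallel version the abstract describes:

```python
import heapq

def min_tardy(jobs):
    """Moore-Hodgson: maximize the number of on-time jobs on one machine.
    jobs = list of (processing_time, due_date); returns the on-time count."""
    scheduled, t = [], 0                   # max-heap via negated times
    for p, d in sorted(jobs, key=lambda j: j[1]):   # earliest-due-date order
        heapq.heappush(scheduled, -p)
        t += p
        if t > d:                          # deadline missed: drop the longest
            t += heapq.heappop(scheduled)  # popped value is -p_max
    return len(scheduled)
```

Sorting by due date and always discarding the longest accepted job when a deadline is missed is what makes the greedy choice provably optimal.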
Maj, P.; Baumbaugh, A.; Deptuch, G.; Grybos, P.; Szczygiel, R.
2012-12-01
Charge sharing is the main limitation of pixel detectors used in spectroscopic applications, in both time and amplitude/energy spectroscopy. Even though charge sharing has been the subject of many studies, there is still no definitive solution that can be implemented in hardware to suppress its negative effects, mainly because of the strict demands on low power dissipation and small silicon area for a single pixel. The first solution to this problem was proposed at CERN and implemented in the Medipix III chip; however, due to pixel-to-pixel threshold dispersion and some imperfections of the simplified algorithm, hit allocation did not function properly. We present novel algorithms that allocate hits correctly even in the presence of charge sharing and that can be implemented in an integrated circuit using a deep-submicron technology. Our simulations account not only for the diffusive charge spread that occurs as charge drifts towards the electrodes but also for limitations of the readout electronics, i.e., signal fluctuations due to noise and mismatch (gain and offsets). The simulations show that, using for example a silicon pixel detector in the low X-ray energy range, we are able to identify hit positions correctly and to use the information from summing inter-pixel nodes for spectroscopic measurements.
Institute of Scientific and Technical Information of China (English)
江兆林; 徐宗本; 高淑萍
2006-01-01
In this paper, the permutation factor circulant matrix over an arbitrary field is introduced. Algorithms for computing the minimal polynomial and the common minimal polynomial of such matrices over any field are presented by means of the Gröbner basis of an ideal in the polynomial ring, and two algorithms for finding the inverses of such matrices are also presented. Finally, an algorithm for inverting a partitioned matrix with permutation factor circulant blocks over any field is given using the Schur complement; the algorithms can be carried out in the computer algebra system CoCoA 4.0 over the field of rational numbers or a residue field modulo a prime.
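As a point of comparison, the minimal polynomial of a concrete matrix can also be found numerically as the first linear dependence among I, A, A², …. The sketch below (floating-point, illustrative names) is not the paper's Gröbner-basis method, which works exactly over arbitrary fields.

```python
import numpy as np

def minimal_polynomial_coeffs(A, tol=1e-9):
    """Coefficients c[0..k] (monic, c[k] = 1) of the minimal polynomial of A,
    found as the first linear dependence among vec(I), vec(A), vec(A^2), ...
    Numeric sketch only; Cayley-Hamilton guarantees termination at k <= n."""
    n = A.shape[0]
    vecs = [np.eye(n).ravel()]
    P = np.eye(n)
    for k in range(1, n + 1):
        P = P @ A
        target = P.ravel()
        M = np.stack(vecs, axis=1)            # columns: vec(A^0..A^{k-1})
        c, *_ = np.linalg.lstsq(M, target, rcond=None)
        if np.linalg.norm(M @ c - target) < tol:
            return np.append(-c, 1.0)         # p(x) = x^k - sum_i c_i x^i
        vecs.append(target)
```

For the 2×2 swap permutation matrix this returns the coefficients of x² − 1.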
An Algorithm for Minimizing the Number of Paths in BDDs
Institute of Scientific and Technical Information of China (English)
段珊
2011-01-01
Variable swapping, the core of BDD optimization, has been successfully applied to reducing the number of BDD nodes; this paper applies the same theory, from both a theoretical and a practical standpoint, to reducing the number of BDD paths. The node reference field is redefined to record node paths, the local change in path count caused by node redirection during variable swapping is analyzed, and a dynamic linked list is introduced to record and propagate path-count increments, so that the total number of paths is finally obtained at the terminal node. The algorithm is implemented in C and integrated into the CUDD package; experiments on several functions confirm its correctness and efficiency.
Pakarinen, Sami; Toivonen, Lauri
2013-09-01
To investigate if an advanced AV search hysteresis (AVSH) algorithm, Ventricular Intrinsic Preference (VIP™), reduces the incidence of ventricular pacing (VP) in sinus node dysfunction (SND) with both intact and compromised AV conduction and with intermittent AV block, regardless of the lead positions in the right atrium and the ventricle. Patients were classified as having intact AV (AVi) conduction if the PR interval was ≤ 210 ms on ECG with 1:1 AV conduction during atrial pacing up to 120 bpm with a PR interval ≤ 350 ms; otherwise the AV conduction was classified as compromised (AVc). Both AVi and AVc patients were randomized to VIP ON or OFF. VIP performed an intrinsic AV conduction search every 30 s for three consecutive atrial cycles, extending the sensed and paced AV (SAV/PAV) delays from basic values of 150/200 ms to 300/350 ms. Extended AV intervals were allowed for three cycles when VP occurred before returning to basic AV delays. The primary end-point was %VP at 12 months. Among 389 patients, 30.1% had intact and 69.9% had compromised AV conduction. The mean %VP at 12 months was 9.6% with VIP compared to 51.8% with standard AV settings in patients with AVi (P < 0.0001), and 28.0% versus 78.9% (P < 0.0001) with AVc. With VIP, excessive %VP was not seen among the most used lead positions; conversely, when VIP was off, %VP was low only in patients who had leads in the RA septal-RV septal position (23.0%). The VIP feature reduces VP both in patients with SND and in those with intermittent heart block, regardless of the lead positions in the right atrium and the ventricle.
L1-norm minimization for quaternion signals
Wu, Jiasong; Wang, Xiaoqing; Senhadji, Lotfi; Shu, Huazhong
2012-01-01
The l1-norm minimization problem plays an important role in compressed sensing (CS) theory. In this letter we present an algorithm for solving the l1-norm minimization problem for quaternion signals by converting it to second-order cone programming. An application example of the proposed algorithm is also given as a practical guideline for perfect recovery of quaternion signals. The proposed algorithm may find application where CS theory meets quaternion signal processing.
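For intuition, the real-valued analogue of the problem — min ‖x‖₁ subject to Ax = b — reduces to a plain linear program via the standard split x = u − v with u, v ≥ 0; the quaternion case treated in the paper needs second-order cone programming instead. A sketch assuming SciPy is available, with illustrative names:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, b):
    """Solve min ||x||_1 s.t. A x = b as an LP: write x = u - v with
    u, v >= 0 and minimize sum(u) + sum(v). Real-valued analogue only;
    not the paper's quaternion SOCP formulation."""
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])          # enforces A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]
```

On an underdetermined system with a sparse solution, the LP recovers the sparse vector.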
Bachas, C; Wiese, K J; Bachas, Constantin; Doussal, Pierre Le; Wiese, Kay Joerg
2006-01-01
We study minimal surfaces which arise in wetting and capillarity phenomena. Using conformal coordinates, we reduce the problem to a set of coupled boundary equations for the contact line of the fluid surface, and then derive simple diagrammatic rules to calculate the non-linear corrections to the Joanny-de Gennes energy. We argue that perturbation theory is quasi-local, i.e. that all geometric length scales of the fluid container decouple from the short-wavelength deformations of the contact line. This is illustrated by a calculation of the linearized interaction between contact lines on two opposite parallel walls. We present a simple algorithm to compute the minimal surface and its energy based on these ideas. We also point out the intriguing singularities that arise in the Legendre transformation from the pure Dirichlet to the mixed Dirichlet-Neumann problem.
Resource Minimization Job Scheduling
Chuzhoy, Julia; Codenotti, Paolo
Given a set J of jobs, where each job j is associated with a release date r_j, deadline d_j and processing time p_j, our goal is to schedule all jobs using the minimum possible number of machines. Scheduling a job j requires selecting an interval of length p_j between its release date and deadline, and assigning it to a machine, with the restriction that each machine executes at most one job at any given time. This is one of the basic settings in resource-minimization job scheduling, and the classical randomized rounding technique of Raghavan and Thompson provides an O(log n / log log n)-approximation for it. This result has recently been improved to an O(√(log n))-approximation, and moreover an efficient algorithm for scheduling all jobs on O(OPT²) machines has been shown. We build on this prior work to obtain a constant-factor approximation algorithm for the problem.
Institute of Scientific and Technical Information of China (English)
戴星; 张少明; 周礼明; 杜勤; 吴正一
2013-01-01
Objective: To propose an optimized algorithm that effectively reduces the total waiting time of outpatients and makes full use of existing hospital resources. Methods: With the total waiting time of outpatients as the objective function, a hybrid open-shop model of outpatient examinations was established. The examination load on each department's equipment was calculated to determine the bottleneck department, and a semi-online algorithm based on the bottleneck department was proposed to minimize the total waiting time; it was validated on data collected from 20 groups of outpatients at the Ninth People's Hospital Affiliated to Shanghai Jiaotong University School of Medicine. Results: With the bottleneck-based semi-online algorithm for sequencing examination items, the total waiting time of the 20 groups of outpatients decreased by 10.5%. Conclusion: Compared with examination sequences generated randomly by the outpatients, the proposed algorithm significantly reduces the total waiting time and improves the service capacity of existing hospital resources.
Piazza, Federico; Schücker, Thomas
2016-04-01
The minimal requirement for cosmography—a non-dynamical description of the universe—is a prescription for calculating null geodesics, and time-like geodesics as a function of their proper time. In this paper, we consider the most general linear connection compatible with homogeneity and isotropy, but not necessarily with a metric. A light-cone structure is assigned by choosing a set of geodesics representing light rays. This defines a "scale factor" and a local notion of distance, as that travelled by light in a given proper time interval. We find that the velocities and relativistic energies of free-falling bodies decrease in time as a consequence of cosmic expansion, but at a rate that can be different than that dictated by the usual metric framework. By extrapolating this behavior to photons' redshift, we find that the latter is in principle independent of the "scale factor". Interestingly, redshift-distance relations and other standard geometric observables are modified in this extended framework, in a way that could be experimentally tested. An extremely tight constraint on the model, however, is represented by the blackbody-ness of the cosmic microwave background. Finally, as a check, we also consider the effects of a non-metric connection in a different set-up, namely, that of a static, spherically symmetric spacetime.
What Is Minimally Invasive Dentistry? (article; reviewed January 2012)
Scheduling Algorithm for Imaging Satellites to Minimize the Global Makespan
Institute of Scientific and Technical Information of China (English)
张利宁; 邱涤珊; 李皓平; 祝江汉
2011-01-01
Task scheduling for imaging reconnaissance satellites is a typical multi-constraint combinatorial optimization problem, and minimizing the global makespan is a standard optimization objective when timeliness matters. Under this objective, a hybrid algorithm combining integer programming and constraint programming is proposed. The constrained integer programming model is decomposed into a master problem and a subproblem through Benders decomposition, and the two parts are solved with the solvers MOSEK and GECODE, respectively. The generated Benders cuts are returned to the master problem for iterative solving until an optimized solution is obtained. The efficiency of the algorithm was tested by simulation, with promising results.
Institute of Scientific and Technical Information of China (English)
吕岚; 甘旭升; 屈虹; 赵海涛
2014-01-01
To improve the modeling performance of RBF neural networks, a training algorithm based on a modified unscented Kalman filter (UKF) is proposed. In the algorithm, a scaled minimal-skew simplex sigma-point sampling strategy is first introduced into the unscented transform (UT) to improve the UKF's computational efficiency, and the improved UKF is then used to estimate the optimal parameters of the RBF network. Simulations show that, for RBF network training, the modified UKF achieves higher model accuracy than the EKF and accuracy roughly on par with the traditional UKF, but with faster training and higher computational efficiency.
BDD Minimization for Approximate Computing
Soeken, Mathias; Grosse, Daniel; Chandrasekharan, Arun; Drechsler, Rolf
2016-01-01
We present Approximate BDD Minimization (ABM) as a problem that has application in approximate computing. Given a BDD representation of a multi-output Boolean function, ABM asks whether there exists another function that has a smaller BDD representation but meets a threshold w.r.t. an error metric. We present operators to derive approximated functions and present algorithms to exactly compute the error metrics directly on the BDD representation. An experimental evaluation demonstrates the app...
Institute of Scientific and Technical Information of China (English)
邓波; 杨晓东
2000-01-01
In a massively parallel processor (MPP) system, the routing algorithm is a primary factor influencing the performance of the interconnection network and of the MPP system as a whole. After analyzing the characteristics of message routing in interconnection networks, a new concept, the "Best Network for Routing" (BNR), is proposed. Using it, any proposed minimal deadlock-free fully-adaptive routing algorithm (MDF2A2) can be analyzed, and two new MDF2A2 algorithms, VBA and LCFAA, are designed. This provides guidelines for interconnection network designers.
Institute of Scientific and Technical Information of China (English)
李江晨; 徐小维; 韩君佩; 胡昱; 邹雪城
2013-01-01
Multi-touch technology has been applied in many aspects of daily life and brings great convenience to human-computer interaction. Among the new vision-based multi-touch technologies, frustrated total internal reflection (FTIR) is one of the most promising, but it is sensitive to ambient infrared noise and cannot reliably detect finger touch points in daylight. To address this, a synchronized pulsed-light adjacent-frame-difference algorithm (SPLA) is proposed that gives FTIR good immunity to ambient infrared noise, enabling reliable finger-touch detection in daylight. A multi-touch hardware platform with an embedded synchronized pulsed light source was built, the SPLA was implemented on it, and extensive touch experiments were performed. The results show that, compared with the traditional background-difference algorithm, the SPLA increases the contrast of the touch blobs by about 3.5 times and identifies touch points accurately. Owing to the generic hardware structure, the SPLA can also be ported to other multi-touch platforms to minimize ambient noise.
Neurocontroller analysis via evolutionary network minimization.
Ganon, Zohar; Keinan, Alon; Ruppin, Eytan
2006-01-01
This study presents a new evolutionary network minimization (ENM) algorithm. Neurocontroller minimization is beneficial for finding small, parsimonious networks that permit a better understanding of their workings. The ENM algorithm is specifically geared to an evolutionary-agents setup, as it does not require any explicit supervised training error, and it is easily incorporated into current evolutionary algorithms. ENM is based on a standard genetic algorithm with an additional step during reproduction in which synaptic connections are irreversibly eliminated. It receives as input a successfully evolved neurocontroller and aims to output a pruned neurocontroller while maintaining the original fitness level. The small neurocontrollers produced by ENM provide upper bounds on the neurocontroller size needed to perform a given task successfully, and can allow more efficient hardware implementations.
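The elimination idea at the heart of ENM can be caricatured outside the genetic algorithm as greedy, fitness-preserving pruning. The sketch below (illustrative names and data layout, not the authors' implementation, which embeds irreversible elimination inside GA reproduction) keeps a deletion only if fitness does not drop:

```python
import random

def prune_network(weights, fitness, tol, rng=random.Random(0)):
    """Greedy caricature of ENM's elimination step: tentatively delete each
    synapse in random order and keep the deletion only if fitness stays
    within `tol` of the original. `weights` is a dict {(pre, post): w};
    `fitness` maps a weight dict to a score."""
    base = fitness(weights)
    weights = dict(weights)         # work on a copy
    keys = list(weights)
    rng.shuffle(keys)
    for k in keys:
        saved = weights.pop(k)      # tentatively eliminate the synapse
        if fitness(weights) < base - tol:
            weights[k] = saved      # deletion hurt fitness: restore it
    return weights
```

With a fitness that depends on one critical connection, pruning keeps exactly that connection.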
Minimizing ADMs on WDM Directed Fiber Trees
Institute of Scientific and Technical Information of China (English)
ZHOU FengFeng (周丰丰); CHEN GuoLiang (陈国良); XU YinLong (许胤龙); GU Jun (顾钧)
2003-01-01
This paper proposes a polynomial-time algorithm for the Minimum WDM/SONET Add/Drop Multiplexer problem (MADM) on WDM directed fiber trees, whether or not wavelength converters are used. It runs in O(m²n) time, where n and m are the number of nodes of the tree and the number of requests, respectively. Incorporating T. Erlebach et al.'s work, the proposed algorithm also reaches the lower bound on the number of wavelengths required by greedy algorithms for the case without wavelength converters. Combined with some previous work, the algorithm greatly reduces the number of required wavelengths while using the minimal number of ADMs for the case with limited wavelength converters. Experimental results confirm the minimal number of required ADMs on WDM directed fiber trees.
On Quantum Channel Estimation with Minimal Resources
Zorzi, M; Ferrante, A
2011-01-01
We determine the minimal experimental resources that ensure a unique solution in the estimation of trace-preserving quantum channels with both direct and convex optimization methods. A convenient parametrization of the constrained set is used to develop a globally converging Newton-type algorithm that ensures a physically admissible solution to the problem. Numerical simulations are provided to support the results, and indicate that the minimal experimental setting is sufficient to guarantee good estimates.
Minimization over randomly selected lines
Directory of Open Access Journals (Sweden)
Ismet Sahin
2013-07-01
This paper presents a population-based evolutionary optimization method for minimizing a given cost function. The mutation operator selects randomly oriented lines in the cost-function domain, constructs quadratic functions interpolating the cost function at three different points over each line, and uses the extrema of the quadratics as mutated points. The crossover operator modifies each mutated point based on components of two points in the population, instead of one point as is usual in other evolutionary algorithms. The stopping criterion depends on the number of almost-degenerate quadratics. We demonstrate that the proposed method with these mutation and crossover operations achieves faster and more robust convergence than the well-known Differential Evolution and Particle Swarm algorithms.
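The mutation step described above — fit a quadratic to the cost function at three points along a random line and jump to the quadratic's vertex — can be sketched as follows. The step size and names are illustrative, not the authors':

```python
import numpy as np

def quadratic_line_mutation(f, x, scale=1.0, rng=np.random.default_rng(0)):
    """One mutation in the spirit of the paper: sample f at three points
    along a random unit direction through x, fit the interpolating
    quadratic, and move to its vertex if it is a minimum."""
    d = rng.normal(size=x.shape)
    d /= np.linalg.norm(d)
    t = np.array([-scale, 0.0, scale])
    y = np.array([f(x + ti * d) for ti in t])
    a, b, _ = np.polyfit(t, y, 2)       # y ~ a t^2 + b t + c
    if a <= 0:                          # degenerate/concave: no interior minimum
        return x
    return x + (-b / (2 * a)) * d       # vertex of the interpolating quadratic
```

On a convex quadratic cost, one mutation can never increase the cost, since it moves to the exact line minimum.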
MINIMAL FUZZY MICROCONTROLLER IMPLEMENTATION FOR DIDACTIC APPLICATIONS
F. Lara-Rojo; E. N. Sánchez; D. Zaldívar-Navarro
2003-01-01
Fuzzy techniques have been successfully used in control in several fields, and engineers and researchers are today considering fuzzy logic algorithms in order to implement intelligent functions in embedded systems. We have started to develop a set of teaching tools to support our courses on intelligent control. Low cost implementations of didactic systems are particularly important in developing countries. In this paper we present the implementation of a minimal PD fuzzy four-rule algorithm i...
Subspace Correction Methods for Total Variation and $\ell_1$-Minimization
Fornasier, Massimo
2009-01-01
This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. On each subspace an iterative proximity-map algorithm is implemented via oblique thresholding, which is the main new tool introduced in this work. We provide convergence conditions for the algorithm in order to compute minimizers of the target energy. Analogous results are derived for a parallel variant of the algorithm. Applications are presented in domain decomposition methods for degenerate elliptic PDEs arising in total variation minimization and in accelerated sparse recovery algorithms based on ℓ1-minimization. We include numerical examples which show efficient solutions to classical problems in signal and image processing. © 2009 Society for Industrial and Applied Mathematics.
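For context, the plain (non-oblique) proximity map of the ℓ1 term is soft thresholding, and iterating it gives the classical ISTA baseline that subspace-splitting schemes of this kind decompose and accelerate. A sketch with illustrative names; this is the textbook iteration, not the paper's oblique-thresholding algorithm:

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximity map of lam * ||.||_1 (componentwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(A, b, lam, steps=500):
    """Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 by iterating a gradient
    step followed by the proximity map (classical ISTA)."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x
```

For A = I the minimizer is soft thresholding of b itself, which the iteration reaches in one step.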
Institute of Scientific and Technical Information of China (English)
吴彪; 张颖; 张兆扬
2006-01-01
This paper proposes an optimal way to choose the number of enhancement layers in a fine granularity scalability (FGS) scheme under a minimum-transmission-energy constraint, in which FGS is combined with transmission energy control so that the FGS enhancement-layer transmission energy is minimized while the distortion target is guaranteed. By varying the bit-plane level and packet loss rate, the minimum transmission energy of the enhancement layer is obtained while the expected distortion is satisfied.
Totally Corrective Boosting for Regularized Risk Minimization
Shen, Chunhua; Barnes, Nick
2010-01-01
Consideration of the primal and dual problems together leads to important new insights into the characteristics of boosting algorithms. In this work, we propose a general framework that can be used to design new boosting algorithms. A wide variety of machine learning problems essentially minimize a regularized risk functional. We show that the proposed boosting framework, termed CGBoost, can accommodate various loss functions and different regularizers in a totally-corrective optimization fashion. We show that, by solving the primal rather than the dual, a large body of totally-corrective boosting algorithms can actually be efficiently solved and no sophisticated convex optimization solvers are needed. We also demonstrate that some boosting algorithms like AdaBoost can be interpreted in our framework even though their optimization is not totally corrective. We empirically show that various boosting algorithms based on the proposed framework perform similarly on the UC Irvine machine learning datasets [1] that we hav...
Locally minimal topological groups
Außenhofer, Lydia; Chasco, María Jesús; Dikranjan, Dikran; Domínguez, Xabier
2009-01-01
A Hausdorff topological group $(G,\tau)$ is called locally minimal if there exists a neighborhood $U$ of 0 in $\tau$ such that $U$ fails to be a neighborhood of zero in any Hausdorff group topology on $G$ which is strictly coarser than $\tau$. Examples of locally minimal groups are all subgroups of Banach-Lie groups, all locally compact groups and all minimal groups. Motivated by the fact that locally compact NSS groups are Lie groups, we study the connection between local minimality and the ...
Khimshiashvili, G.; Siersma, D.
2001-01-01
We describe the structure of minimal round functions on closed surfaces and three-folds. The minimal possible number of critical loops is determined and typical non-equisingular round function germs are interpreted in the spirit of isolated line singularities. We also discuss a version of Lusternik-
Institute of Scientific and Technical Information of China (English)
张建民; 黎铁军; 张峻; 徐炜遐; 李思昆
2012-01-01
As register-transfer-level and even behavioral-level hardware description languages are now widely used, Satisfiability Modulo Theories (SMT) is gradually replacing Boolean Satisfiability (SAT) and plays an important role in VLSI formal verification. A minimal unsatisfiable subformula can help automatic tools rapidly locate errors. A depth-first-search algorithm adopting an incremental solving strategy is proposed to extract minimal unsatisfiable subformulae in SMT. The experimental results show that the algorithm effectively derives minimal unsatisfiable subformulae and, as the number of variables and clauses in the original formulae grows, outperforms the breadth-first-search algorithm, previously the best method for computing minimal unsatisfiable subformulae in SMT.
Greedy algorithm with weights for decision tree construction
Moshkov, Mikhail
2010-12-01
An approximate algorithm for minimization of the weighted depth of decision trees is considered. A bound on the accuracy of this algorithm is obtained which is unimprovable in the general case. Under some natural assumptions on the class NP, the considered algorithm is close (from the point of view of accuracy) to the best polynomial approximate algorithms for minimization of the weighted depth of decision trees.
Greedy algorithms with weights for construction of partial association rules
Moshkov, Mikhail
2009-09-10
This paper is devoted to the study of approximate algorithms for minimization of the total weight of attributes occurring in partial association rules. We consider mainly greedy algorithms with weights for the construction of rules. The paper contains bounds on the precision of these algorithms and bounds on the minimal weight of partial association rules, based on information obtained during the run of the greedy algorithm.
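Greedy-with-weights schemes of this kind can be phrased as weighted set cover: repeatedly pick the attribute (set) minimizing weight per newly covered element. A generic sketch with illustrative names, not the paper's exact rule-construction procedure:

```python
def greedy_weighted_cover(universe, sets, weights):
    """Greedy weighted cover: while elements remain uncovered, choose the
    set minimizing weight / (number of newly covered elements).
    sets: dict name -> set of elements; weights: dict name -> weight."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (name for name in sets if sets[name] & uncovered),
            key=lambda name: weights[name] / len(sets[name] & uncovered),
        )
        chosen.append(best)
        uncovered -= sets[best]
    return chosen
```

Two cheap small sets beat one expensive big set here, since the greedy criterion compares weight per element, not raw weight.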
Heuristic procedures for minimizing makespan and the number of required pallets
Chu, Chengbin; Proth, Jean-Marie; Sethi, Suresh
1993-01-01
A heuristic procedure is developed for minimizing makespan in flow-shop scheduling problems. In comparison with current algorithms, our algorithm seems to result in an improved makespan with a small additional computational effort. An algorithm is also developed to minimize the required number of pallets for a given makespan.
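For the two-machine special case, makespan-minimizing flow-shop sequencing has an exact classical answer (Johnson's rule), a useful baseline for general-case heuristics like the one above. This sketch is the textbook rule, not the authors' procedure, and the names are illustrative:

```python
def johnsons_rule(jobs):
    """Johnson's rule for the two-machine flow shop: jobs faster on machine 1
    go first in increasing order of their machine-1 time; the rest go last
    in decreasing order of their machine-2 time. Yields an optimal makespan.
    jobs: list of (time_machine1, time_machine2)."""
    front = sorted((j for j in jobs if j[0] <= j[1]), key=lambda j: j[0])
    back = sorted((j for j in jobs if j[0] > j[1]), key=lambda j: -j[1])
    return front + back

def makespan(seq):
    """Completion time of the last job on machine 2 for a given sequence."""
    t1 = t2 = 0
    for a, b in seq:
        t1 += a                 # machine 1 processes jobs back to back
        t2 = max(t2, t1) + b    # machine 2 waits for machine 1's output
    return t2
```

On the three-job instance below, the makespan 10 matches the lower bound (first machine-1 time plus total machine-2 time), so the sequence is optimal.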
Transportation cost minimization of a manufacturing firm using ...
African Journals Online (AJOL)
Transportation cost minimization of a manufacturing firm using genetic algorithm approach. ... Nigerian Journal of Technology ... the transportation cost) and a corresponding increase in its transportation cost as a result of government's removal ...
Minimal change disease. Alternative names: minimal change nephrotic syndrome; nil disease; lipoid nephrosis; idiopathic nephrotic syndrome of childhood.
Garcia, Jesus E.; Gonzalez-Lopez, Veronica A.
2010-01-01
In this work we introduce a new and richer class of finite-order Markov chain models and address the following model selection problem: find the Markov model with the minimal set of parameters (minimal Markov model) necessary to represent a source as a Markov chain of finite order. Let $M$ be the order of the chain and $A$ the finite alphabet. To determine the minimal Markov model, we define an equivalence relation on the state space $A^{M}$ such that all sequences of size $M$ with the same transition probabilities are put in the same category. In this way we have one set of $(|A|-1)$ transition probabilities for each category, obtaining a model with a minimal number of parameters. We show that the model can be selected consistently using the Bayesian information criterion.
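Selection by BIC, the criterion the paper uses, can be sketched for the simpler task of choosing the chain's order M from data; the paper additionally merges contexts with equal transition probabilities into a minimal model, which this sketch does not do. Names are illustrative:

```python
import math
from collections import Counter

def bic_markov_order(seq, alphabet, max_order):
    """Choose the Markov order minimizing BIC = -2 log L + k log n, where
    k is the number of free transition parameters. Plain order selection;
    the paper's minimal Markov model goes further by merging contexts."""
    n = len(seq)
    best = None
    for m in range(max_order + 1):
        ctx, trans = Counter(), Counter()
        for i in range(m, n):
            c = tuple(seq[i - m:i])          # length-m context
            ctx[c] += 1
            trans[c, seq[i]] += 1
        # Maximum-likelihood log-likelihood under the order-m model.
        loglik = sum(cnt * math.log(cnt / ctx[c])
                     for (c, _), cnt in trans.items())
        k = (len(alphabet) ** m) * (len(alphabet) - 1)
        bic = -2 * loglik + k * math.log(n)
        if best is None or bic < best[0]:
            best = (bic, m)
    return best[1]
```

On a strictly alternating sequence, order 1 already makes the data deterministic, so BIC picks 1 rather than paying for order-2 parameters.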
Ruled Laguerre minimal surfaces
Skopenkov, Mikhail
2011-10-30
A Laguerre minimal surface is an immersed surface in ℝ³ that is an extremal of the functional ∫(H²/K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces r(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ) + λ(sin φ, cos φ, 0), where A, B, C, D ∈ ℝ are fixed. To achieve invariance under Laguerre transformations, we also derive all Laguerre minimal surfaces that are enveloped by a family of cones. The methodology is based on the isotropic model of Laguerre geometry. In this model a Laguerre minimal surface enveloped by a family of cones corresponds to the graph of a biharmonic function carrying a family of isotropic circles. We classify such functions by showing that the top view of the family of circles is a pencil. © 2011 Springer-Verlag.
Allanach, B C; Tunstall, Lewis C; Voigt, A; Williams, A G
2013-01-01
We describe an extension to the SOFTSUSY program that provides for the calculation of the sparticle spectrum in the Next-to-Minimal Supersymmetric Standard Model (NMSSM), where a chiral superfield that is a singlet of the Standard Model gauge group is added to the Minimal Supersymmetric Standard Model (MSSM) fields. Often, a $\mathbb{Z}_3$ symmetry is imposed upon the model. SOFTSUSY can calculate the spectrum in this case as well as in the case where general $\mathbb{Z}_3$-violating terms are added to the soft supersymmetry breaking terms and the superpotential. The user provides a theoretical boundary condition for the couplings and mass terms of the singlet. Radiative electroweak symmetry breaking data along with electroweak and CKM matrix data are used as weak-scale boundary conditions. The renormalisation group equations are solved numerically between the weak scale and a high energy scale using a nested iterative algorithm. This paper se...
Noise Tolerance under Risk Minimization
Manwani, Naresh
2011-01-01
In this paper we explore the problem of noise tolerant learning of classifiers. We formulate the problem as follows. We assume that there is an unobservable training set which is noise-free. The actual training set given to the learning algorithm is obtained from this ideal data set by corrupting the class label of each example, where the probability that the class label on an example is corrupted is a function of the feature vector of the example. This would account for almost all kinds of noisy data one may encounter in practice. We say that a learning method is noise tolerant if the classifiers learnt with the ideal noise-free data and with noisy data have the same classification accuracy on the noise-free data. In this paper we analyze the noise tolerant properties of risk minimization, which is a generic method for learning classifiers. We consider different loss functions such as 0-1 loss, hinge loss, exponential loss, squared error loss etc. We show that the risk minimization under 0-1 loss func...
Institute of Scientific and Technical Information of China (English)
王伟; 王伟东; 董为; 杜志江; 孙永平
2016-01-01
Compared with traditional minimally invasive surgery, the preoperative preparation of robot-assisted minimally invasive surgery is more critical and complex. For the purposes of improving operation efficiency and making full use of the characteristics of the robot, the kinematic performance of the manipulator as well as the collaborative capability between instruments are taken as the optimization objectives, and a dexterity index named IICV and an internal collaboration space index named IICS are proposed for robot-assisted minimally invasive surgery. A multi-objective preoperative planning method based on NSGA-II is designed to optimize the incision placement and the initial pose of the manipulator as a unified whole. Finally, a comparison between the preoperative planning schemes generated by the optimization algorithm and those recommended by an experienced surgeon is performed. The experimental results show that the preoperative planning scheme obtained by the optimization algorithm is effective, and it can provide a relatively superior operating environment for robot-assisted minimally invasive surgery.
On Time with Minimal Expected Cost!
DEFF Research Database (Denmark)
David, Alexandre; Jensen, Peter Gjøl; Larsen, Kim Guldstrand
2014-01-01
) timed game essentially defines an infinite-state Markov (reward) decision process. In this setting the objective is classically to find a strategy that will minimize the expected reachability cost, but with no guarantees on worst-case behaviour. In this paper, we provide efficient methods for computing reachability strategies that will both ensure worst-case time bounds and provide (near-)minimal expected cost. Our method extends the synthesis algorithms of the synthesis tool Uppaal-Tiga with suitably adapted reinforcement learning techniques, and exhibits several orders of magnitude improvements w...
Image restoration by minimizing zero norm of wavelet frame coefficients
Bao, Chenglong; Dong, Bin; Hou, Likun; Shen, Zuowei; Zhang, Xiaoqun; Zhang, Xue
2016-11-01
In this paper, we propose two algorithms, namely the extrapolated proximal iterative hard thresholding (EPIHT) algorithm and the EPIHT algorithm with line search, for solving the $\ell_0$-norm regularized wavelet frame balanced approach for image restoration. Under the theoretical framework of the Kurdyka-Łojasiewicz property, we show that the sequences generated by the two algorithms converge to a local minimizer with a linear convergence rate. Moreover, extensive numerical experiments on sparse signal reconstruction and wavelet frame based image restoration problems, including CT reconstruction and image deblurring, demonstrate the improvement of $\ell_0$-norm based regularization models over some prevailing ones, as well as the computational efficiency of the proposed algorithms.
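The core of such $\ell_0$-regularized solvers is proximal iterative hard thresholding. The sketch below is a generic IHT iteration for a sparse least-squares problem, not the authors' EPIHT (no extrapolation, no line search); the problem sizes, the regularization weight, and the synthetic data are illustrative assumptions.

```python
import numpy as np

# min_x 0.5*||Ax - b||^2 + lam*||x||_0 via proximal gradient (IHT).
rng = np.random.default_rng(0)
m, n, k = 30, 20, 3
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[:k] = [2.0, -1.5, 1.0]          # 3-sparse ground truth
b = A @ x_true                          # noiseless measurements

lam = 0.01
L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
thresh = np.sqrt(2 * lam / L)           # hard-threshold level: prox of (lam/L)*||.||_0

x = np.zeros(n)
for _ in range(2000):
    z = x - A.T @ (A @ x - b) / L       # gradient step on the smooth term
    x = np.where(np.abs(z) > thresh, z, 0.0)  # hard-thresholding prox
```

The prox of the $\ell_0$ penalty acts coordinate-wise: a coordinate survives only if keeping it lowers the objective more than the per-coordinate penalty, i.e. if |z| > sqrt(2*lam/L).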
Discontinuous penalty approach with deviation integral for global constrained minimization
Institute of Scientific and Technical Information of China (English)
Liu CHEN; Yi-rong YAO; Quan ZHENG
2009-01-01
of the penalized minimization problems are proven. To implement the algorithm, the cross-entropy method and importance sampling are used based on the Monte Carlo technique. Numerical tests show the effectiveness of the proposed algorithm.
Surface Reconstruction and Image Enhancement via $L^1$-Minimization
Dobrev, Veselin
2010-01-01
A surface reconstruction technique based on minimization of the total variation of the gradient is introduced. Convergence of the method is established, and an interior-point algorithm solving the associated linear programming problem is introduced. The reconstruction algorithm is illustrated on various test cases including natural and urban terrain data, and enhancement of low-resolution or aliased images. Copyright © by SIAM.
Locally minimal topological groups
Außenhofer, Lydia; Dikranjan, Dikran; Domínguez, Xabier
2009-01-01
A Hausdorff topological group $(G,\\tau)$ is called locally minimal if there exists a neighborhood $U$ of 0 in $\\tau$ such that $U$ fails to be a neighborhood of zero in any Hausdorff group topology on $G$ which is strictly coarser than $\\tau.$ Examples of locally minimal groups are all subgroups of Banach-Lie groups, all locally compact groups and all minimal groups. Motivated by the fact that locally compact NSS groups are Lie groups, we study the connection between local minimality and the NSS property, establishing that under certain conditions, locally minimal NSS groups are metrizable. A symmetric subset of an abelian group containing zero is said to be a GTG set if it generates a group topology in an analogous way as convex and symmetric subsets are unit balls for pseudonorms on a vector space. We consider topological groups which have a neighborhood basis at zero consisting of GTG sets. Examples of these locally GTG groups are: locally pseudo--convex spaces, groups uniformly free from small subgroups (...
Arveson, W
1995-01-01
It is known that every semigroup of normal completely positive maps of a von Neumann algebra can be ``dilated'' in a particular way to an E_0-semigroup acting on a larger von Neumann algebra. The E_0-semigroup is not uniquely determined by the completely positive semigroup; however, it is unique (up to conjugacy) provided that certain conditions of minimality are met. Minimality is a subtle property, and it is often not obvious if it is satisfied for specific examples even in the simplest case where the von Neumann algebra is $\mathcal{B}(H)$. In this paper we clarify these issues by giving a new characterization of minimality in terms of projective cocycles and their limits. Our results are valid for semigroups of endomorphisms acting on arbitrary von Neumann algebras with separable predual.
Intuitive Minimal Abduction in Sequent Calculi
Institute of Scientific and Technical Information of China (English)
伊波; 陶先平; et al.
1998-01-01
Some computational issues on abduction are discussed in a framework of the first-order sequent calculus. Starting from revising the meaning of "good" abduction, a new criterion of abduction called intuitive-minimal abduction (IMA) is introduced. An IMA is an abductive formula equivalent to the minimal abductive formula under the theory part of a sequent, and literally as simple as possible. Abduction algorithms are presented on the basis of a complete natural reduction system. An abductive formula obtained by the algorithms presented in this paper is an IMA if the reduction tree from which the abduction is performed is fully expanded. Instead of using Skolem functions, a term ordering is used to indicate dependency between terms.
Minimal Change and Bounded Incremental Parsing
Wiren, M
1994-01-01
Ideally, the time that an incremental algorithm uses to process a change should be a function of the size of the change rather than, say, the size of the entire current input. Based on a formalization of ``the set of things changed'' by an incremental modification, this paper investigates how and to what extent it is possible to give such a guarantee for a chart-based parsing framework and discusses the general utility of a minimality notion in incremental processing.
Reaction torque minimization techniques for articulated payloads
Kral, Kevin; Aleman, Roberto M.
1988-01-01
Articulated payloads on spacecraft, such as antenna telemetry systems and robotic elements, impart reaction torques back into the vehicle which can significantly affect the performance of other payloads. This paper discusses ways to minimize the reaction torques of articulated payloads through command-shaping algorithms and unique control implementations. The effects of reaction torques encountered on Landsat are presented and compared with simulated and measured data of prototype systems employing these improvements.
Directory of Open Access Journals (Sweden)
Eduardo Salazar Hornig
2011-08-01
This paper studied the permutation flowshop scheduling problem with sequence-dependent setup times and makespan minimization. An ant colony optimization (ACO) algorithm that maps the original problem onto an asymmetric TSP (Traveling Salesman Problem) structure is presented, applied to problems proposed in the literature, and compared with an adaptation of the NEH (Nawaz-Enscore-Ham) heuristic. Subsequently, a neighborhood search was applied to the solutions obtained by both the ACO algorithm and the NEH heuristic.
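The NEH heuristic used as the comparison baseline above can be sketched as follows: sort jobs by decreasing total processing time, then insert each job at the position that minimizes the partial makespan. This is the classical NEH for the plain permutation flowshop (no setup times); the processing-time matrix is an illustrative assumption.

```python
def makespan(seq, p):
    # p[j][i]: processing time of job j on machine i; classic flowshop recursion
    m = len(p[0])
    c = [0] * m                      # rolling completion times per machine
    for j in seq:
        c[0] += p[j][0]
        for i in range(1, m):
            c[i] = max(c[i], c[i - 1]) + p[j][i]
    return c[-1]

def neh(p):
    # order jobs by decreasing total processing time
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [jobs[0]]
    for j in jobs[1:]:
        # try every insertion position, keep the best partial sequence
        seq = min((seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq

# Toy instance: 4 jobs, 3 machines (illustrative data)
p = [[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8]]
order = neh(p)
```

NEH runs in O(n^2) makespan evaluations and is a standard constructive baseline for makespan minimization, which is why sequence-dependent-setup variants are typically compared against an adaptation of it.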
Directory of Open Access Journals (Sweden)
Eduardo Salazar Hornig
2012-04-01
This paper studied the problem of sequencing n jobs in a k-stage flexible flowshop with a different number of identical parallel machines per stage, with anticipatory sequence-dependent setup times (SDST) and tardiness minimization. The performance of constructive heuristics and a standard genetic algorithm is compared. The methods are evaluated experimentally on a set of randomly generated test problems. The results show that the genetic algorithm outperforms some, but not all, of the compared heuristics.
Minimal constrained supergravity
Directory of Open Access Journals (Sweden)
N. Cribiori
2017-01-01
We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called "de Sitter" supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.
Minimally invasive periodontal therapy.
Dannan, Aous
2011-10-01
Minimally invasive dentistry is a concept that preserves dentition and supporting structures. Minimally invasive procedures in periodontal treatment, however, are largely confined to periodontal surgery, where they represent alternative approaches developed to allow less extensive manipulation of surrounding tissues than conventional procedures while accomplishing the same objectives. In this review, the concept of minimally invasive periodontal surgery (MIPS) is first explained. An electronic search for all studies regarding the efficacy and effectiveness of MIPS between 2001 and 2009 was conducted. For this purpose, suitable key words from Medical Subject Headings on PubMed were used to extract the required studies. All studies are presented and important results are summarized. Preliminary data from case cohorts and from many studies reveal that the microsurgical access flap, in terms of MIPS, has a high potential to seal the healing wound from the contaminated oral environment by achieving and maintaining primary closure. Soft tissues are mostly preserved and minimal gingival recession is observed, an important feature for meeting the demands of the patient and the clinician in the esthetic zone. However, although the potential efficacy of MIPS in the treatment of deep intrabony defects has been proved, larger studies are required to confirm and extend the reported positive preliminary outcomes.
Logarithmic Superconformal Minimal Models
Pearce, Paul A; Tartaglia, Elena
2013-01-01
The higher fusion level logarithmic minimal models LM(P,P';n) have recently been constructed as the diagonal GKO cosets $(A_1^{(1)})_k \oplus (A_1^{(1)})_n / (A_1^{(1)})_{k+n}$ where n>0 is an integer fusion level and k=nP/(P'-P)-2 is a fractional level. For n=1, these are the logarithmic minimal models LM(P,P'). For n>1, we argue that these critical theories are realized on the lattice by n x n fusion of the n=1 models. For n=2, we call them logarithmic superconformal minimal models LSM(p,p') where P=|2p-p'|, P'=p' and p,p' are coprime, and they share the central charges of the rational superconformal minimal models SM(P,P'). Their mathematical description entails the fused planar Temperley-Lieb algebra which is a spin-1 BMW tangle algebra with loop fugacity $\beta_2=x^2+1+x^{-2}$ and twist $\omega=x^4$ where $x=e^{i(p'-p)\pi/p'}$. Examples are superconformal dense polymers LSM(2,3) with c=-5/2, $\beta_2=0$ and superconformal percolation LSM(3,4) with c=0, $\beta_2=1$. We calculate the free energies analytically. By numerical...
Prostate resection - minimally invasive
Subspace Expanders and Matrix Rank Minimization
Khajehnejad, Amin; Hassibi, Babak
2011-01-01
Matrix rank minimization (RM) problems recently gained extensive attention due to numerous applications in machine learning, system identification and graphical models. In RM problem, one aims to find the matrix with the lowest rank that satisfies a set of linear constraints. The existing algorithms include nuclear norm minimization (NNM) and singular value thresholding. Thus far, most of the attention has been on i.i.d. Gaussian measurement operators. In this work, we introduce a new class of measurement operators, and a novel recovery algorithm, which is notably faster than NNM. The proposed operators are based on what we refer to as subspace expanders, which are inspired by the well known expander graphs based measurement matrices in compressed sensing. We show that given an $n\times n$ PSD matrix of rank $r$, it can be uniquely recovered from a minimal sampling of $O(nr)$ measurements using the proposed structures, and the recovery algorithm can be cast as matrix inversion after a few initial processing s...
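The singular value thresholding operator mentioned above, i.e. the proximal operator of the nuclear norm, is the basic building block of NNM-style solvers. A minimal sketch (the test matrix with known singular values is an illustrative assumption, not from the paper):

```python
import numpy as np

def svt(M, tau):
    # prox of tau*||.||_* : soft-threshold the singular values of M
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Build a 6x6 rank-2 matrix with known singular values (5, 3).
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(6, 6)))
V, _ = np.linalg.qr(rng.normal(size=(6, 6)))
M = 5.0 * np.outer(U[:, 0], V[:, 0]) + 3.0 * np.outer(U[:, 1], V[:, 1])

# Thresholding at tau=1 shrinks the singular values to (4, 2): still rank 2.
X = svt(M, 1.0)
```

Iterating such thresholding steps against the measurement constraints is what drives nuclear-norm-based rank minimization, whereas the subspace-expander approach of the paper replaces this with a faster, essentially direct recovery.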
Minimal perceptrons for memorizing complex patterns
Pastor, Marissa; Song, Juyong; Hoang, Danh-Tai; Jo, Junghyo
2016-11-01
Feedforward neural networks have been investigated to understand learning and memory, as well as applied to numerous practical problems in pattern classification. It is a rule of thumb that more complex tasks require larger networks. However, the design of optimal network architectures for specific tasks is still an unsolved fundamental problem. In this study, we consider three-layered neural networks for memorizing binary patterns. We developed a new complexity measure of binary patterns, and estimated the minimal network size for memorizing them as a function of their complexity. We formulated the minimal network size for regular, random, and complex patterns. In particular, the minimal size for complex patterns, which are neither ordered nor disordered, was predicted by measuring their Hamming distances from known ordered patterns. Our predictions agree with simulations based on the back-propagation algorithm.
Deterministic Discrepancy Minimization
Bansal, N.; Spencer, J.
2013-01-01
We derandomize a recent algorithmic approach due to Bansal (Foundations of Computer Science, FOCS, pp. 3–10, 2010) to efficiently compute low discrepancy colorings for several problems, for which only existential results were previously known. In particular, we give an efficient deterministic algori
Kleinberg, Jon
2006-01-01
Algorithm Design introduces algorithms by looking at the real-world problems that motivate them. The book teaches students a range of design and analysis techniques for problems that arise in computing applications. The text encourages an understanding of the algorithm design process and an appreciation of the role of algorithms in the broader field of computer science.
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
Minimally inconsistent reasoning in Semantic Web
Zhang, Xiaowang
2017-01-01
Reasoning with inconsistencies is an important issue for Semantic Web as imperfect information is unavoidable in real applications. For this, different paraconsistent approaches, due to their capacity to draw nontrivial conclusions while tolerating inconsistencies, have been proposed to reason with inconsistent description logic knowledge bases. However, existing paraconsistent approaches are often criticized for being too skeptical. To this end, this paper presents a non-monotonic paraconsistent version of description logic reasoning, called minimally inconsistent reasoning, where inconsistencies tolerated in the reasoning are minimized so that more reasonable conclusions can be inferred. Some desirable properties are studied, which shows that the new semantics inherits advantages of both non-monotonic reasoning and paraconsistent reasoning. A complete and sound tableau-based algorithm, called multi-valued tableaux, is developed to capture the minimally inconsistent reasoning. In fact, the tableaux algorithm is designed, as a framework for multi-valued DL, to allow for different underlying paraconsistent semantics, with the mere difference in the clash conditions. Finally, the complexity of minimally inconsistent description logic reasoning is shown to be on the same level as (classical) description logic reasoning. PMID:28750030
Faster and Simpler Minimal Conflicting Set Identification
Ouangraoua, Aida
2012-01-01
Let C be a finite set of N elements and R = {r_1, r_2, ..., r_M} a family of M subsets of C. A subset X of R verifies the Consecutive Ones Property (C1P) if there exists a permutation P of C such that each r_i in X is an interval of P. A Minimal Conflicting Set (MCS) S is a subset of R that does not verify the C1P, but such that every proper subset of S does. In this paper, we present a new, simpler and faster algorithm to decide if a given element r in R belongs to at least one MCS. Our algorithm runs in O(N^2 M^2 + N M^7), largely improving on the current fastest O(M^6 N^5 (M+N)^2 log(M+N)) algorithm of [Blin et al., CSR 2011]. The new algorithm is based on an alternative approach considering minimal forbidden induced subgraphs of interval graphs instead of Tucker matrices.
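The C1P and MCS definitions above can be checked directly by brute force on a tiny instance. This exponential sketch is for illustrating the definitions only, nothing like the polynomial algorithm of the paper; the example family is an assumption.

```python
from itertools import permutations

def is_c1p(rows, universe):
    # C1P: some ordering of the universe makes every row a contiguous interval
    for perm in permutations(universe):
        pos = {c: i for i, c in enumerate(perm)}
        if all(max(pos[c] for c in r) - min(pos[c] for c in r) == len(r) - 1
               for r in rows):
            return True
    return False

def is_mcs(rows, universe):
    # MCS: the family is not C1P, but every proper subfamily is
    return (not is_c1p(rows, universe)
            and all(is_c1p(rows[:i] + rows[i + 1:], universe)
                    for i in range(len(rows))))

# Three pairwise adjacency demands form a triangle: impossible on a path of
# three elements, but any two of them are satisfiable, so this is an MCS.
rows = [{0, 1}, {1, 2}, {0, 2}]
assert is_mcs(rows, [0, 1, 2])
```

Every row of size 2 forces its two elements to be adjacent in the permutation; a linear order of three elements has only two adjacent pairs, which is why the triangle fails while every proper subfamily succeeds.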
Fire Evacuation using Ant Colony Optimization Algorithm
National Research Council Canada - National Science Library
Kanika Singhal; Shashank Sahu
2016-01-01
... planning. The objective of the algorithm is to minimize the entire rescue time of all evacuees. The ant colony optimization algorithm is used to solve the complications of shortest-route planning. The presented paper gives a comparative overview of various emergency scenarios using the ant colony optimization algorithm.
Cross layer scheduling algorithm for LTE Downlink
DEFF Research Database (Denmark)
Popovska Avramova, Andrijana; Yan, Ying; Dittmann, Lars
2012-01-01
This paper evaluates a cross-layer scheduling algorithm that aims at minimizing resource utilization. The algorithm makes decisions based on the channel conditions, the size of the transmission buffers, and different QoS demands. The simulation results show that the new algorithm improves the resource...
Minimal hepatic encephalopathy.
Zamora Nava, Luis Eduardo; Torre Delgadillo, Aldo
2011-06-01
The term minimal hepatic encephalopathy (MHE) refers to the subtle changes in cognitive function, electrophysiological parameters, cerebral neurochemical/neurotransmitter homeostasis, cerebral blood flow, metabolism, and fluid homeostasis that can be observed in patients with cirrhosis who have no clinical evidence of hepatic encephalopathy; the prevalence is as high as 84% in patients with hepatic cirrhosis. Physicians generally do not perceive this complication of cirrhosis, and the diagnosis can only be made with neuropsychological tests and other special measurements such as evoked potentials and imaging studies such as positron emission tomography. Diagnosis of minimal hepatic encephalopathy may have prognostic and therapeutic implications in cirrhotic patients. The present review aims to explore the clinical, therapeutic, diagnostic, and prognostic aspects of this complication.
Minimal triangulations of simplotopes
Seacrest, Tyler
2009-01-01
We derive lower bounds for the size of simplicial covers of simplotopes, which are products of simplices. These also serve as lower bounds for triangulations of such polytopes, including triangulations with interior vertices. We establish that a minimal triangulation of a product of two simplices is given by a vertex triangulation, i.e., one without interior vertices. For products of more than two simplices, we produce bounds for products of segments and triangles. Our analysis yields linear programs that arise from considerations of covering exterior faces and exploiting the product structure of these polytopes. Aside from cubes, these are the first known lower bounds for triangulations of simplotopes with three or more factors. We also construct a minimal triangulation for the product of a triangle and a square, and compare it to our lower bound.
DEFF Research Database (Denmark)
Channuie, Phongpichit; Jark Joergensen, Jakob; Sannino, Francesco
2011-01-01
We investigate models in which the inflaton emerges as a composite field of a four dimensional, strongly interacting and nonsupersymmetric gauge theory featuring purely fermionic matter. We show that it is possible to obtain successful inflation via non-minimal coupling to gravity, and that the underlying dynamics is preferred to be near conformal. We discover that the compositeness scale of inflation is of the order of the grand unified energy scale.
DEFF Research Database (Denmark)
Frandsen, Mads Toudal
2007-01-01
I report on our construction and analysis of the effective low energy Lagrangian for the Minimal Walking Technicolor (MWT) model. The parameters of the effective Lagrangian are constrained by imposing modified Weinberg sum rules and by imposing a value for the S parameter estimated from the underlying Technicolor theory. The constrained effective Lagrangian allows for an inverted vector vs. axial-vector mass spectrum in a large part of the parameter space.
On Minimal Constraint Networks
Gottlob, Georg
2011-01-01
In a minimal binary constraint network, every tuple of a constraint relation can be extended to a solution. It was conjectured that computing a solution to such a network is NP-complete. We prove this conjecture true and show that the problem remains NP-hard even in the case where the total domain of all values that may appear in the constraint relations is bounded by a constant.
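The minimality property above can be verified by brute force on a tiny network: every allowed pair of every constraint must extend to a full solution. The example network is an illustrative assumption.

```python
from itertools import product

# Binary constraint network on variables 0..2 with domain {0, 1}.
# constraints[(i, j)] is the set of allowed value pairs (v_i, v_j).
constraints = {
    (0, 1): {(0, 0), (1, 1)},
    (1, 2): {(0, 0), (1, 1)},
    (0, 2): {(0, 0), (1, 1)},
}

def solutions(cons, n=3, dom=(0, 1)):
    # enumerate all assignments satisfying every constraint
    return [a for a in product(dom, repeat=n)
            if all((a[i], a[j]) in pairs for (i, j), pairs in cons.items())]

def is_minimal(cons):
    # minimal network: every allowed pair extends to some full solution
    sols = solutions(cons)
    return all(any((s[i], s[j]) == pair for s in sols)
               for (i, j), pairs in cons.items() for pair in pairs)

# The only solutions are (0,0,0) and (1,1,1), and every allowed pair
# appears in one of them, so this network is minimal.
assert is_minimal(constraints)
```

The hardness result says that even when a network is promised to be minimal, so every constraint tuple is known to extend, actually producing one solution remains NP-hard.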
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic
Minimally Invasive Parathyroidectomy
Directory of Open Access Journals (Sweden)
Lee F. Starker
2011-01-01
Minimally invasive parathyroidectomy (MIP) is an operative approach for the treatment of primary hyperparathyroidism (pHPT). Currently, routine use of improved preoperative localization studies, cervical block anesthesia in the conscious patient, and intraoperative parathyroid hormone analyses aid in guiding surgical therapy. MIP requires less surgical dissection causing decreased trauma to tissues, can be performed safely in the ambulatory setting, and is at least as effective as standard cervical exploration. This paper reviews advances in preoperative localization, anesthetic techniques, and intraoperative management of patients undergoing MIP for the treatment of pHPT.
Susič, Vasja
2016-06-01
A realistic model in the class of renormalizable supersymmetric E6 Grand Unified Theories is constructed. Its matter sector consists of $3\times 27$ representations, while the Higgs sector is $27 + \overline{27} + 351' + \overline{351'} + 78$. An analytic solution for a Standard Model vacuum is found and the Yukawa sector analyzed. It is argued that if one considers the increased predictability due to only two symmetric Yukawa matrices in this model, it can be considered a minimal SUSY E6 model with this type of matter sector. This contribution is based on Ref. [1].
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
RECONFIGURING POWER SYSTEMS TO MINIMIZE CASCADING FAILURES: MODELS AND ALGORITHMS
Energy Technology Data Exchange (ETDEWEB)
Bienstock, Daniel
2014-04-11
The main goal of this project was to develop new scientific tools, based on optimization techniques, for controlling and modeling cascading failures of electrical power transmission systems. We have developed a high-quality tool for simulating cascading failures. The problem of how to control a cascade was addressed, with the aim of stopping the cascade with a minimum of load lost. Yet another aspect of cascades is the investigation of which events would trigger a cascade, or more precisely, the computation of the most harmful initiating event given some constraint on the severity of the event. One common feature of the cascade models described (indeed, of several of the cascade models found in the literature) is that we study thermally induced line tripping. We have produced a study that accounts for exogenous randomness (e.g., wind and ambient temperature) that could affect the thermal behavior of a line, with a focus on controlling the power flow of the line while maintaining a safe probability of line overload. This was done by means of a rigorous analysis of a stochastic version of the heat equation. We also incorporated a model of randomness in the behavior of wind power output, again modeling an OPF-like problem that uses chance constraints to maintain a low probability of line overloads; this work has been continued so as to account for generator dynamics as well.
An algorithm of graph planarity testing and cross minimization
Directory of Open Access Journals (Sweden)
Vitalie Cotelea
2007-11-01
This paper presents an overview of one area of graph theory, called graph planarity testing. It covers the fundamental concepts and important work in this area. A new approach is also presented, which tests whether a graph is planar in linear time O(n), and it can be used to determine the minimum number of crossings in a graph if it is not planar.
Minimizing Mobiles Communication Time Using Modified Binary Exponential Backoff Algorithm
Ibrahim Sayed Ahmad; Ali Kalakech; Seifedine Kadry
2013-01-01
The domain of wireless Local Area Networks (WLANs) is growing rapidly as a consequence of developments in digital communications technology. The early adopters of this technology have mainly been vertical applications that place a premium on the mobility offered by such systems. Examples of these types of applications include stock control in warehouse environments, point-of-sale terminals, and rental car check-in. In addition to the mobility that becomes possible with wireless LANs, these systems h...
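Classical binary exponential backoff (BEB), the baseline that a modified algorithm of this kind builds on, can be sketched as follows: after the i-th consecutive collision, a station waits a uniform random number of slots drawn from a window that doubles with each collision, up to a cap. The cap value and window shapes here are illustrative assumptions, not the paper's modification.

```python
import random

def backoff_slots(attempt, rng, cap=10):
    # After the attempt-th collision, wait a uniform number of slots
    # drawn from [0, 2^min(attempt, cap) - 1]; the cap bounds the window.
    window = 2 ** min(attempt, cap)
    return rng.randrange(window)

rng = random.Random(42)
# Successive collisions: windows of size 1, 2, 4, 8, 16, 32 slots
waits = [backoff_slots(i, rng) for i in range(6)]
```

Doubling the window spreads retransmissions of colliding stations over time, trading extra delay for a lower repeat-collision probability; modified schemes typically alter how the window grows or shrinks to reduce average communication time.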
SIGMA - A Stochastic-Integration Global Minimization Algorithm.
1985-03-01
Dipartimento di Matematica, Università di Bari, 70125 Bari (Italy); Istituto di Fisica, II Università di Roma "Tor Vergata", Via Orazio Raimondo, 00173 (La Romanina) Roma (Italy); Istituto di Matematica, Università di Salerno, 84100 Salerno (Italy). Sponsored by the United States Army under Contract No. DAAG29...
Hybrid genetic algorithm for minimizing non productive machining ...
African Journals Online (AJOL)
Maity, Debaprasad
2016-01-01
In this paper we propose two simple minimal Higgs inflation scenarios through a simple modification of the Higgs potential, as opposed to the usual non-minimal Higgs-gravity coupling prescription. The modification is done in such a way that it creates a flat plateau for a huge range of field values at the inflationary energy scale $\mu \simeq (\lambda)^{1/4} \alpha$. Assuming a perturbative Higgs quartic coupling, $\lambda \simeq \mathcal{O}(1)$, for both models the inflation energy scale turns out to be $\mu \simeq (10^{14}, 10^{15})$ GeV, and the predictions of all the cosmologically relevant quantities, $(n_s, r, dn_s^k)$, fit extremely well with the observations made by PLANCK. Considering the observed central value of the scalar spectral index, $n_s = 0.968$, our two models predict e-folding numbers $N = (52, 47)$. Within a wide range of viable parameter space, we found that the predicted tensor-to-scalar ratio $r (\leq 10^{-5})$ is far below the current experimental sensitivity to be observed in the near future. The ...
Abrams, D.; Williams, C.
1999-01-01
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.
Tel, G.
1993-01-01
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of distri
Logarithmic superconformal minimal models
Pearce, Paul A.; Rasmussen, Jørgen; Tartaglia, Elena
2014-05-01
The higher fusion level logarithmic minimal models ${\cal LM}(P,P';n)$ have recently been constructed as the diagonal GKO cosets $(A_1^{(1)})_k \oplus (A_1^{(1)})_n / (A_1^{(1)})_{k+n}$ where $n \geq 1$ is an integer fusion level and $k = nP/(P'-P) - 2$ is a fractional level. For $n = 1$, these are the well-studied logarithmic minimal models ${\cal LM}(P,P') \equiv {\cal LM}(P,P';1)$. For $n \geq 2$, we argue that these critical theories are realized on the lattice by $n \times n$ fusion of the $n = 1$ models. We study the critical fused lattice models ${\cal LM}(p,p')_{n\times n}$ within a lattice approach and focus our study on the $n = 2$ models. We call these logarithmic superconformal minimal models ${\cal LSM}(p,p') \equiv {\cal LM}(P,P';2)$ where $P = |2p - p'|$, $P' = p'$ and $p, p'$ are coprime. These models share the central charges $c = c^{P,P';2} = \frac{3}{2}\big(1 - 2(P'-P)^2/PP'\big)$ of the rational superconformal minimal models ${\cal SM}(P,P')$. Lattice realizations of these theories are constructed by fusing $2 \times 2$ blocks of the elementary face operators of the $n = 1$ logarithmic minimal models ${\cal LM}(p,p')$. Algebraically, this entails the fused planar Temperley-Lieb algebra, which is a spin-1 Birman-Murakami-Wenzl tangle algebra with loop fugacity $\beta_2 = [x]_3 = x^2 + 1 + x^{-2}$ and twist $\omega = x^4$ where $x = e^{i\lambda}$ and $\lambda = (p'-p)\pi/p'$. The first two members of this $n = 2$ series are superconformal dense polymers ${\cal LSM}(2,3)$ with $c = -\frac{5}{2}$, $\beta_2 = 0$ and superconformal percolation ${\cal LSM}(3,4)$ with $c = 0$, $\beta_2 = 1$. We calculate the bulk and boundary free energies analytically. By numerically studying finite-size conformal spectra on the strip with appropriate boundary conditions, we argue that, in the continuum scaling limit, these lattice models are associated with the logarithmic superconformal models ${\cal LM}(P,P';2)$. For system size $N$, we propose finitized Kac character formulae of the form $q^{-c^{P,P';2}/24 + \Delta^{P,P';2}_{r
Minimization of Power Consumption in Mobile Adhoc Networks
Directory of Open Access Journals (Sweden)
B.Ruxanayasmin
2014-01-01
Full Text Available An ad hoc network is a mobile wireless network with no fixed access point or centralized infrastructure. Each node in the network functions as a mobile router of data packets for other nodes and should maintain network routes over long periods, which is difficult given its limited battery source. Moreover, due to node mobility, link failures in such networks are very frequent and render certain standard protocols inefficient, resulting in wasted power and lost throughput. Power consumption is therefore an important issue, with the goal of prolonging network lifetime by consuming less power. Reduced power consumption can be achieved by modifying algorithms such as cryptographic algorithms, routing algorithms, multicast algorithms, energy-efficient algorithms and power-consumption techniques in high-performance computing, compression and decompression algorithms, link-failure minimization algorithms, and power-control algorithms. In this work, we propose a new algorithm for minimizing power consumption in ad hoc networks. Analysis of the proposed model shows that information can be sent securely while consuming less computational power, thereby increasing battery life.
SIFT based algorithm for point feature tracking
Directory of Open Access Journals (Sweden)
Adrian BURLACU
2007-12-01
Full Text Available In this paper a tracking algorithm for SIFT features in image sequences is developed. For each point feature extracted with the SIFT algorithm, a descriptor is computed from information in its neighborhood. Point features are then tracked throughout the image sequence using an algorithm based on minimizing the distance between descriptors. Experimental results, obtained from image sequences that capture the scaling of objects of different geometric types, demonstrate the performance of the tracking algorithm.
Enhanced Secure Algorithm for Fingerprint Recognition
Saleh, Amira Mohammad Abdel-Mawgoud
2014-01-01
Fingerprint recognition requires a minimal effort from the user, does not capture other information than strictly necessary for the recognition process, and provides relatively good performance. A critical step in fingerprint identification system is thinning of the input fingerprint image. The performance of a minutiae extraction algorithm relies heavily on the quality of the thinning algorithm. So, a fast fingerprint thinning algorithm is proposed. The algorithm works directly on the gray-s...
Institute of Scientific and Technical Information of China (English)
Anonymous
2008-01-01
The search for patterns or motifs in data represents a problem area of key interest to finance and economics researchers. In this paper, we introduce the motif tracking algorithm (MTA), a novel immune-inspired pattern identification tool that is able to identify unknown motifs of an unspecified length which repeat within time series data. The power of the algorithm comes from the fact that it uses a small number of parameters with minimal assumptions regarding the data being examined or the underlying motifs. Our interest lies in applying the algorithm to financial time series data to identify unknown patterns that exist. The algorithm is tested using three separate data sets. Particular suitability to financial data is shown by applying it to oil price data. In all cases, the algorithm identifies the presence of a motif population in a fast and efficient manner due to the utilization of an intuitive symbolic representation. The resulting population of motifs is shown to have considerable potential value for other applications such as forecasting and algorithm seeding.
Algorithms for Assembly-Type Flowshop Scheduling Problem
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
An assembly-type flowshop scheduling problem with the objective of minimizing makespan is considered in this paper. The scheduling problem is first formulated, and a new heuristic algorithm is then proposed for it.
Directory of Open Access Journals (Sweden)
T. Karpagam
2012-01-01
Full Text Available Problem statement: Network topology design problems find application in several real-life scenarios. Approach: Most past designs optimize for a single criterion, such as shortest path, cost minimization, or maximum flow. Results: This study discusses solving a multi-objective network topology design problem for a realistic traffic model, specifically in pipeline transportation. The flow-based algorithm presented here focuses on transporting liquid goods at maximum capacity over the shortest distance, and is developed in the spirit of basic PERT and critical path methods. Conclusion/Recommendations: This flow-based algorithm yields an optimal result for transporting maximum capacity at minimum cost. It could be used in the juice industry and the milk industry, and is a good alternative for the vehicle routing problem.
Fabbrichesi, Marco
2015-01-01
We show how the Higgs boson mass is protected from the potentially large corrections due to the introduction of minimal dark matter if the new physics sector is made supersymmetric. The fermionic dark matter candidate (a 5-plet of $SU(2)_L$) is accompanied by a scalar state. The weak gauge sector is made supersymmetric and the Higgs boson is embedded in a supersymmetric multiplet. The remaining standard model states are non-supersymmetric. Non vanishing corrections to the Higgs boson mass only appear at three-loop level and the model is natural for dark matter masses up to 15 TeV--a value larger than the one required by the cosmological relic density. The construction presented stands as an example of a general approach to naturalness that solves the little hierarchy problem which arises when new physics is added beyond the standard model at an energy scale around 10 TeV.
Barbieri, Riccardo; Harigaya, Keisuke
2016-01-01
In a Mirror Twin World with a maximally symmetric Higgs sector the little hierarchy of the Standard Model can be significantly mitigated, perhaps displacing the cutoff scale above the LHC reach. We show that consistency with observations requires that the Z2 parity exchanging the Standard Model with its mirror be broken in the Yukawa couplings. A minimal such effective field theory, with this sole Z2 breaking, can generate the Z2 breaking in the Higgs sector necessary for the Twin Higgs mechanism, and has constrained and correlated signals in invisible Higgs decays, direct Dark Matter Detection and Dark Radiation, all within reach of foreseen experiments. For dark matter, both mirror neutrons and a variety of self-interacting mirror atoms are considered. Neutrino mass signals and the effects of a possible additional Z2 breaking from the vacuum expectation values of B-L breaking fields are also discussed.
Minimal Hepatic Encephalopathy
Directory of Open Access Journals (Sweden)
Laura M Stinton
2013-01-01
Full Text Available Minimal hepatic encephalopathy (MHE is the earliest form of hepatic encephalopathy and can affect up to 80% of cirrhotic patients. By definition, it has no obvious clinical manifestation and is characterized by neurocognitive impairment in attention, vigilance and integrative function. Although often not considered to be clinically relevant and, therefore, not diagnosed or treated, MHE has been shown to affect daily functioning, quality of life, driving and overall mortality. The diagnosis of MHE has traditionally been achieved through neuropsychological examination, psychometric tests or the newer critical flicker frequency test. A new smartphone application (EncephalApp Stroop Test may serve to function as a screening tool for patients requiring further testing. In addition to physician reporting and driving restrictions, medical treatment for MHE includes non-absorbable disaccharides (eg, lactulose, probiotics or rifaximin. Liver transplantation may not result in reversal of the cognitive deficits associated with MHE.
Energy Technology Data Exchange (ETDEWEB)
Chala, Mikael [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Valencia Univ. (Spain). Dept. de Fisica Teorica y IFIC; Durieux, Gauthier; Matsedonskyi, Oleksii [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Grojean, Christophe [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Humboldt-Univ. Berlin (Germany). Inst. fuer Physik; Lima, Leonardo de [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Univ. Estadual Paulista, Sao Paulo (Brazil). Inst. de Fisica Teorica
2017-03-15
Higgs boson compositeness is a phenomenologically viable scenario addressing the hierarchy problem. In minimal models, the Higgs boson is the only degree of freedom of the strong sector below the strong interaction scale. We present here the simplest extension of such a framework with an additional composite spin-zero singlet. To this end, we adopt an effective field theory approach and develop a set of rules to estimate the size of the various operator coefficients, relating them to the parameters of the strong sector and its structural features. As a result, we obtain the patterns of new interactions affecting both the new singlet and the Higgs boson's physics. We identify the characteristics of the singlet field which cause its effects on Higgs physics to dominate over the ones inherited from the composite nature of the Higgs boson. Our effective field theory construction is supported by comparisons with explicit UV models.
A Trust-region-based Sequential Quadratic Programming Algorithm
DEFF Research Database (Denmark)
Henriksen, Lars Christian; Poulsen, Niels Kjølstad
This technical note documents the trust-region-based sequential quadratic programming algorithm used in other works by the authors. The algorithm seeks to minimize a convex nonlinear cost function subject to linear inequality constraints and nonlinear equality constraints.
Minimizing Communication for Eigenproblems and the Singular Value Decomposition
Ballard, Grey; Dumitriu, Ioana
2010-01-01
Algorithms have two costs: arithmetic and communication. The latter represents the cost of moving data, either between levels of a memory hierarchy, or between processors over a network. Communication often dominates arithmetic and represents a rapidly increasing proportion of the total cost, so we seek algorithms that minimize communication. In \\cite{BDHS10} lower bounds were presented on the amount of communication required for essentially all $O(n^3)$-like algorithms for linear algebra, including eigenvalue problems and the SVD. Conventional algorithms, including those currently implemented in (Sca)LAPACK, perform asymptotically more communication than these lower bounds require. In this paper we present parallel and sequential eigenvalue algorithms (for pencils, nonsymmetric matrices, and symmetric matrices) and SVD algorithms that do attain these lower bounds, and analyze their convergence and communication costs.
Minimization of Decision Tree Average Depth for Decision Tables with Many-valued Decisions
Azad, Mohammad
2014-09-13
The paper is devoted to the analysis of greedy algorithms for minimizing the average depth of decision trees for decision tables in which each row is labeled with a set of decisions. The goal is to find one decision from that set. Comparing with the optimal results obtained from a dynamic programming algorithm, we find that some greedy algorithms produce results close to optimal for the minimization of the average depth of decision trees.
Optimized QoS Routing Algorithm
Institute of Scientific and Technical Information of China (English)
石明洪; 王思兵; 白英彩
2004-01-01
QoS routing is one of the key technologies for providing guaranteed service in IP networks. The paper focuses on the optimization problem for bandwidth constrained QoS routing, and proposes an optimal algorithm based on the global optimization of path bandwidth and hop counts. The main goal of the algorithm is to minimize the consumption of network resource, and at the same time to minimize the network congestion caused by irrational path selection. The simulation results show that our algorithm has lower call blocking rate and higher throughput than traditional algorithms.
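The bandwidth-constrained, hop-minimizing path selection this abstract describes can be sketched in a simplified form: prune every link whose available bandwidth is below the request, then take the fewest-hop path among those that remain. This is a generic illustration, not the authors' optimization algorithm; `qos_route` and the toy topology are assumptions.

```python
from collections import deque

def qos_route(links, src, dst, demand):
    """Bandwidth-constrained minimum-hop routing (simplified sketch).

    links: dict mapping an undirected edge (u, v) to its available bandwidth.
    Links below `demand` are pruned for feasibility; BFS then returns the
    fewest-hop path, limiting the network resource consumed by the call.
    """
    adj = {}
    for (u, v), bw in links.items():
        if bw >= demand:               # feasibility pruning
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    parent, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:                # reconstruct the hop-minimal path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None                        # no path satisfies the demand

net = {("A", "B"): 10, ("B", "C"): 10, ("A", "C"): 2}
print(qos_route(net, "A", "C", demand=5))  # ['A', 'B', 'C']: direct link too narrow
```

A production QoS router would additionally balance residual bandwidth across links to reduce congestion, which is the trade-off the abstract's global optimization addresses.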
Minimally invasive surgical techniques in periodontal regeneration.
Cortellini, Pierpaolo
2012-09-01
A review of the current scientific literature was undertaken to evaluate the efficacy of minimally invasive periodontal regenerative surgery in the treatment of periodontal defects. The impact on clinical outcomes, surgical chair-time, side effects and patient morbidity were evaluated. An electronic search of the PUBMED database from January 1987 to December 2011 was undertaken on dental journals using the keyword "minimally invasive surgery". Cohort studies, retrospective studies and randomized controlled clinical trials referring to treatment of periodontal defects with at least 6 months of follow-up were selected. Quality assessment of the selected studies was done through the Strength of Recommendation Taxonomy (SORT) grading system. Ten studies (1 retrospective, 5 cohort and 4 RCTs) were included. All the studies consistently support the efficacy of minimally invasive surgery in the treatment of periodontal defects in terms of clinical attachment level gain, probing pocket depth reduction and minimal gingival recession. Six studies reporting on side effects and patient morbidity consistently indicate very low levels of pain and discomfort during and after surgery, resulting in reduced intake of painkillers and very limited interference with daily activities in the post-operative period. Minimally invasive surgery might thus be considered a reality in the field of periodontal regeneration. The observed clinical improvements are consistently associated with very limited morbidity to the patient during the surgical procedure as well as in the post-operative period. Minimally invasive surgery, however, cannot be applied in all cases. A stepwise decision algorithm should support clinicians in choosing the treatment approach.
A numerical efficient way to minimize classical density functional theory.
Edelmann, Markus; Roth, Roland
2016-02-21
The minimization of the functional of the grand potential within the framework of classical density functional theory in three spatial dimensions can be numerically very demanding. The Picard iteration, that is often employed, is very simple and robust but can be rather slow. While a number of different algorithms for optimization problems have been suggested, there is still great need for additional strategies. Here, we present an approach based on the limited memory Broyden algorithm that is efficient and relatively simple to implement. We demonstrate the performance of this algorithm with the minimization of an inhomogeneous bulk structure of a fluid with competing interactions. For the problems we studied, we find that the presented algorithm improves performance by roughly a factor of three.
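The Picard iteration this abstract contrasts with the limited-memory Broyden method is, in essence, a damped fixed-point iteration. A toy sketch on a scalar fixed-point problem (not the density-functional setting itself; the function and mixing parameter are illustrative) shows the scheme:

```python
import math

def picard(g, x0, alpha=0.5, tol=1e-10, max_iter=1000):
    """Damped Picard iteration for a fixed point x = g(x).

    alpha mixes the new iterate with the old one; in classical DFT the
    analogous update mixes density profiles to keep the iteration stable.
    """
    x = x0
    for _ in range(max_iter):
        x_new = (1 - alpha) * x + alpha * g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Picard iteration did not converge")

# Toy fixed-point problem: x = cos(x), the Dottie number ~0.739085.
root = picard(math.cos, x0=1.0)
print(round(root, 6))  # 0.739085
```

Quasi-Newton schemes such as limited-memory Broyden replace the fixed scalar mixing with an approximation of the Jacobian built from previous iterates, which is what yields the speed-up reported in the paper.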
A proximal method for composite minimization
Lewis, A S
2008-01-01
We consider minimization of functions that are compositions of prox-regular functions with smooth vector functions. A wide variety of important optimization problems can be formulated in this way. We describe a subproblem constructed from a linearized approximation to the objective and a regularization term, investigating the properties of local solutions of this subproblem and showing that they eventually identify a manifold containing the solution of the original problem. We propose an algorithmic framework based on this subproblem and prove a global convergence result.
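The "linearize the smooth part, add a regularization term, solve the subproblem" pattern described in this abstract specializes, for an l1 regularizer, to the well-known proximal gradient (ISTA) method. The following is a minimal sketch under that simplifying assumption, not the paper's more general prox-regular setting; all names and the toy problem are illustrative.

```python
def soft_threshold(v, t):
    """Proximal operator of t*|x|_1, applied componentwise."""
    return [max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0) for x in v]

def prox_gradient(grad_f, x0, lam, step, iters=500):
    """Proximal gradient (ISTA) for min_x f(x) + lam*|x|_1.

    Each step linearizes the smooth part f and solves the regularized
    subproblem in closed form -- the same 'linearized approximation plus
    regularization term' subproblem structure the paper studies.
    """
    x = x0
    for _ in range(iters):
        y = [xi - step * gi for xi, gi in zip(x, grad_f(x))]
        x = soft_threshold(y, step * lam)
    return x

# Toy problem: min 0.5*(x1 - 3)^2 + 0.5*(x2 - 0.1)^2 + |x|_1.
grad = lambda x: [x[0] - 3.0, x[1] - 0.1]
sol = prox_gradient(grad, [0.0, 0.0], lam=1.0, step=0.5)
print([round(v, 3) for v in sol])  # [2.0, 0.0]: the small component is zeroed out
```

The closed-form solution here is the soft-threshold of the unconstrained minimizer, so the iterate identifying the "manifold" x2 = 0 after finitely many steps mirrors the identification result the abstract mentions.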
Giribet, Gaston
2014-01-01
Minimal Massive Gravity (MMG) is an extension of three-dimensional Topologically Massive Gravity that, when formulated about Anti-de Sitter space, accomplishes to solve the tension between bulk and boundary unitarity that other models in three dimensions suffer from. We study this theory at the chiral point, i.e. at the point of the parameter space where one of the central charges of the dual conformal field theory vanishes. We investigate the non-linear regime of the theory, meaning that we study exact solutions to the MMG field equations that are not Einstein manifolds. We exhibit a large class of solutions of this type, which behave asymptotically in different manners. In particular, we find analytic solutions that represent two-parameter deformations of extremal Banados-Teitelboim-Zanelli (BTZ) black holes. These geometries behave asymptotically as solutions of the so-called Log Gravity, and, despite the weakened falling-off close to the boundary, they have finite mass and finite angular momentum, which w...
Directory of Open Access Journals (Sweden)
Oda Kin-ya
2013-05-01
Full Text Available Both the ATLAS and CMS experiments at the LHC have reported the observation of a particle of mass around 125 GeV that is consistent with the Standard Model (SM) Higgs boson, but with an excess of events beyond the SM expectation in the diphoton decay channel at each of them. There still remains room for the logical possibility that we are not seeing the SM Higgs but something else. Here we introduce the minimal dilaton model, in which the LHC signals are explained by an extra singlet scalar of mass around 125 GeV that mixes slightly with an SM Higgs heavier than 600 GeV. When this scalar has a vacuum expectation value well beyond the electroweak scale, it can be identified as a linearly realized version of a dilaton field. Though the current experimental constraints from the Higgs search disfavor such a region, the singlet scalar model itself still provides a viable alternative to the SM Higgs in interpreting its search results.
Giribet, Gaston; Vásquez, Yerko
2015-01-01
Minimal massive gravity (MMG) is an extension of three-dimensional topologically massive gravity that, when formulated about anti-de Sitter space, accomplishes solving the tension between bulk and boundary unitarity that other models in three dimensions suffer from. We study this theory at the chiral point, i.e. at the point of the parameter space where one of the central charges of the dual conformal field theory vanishes. We investigate the nonlinear regime of the theory, meaning that we study exact solutions to the MMG field equations that are not Einstein manifolds. We exhibit a large class of solutions of this type, which behave asymptotically in different manners. In particular, we find analytic solutions that represent two-parameter deformations of extremal Bañados-Teitelboim-Zanelli black holes. These geometries behave asymptotically as solutions of the so-called log gravity, and, despite the weakened falling off close to the boundary, they have finite mass and finite angular momentum, which we compute. We also find time-dependent deformations of Bañados-Teitelboim-Zanelli that obey Brown-Henneaux asymptotic boundary conditions. The existence of such solutions shows that the Birkhoff theorem does not hold in MMG at the chiral point. Other peculiar features of the theory at the chiral point, such as the degeneracy it exhibits in the decoupling limit, are discussed.
Minimal distances between SCFTs
Energy Technology Data Exchange (ETDEWEB)
Buican, Matthew [Department of Physics and Astronomy, Rutgers University,Piscataway, NJ 08854 (United States)
2014-01-28
We study lower bounds on the minimal distance in theory space between four-dimensional superconformal field theories (SCFTs) connected via broad classes of renormalization group (RG) flows preserving various amounts of supersymmetry (SUSY). For N=1 RG flows, the ultraviolet (UV) and infrared (IR) endpoints of the flow can be parametrically close. On the other hand, for RG flows emanating from a maximally supersymmetric SCFT, the distance to the IR theory cannot be arbitrarily small regardless of the amount of (non-trivial) SUSY preserved along the flow. The case of RG flows from N=2 UV SCFTs is more subtle. We argue that for RG flows preserving the full N=2 SUSY, there are various obstructions to finding examples with parametrically close UV and IR endpoints. Under reasonable assumptions, these obstructions include: unitarity, known bounds on the c central charge derived from associativity of the operator product expansion, and the central charge bounds of Hofman and Maldacena. On the other hand, for RG flows that break N=2→N=1, it is possible to find IR fixed points that are parametrically close to the UV ones. In this case, we argue that if the UV SCFT possesses a single stress tensor, then such RG flows excite of order all the degrees of freedom of the UV theory. Furthermore, if the UV theory has some flavor symmetry, we argue that the UV central charges should not be too large relative to certain parameters in the theory.
Wilson, William; Aickelin, Uwe; doi:10.1007/s11633-008-0032-0
2010-01-01
The search for patterns or motifs in data represents a problem area of key interest to finance and economic researchers. In this paper we introduce the Motif Tracking Algorithm, a novel immune inspired pattern identification tool that is able to identify unknown motifs of a non specified length which repeat within time series data. The power of the algorithm comes from the fact that it uses a small number of parameters with minimal assumptions regarding the data being examined or the underlying motifs. Our interest lies in applying the algorithm to financial time series data to identify unknown patterns that exist. The algorithm is tested using three separate data sets. Particular suitability to financial data is shown by applying it to oil price data. In all cases the algorithm identifies the presence of a motif population in a fast and efficient manner due to the utilisation of an intuitive symbolic representation. The resulting population of motifs is shown to have considerable potential value for other ap...
Iterative Schemes for Convex Minimization Problems with Constraints
Directory of Open Access Journals (Sweden)
Lu-Chuan Ceng
2014-01-01
Full Text Available We first introduce and analyze one implicit iterative algorithm for finding a solution of the minimization problem for a convex and continuously Fréchet differentiable functional, with constraints of several problems: the generalized mixed equilibrium problem, the system of generalized equilibrium problems, and finitely many variational inclusions in a real Hilbert space. We prove strong convergence theorem for the iterative algorithm under suitable conditions. On the other hand, we also propose another implicit iterative algorithm for finding a fixed point of infinitely many nonexpansive mappings with the same constraints, and derive its strong convergence under mild assumptions.
Rational approximations and quantum algorithms with postselection
Mahadev, U.; de Wolf, R.
2015-01-01
We study the close connection between rational functions that approximate a given Boolean function, and quantum algorithms that compute the same function using post-selection. We show that the minimal degree of the former equals (up to a factor of 2) the minimal query complexity of the latter. We gi
A Minimal State Map for Strictly Lossless Positive Real Behaviors
Rao, Shodhan
In this paper, behaviors with equal input and output cardinalities and with lossless positive real transfer functions corresponding to certain input-output partitions are considered. Based on Cauer synthesis method, an algorithm to obtain a minimal state map of such behaviors starting from an
One-machine sequencing problem for minimizing total tardiness
Energy Technology Data Exchange (ETDEWEB)
Lin Yi-Xun
1983-01-01
The sequencing of n jobs on one machine to minimize total tardiness is discussed. Some theorems in the sense of existential properties were proposed by Emmons (1969). The author proves theorems in the sense of universal properties so as to revise the dominance and elimination criteria, thus presenting an efficient heuristic method and a branch-and-bound algorithm. 11 references.
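The objective being minimized here can be made concrete with a short sketch: the total tardiness of a given sequence, together with the earliest-due-date (EDD) dispatching rule as a simple baseline. This is illustrative only, not the paper's heuristic or its branch-and-bound algorithm; the job data are invented.

```python
def total_tardiness(jobs):
    """Total tardiness of a single-machine sequence.

    jobs: list of (processing_time, due_date) pairs, in processing order.
    Tardiness of a job is max(0, completion_time - due_date).
    """
    t, tardiness = 0, 0
    for p, d in jobs:
        t += p                      # job completes at time t
        tardiness += max(0, t - d)
    return tardiness

def edd_sequence(jobs):
    """Earliest-due-date rule: a common baseline heuristic for the
    one-machine total-tardiness problem (not the paper's algorithm)."""
    return sorted(jobs, key=lambda job: job[1])

jobs = [(4, 5), (2, 3), (3, 11)]
print(total_tardiness(jobs))                 # 3  (given order)
print(total_tardiness(edd_sequence(jobs)))   # 1  (order (2,3),(4,5),(3,11))
```

Dominance theorems of the kind Emmons proved restrict which orderings can be optimal, which is what makes branch-and-bound tractable for this NP-hard problem.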
A Minimal State Map for Strictly Lossless Positive Real Behaviors
Rao, Shodhan
2012-01-01
In this paper, behaviors with equal input and output cardinalities and with lossless positive real transfer functions corresponding to certain input-output partitions are considered. Based on Cauer synthesis method, an algorithm to obtain a minimal state map of such behaviors starting from an observ
On stable compact minimal submanifolds
Torralbo, Francisco
2010-01-01
Stable compact minimal submanifolds of the product of a sphere and any Riemannian manifold are classified whenever the dimension of the sphere is at least three. The complete classification of the stable compact minimal submanifolds of the product of two spheres is obtained. Also, it is proved that the only stable compact minimal surfaces of the product of a 2-sphere and any Riemann surface are the complex ones.
SPEECH SEPARATION ALGORITHM FOR AUDITORY SCENE ANALYSIS
Institute of Scientific and Technical Information of China (English)
Huang Xiuxuan; Wei Gang
2004-01-01
A simple and efficient algorithm is presented to separate concurrent speech signals. The parameters of the mixed speech are estimated by searching the neighborhood of given pitches to minimize the error between the original and synthetic spectra. The effectiveness of the proposed algorithm in separating close frequencies is demonstrated.
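The "search near a given pitch to minimize the error between the original and a synthetic signal" idea can be sketched for a single sinusoid. This is a drastic simplification of concurrent speech separation; the function name, sampling rate and search parameters are all illustrative assumptions.

```python
import math

def refine_pitch(signal, sr, rough_pitch, radius=20.0, step=0.5):
    """Search the neighbourhood of a rough pitch estimate for the frequency
    whose synthetic sinusoid best matches the observed signal (least squares).
    A toy, single-harmonic sketch of the spectrum-matching idea.
    """
    n = len(signal)
    best_f, best_err = rough_pitch, float("inf")
    f = rough_pitch - radius
    while f <= rough_pitch + radius:
        # Synthesize a candidate waveform and measure its mismatch.
        synth = [math.sin(2 * math.pi * f * t / sr) for t in range(n)]
        err = sum((s - y) ** 2 for s, y in zip(signal, synth))
        if err < best_err:
            best_f, best_err = f, err
        f += step
    return best_f

sr = 8000
true_f = 210.0
sig = [math.sin(2 * math.pi * true_f * t / sr) for t in range(400)]
print(refine_pitch(sig, sr, rough_pitch=200.0))  # 210.0
```

The actual algorithm estimates amplitudes and phases of several harmonics per speaker jointly; only the neighborhood-search structure is reproduced here.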
SAR image regularization with fast approximate discrete minimization.
Denis, Loïc; Tupin, Florence; Darbon, Jérôme; Sigelle, Marc
2009-07-01
Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task, and noise reduction is often a prerequisite for successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modeling provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on the large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the alpha-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to joint regularization of the amplitude and interferometric phase in urban area SAR images.
Global Analysis of Minimal Surfaces
Dierkes, Ulrich; Tromba, Anthony J
2010-01-01
Many properties of minimal surfaces are of a global nature, and this is already true for the results treated in the first two volumes of the treatise. Part I of the present book can be viewed as an extension of these results. For instance, the first two chapters deal with existence, regularity and uniqueness theorems for minimal surfaces with partially free boundaries. Here one of the main features is the possibility of 'edge-crawling' along free parts of the boundary. The third chapter deals with a priori estimates for minimal surfaces in higher dimensions and for minimizers of singular integ
Minimal surfaces for architectural constructions
Directory of Open Access Journals (Sweden)
Velimirović Ljubica S.
2008-01-01
Full Text Available Minimal surfaces are surfaces of the smallest area spanned by a given boundary. Equivalently, they are surfaces of vanishing mean curvature. Minimal surface theory has developed rapidly in recent times: many new examples have been constructed and old ones altered. The minimal-area property makes these surfaces suitable for application in architecture. The main reasons for application are that weight and the amount of material are reduced to a minimum. Famous architects such as Frei Otto created this new trend in architecture. In recent years it has become possible to enlarge the family of minimal surfaces by constructing new ones.
On minimal artinian modules and minimal artinian linear groups
Directory of Open Access Journals (Sweden)
Leonid A. Kurdachenko
2001-01-01
The authors study minimal artinian linear groups and prove that in such classes of groups as hypercentral groups (hence also nilpotent and abelian groups) and FC-groups, minimal artinian linear groups have precisely the same structure as the corresponding irreducible linear groups.
Image Compression Using Harmony Search Algorithm
Directory of Open Access Journals (Sweden)
Ryan Rey M. Daga
2012-09-01
Full Text Available Image compression techniques are important and useful for data storage and image transmission over the Internet. These techniques eliminate redundant information in an image, which minimizes its physical storage requirement. Numerous image compression algorithms have been developed, but the resulting images are still less than optimal. The harmony search algorithm (HSA), a meta-heuristic optimization algorithm inspired by the music improvisation process of musicians, was applied as the underlying algorithm for image compression. Experimental results show that it is feasible to use the harmony search algorithm for image compression. The HSA-based image compression technique was able to compress colored and grayscale images with minimal loss of visual information.
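The harmony search mechanics this abstract relies on can be sketched generically: a memory of candidate "harmonies" is improved by improvising new candidates from memory, with occasional pitch adjustment and random notes. This is not the paper's image-compression variant; the objective function and all parameter defaults are illustrative.

```python
import random

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   iters=2000, seed=1):
    """Minimize f over a box with a basic Harmony Search.

    hms: harmony memory size; hmcr: memory considering rate;
    par: pitch adjusting rate; bw: pitch-adjustment bandwidth.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                  # pick a note from memory...
                x = rng.choice(memory)[d]
                if rng.random() < par:               # ...and maybe adjust its pitch
                    x = min(hi, max(lo, x + rng.uniform(-bw, bw)))
            else:                                    # ...or improvise randomly
                x = rng.uniform(lo, hi)
            new.append(x)
        worst = max(range(hms), key=scores.__getitem__)
        if f(new) < scores[worst]:                   # replace the worst harmony
            memory[worst], scores[worst] = new, f(new)
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

sphere = lambda v: sum(x * x for x in v)
x, fx = harmony_search(sphere, dim=3, bounds=(-5.0, 5.0))
print("best value found:", fx)
```

In the image-compression setting the decision variables would encode quantization or transform parameters and `f` would score reconstruction error against compressed size.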
Randomized robot navigation algorithms
Energy Technology Data Exchange (ETDEWEB)
Berman, P. [Penn State Univ., University Park, PA (United States); Blum, A. [Carnegie Mellon Univ., Pittsburgh, PA (United States); Fiat, A. [Tel-Aviv Univ. (Israel)] [and others]
1996-12-31
We consider the problem faced by a mobile robot that has to reach a given target by traveling through an unmapped region in the plane containing oriented rectangular obstacles. We assume the robot has no prior knowledge about the positions or sizes of the obstacles, and acquires such knowledge only when obstacles are encountered. Our goal is to minimize the distance the robot must travel, using the competitive ratio as our measure. We give a new randomized algorithm for this problem whose competitive ratio is O(n^(4/9) log n), beating the deterministic Ω(√n) lower bound of [PY], and answering in the affirmative an open question of [BRS] (which presented an optimal deterministic algorithm). We believe the techniques introduced here may prove useful in other on-line situations in which information gathering is part of the on-line process.
Institute of Scientific and Technical Information of China (English)
LIU Shan; LIAO Yongyi
2007-01-01
In this paper, we study the Apriori and FP-growth algorithms for mining association rules and give a method for computing all the frequent itemsets in a database. Its basic idea is a boolean-vector product computed between all transactions, from which we obtain all the frequent 2-itemsets (min_sup = 2). Based on their inclusion relations, we construct a set-tree of itemsets over the database transactions and then traverse its paths to obtain all the frequent itemsets. Thus we can obtain the minimal frequent itemsets between transactions and items in the database without the repeated database scans and iterative computation of the Apriori algorithm.
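The boolean-vector step can be illustrated for frequent 2-itemsets: encode each item as a boolean column over the transactions, so the support of a pair of items is an element-wise AND followed by a count. This is a toy sketch with made-up transactions, not the authors' code:

```python
from itertools import combinations

# Each transaction is a set of items; represent each item as a boolean
# column over transactions, so the support of {a, b} is the number of
# transactions where both columns are True (a vector AND).
transactions = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "c"}]
items = sorted(set().union(*transactions))
col = {i: [i in t for t in transactions] for i in items}

min_sup = 2
support = {
    (a, b): sum(x and y for x, y in zip(col[a], col[b]))
    for a, b in combinations(items, 2)
}
# Keep only pairs meeting the minimum support threshold.
freq2 = {pair: s for pair, s in support.items() if s >= min_sup}
```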
Weighted learning of bidirectional associative memories by global minimization.
Wang, T; Zhuang, X; Xing, X
1992-01-01
A weighted learning algorithm for bidirectional associative memories (BAMs) by means of global minimization, where each desired pattern is weighted, is described. According to the cost function that measures the goodness of the BAM, the learning algorithm is formulated as a global minimization problem and solved by a gradient descent rule. The learning approach guarantees not only that each desired pattern is stored as a stable state, but also that the basin of attraction is constructed as large as possible around each desired pattern. The existence of the weights, the asymptotic stability of each desired pattern and its basin of attraction, and the convergence of the proposed learning algorithm are investigated in an analytic way. A large number of computer experiments are reported to demonstrate the efficiency of the learning rule.
An object-oriented cluster search algorithm
Energy Technology Data Exchange (ETDEWEB)
Silin, Dmitry; Patzek, Tad
2003-01-24
In this work we describe two object-oriented cluster search algorithms, which can be applied to a network of arbitrary structure. The first algorithm calculates all connected clusters, whereas the second one finds a path with the minimal number of connections. We estimate the complexity of the algorithms and infer that the number of operations grows linearly with the size of the network.
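Both tasks reduce to breadth-first search, which also exhibits the linear growth noted above. The sketch below uses a made-up adjacency list and is not the authors' object-oriented implementation:

```python
from collections import deque

# Undirected network as adjacency lists (made-up example).
graph = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3], 5: []}

def components(g):
    """All connected clusters, via breadth-first search."""
    seen, clusters = set(), []
    for start in g:
        if start in seen:
            continue
        queue, cluster = deque([start]), set()
        seen.add(start)
        while queue:
            u = queue.popleft()
            cluster.add(u)
            for v in g[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        clusters.append(cluster)
    return clusters

def min_hop_path(g, s, t):
    """Path with the minimal number of connections (walk back the BFS tree)."""
    prev, queue = {s: None}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in g[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None  # t is in a different cluster
```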
Hromkovic, Juraj
2009-01-01
Explores the science of computing. This book starts with the development of computer science, algorithms and programming, and then explains and shows how to exploit the concepts of infinity, computability, computational complexity, nondeterminism and randomness.
Efficient algorithms for the laboratory discovery of optimal quantum controls.
Turinici, Gabriel; Le Bris, Claude; Rabitz, Herschel
2004-01-01
The laboratory closed-loop optimal control of quantum phenomena, expressed as minimizing a suitable cost functional, is currently implemented through an optimization algorithm coupled to the experimental apparatus. In practice, the most commonly used search algorithms are variants of genetic algorithms. As an alternative choice, a direct search deterministic algorithm is proposed in this paper. For the simple simulations studied here, it outperforms the existing approaches. An additional algorithm is introduced in order to reveal some properties of the cost functional landscape.
Bergshoeff, Eric; Hohm, Olaf; Merbis, Wout; Routh, Alasdair J.; Townsend, Paul K.
2014-01-01
We present an alternative to topologically massive gravity (TMG) with the same 'minimal' bulk properties; i.e. a single local degree of freedom that is realized as a massive graviton in linearization about an anti-de Sitter (AdS) vacuum. However, in contrast to TMG, the new 'minimal massive gravity'
Uniqueness of PL Minimal Surfaces
Institute of Scientific and Technical Information of China (English)
Yi NI
2007-01-01
Using a standard fact in hyperbolic geometry, we give a simple proof of the uniqueness of PL minimal surfaces, thus filling in a gap in the original proof of Jaco and Rubinstein. Moreover, in order to clarify some ambiguity, we sharpen the definition of PL minimal surfaces, and prove a technical lemma on the Plateau problem in the hyperbolic space.
Guidelines for mixed waste minimization
Energy Technology Data Exchange (ETDEWEB)
Owens, C.
1992-02-01
Currently, there is no commercial mixed waste disposal available in the United States. Storage and treatment capacity for commercial mixed waste is limited. Host state and compact region officials are encouraging their mixed waste generators to minimize their mixed wastes because of these management limitations. This document provides a guide to mixed waste minimization.
Influenza SIRS with Minimal Pneumonitis.
Erramilli, Shruti; Mannam, Praveen; Manthous, Constantine A
2016-01-01
Although systemic inflammatory response syndrome (SIRS) is a known complication of severe influenza pneumonia, it has been reported very rarely in patients with minimal parenchymal lung disease. We here report a case of severe SIRS, anasarca, and marked vascular phenomena with minimal or no pneumonitis. This case highlights that viruses, including influenza, may cause vascular dysregulation causing SIRS, even without substantial visceral organ involvement.
Directory of Open Access Journals (Sweden)
Knol Dirk L
2006-08-01
Full Text Available Abstract Changes in scores on health status questionnaires are difficult to interpret. Several methods to determine minimally important changes (MICs) have been proposed, which can broadly be divided into distribution-based and anchor-based methods. Comparisons of these methods have led to insight into essential differences between these approaches. Some authors have tried to come to a uniform measure for the MIC, such as 0.5 standard deviation or the value of one standard error of measurement (SEM). Others have emphasized the diversity of MIC values, depending on the type of anchor, the definition of minimal importance on the anchor, and characteristics of the disease under study. A closer look makes clear that some distribution-based methods have merely focused on minimally detectable changes. For assessing minimally important changes, anchor-based methods are preferred, as they include a definition of what is minimally important. Acknowledging the distinction between minimally detectable and minimally important changes is useful, not only to avoid confusion among MIC methods, but also to gain information on two important benchmarks on the scale of a health status measurement instrument. Appreciating the distinction, it becomes possible to judge whether the minimally detectable change of a measurement instrument is sufficiently small to detect minimally important changes.
Minimization of decision tree depth for multi-label decision tables
Azad, Mohammad
2014-10-01
In this paper, we consider multi-label decision tables that have a set of decisions attached to each row. Our goal is to find one decision from the set of decisions for each row by using a decision tree as our tool. With the aim of minimizing the depth of the decision tree, we devised various kinds of greedy algorithms as well as a dynamic programming algorithm. When comparing with the optimal results obtained from the dynamic programming algorithm, we found that some greedy algorithms produce results which are close to optimal for the minimization of the depth of decision trees.
Minimal Webs in Riemannian Manifolds
DEFF Research Database (Denmark)
Markvorsen, Steen
2008-01-01
...)$ into Riemannian manifolds $(N^{n}, h)$. Such immersions we call {\em{minimal webs}}. They admit a natural 'geometric' extension of the intrinsic combinatorial discrete Laplacian. The geometric Laplacian on minimal webs enjoys standard properties such as the maximum principle and the divergence theorems, which are of instrumental importance for the applications. We apply these properties to show that minimal webs in ambient Riemannian spaces share several analytic and geometric properties with their smooth (minimal submanifold) counterparts in such spaces. In particular we use appropriate versions of the divergence theorems together with the comparison techniques for distance functions in Riemannian geometry and obtain bounds for the first Dirichlet eigenvalues, the exit times and the capacities, as well as isoperimetric type inequalities for so-called extrinsic $R$-webs of minimal webs in ambient Riemannian manifolds.
Waste minimization handbook, Volume 1
Energy Technology Data Exchange (ETDEWEB)
Boing, L.E.; Coffey, M.J.
1995-12-01
This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996.
Institute of Scientific and Technical Information of China (English)
David P. Piñero; Vicente J. Camps; María L. Ramón; Verónica Mateo; Roberto Soto-Negro
2016-01-01
.38 to +0.75 D in groups A and B, respectively. No statistically significant differences were found in groups A (P=0.64) and B (P=0.82) between PIOLadj and the IOL power implanted (PIOLReal). The Bland and Altman analysis showed ranges of agreement between PIOLadj and PIOLReal of +1.11 to -0.96 D and +1.14 to -1.18 D in groups A and B, respectively. Clinically and statistically significant differences were found between PIOLadj and PIOL obtained with the Hoffer Q and Holladay I formulas (P<0.01). CONCLUSION: The refractive predictability of cataract surgery with implantation of an aspheric IOL can be optimized using paraxial optics combined with linear algorithms to minimize the error associated with the estimation of corneal power and ELP.
Superspace geometry and the minimal, non minimal, and new minimal supergravity multiplets
Energy Technology Data Exchange (ETDEWEB)
Girardi, G.; Grimm, R.; Mueller, M.; Wess, J.
1984-11-01
We analyse superspace constraints in a systematic way and define a set of natural constraints. We give a complete solution of the Bianchi identities subject to these constraints and obtain a reducible, but not fully reducible, multiplet. By additional constraints it can be reduced to either the minimal, the non-minimal, or the new minimal multiplet. We discuss the superspace actions for the various multiplets.
A new traffic allocation algorithm in Ad hoc networks
Institute of Scientific and Technical Information of China (English)
LI Xin; MIAO Jian-song; SUN Dan-dan; ZHOU Li-gang; DING Wei
2006-01-01
A dynamic traffic distribution algorithm based on minimizing the product of packet delay and packet energy consumption is proposed. The algorithm allocates traffic based on packet delay and energy consumption, which can optimize network performance. Simulation demonstrated that the algorithm can dynamically adjust the traffic distribution between paths, minimizing the product of packet delay and energy consumption in mobile ad hoc networks.
Locally minimal topological groups 1
Chasco, María Jesús; Dikranjan, Dikran N.; Außenhofer, Lydia; Domínguez, Xabier
2015-01-01
The aim of this paper is to go deeper into the study of local minimality and its connection to some naturally related properties. A Hausdorff topological group $(G,\tau)$ is called locally minimal if there exists a neighborhood $U$ of 0 in $\tau$ such that $U$ fails to be a neighborhood of zero in any Hausdorff group topology on $G$ which is strictly coarser than $\tau$. Examples of locally minimal groups are all subgroups of Banach-Lie groups, all locally compact groups and all mini...
Minimal flows and their extensions
Auslander, J
1988-01-01
This monograph presents developments in the abstract theory of topological dynamics, concentrating on the internal structure of minimal flows (actions of groups on compact Hausdorff spaces for which every orbit is dense) and their homomorphisms (continuous equivariant maps). Various classes of minimal flows (equicontinuous, distal, point distal) are intensively studied, and a general structure theorem is obtained. Another theme is the "universal" approach - entire classes of minimal flows are studied, rather than flows in isolation. This leads to the consideration of disjointness of flows, w
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristics and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9
Minimally disruptive schedule repair for MCM missions
Molineaux, Matthew; Auslander, Bryan; Moore, Philip G.; Gupta, Kalyan M.
2015-05-01
Mine countermeasures (MCM) missions entail planning and operations in very dynamic and uncertain operating environments, which pose considerable risk to personnel and equipment. Frequent schedule repairs that consider the latest operating conditions are needed to keep the mission on target. Presently no decision support tools are available for the challenging task of MCM mission rescheduling. To address this capability gap, we have developed the CARPE system to assist operation planners. CARPE constantly monitors the operational environment for changes and recommends alternative repaired schedules in response. It includes a novel schedule repair algorithm called Case-Based Local Schedule Repair (CLOSR) that automatically repairs broken schedules while satisfying the requirement of minimal operational disruption. It uses a case-based approach to represent repair strategies and apply them to new situations. Evaluation of CLOSR on simulated MCM operations demonstrates the effectiveness of the case-based strategy. Schedule repairs are generated rapidly, ensure the elimination of all mines, and achieve required levels of clearance.
Minimal residual method stronger than polynomial preconditioning
Energy Technology Data Exchange (ETDEWEB)
Faber, V.; Joubert, W.; Knill, E. [Los Alamos National Lab., NM (United States)] [and others
1994-12-31
Two popular methods for solving symmetric and nonsymmetric systems of equations are the minimal residual method, implemented by algorithms such as GMRES, and polynomial preconditioning methods. In this study results are given on the convergence rates of these methods for various classes of matrices. It is shown that for some matrices, such as normal matrices, the convergence rates for GMRES and for the optimal polynomial preconditioning are the same, and for other matrices such as the upper triangular Toeplitz matrices, it is at least assured that if one method converges then the other must converge. On the other hand, it is shown that matrices exist for which restarted GMRES always converges but any polynomial preconditioning of corresponding degree makes no progress toward the solution for some initial error. The implications of these results for these and other iterative methods are discussed.
Sludge minimization technologies - an overview
Energy Technology Data Exchange (ETDEWEB)
Oedegaard, Hallvard
2003-07-01
The management of wastewater sludge from wastewater treatment plants represents one of the major challenges in wastewater treatment today. The cost of the sludge treatment amounts to more than the cost of the liquid treatment in many cases. Therefore the focus on and interest in sludge minimization is steadily increasing. In this paper an overview is given of sludge minimization (sludge mass reduction) options. It is demonstrated that sludge minimization may be a result of reduced production of sludge and/or of disintegration processes that may take place both in the wastewater treatment stage and in the sludge stage. Various sludge disintegration technologies for sludge minimization are discussed, including mechanical methods (focusing on the stirred ball-mill, high-pressure homogenizer and ultrasonic disintegrator), chemical methods (focusing on the use of ozone), physical methods (focusing on thermal and thermal/chemical hydrolysis) and biological methods (focusing on enzymatic processes). (author)
Newton Algorithms for Analytic Rotation: An Implicit Function Approach
Boik, Robert J.
2008-01-01
In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…
Derandomization of Online Assignment Algorithms for Dynamic Graphs
Sahai, Ankur
2011-01-01
This paper analyzes different online algorithms for the problem of assigning weights to edges in a fully-connected bipartite graph that minimizes the overall cost while satisfying constraints. Edges in this graph may disappear and reappear over time. Performance of these algorithms is measured using simulations. This paper also attempts to derandomize the randomized online algorithm for this problem.
Minimally invasive surgery. Future developments.
1994-01-01
The rapid development of minimally invasive surgery means that there will be fundamental changes in interventional treatment. Technological advances will allow new minimally invasive procedures to be developed. Application of robotics will allow some procedures to be done automatically, and coupling of slave robotic instruments with virtual reality images will allow surgeons to perform operations by remote control. Miniature motors and instruments designed by microengineering could be introdu...
Cost minimization and asset pricing
Robert G. Chambers; John Quiggin
2005-01-01
A cost-based approach to asset-pricing equilibrium relationships is developed. A cost function induces a stochastic discount factor (pricing kernel) that is a function of random output, prices, and capital stock. By eliminating opportunities for arbitrage between financial markets and the production technology, firms minimize the current cost of future consumption. The first-order conditions for this cost minimization problem generate the stochastic discount factor. The cost-based approach i...
Influenza SIRS with minimal pneumonitis
Directory of Open Access Journals (Sweden)
Shruti Erramilli
2016-08-01
Full Text Available While systemic inflammatory response syndrome (SIRS) is a known complication of severe influenza pneumonia, it has been reported very rarely in patients with minimal parenchymal lung disease. We here report a case of severe SIRS, anasarca and marked vascular phenomena with minimal or no pneumonitis. This case highlights that viruses, including influenza, may cause vascular dysregulation causing SIRS, even without substantial visceral organ involvement.
Order Reduction of Linear Interval Systems Using Genetic Algorithm
Directory of Open Access Journals (Sweden)
Dr. Rajendra Prasad
2010-10-01
Full Text Available This paper presents an algorithm for order reduction of a higher-order linear interval system into a stable lower-order linear interval system by means of a genetic algorithm. In this algorithm the numerator and denominator polynomials are determined by minimizing the integral square error (ISE) using a genetic algorithm (GA). The algorithm is simple, rugged and computer oriented. It is shown that the algorithm has several advantages, e.g. the reduced-order models retain the steady-state value and stability of the original system. A numerical example illustrates the proposed algorithm.
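A minimal real-coded GA of the kind described can be sketched as follows. The objective here is a stand-in (sum of squares) rather than the integral square error between full- and reduced-order step responses, and the selection, crossover, and mutation choices are illustrative assumptions, not the paper's exact operators:

```python
import random

def ga_minimize(f, bounds, pop_size=30, gens=100, mut=0.1, seed=1):
    """Minimize f over box bounds with a basic real-coded GA."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            w = rng.random()                     # arithmetic crossover
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            for d, (lo, hi) in enumerate(bounds):
                if rng.random() < mut:           # gaussian mutation
                    child[d] = min(hi, max(lo, child[d] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children                 # elitist replacement
    return min(pop, key=f)

# Stand-in objective; the paper minimizes the ISE between step responses
# of the original and reduced-order interval systems instead.
best = ga_minimize(lambda v: sum(x * x for x in v), [(-3, 3)] * 2)
```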
Economic Dispatch Using Modified Bat Algorithm
Directory of Open Access Journals (Sweden)
Aadil Latif
2014-07-01
Full Text Available Economic dispatch is an important non-linear optimization task in power systems. In this process, the total power demand is distributed amongst the generating units such that each unit satisfies its generation limit constraints and the cost of power production is minimized. This paper presents an overview of three optimization algorithms, namely the real-coded genetic algorithm, particle swarm optimization, and a relatively new optimization technique called the bat algorithm. This study further proposes modifications to the original bat algorithm. Simulations are carried out for two test cases. The first is a six-generator power system with a simplified convex objective function. The second test case is a five-generator system with a non-convex objective function. Finally, the results of the modified algorithm are compared with the results of the genetic algorithm, particle swarm optimization and the original bat algorithm. The results demonstrate the improvement achieved by the modified bat algorithm.
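For a convex case like the first test system, the classical lambda-iteration baseline (not the paper's bat algorithm) already solves the dispatch: each unit runs at the power where its incremental cost equals a system-wide λ, clipped to its limits, and λ is found by bisection on the demand balance. The quadratic cost coefficients below are made up for illustration:

```python
# Economic dispatch by lambda iteration for quadratic cost units
# C_i(P) = a_i * P^2 + b_i * P, subject to sum(P_i) = demand and
# per-unit limits. Coefficients are hypothetical.
units = [  # (a, b, Pmin, Pmax)
    (0.010, 8.0, 10, 100),
    (0.015, 7.0, 10, 80),
    (0.020, 9.0, 10, 60),
]
demand = 150.0

def dispatch(lmbda):
    # At incremental cost lmbda, each unit outputs P = (lmbda - b) / (2a),
    # clipped to its generation limits.
    return [min(pmax, max(pmin, (lmbda - b) / (2 * a)))
            for a, b, pmin, pmax in units]

lo, hi = 0.0, 50.0                 # bracket for the system lambda
for _ in range(100):               # bisection on the demand balance
    mid = (lo + hi) / 2
    if sum(dispatch(mid)) < demand:
        lo = mid
    else:
        hi = mid
P = dispatch((lo + hi) / 2)        # near-optimal unit outputs
```

Non-convex costs (valve-point effects, prohibited zones) break the equal-incremental-cost condition this relies on, which is what motivates metaheuristics such as the bat algorithm for the second test case.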
A Robust Parsing Algorithm For Link Grammars
Grinberg, D; Sleator, D; Grinberg, Dennis; Lafferty, John; Sleator, Daniel
1995-01-01
In this paper we present a robust parsing algorithm based on the link grammar formalism for parsing natural languages. Our algorithm is a natural extension of the original dynamic programming recognition algorithm which recursively counts the number of linkages between two words in the input sentence. The modified algorithm uses the notion of a null link in order to allow a connection between any pair of adjacent words, regardless of their dictionary definitions. The algorithm proceeds by making three dynamic programming passes. In the first pass, the input is parsed using the original algorithm which enforces the constraints on links to ensure grammaticality. In the second pass, the total cost of each substring of words is computed, where cost is determined by the number of null links necessary to parse the substring. The final pass counts the total number of parses with minimal cost. All of the original pruning techniques have natural counterparts in the robust algorithm. When used together with memoization...
Directory of Open Access Journals (Sweden)
Anna Bourmistrova
2011-02-01
Full Text Available The autodriver algorithm is an intelligent method to eliminate the need of steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on coinciding the actual vehicle center of rotation and the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increase of forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
A Minimal Solution to Radial Distortion Autocalibration.
Kukelova, Zuzana; Pajdla, Tomas
2011-12-01
Simultaneous estimation of radial distortion, epipolar geometry, and relative camera pose can be formulated as a minimal problem and solved from a minimal number of image points. Finding the solution to this problem leads to solving a system of algebraic equations. In this paper, we provide two different solutions to the problem of estimating radial distortion and epipolar geometry from eight point correspondences in two images. Unlike previous algorithms, which were able to solve the problem from nine correspondences only, we enforce the determinant of the fundamental matrix to be zero. This leads to a system of eight quadratic and one cubic equation in nine variables. We first simplify this system by eliminating six of these variables and then solve the system by two alternative techniques. The first one is based on the Gröbner basis method and the second one on the polynomial eigenvalue computation. We demonstrate that our solutions are efficient, robust, and practical by experiments on synthetic and real data.
Risk-optimized proton therapy to minimize radiogenic second cancers
DEFF Research Database (Denmark)
Rechner, Laura A; Eley, John G; Howell, Rebecca M
2015-01-01
Proton therapy confers substantially lower predicted risk of second cancer compared with photon therapy. However, no previous studies have used an algorithmic approach to optimize beam angle or fluence-modulation for proton therapy to minimize those risks. The objectives of this study were to demonstrate the feasibility of risk-optimized proton therapy and to determine the combination of beam angles and fluence weights that minimizes the risk of second cancer in the bladder and rectum for a prostate cancer patient. We used 6 risk models to predict excess relative risk of second cancer. Treatment...
Colorimetric characterization of imaging device by total color difference minimization
Institute of Scientific and Technical Information of China (English)
MOU Tong-sheng; SHEN Hui-liang
2006-01-01
Colorimetric characterization transforms the device-dependent responses to device-independent colorimetric values, and is usually conducted in CIEXYZ space. However, an optimal solution in CIEXYZ space does not imply minimization of the perceptual error. A novel method for colorimetric characterization of imaging devices based on minimization of the total color difference is proposed. The method builds the transform between RGB space and CIELAB space directly using the downhill simplex algorithm. Experimental results showed that the proposed method performs better than the traditional least-squares (LS) and total-least-squares (TLS) methods, especially for colors with low luminance values.
Institute of Scientific and Technical Information of China (English)
[No author listed]
2007-01-01
In this paper, we consider nonlinear infinity-norm minimization problems. We devise a reliable Lagrangian dual approach for solving this kind of problem and, based on this method, we propose an algorithm for mixed linear and nonlinear infinity-norm minimization problems. Numerical results are presented.
Gharibi, Wajeb
2011-01-01
In this paper, we focus on nonlinear infinity-norm minimization problems, which have many applications, especially in computer science and operations research. We set out a reliable Lagrangian dual approach for solving this kind of problem in general and, based on this method, we propose an algorithm for the mixed linear and nonlinear infinity-norm minimization cases, with numerical results.
Minimal Degrees of Faithful Quasi-Permutation Representations for Direct Products of p-Groups
Indian Academy of Sciences (India)
Ghodrat Ghaffarzadeh; Mohammad Hassan Abbaspour
2012-08-01
In [2], the algorithms for $c(G)$, $q(G)$ and $p(G)$, the minimal degrees of faithful quasi-permutation and permutation representations of a finite group, are given. The main purpose of this paper is to consider the relationship between these minimal degrees for non-trivial p-groups G and H and for the group G × H.
Electron tomography based on a total variation minimization reconstruction technique
Energy Technology Data Exchange (ETDEWEB)
Goris, B., E-mail: bart.goris@ua.ac.be [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van den Broek, W. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [Centrum Wiskunde and Informatica, Science Park 123, NL-1098XG Amsterdam (Netherlands); Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Heidari Mezerji, H.; Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium)
2012-02-15
The 3D reconstruction of a tilt series for electron tomography is mostly carried out using the weighted backprojection (WBP) algorithm or using one of the iterative algorithms such as the simultaneous iterative reconstruction technique (SIRT). However, it is known that these reconstruction algorithms cannot compensate for the missing wedge. Here, we apply a new reconstruction algorithm for electron tomography, which is based on compressive sensing. This is a field in image processing specialized in finding a sparse solution, or a solution with a sparse gradient, to a set of ill-posed linear equations. Therefore, it can be applied to electron tomography, where the reconstructed objects often have a sparse gradient at the nanoscale. Using a combination of different simulated and experimental datasets, it is shown that missing wedge artefacts are reduced in the final reconstruction. Moreover, it seems that the reconstructed datasets have a higher fidelity and are easier to segment in comparison to reconstructions obtained by more conventional iterative algorithms. Highlights: • A reconstruction algorithm for electron tomography is investigated based on total variation minimization. • Missing wedge artefacts are reduced by this algorithm. • The reconstruction is easier to segment. • More reliable quantitative information can be obtained.
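The total-variation idea can be shown in one dimension: penalizing the sum of absolute differences flattens noise while preserving sharp jumps, which is why TV-regularized reconstructions suit objects with sparse gradients. The sketch below denoises a toy piecewise-constant signal by subgradient descent; it is illustrative only, since the paper's reconstruction adds a tomographic projection data term in 3D:

```python
# 1-D analogue of TV-regularized reconstruction:
# minimize sum((x - y)^2) + lam * TV(x) by (sub)gradient descent,
# where TV(x) = sum(|x[i+1] - x[i]|).
def tv(x):
    return sum(abs(a - b) for a, b in zip(x[1:], x[:-1]))

def denoise(y, lam=0.5, step=0.05, iters=1000):
    x = list(y)
    for _ in range(iters):
        g = [2 * (xi - yi) for xi, yi in zip(x, y)]  # fidelity gradient
        for i in range(len(x) - 1):                  # TV subgradient
            s = (x[i] > x[i + 1]) - (x[i] < x[i + 1])
            g[i] += lam * s
            g[i + 1] -= lam * s
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

noisy = [0.1, -0.05, 0.02, 1.1, 0.95, 1.04]  # noisy two-level signal
clean = denoise(noisy)                        # flat regions flatten,
                                              # the jump survives
```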
Parallel Coordinate Descent for L1-Regularized Loss Minimization
Bradley, Joseph K; Bickson, Danny; Guestrin, Carlos
2011-01-01
We propose Shotgun, a parallel coordinate descent algorithm for minimizing L1-regularized losses. Though coordinate descent seems inherently sequential, we prove convergence bounds for Shotgun which predict linear speedups, up to a problem-dependent limit. We present a comprehensive empirical study of Shotgun for Lasso and sparse logistic regression. Our theoretical predictions on the potential for parallelism closely match behavior on real data. Shotgun outperforms other published solvers on a range of large problems, proving to be one of the most scalable algorithms for L1.
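The coordinate update that Shotgun parallelizes can be sketched in its sequential form. The soft-thresholding step below is the standard Lasso coordinate-descent update; the parallel coordinate scheduling that defines Shotgun itself is omitted:

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the closed-form 1-D Lasso solution."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, iters=100):
    """Cyclic coordinate descent for 0.5*||y - X w||^2 + lam*||w||_1.
    (Sequential sketch; Shotgun updates several coordinates in parallel.)"""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            # residual with feature j's contribution removed
            r = y - X @ w + X[:, j] * w[j]
            w[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
    return w

# for an orthonormal design, the solution is soft-thresholded y
X = np.eye(4)
y = np.array([3.0, 0.5, -2.0, 0.0])
w = lasso_cd(X, y, lam=1.0)
```

With `X` orthonormal the result equals `soft_threshold` applied componentwise to `y`, a useful sanity check for any Lasso solver.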
Real-time minimal bit error probability decoding of convolutional codes
Lee, L. N.
1973-01-01
A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.
Real-time minimal-bit-error probability decoding of convolutional codes
Lee, L.-N.
1974-01-01
A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.
Hocaoğlu, C; Sanderson, A C
1997-01-01
A novel genetic algorithm (GA) using minimal representation size cluster (MRSC) analysis is designed and implemented for solving multimodal function optimization problems. The problem of multimodal function optimization is framed within a hypothesize-and-test paradigm using minimal representation size (minimal complexity) for species formation and a GA. A multiple-population GA is developed to identify different species. The number of populations, thus the number of different species, is determined by the minimal representation size criterion. Therefore, the proposed algorithm reveals the structure of the multimodal function when no a priori knowledge about the function is available. The effectiveness of the algorithm is demonstrated on a number of multimodal test functions. The proposed scheme results in a highly parallel algorithm for finding multiple local minima. In this paper, a path-planning algorithm is also developed based on the MRSC_GA algorithm. The algorithm utilizes MRSC_GA for planning paths for mobile robots, piano-mover problems, and N-link manipulators. The MRSC_GA is used for generating multipaths to provide alternative solutions to the path-planning problem. The generation of alternative solutions is especially important for planning paths in dynamic environments. A novel iterative multiresolution path representation is used as a basis for the GA coding. The effectiveness of the algorithm is demonstrated on a number of two-dimensional path-planning problems.
Simulating granular materials by energy minimization
Krijgsman, D.; Luding, S.
2016-11-01
Discrete element methods are extremely helpful in understanding the complex behaviors of granular media, as they give valuable insight into all internal variables of the system. In this paper, a novel discrete element method for performing simulations of granular media is presented, based on the minimization of the potential energy in the system. Contrary to most discrete element methods (i.e., soft-particle method, event-driven method, and non-smooth contact dynamics), the system does not evolve by (approximately) integrating Newton's equations of motion in time, but rather by searching for mechanical equilibrium solutions for the positions of all particles in the system, which is mathematically equivalent to locally minimizing the potential energy. The new method allows for the rapid creation of jammed initial conditions (to be used for further studies) and for the simulation of quasi-static deformation problems. The major advantage of the new method is that it allows for truly static deformations. The system does not evolve with time, but rather with the externally applied strain or load, so that there is no kinetic energy in the system, in contrast to other quasi-static methods. To test the performance of the algorithm for both types of applications, we examine the number of iterations required for the system to converge to a stable solution. For each single iteration, the required computational effort scales linearly with the number of particles. During the process of creating initial conditions, the required number of iterations for two-dimensional systems scales with the square root of the number of particles in the system. The required number of iterations increases for systems closer to the jamming packing fraction. For a quasi-static pure shear deformation simulation, the results of the new method are validated by regular soft-particle dynamics simulations. The energy minimization algorithm is able to capture the evolution of the
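A minimal sketch of the idea, assuming harmonic (linear-spring) contacts and plain gradient descent rather than the authors' solver: particles move down the potential-energy gradient until overlaps vanish, with no time integration involved.

```python
import numpy as np

def relax(pos, radius=0.5, k=1.0, step=0.1, iters=2000):
    """Drive particles toward mechanical equilibrium by gradient descent on
    the total harmonic overlap energy E = sum_pairs 0.5*k*overlap^2.
    A toy sketch of quasi-static energy minimization, not the paper's method."""
    pos = pos.astype(float).copy()
    n = len(pos)
    for _ in range(iters):
        force = np.zeros_like(pos)       # force = -dE/dpos
        for i in range(n):
            for j in range(i + 1, n):
                d = pos[i] - pos[j]
                dist = np.linalg.norm(d)
                overlap = 2 * radius - dist
                if overlap > 0 and dist > 0:
                    f = k * overlap * d / dist   # repulsive contact force
                    force[i] += f
                    force[j] -= f
        pos += step * force                       # descent step
    return pos

# two overlapping disks relax until they just touch (separation -> 1.0)
final = relax(np.array([[0.0, 0.0], [0.6, 0.0]]))
```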
Online Speed Scaling Based on Active Job Count to Minimize Flow Plus Energy
DEFF Research Database (Denmark)
Lam, Tak-Wah; Lee, Lap Kei; To, Isaac K. K.;
2013-01-01
This paper is concerned with online scheduling algorithms that aim at minimizing the total flow time plus energy usage. The results are divided into two parts. First, we consider the well-studied “simple” speed scaling model and show how to analyze a speed scaling algorithm (called AJC) that changes speed discretely. This is in contrast to the previous algorithms which change the speed continuously. More interestingly, AJC admits a better competitive ratio, and without using extra speed. In the second part, we extend the study to a more general speed scaling model where the processor can enter a sleep state to further save energy. A new sleep management algorithm called IdleLonger is presented. This algorithm, when coupled with AJC, gives the first competitive algorithm for minimizing total flow time plus energy in the general model.
DEFF Research Database (Denmark)
Markham, Annette
This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.
Minimally Invasive Video-Assisted versus Minimally Invasive Nonendoscopic Thyroidectomy
Directory of Open Access Journals (Sweden)
Zdeněk Fík
2014-01-01
Minimally invasive video-assisted thyroidectomy (MIVAT) and minimally invasive nonendoscopic thyroidectomy (MINET) represent well-accepted and reproducible techniques developed with the main goal of improving cosmetic outcome, accelerating healing, and increasing patients' comfort following thyroid surgery. Between 2007 and 2011, a prospective nonrandomized study of patients undergoing minimally invasive thyroid surgery was performed to compare the advantages and disadvantages of the two techniques. There were no significant differences in the length of incision required to perform the surgical procedures. Mean duration of hemithyroidectomy was comparable in both groups, but total thyroidectomy was more time-consuming when performed by MIVAT. More patients underwent MIVAT procedures without active drainage in the postoperative course, and we also observed a trend toward less pain in the same group. This was paralleled by a statistically significant decrease in the administration of both opiate and nonopiate analgesics. We encountered two cases of recurrent laryngeal nerve palsy, in the MIVAT group only. MIVAT and MINET represent safe and feasible alternatives to conventional thyroid surgery in selected cases, and this prospective study has shown minimal differences between the two techniques.
Minimizing Costs Can Be Costly
Directory of Open Access Journals (Sweden)
Rasmus Rasmussen
2010-01-01
A quite common practice, even in the academic literature, is to simplify a decision problem and model it as a cost-minimizing problem. In fact, some types of models have been standardized as minimization problems, like Quadratic Assignment Problems (QAPs), where a maximization formulation would be treated as a “generalized” QAP and not solvable by many of the specially designed software packages for QAPs. Ignoring revenues when modeling a decision problem works only if costs can be separated from the decisions influencing revenues. More often than we think, this is not the case, and minimizing costs will not lead to maximized profit. This is demonstrated using spreadsheets to solve a small example. The example is also used to demonstrate other pitfalls in network models: the inability to generally balance the problem or allocate costs in advance, and the tendency to anticipate a specific type of solution and thereby make constraints too limiting when formulating the problem.
Minimal Marking: A Success Story
Directory of Open Access Journals (Sweden)
Anne McNeilly
2014-11-01
The minimal-marking project conducted in Ryerson’s School of Journalism throughout 2012 and early 2013 resulted in significantly higher grammar scores in two first-year classes of minimally marked university students when compared to two traditionally marked classes. The “minimal-marking” concept (Haswell, 1983), which requires dramatically more student engagement, resulted in more successful learning outcomes for surface-level knowledge acquisition than the more traditional approach of “teacher-corrects-all.” Results suggest it would be effective, not just for grammar, punctuation, and word usage, the objective here, but for any material that requires rote-memory learning, such as the Associated Press or Canadian Press style rules used by news publications across North America.
An Object-oriented minimization package for HEP
Energy Technology Data Exchange (ETDEWEB)
Mark S Fischler and David Sachs
2003-07-02
A portion of the HEP community has perceived the need for a minimization package written in C++ and taking advantage of the Object-Oriented nature of that language. To be acceptable for HEP, such a package must at least encompass all the capabilities of Minuit. Aside from the slight plus of not relying on outside Fortran compilation, the advantages that a C++ package based on O-O design would confer over the multitude of available C++ Minuit-wrappers include: Easier extensibility to different algorithms and forms of constraints; and usage modes which would not be available in the global-common-based Minuit design. An example of the latter is a job pursuing two ongoing minimization problems simultaneously. We discuss the design and implementation of such a package, which extends Minuit only in minor ways but which greatly diminishes the programming effort (if not the algorithm thought) needed to make more significant extensions.
Mining Functional Dependency from Relational Databases Using Equivalent Classes and Minimal Cover
Directory of Open Access Journals (Sweden)
Jalal Atoum
2008-01-01
Data Mining (DM) represents the process of extracting interesting and previously unknown knowledge from data. This study proposes a new algorithm, called FD_Discover, for discovering Functional Dependencies (FDs) from databases. The algorithm employs concepts from relational database design theory, specifically equivalence classes and the minimal cover. It achieves a large improvement in performance in comparison with a recent and similar algorithm called FD_MINE.
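The equivalence-class idea can be illustrated with partition refinement: a functional dependency X → Y holds exactly when partitioning the rows by X ∪ Y produces no more classes than partitioning by X alone. This is a generic sketch of that test, not the FD_Discover algorithm itself:

```python
from collections import defaultdict

def partition(rows, attrs):
    """Group row indices by their values on `attrs`: one equivalence
    class per distinct value combination."""
    groups = defaultdict(list)
    for i, row in enumerate(rows):
        groups[tuple(row[a] for a in attrs)].append(i)
    return list(groups.values())

def holds(rows, lhs, rhs):
    """X -> Y holds iff refining the partition of X by Y adds no classes."""
    return len(partition(rows, lhs)) == len(partition(rows, lhs + rhs))
```

For example, in a table where every row with the same `a` also agrees on `b` but not on `c`, `holds` reports `a -> b` but rejects `a -> c`.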
A Fast Algorithm for the Linear Complexity of Periodic Sequences
Institute of Scientific and Technical Information of China (English)
WEIShimin; CHENZhong; WANGZhao
2004-01-01
An efficient algorithm for determining the linear complexity and the minimal polynomial of a sequence with period 2p^m q^n over a finite field GF(q) is proposed, where p and q are primes, and q is a primitive root modulo p^2. The new algorithm generalizes both the algorithm for computing the linear complexity of a sequence with period q^n over GF(q) and the algorithm for computing that of a sequence with period 2p^m over GF(q).
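For intuition, the related Berlekamp–Massey algorithm computes the linear complexity of an arbitrary finite sequence; the algorithm in this record specializes the computation to sequences of period 2p^m q^n over GF(q). A GF(2) sketch (my own illustration, not the paper's algorithm):

```python
def linear_complexity_gf2(s):
    """Berlekamp-Massey over GF(2): length of the shortest LFSR that
    generates the bit sequence s."""
    c, b = [1], [1]        # current and previous connection polynomials
    L, m = 0, -1
    for n in range(len(s)):
        # discrepancy: next output of the current LFSR vs the actual bit
        d = s[n]
        for i in range(1, L + 1):
            d ^= c[i] & s[n - i]
        if d:
            t = c[:]
            shift = n - m
            c = c + [0] * (len(b) + shift - len(c))
            for i, bi in enumerate(b):
                c[i + shift] ^= bi
            if 2 * L <= n:
                L, m, b = n + 1 - L, n, t
    return L
```

The alternating sequence 0,1,0,1,… satisfies a length-2 recurrence, so its linear complexity is 2.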
Waste Minimization Through Process Integration and Multi-objective Optimization
Institute of Scientific and Technical Information of China (English)
[No author listed]
2001-01-01
By avoiding or reducing the production of waste, waste minimization is an effective approach to solve the pollution problem in chemical industry. Process integration supported by multi-objective optimization provides a framework for process design or process retrofit by simultaneously optimizing on the aspects of environment and economics. Multi-objective genetic algorithm is applied in this area as the solution approach for the multi-objective optimization problem.
Mean-Reverting Portfolio Design via Majorization-Minimization Method
Zhao, Ziping; Palomar, Daniel P.
2016-01-01
This paper considers the mean-reverting portfolio design problem arising from statistical arbitrage in the financial markets. The problem is formulated by optimizing a criterion characterizing the mean-reversion strength of the portfolio and taking into consideration the variance of the portfolio and an investment budget constraint at the same time. An efficient algorithm based on the majorization-minimization (MM) method is proposed to solve the problem. Numerical results show that our propo...
Qualifying and quantifying minimal hepatic encephalopathy.
Morgan, Marsha Y; Amodio, Piero; Cook, Nicola A; Jackson, Clive D; Kircheis, Gerald; Lauridsen, Mette M; Montagnese, Sara; Schiff, Sami; Weissenborn, Karin
2016-12-01
Minimal hepatic encephalopathy is the term applied to the neuropsychiatric status of patients with cirrhosis who are unimpaired on clinical examination but show alterations in neuropsychological tests exploring psychomotor speed/executive function and/or in neurophysiological variables. There is no gold standard for the diagnosis of this syndrome. As these patients have, by definition, no recognizable clinical features of brain dysfunction, the primary prerequisite for the diagnosis is careful exclusion of clinical symptoms and signs. A large number of psychometric tests/test systems have been evaluated in this patient group. Of these the best known and validated is the Portal Systemic Hepatic Encephalopathy Score (PHES) derived from a test battery of five paper and pencil tests; normative reference data are available in several countries. The electroencephalogram (EEG) has been used to diagnose hepatic encephalopathy since the 1950s but, once popular, the technology is not as accessible now as it once was. The performance characteristics of the EEG are critically dependent on the type of analysis undertaken; spectral analysis has better performance characteristics than visual analysis; evolving analytical techniques may provide better diagnostic information while the advent of portable wireless headsets may facilitate more widespread use. A large number of other diagnostic tools have been validated for the diagnosis of minimal hepatic encephalopathy including Critical Flicker Frequency, the Inhibitory Control Test, the Stroop test, the Scan package and the Continuous Reaction Time; each has its pros and cons; strengths and weaknesses; protagonists and detractors. Recent AASLD/EASL Practice Guidelines suggest that the diagnosis of minimal hepatic encephalopathy should be based on the PHES test together with one of the validated alternative techniques or the EEG. Minimal hepatic encephalopathy has a detrimental effect on the well-being of patients and their care
Minimization of Linear Functionals Defined on| Solutions of Large-Scale Discrete Ill-Posed Problems
DEFF Research Database (Denmark)
Elden, Lars; Hansen, Per Christian; Rojas, Marielba
2003-01-01
The minimization of linear functionals defined on the solutions of discrete ill-posed problems arises, e.g., in the computation of confidence intervals for these solutions. In 1990, Elden proposed an algorithm for this minimization problem based on a parametric-programming reformulation involving the solution of a sequence of trust-region problems, and using matrix factorizations. In this paper, we describe MLFIP, a large-scale version of this algorithm where a limited-memory trust-region solver is used on the subproblems. We illustrate the use of our algorithm in connection with an inverse heat...
A minimal descriptor of an ancestral recombinations graph
Directory of Open Access Journals (Sweden)
Palamara Pier
2011-02-01
Background: Ancestral Recombinations Graph (ARG) is a phylogenetic structure that encodes both duplication events, such as mutations, as well as genetic exchange events, such as recombinations: this captures the (genetic) dynamics of a population evolving over generations. Results: In this paper, we identify the structure-preserving and samples-preserving core of an ARG G and call it the minimal descriptor ARG of G. Its structure-preserving characteristic ensures that all the branch lengths of the marginal trees of the minimal descriptor ARG are identical to those of G, and the samples-preserving property asserts that the patterns of genetic variation in the samples of the minimal descriptor ARG are exactly the same as those of G. We also prove that even an unbounded G has a finite minimal descriptor that continues to preserve certain (graph-theoretic) properties of G, and for an appropriate class of ARGs our estimate (Eqn 8), as well as empirical observation, is that the expected reduction in the number of vertices is exponential. Conclusions: Based on the definition of this lossless and bounded structure, we derive local properties of the vertices of a minimal descriptor ARG, which lend themselves very naturally to the design of efficient sampling algorithms. We further show that a class of minimal descriptors, that of binary ARGs, models the standard coalescent exactly (Thm 6).
$\\ell_1$-K-SVD: A Robust Dictionary Learning Algorithm With Simultaneous Update
Mukherjee, Subhadip; Basu, Rupam; Seelamantula, Chandra Sekhar
2014-01-01
We develop a dictionary learning algorithm by minimizing the $\\ell_1$ distortion metric on the data term, which is known to be robust for non-Gaussian noise contamination. The proposed algorithm exploits the idea of iterative minimization of weighted $\\ell_2$ error. We refer to this algorithm as $\\ell_1$-K-SVD, where the dictionary atoms and the corresponding sparse coefficients are simultaneously updated to minimize the $\\ell_1$ objective, resulting in noise-robustness. We demonstrate throug...
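The "iterative minimization of weighted ℓ2 error" idea used here is the classic IRLS scheme. A minimal sketch on a plain ℓ1 regression problem (not the dictionary-update step of ℓ1-K-SVD): fitting a constant under the ℓ1 loss recovers the median, illustrating the robustness to outliers that motivates the ℓ1 data term.

```python
import numpy as np

def irls_l1(X, y, iters=50, eps=1e-8):
    """Minimize ||y - X w||_1 by iteratively reweighted least squares (IRLS):
    each pass solves a weighted l2 problem that majorizes the l1 cost."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]   # start from the l2 fit
    for _ in range(iters):
        r = np.abs(y - X @ w) + eps            # residual magnitudes (eps guards /0)
        W = X / r[:, None]                     # rows reweighted by 1/|r_i|
        w = np.linalg.solve(X.T @ W, W.T @ y)  # weighted normal equations
    return w

# l1-fitting a constant recovers the median (1.0), not the mean (~3.67):
# the outlier 10.0 is down-weighted instead of dragging the fit
w_med = irls_l1(np.ones((3, 1)), np.array([0.0, 1.0, 10.0]))
```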
Minimal Flavor Constraints for Technicolor
DEFF Research Database (Denmark)
Sakuma, Hidenori; Sannino, Francesco
2010-01-01
We analyze the constraints on the vacuum polarization of the standard model gauge bosons from a minimal set of flavor observables valid for a general class of models of dynamical electroweak symmetry breaking. We will show that the constraints have a strong impact on the self-coupling and mas...
Dubin's Minimal Linkage Construct Revisited.
Rogers, Donald P.
This paper contains a theoretical analysis and empirical study that support the major premise of Robert Dubin's minimal-linkage construct-that restricting communication links increases organizational stability. The theoretical analysis shows that fewer communication links are associated with less uncertainty, more redundancy, and greater…
Minimal Surfaces for Hitchin Representations
DEFF Research Database (Denmark)
Li, Qiongling; Dai, Song
2016-01-01
Given a reductive representation $\\rho: \\pi_1(S)\\rightarrow G$, there exists a $\\rho$-equivariant harmonic map $f$ from the universal cover of a fixed Riemann surface $\\Sigma$ to the symmetric space $G/K$ associated to $G$. If the Hopf differential of $f$ vanishes, the harmonic map is then minimal...
Acquiring minimally invasive surgical skills
Hiemstra, Ellen
2012-01-01
Many topics in surgical skills education have been implemented without a solid scientific basis. For that reason we have tried to find this scientific basis. We have focused on training and evaluation of minimally invasive surgical skills in a training setting and in practice in the operating room.
What is minimally invasive dentistry?
Ericson, Dan
2004-01-01
Minimally Invasive Dentistry is the application of "a systematic respect for the original tissue." This implies that the dental profession recognizes that an artifact is of less biological value than the original healthy tissue. Minimally invasive dentistry is a concept that can embrace all aspects of the profession. The common delineator is tissue preservation, preferably by preventing disease from occurring and intercepting its progress, but also removing and replacing with as little tissue loss as possible. It does not suggest that we make small fillings to restore incipient lesions or surgically remove impacted third molars without symptoms as routine procedures. The introduction of predictable adhesive technologies has led to a giant leap in interest in minimally invasive dentistry. The concept bridges the traditional gap between prevention and surgical procedures, which is just what dentistry needs today. The evidence-base for survival of restorations clearly indicates that restoring teeth is a temporary palliative measure that is doomed to fail if the disease that caused the condition is not addressed properly. Today, the means, motives and opportunities for minimally invasive dentistry are at hand, but incentives are definitely lacking. Patients and third parties seem to be convinced that the only things that count are replacements. Namely, they are prepared to pay for a filling but not for a procedure that can help avoid having one.
Directory of Open Access Journals (Sweden)
Madjid Mirzavaziri
2007-01-01
norms ‖⋅‖1 and ‖⋅‖2 on ℂn such that N(A) = max{‖Ax‖2 : ‖x‖1 = 1, x ∈ ℂn} for all A ∈ ℳn. This may be regarded as an extension of a known result on the characterization of minimal algebra norms.
Implications of minimally invasive therapy
Banta, H.D.; Schersten, T.; Jonsson, E.
1993-01-01
The field of minimally invasive therapy (MIT) raises many important issues for the future of health care. It seems inevitable that MIT will replace much conventional surgery. This trend is good for society and good for patients. The health care system, however, may find the change disruptive. The
An Algorithmic Framework for Multiobjective Optimization
Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.
2013-01-01
Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise especially when dealing with problems with multiple objectives (especially in cases more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795
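The weighted-sum scalarization mentioned in this record reduces a two-objective problem to a family of single-objective ones, one per weight vector. A minimal sketch over a finite candidate set, with toy objectives of my own choosing:

```python
import numpy as np

def weighted_sum_front(f1, f2, candidates, weights):
    """Weighted-sum scalarization for two objectives: for each weight lam,
    minimize lam*f1 + (1-lam)*f2 over the candidates and collect the winners.
    A minimal illustration, not the paper's framework."""
    front = []
    for lam in weights:
        scores = [lam * f1(x) + (1 - lam) * f2(x) for x in candidates]
        front.append(candidates[int(np.argmin(scores))])
    return front

# conflicting objectives: f1 prefers x=0, f2 prefers x=2
cands = np.linspace(0.0, 2.0, 201)
front = weighted_sum_front(lambda x: x**2, lambda x: (x - 2)**2,
                           cands, [1.0, 0.5, 0.0])
```

Sweeping the weight from 1 to 0 traces Pareto-optimal trade-offs between the two objectives (here from x = 0 through x = 1 to x = 2).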
Casanova, Henri; Robert, Yves
2008-01-01
…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi
DEFF Research Database (Denmark)
Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy
2007-01-01
We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n^2 variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...
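The classic n(n+1)/2 packed triangular layout referred to here, and its inverse rearrangement, can be sketched as follows (this shows the standard column-major packing, not the authors' block hybrid format):

```python
import numpy as np

def pack_lower(A):
    """Pack the lower triangle of a symmetric matrix column by column into
    a length n(n+1)/2 vector (the classic packed layout)."""
    n = A.shape[0]
    return np.concatenate([A[j:, j] for j in range(n)])

def unpack_lower(p, n):
    """Inverse rearrangement: rebuild the full symmetric matrix."""
    A = np.zeros((n, n))
    k = 0
    for j in range(n):
        A[j:, j] = p[k:k + n - j]
        k += n - j
    A += np.tril(A, -1).T      # mirror the strict lower triangle upward
    return A
```

Packing a 4-by-4 symmetric matrix yields 10 numbers instead of 16, and unpacking restores the original exactly.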
Design and implementation of two concurrent multi-sensor integration algorithms for mobile robots
Energy Technology Data Exchange (ETDEWEB)
Jones, J.P.; Beckerman, M.; Mann, R.C.
1989-01-01
Two multi-sensor integration algorithms useful in mobile robotics applications are reviewed. A minimal set of utilities is then developed which enables implementation of these algorithms on a distributed-memory concurrent computer. 14 refs., 3 figs.
Duality based optical flow algorithms with applications
DEFF Research Database (Denmark)
Rakêt, Lars Lau
We consider the popular TV-L1 optical flow formulation, and the so-called duality based algorithm for minimizing the TV-L1 energy. The original formulation is extended to allow for vector valued images, and minimization results are given. In addition we consider different definitions of total variation regularization, and related formulations of the optical flow problem that may be used with a duality based algorithm. We present a highly optimized algorithmic setup to estimate optical flows, and give five novel applications. The first application is registration of medical images, where X-ray images of different hands, taken using different imaging devices are registered using a TV-L1 optical flow algorithm. We propose to regularize the input images, using sparsity enhancing regularization of the image gradient to improve registration results. The second application is registration of 2D...
Modified Bully Algorithm using Election Commission
Rahman, Muhammad Mahbubur
2010-01-01
Electing a leader is a vital issue not only in distributed computing but also in communication networks [1, 2, 3, 4, 5], centralized mutual exclusion algorithms [6, 7], centralized control IPC, etc. A leader is required to synchronize different processes, and election algorithms are used to elect a coordinator among the available processes in such a way that there is only one coordinator at any time. The bully election algorithm is one of the classical and well-known approaches to coordinator election. This paper presents a modified version of the bully election algorithm using a new concept called an election commission. This approach will not only reduce redundant elections but also minimize the total number of elections, and hence it will minimize message passing, network traffic, and the complexity of the existing system.
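The classic bully behavior that the paper modifies can be sketched as a toy simulation: the initiator challenges all higher IDs, any live higher process takes over, and the highest live ID becomes coordinator. The message counting here is schematic, and this is the unmodified algorithm, not the paper's election-commission variant:

```python
def bully_election(initiator, alive):
    """One election round of the classic bully algorithm.
    Returns (coordinator, number_of_messages); the coordinator is always
    the highest live ID."""
    msgs = 0
    wave = [initiator]            # processes currently running an election
    seen = {initiator}
    coordinator = initiator
    while wave:
        p = wave.pop()
        higher = [q for q in alive if q > p]
        msgs += len(higher)       # ELECTION messages to all higher IDs
        for q in higher:
            msgs += 1             # OK reply; q takes over the election
            if q not in seen:
                seen.add(q)
                wave.append(q)
        if not higher:            # no live process outranks p
            coordinator = p
    msgs += len(alive) - 1        # COORDINATOR broadcast
    return coordinator, msgs
```

Lower-ID initiators trigger longer challenge chains, which is exactly the redundancy the paper's election commission aims to cut down.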
On the Value of Job Migration in Online Makespan Minimization
Albers, Susanne
2011-01-01
Makespan minimization on identical parallel machines is a classical scheduling problem. We consider the online scenario where a sequence of $n$ jobs has to be scheduled non-preemptively on $m$ machines so as to minimize the maximum completion time of any job. The best competitive ratio that can be achieved by deterministic online algorithms is in the range $[1.88,1.9201]$. Currently no randomized online algorithm with a smaller competitiveness is known, for general $m$. In this paper we explore the power of job migration, i.e., an online scheduler is allowed to perform a limited number of job reassignments. Migration is a common technique used in theory and practice to balance load in parallel processing environments. As our main result we settle the performance that can be achieved by deterministic online algorithms. We develop an algorithm that is $\alpha_m$-competitive, for any $m\geq 2$, where $\alpha_m$ is the solution of a certain equation. For $m=2$, $\alpha_2 = 4/3$ and $\lim_{m\rightarrow \infty} \al...
PERFORMANCE ANALYSIS OF MINIMAL PATH FAULT TOLERANT ROUTING IN NOC
Institute of Scientific and Technical Information of China (English)
M. Ahmed; V. Laxmi; M. S. Gaur
2011-01-01
Occurrence of faults in a Network on Chip (NoC) is inevitable as the feature size continuously decreases and the number of processing elements increases. A fault is revocable if it is transient. Transient faults may occur inside a router, in the core, or in the communication wires; examples are buffer overflow in a router, clock skew, and crosstalk. Transient faults can be revoked by retransmitting the faulty packets using oblivious or adaptive routing algorithms. Irrevocable faults cause non-functionality of a segment and mainly occur during the fabrication process. NoC reliability increases with efficient routing algorithms that can handle the maximum number of faults without deadlock in the network. While transient faults are temporary and easily revoked by packet retransmission, permanent faults require efficient routing that bypasses the non-functional segments. Thus, our focus is on the analysis of adaptive minimal-path fault-tolerant routing to handle permanent faults. A comparative analysis of the partially adaptive fault-tolerant routing algorithms West-First, North-Last, Negative-First, and Odd-Even against the Minimal-path Fault-Tolerant routing (MinFT) algorithm under node and link failures is performed using the NoC Interconnect RoutinG and Application Modeling simulator (NIRGAM) for the 2D mesh topology. Results suggest that MinFT ensures data transmission under worst-case conditions, compared to the other adaptive routing algorithms.
Harm minimization among teenage drinkers
DEFF Research Database (Denmark)
Jørgensen, Morten Hulvej; Curtis, Tine; Christensen, Pia Haudrup
2007-01-01
AIM: To examine strategies of harm minimization employed by teenage drinkers. DESIGN, SETTING AND PARTICIPANTS: Two periods of ethnographic fieldwork were conducted in a rural Danish community of approximately 2000 inhabitants. The fieldwork included 50 days of participant observation among 13-16-year-olds (n = 93) as well as 26 semistructured interviews with small self-selected friendship groups of 15-16-year-olds (n = 32). FINDINGS: The teenagers participating in the present study were more concerned about social than health risks. The informants monitored their own level of intoxication. In regulating the social context of drinking they relied on their personal experiences more than on formalized knowledge about alcohol and harm, which they had learned from prevention campaigns and educational programmes. CONCLUSIONS: In this study we found that teenagers may help each other to minimize alcohol...
A Minimally Symmetric Higgs Boson
Low, Ian
2014-01-01
Models addressing the naturalness of a light Higgs boson typically employ symmetries, either bosonic or fermionic, to stabilize the Higgs mass. We consider a setup with the minimal amount of symmetries: four shift symmetries acting on the four components of the Higgs doublet, subject to the constraints of linearly realized SU(2)xU(1) electroweak symmetry. Up to terms that explicitly violate the shift symmetries, the effective lagrangian can be derived, irrespective of the spontaneously broken group G in the ultraviolet, and is universal in all models where the Higgs arises as a pseudo-Nambu-Goldstone boson (PNGB). Very high energy scatterings of vector bosons could provide smoking gun signals of a minimally symmetric Higgs boson.
Minimizing forced outage risk in generator bidding
Das, Dibyendu
Competition in power markets has exposed the participating companies to physical and financial uncertainties. Generator companies bid to supply power in a day-ahead market. Once their bids are accepted by the ISO they are bound to supply power. A random outage after acceptance of bids forces a generator to buy power from the expensive real-time hourly spot market and sell to the ISO at the set day-ahead market clearing price, incurring losses. A risk management technique is developed to assess this financial risk associated with forced outages of generators and then minimize it. This work presents a risk assessment module which measures the financial risk of generators bidding in an open market for different bidding scenarios. The day-ahead power market auction is modeled using a Unit Commitment algorithm and a combination of Normal and Cauchy distributions generate the real time hourly spot market. Risk profiles are derived and VaRs are calculated at 98 percent confidence level as a measure of financial risk. Risk Profiles and VaRs help the generators to analyze the forced outage risk and different factors affecting it. The VaRs and the estimated total earning for different bidding scenarios are used to develop a risk minimization module. This module will develop a bidding strategy of the generator company such that its estimated total earning is maximized keeping the VaR below a tolerable limit. This general framework of a risk management technique for the generating companies bidding in competitive day-ahead market can also help them in decisions related to building new generators.
Consistency of trace norm minimization
Bach, Francis
2007-01-01
Regularization by the sum of singular values, also referred to as the trace norm, is a popular technique for estimating low rank rectangular matrices. In this paper, we extend some of the consistency results of the Lasso to provide necessary and sufficient conditions for rank consistency of trace norm minimization with the square loss. We also provide an adaptive version that is rank consistent even when the necessary condition for the non adaptive version is not fulfilled.
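Trace norm regularization is typically handled via its proximal operator, which soft-thresholds the singular values of the matrix. The following is a minimal NumPy sketch of that standard operator, given here as general background rather than as the estimator analyzed in the paper:

```python
import numpy as np

def trace_norm_prox(X, lam):
    """Proximal operator of lam * ||.||_* (the trace/nuclear norm).

    Computed by soft-thresholding the singular values: singular values
    below lam are zeroed, which is what drives low-rank solutions.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thresh = np.maximum(s - lam, 0.0)
    return U @ np.diag(s_thresh) @ Vt
```

Iterating this operator inside a proximal-gradient loop on the square loss yields the trace norm minimization whose rank consistency the paper studies.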
Principle of minimal work fluctuations.
Xiao, Gaoyang; Gong, Jiangbin
2015-08-01
Understanding and manipulating work fluctuations in microscale and nanoscale systems are of both fundamental and practical interest. For example, in considering the Jarzynski equality 〈e-βW〉=e-βΔF, a change in the fluctuations of e-βW may impact how rapidly the statistical average of e-βW converges towards the theoretical value e-βΔF, where W is the work, β is the inverse temperature, and ΔF is the free energy difference between two equilibrium states. Motivated by our previous study aiming at the suppression of work fluctuations, here we obtain a principle of minimal work fluctuations. In brief, adiabatic processes as treated in quantum and classical adiabatic theorems yield the minimal fluctuations in e-βW. In the quantum domain, if a system initially prepared at thermal equilibrium is subjected to a work protocol but isolated from a bath during the time evolution, then a quantum adiabatic process without energy level crossing (or an assisted adiabatic process reaching the same final states as in a conventional adiabatic process) yields the minimal fluctuations in e-βW, where W is the quantum work defined by two energy measurements at the beginning and at the end of the process. In the classical domain where the classical work protocol is realizable by an adiabatic process, then the classical adiabatic process also yields the minimal fluctuations in e-βW. Numerical experiments based on a Landau-Zener process confirm our theory in the quantum domain, and our theory in the classical domain explains our previous numerical findings regarding the suppression of classical work fluctuations [G. Y. Xiao and J. B. Gong, Phys. Rev. E 90, 052132 (2014)].
Risk minimization and portfolio diversification
Farzad Pourbabaee; Minsuk Kwak; Traian A. Pirvu
2014-01-01
We consider the problem of minimizing capital at risk in the Black-Scholes setting. The portfolio problem is studied given the possibility that a correlation constraint between the portfolio and a financial index is imposed. The optimal portfolio is obtained in closed form. The effects of the correlation constraint are explored; it turns out that this portfolio constraint leads to a more diversified portfolio.
Outcomes After Minimally Invasive Esophagectomy
Luketich, James D.; Pennathur, Arjun; Awais, Omar; Levy, Ryan M.; Keeley, Samuel; Shende, Manisha; Christie, Neil A.; Weksler, Benny; Landreneau, Rodney J.; Abbas, Ghulam; Schuchert, Matthew J.; Nason, Katie S.
2014-01-01
Background Esophagectomy is a complex operation and is associated with significant morbidity and mortality. In an attempt to lower morbidity, we have adopted a minimally invasive approach to esophagectomy. Objectives Our primary objective was to evaluate the outcomes of minimally invasive esophagectomy (MIE) in a large group of patients. Our secondary objective was to compare the modified McKeown minimally invasive approach (videothoracoscopic surgery, laparoscopy, neck anastomosis [MIE-neck]) with our current approach, a modified Ivor Lewis approach (laparoscopy, videothoracoscopic surgery, chest anastomosis [MIE-chest]). Methods We reviewed 1033 consecutive patients undergoing MIE. Elective operation was performed on 1011 patients; 22 patients with nonelective operations were excluded. Patients were stratified by surgical approach and perioperative outcomes analyzed. The primary endpoint studied was 30-day mortality. Results The MIE-neck was performed in 481 (48%) and MIE-Ivor Lewis in 530 (52%). Patients undergoing MIE-Ivor Lewis were operated in the current era. The median number of lymph nodes resected was 21. The operative mortality was 1.68%. Median length of stay (8 days) and ICU stay (2 days) were similar between the 2 approaches. Mortality rate was 0.9%, and recurrent nerve injury was less frequent in the Ivor Lewis MIE group (P < 0.001). Conclusions MIE in our center resulted in acceptable lymph node resection, postoperative outcomes, and low mortality using either an MIE-neck or an MIE-chest approach. The MIE Ivor Lewis approach was associated with reduced recurrent laryngeal nerve injury and mortality of 0.9% and is now our preferred approach. Minimally invasive esophagectomy can be performed safely, with good results in an experienced center. PMID:22668811
Minimal Length, Measurability and Gravity
Directory of Open Access Journals (Sweden)
Alexander Shalyt-Margolin
2016-03-01
The present work is a continuation of the previous papers written by the author on the subject. In terms of the measurability (or measurable quantities) notion introduced in a minimal length theory, first the consideration is given to a quantum theory in the momentum representation. The same terms are used to consider the Markov gravity model, which here illustrates the general approach to studies of gravity in terms of measurable quantities.
Optimizing Processes to Minimize Risk
Loyd, David
2017-01-01
NASA, like the other hazardous industries, has suffered very catastrophic losses. Human error will likely never be completely eliminated as a factor in our failures. When you can't eliminate risk, focus on mitigating the worst consequences and recovering operations. Bolstering processes to emphasize the role of integration and problem solving is key to success. Building an effective Safety Culture bolsters skill-based performance that minimizes risk and encourages successful engagement.
Torsional Rigidity of Minimal Submanifolds
DEFF Research Database (Denmark)
Markvorsen, Steen; Palmer, Vicente
2006-01-01
We prove explicit upper bounds for the torsional rigidity of extrinsic domains of minimal submanifolds $P^m$ in ambient Riemannian manifolds $N^n$ with a pole $p$. The upper bounds are given in terms of the torsional rigidities of corresponding Schwarz symmetrizations of the domains in warped... for the torsional rigidity are actually attained and give conditions under which the geometric average of the stochastic mean exit time for Brownian motion at infinity is finite...
Reservoir Operation to Minimize Sedimentation
Directory of Open Access Journals (Sweden)
Dyah Ari Wulandari
2013-10-01
The Wonogiri Reservoir capacity is decreasing rapidly, caused by serious sedimentation problems. In 2007, JICA proposed a sediment storage reservoir with a new spillway for the purpose of sediment flushing/sluicing from the Keduang River. The change of reservoir storage and of the reservoir system requires a sustainable reservoir operation technique, aimed at minimizing the deviation between the input and output of sediments. The main objective of this study is to explore optimal Wonogiri reservoir operation by minimizing the sediment trap. The CSUDP incremental dynamic programming procedure is used for the model optimization. The new operating rules also simulate a five-year operation period, to show the effect of the implemented techniques. The result of the study is that the newly developed reservoir operation system has many advantages when compared to the actual operation system; its disadvantage is that it is mainly designed for a wet hydrologic year, since its performance for the water supply is lower than that of the actual reservoir operations. Doi: 10.12777/ijse.6.1.16-23 [How to cite this article: Wulandari, D.A., Legono, D., and Darsono, S., 2014. Reservoir Operation to Minimize Sedimentation. International Journal of Science and Engineering, 5(2), 61-65. Doi: 10.12777/ijse.6.1.16-23]
Minimally invasive paediatric cardiac surgery.
Bacha, Emile; Kalfa, David
2014-01-01
The concept of minimally invasive surgery for congenital heart disease in paediatric patients is broad, and has the aim of reducing the trauma of the operation at each stage of management. Firstly, in the operating room using minimally invasive incisions, video-assisted thoracoscopic and robotically assisted surgery, hybrid procedures, image-guided intracardiac surgery, and minimally invasive cardiopulmonary bypass strategies. Secondly, in the intensive-care unit with neuroprotection and 'fast-tracking' strategies that involve early extubation, early hospital discharge, and less exposure to transfused blood products. Thirdly, during postoperative mid-term and long-term follow-up by providing the children and their families with adequate support after hospital discharge. Improvement of these strategies relies on the development of new devices, real-time multimodality imaging, aids to instrument navigation, miniaturized and specialized instrumentation, robotic technology, and computer-assisted modelling of flow dynamics and tissue mechanics. In addition, dedicated multidisciplinary co-ordinated teams involving congenital cardiac surgeons, perfusionists, intensivists, anaesthesiologists, cardiologists, nurses, psychologists, and counsellors are needed before, during, and after surgery to go beyond apparent technological and medical limitations with the goal to 'treat more while hurting less'.
Energy Technology Data Exchange (ETDEWEB)
Meneses, Anderson A.M. [Federal University of Western Para (Brazil); Physics Institute, Rio de Janeiro State University (Brazil); Giusti, Alessandro [IDSIA (Dalle Molle Institute for Artificial Intelligence), University of Lugano (Switzerland); Almeida, Andre P. de, E-mail: apalmeid@gmail.com [Physics Institute, Rio de Janeiro State University (Brazil); Nuclear Engineering Program, Federal University of Rio de Janeiro (Brazil); Nogueira, Liebert; Braz, Delson [Nuclear Engineering Program, Federal University of Rio de Janeiro (Brazil); Almeida, Carlos E. de [Radiological Sciences Laboratory, Rio de Janeiro State University (Brazil); Barroso, Regina C. [Physics Institute, Rio de Janeiro State University (Brazil)
2012-07-15
The research on applications of segmentation algorithms to Synchrotron Radiation X-Ray micro-Computed Tomography (SR-μCT) is an open problem, due to the interesting and well-known characteristics of SR images, such as the phase contrast effect. The Energy Minimization via Graph Cuts (EMvGC) algorithm represents a state-of-the-art segmentation algorithm, presenting an enormous potential for application in SR-μCT imaging. We describe the application of the EMvGC algorithm with swap moves for the segmentation of bone images acquired at the ELETTRA Laboratory (Trieste, Italy). Highlights: Microstructures of Wistar rats' ribs are investigated with Synchrotron Radiation μCT imaging. The present work is part of a research programme on the effects of radiotherapy on the thoracic region. Application of the Energy Minimization via Graph Cuts algorithm for segmentation is described.
A Wiring-Aware Approach to Minimizing Built-In Self-Test Overhead
Institute of Scientific and Technical Information of China (English)
Abdil Rashid Mohamed; Zebo Peng; Petru Eles
2005-01-01
This paper describes a built-in self-test (BIST) hardware overhead minimization technique used during a BIST synthesis process. The technique inserts a minimal amount of BIST resources into a digital system to make it fully testable. The BIST resource insertion is guided by the results of symbolic testability analysis. It takes into consideration both BIST register cost and wiring overhead in order to obtain minimal-area designs. A simulated annealing algorithm is used to solve the overhead minimization problem. Experiments show that considering wiring area during BIST synthesis results in smaller final designs as compared to the cases when the wiring impact is ignored.
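Simulated annealing, the optimizer used above, follows one generic template: accept every improving move and accept worsening moves with a probability that shrinks as the temperature cools. The sketch below shows that template on an arbitrary cost function; the cost model (here a toy quadratic in the test) is an assumption for illustration, not the paper's BIST area/wiring cost:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95, steps=500, seed=0):
    """Generic simulated annealing minimizer.

    cost: objective to minimize; neighbor(x, rng): random candidate move.
    Worsening moves are accepted with probability exp(-delta / t), so the
    search can escape local minima early and turns greedy as t decays.
    """
    rng = random.Random(seed)
    x, t = x0, t0
    best, best_cost = x0, cost(x0)
    for _ in range(steps):
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        t *= cooling
    return best, best_cost
```

In the BIST setting, a "state" would encode which registers are converted to BIST resources, and `neighbor` would toggle one such assignment; the same accept/cool loop applies unchanged.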
Reaction route graphs. III. Non-minimal kinetic mechanisms.
Fishtik, Ilie; Callaghan, Caitlin A; Datta, Ravindra
2005-02-24
The concept of reaction route (RR) graphs introduced recently by us for kinetic mechanisms that produce minimal graphs is extended to the problem of non-minimal kinetic mechanisms for the case of a single overall reaction (OR). A RR graph is said to be minimal if all of the stoichiometric numbers in all direct RRs of the mechanism are equal to +/-1 and non-minimal if at least one stoichiometric number in a direct RR is non-unity, e.g., equal to +/-2. For a given mechanism, four unique topological characteristics of RR graphs are defined and enumerated, namely, direct full routes (FRs), empty routes (ERs), intermediate nodes (INs), and terminal nodes (TNs). These are further utilized to construct the RR graphs. One algorithm involves viewing each IN as a central node in a RR sub-graph. As a result, the construction and enumeration of RR graphs are reduced to the problem of balancing the peripheral nodes in the RR sub-graphs according to the list of FRs, ERs, INs, and TNs. An alternate method involves using an independent set of RRs to draw the RR graph while satisfying the INs and TNs. Three examples are presented to illustrate the application of non-minimal RR graph theory.
Minimal lepton flavor violating realizations of minimal seesaw models
Sierra, Diego Aristizabal; Kamenik, Jernej F
2012-01-01
We study the implications of the global U(1)R symmetry present in minimal lepton flavor violating implementations of the seesaw mechanism for neutrino masses. In the context of minimal type I seesaw scenarios with a slightly broken U(1)R, we show that, depending on the R-charge assignments, two classes of generic models can be identified. Models where the right-handed neutrino masses and the lepton number breaking scale are decoupled, and models where the parameters that slightly break the U(1)R induce a suppression in the light neutrino mass matrix. We show that within the first class of models, contributions of right-handed neutrinos to charged lepton flavor violating processes are severely suppressed. Within the second class of models we study the charged lepton flavor violating phenomenology in detail, focusing on mu to e gamma, mu to 3e and mu to e conversion in nuclei. We show that sizable contributions to these processes are naturally obtained for right-handed neutrino masses at the TeV scale. We then ...
Szeliski, Richard; Zabih, Ramin; Scharstein, Daniel; Veksler, Olga; Kolmogorov, Vladimir; Agarwala, Aseem; Tappen, Marshall; Rother, Carsten
2008-06-01
Among the most exciting advances in early vision has been the development of efficient energy minimization algorithms for pixel-labeling tasks such as depth or texture computation. It has been known for decades that such problems can be elegantly expressed as Markov random fields, yet the resulting energy minimization problems have been widely viewed as intractable. Recently, algorithms such as graph cuts and loopy belief propagation (LBP) have proven to be very powerful: for example, such methods form the basis for almost all the top-performing stereo methods. However, the tradeoffs among different energy minimization algorithms are still not well understood. In this paper we describe a set of energy minimization benchmarks and use them to compare the solution quality and running time of several common energy minimization algorithms. We investigate three promising recent methods (graph cuts, LBP, and tree-reweighted message passing) in addition to the well-known older iterated conditional modes (ICM) algorithm. Our benchmark problems are drawn from published energy functions used for stereo, image stitching, interactive segmentation, and denoising. We also provide a general-purpose software interface that allows vision researchers to easily switch between optimization methods. Benchmarks, code, images, and results are available at http://vision.middlebury.edu/MRF/.
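Of the methods benchmarked, ICM is the simplest: sweep the pixels and greedily assign each one the label minimizing its local energy, holding the rest fixed. A minimal sketch for a binary denoising energy (squared data term plus a Potts smoothness term on the 4-connected grid) is given below; the particular energy is an illustrative assumption, not one of the paper's benchmark energies:

```python
import numpy as np

def icm_denoise(noisy, labels, lam=1.0, iters=5):
    """Iterated conditional modes on a 4-connected grid.

    Minimizes E(x) = sum_p (x_p - y_p)^2 + lam * sum_{p~q} [x_p != x_q]
    by coordinate descent: each pixel takes the label with lowest local cost.
    """
    x = noisy.copy()
    h, w = x.shape
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                costs = []
                for lab in labels:
                    data = (lab - noisy[i, j]) ** 2
                    smooth = sum(
                        lab != x[ni, nj]
                        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= ni < h and 0 <= nj < w
                    )
                    costs.append(data + lam * smooth)
                x[i, j] = labels[int(np.argmin(costs))]
    return x
```

Graph cuts and LBP attack the same energy globally rather than pixel-by-pixel, which is why they dominate ICM in the paper's comparison.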
Directory of Open Access Journals (Sweden)
He Cheng
2014-02-01
It is known that the single-machine preemptive scheduling problem of minimizing total completion time with release date and deadline constraints is NP-hard. Du and Leung solved some special cases by the generalized Baker's algorithm and the generalized Smith's algorithm in O(n²) time. In this paper we give an O(n²) algorithm for the special case where the processing times and deadlines are agreeable. Moreover, for the case where the processing times and deadlines are disagreeable, we present two properties which enable us to reduce the range of the enumeration algorithm.
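For intuition on the objective, recall the classical unconstrained case: with all jobs released at time 0 and no deadlines, the shortest-processing-time (SPT) order minimizes total completion time on a single machine. The sketch below shows that baseline rule only; the release-date/deadline variants treated in the paper require the more involved Baker- and Smith-style algorithms:

```python
def spt_total_completion(processing_times):
    """Total completion time under the SPT order (all jobs released at 0).

    Sorting by processing time means short jobs finish first, so each job's
    length is counted in fewer completion times, minimizing the sum.
    """
    t = 0
    total = 0
    for p in sorted(processing_times):
        t += p       # completion time of this job
        total += t   # accumulate sum of completion times
    return total
```

For example, jobs with processing times 3, 1, 2 are run in order 1, 2, 3, completing at times 1, 3, 6 for a total of 10; any other order gives a larger sum.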
Minimal families of curves on surfaces
Lubbes, Niels
2014-11-01
A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal families of a given surface. The classification of minimal families of curves can be reduced to the classification of minimal families which cover weak Del Pezzo surfaces. We classify the minimal families of weak Del Pezzo surfaces and present a table with the number of minimal families of each weak Del Pezzo surface up to Weyl equivalence. As an application of this classification we generalize some results of Schicho. We classify algebraic surfaces that carry a family of conics. We determine the minimal lexicographic degree for the parametrization of a surface that carries at least 2 minimal families. © 2014 Elsevier B.V.
Testing the accuracy of redshift space group finding algorithms
Frederic, J J
1994-01-01
Using simulated redshift surveys generated from a high resolution N-body cosmological structure simulation, we study algorithms used to identify groups of galaxies in redshift space. Two algorithms are investigated; both are friends-of-friends schemes with variable linking lengths in the radial and transverse dimensions. The chief difference between the algorithms is in the redshift linking length. The algorithm proposed by Huchra & Geller (1982) uses a generous linking length designed to find "fingers of god", while that of Nolthenius & White (1987) uses a smaller linking length to minimize contamination by projection. We find that neither of the algorithms studied is intrinsically superior to the other; rather, the ideal algorithm as well as the ideal algorithm parameters depend on the purpose for which groups are to be studied. The Huchra/Geller algorithm misses few real groups, at the cost of including some spurious groups and members, while the Nolthenius/White algorithm misses high velocity d...
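The core of any friends-of-friends scheme is the same: link pairs within the transverse and radial linking lengths, then take connected components as groups. The toy sketch below uses a flat two-coordinate model and a union-find structure; the actual survey algorithms scale their linking lengths with local density and work on the celestial sphere, which this deliberately omits:

```python
def friends_of_friends(points, link_transverse, link_radial):
    """Toy friends-of-friends grouping.

    points: list of (transverse, radial) coordinate pairs. Two points are
    friends if they lie within BOTH linking lengths; groups are the
    connected components of the friendship relation (via union-find).
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            (ti, ri), (tj, rj) = points[i], points[j]
            if abs(ti - tj) <= link_transverse and abs(ri - rj) <= link_radial:
                parent[find(i)] = find(j)  # merge the two components

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

The Huchra/Geller vs. Nolthenius/White tradeoff discussed above corresponds to choosing a larger or smaller `link_radial` in this picture: generous radial linking captures elongated "fingers of god" but admits projection interlopers.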
Linearized Functional Minimization for Inverse Modeling
Energy Technology Data Exchange (ETDEWEB)
Wohlberg, Brendt [Los Alamos National Laboratory; Tartakovsky, Daniel M. [University of California, San Diego; Dentz, Marco [Institute of Environmental Assessment and Water Research, Barcelona, Spain
2012-06-21
Heterogeneous aquifers typically consist of multiple lithofacies, whose spatial arrangement significantly affects flow and transport. The estimation of these lithofacies is complicated by the scarcity of data and by the lack of a clear correlation between identifiable geologic indicators and attributes. We introduce a new inverse-modeling approach to estimate both the spatial extent of hydrofacies and their properties from sparse measurements of hydraulic conductivity and hydraulic head. Our approach is to minimize a functional defined on the vectors of values of hydraulic conductivity and hydraulic head fields defined on regular grids at a user-determined resolution. This functional is constructed to (i) enforce the relationship between conductivity and heads provided by the groundwater flow equation, (ii) penalize deviations of the reconstructed fields from measurements where they are available, and (iii) penalize reconstructed fields that are not piece-wise smooth. We develop an iterative solver for this functional that exploits a local linearization of the mapping from conductivity to head. This approach provides a computationally efficient algorithm that rapidly converges to a solution. A series of numerical experiments demonstrates the robustness of our approach.
Locating Minimal Fault Interaction in Combinatorial Testing
Directory of Open Access Journals (Sweden)
Wei Zheng
2016-01-01
The combinatorial testing (CT) technique can significantly reduce testing cost and increase software system quality. By using a test suite generated by CT as input to conduct black-box testing of a system, we are able to detect interactions that trigger the system's faults. Given a test case, only some of its parameters may be relevant to the defects in the system, and the interaction constructed by those partial parameters is the key factor triggering a fault. If we can locate those parameters accurately, this will facilitate the software diagnosis and testing process. This paper proposes a novel algorithm named complete Fault Interaction Location (comFIL) to locate the interactions that cause a system's failures and meanwhile obtain the minimal set of target interactions in the test suite produced by CT. By applying this method, testers can analyze and locate the factors relevant to defects of the system more precisely, thus making the process of software testing and debugging easier and more efficient. The results of our empirical study indicate that comFIL performs better than known fault-location techniques in combinatorial testing because of its improved effectiveness and precision.
Microgrids: Energy management by loss minimization technique
Directory of Open Access Journals (Sweden)
A. K. Basu, S. Chowdhury, S.P. Chowdhury
2011-03-01
Energy management is a techno-economic issue which dictates, in the context of microgrids, how optimal investment on the technology front could bring optimal power quality and reliability (PQR) of supply to the consumers. Investment in distributed energy resources (DERs), with their connection to the utility grid at optimal locations and with optimal sizes, saves energy in the form of line-loss reduction. Line-loss reduction is an indirect benefit to the microgrid owner, who may recover it as an incentive from the utility. The present paper focuses on planning of optimal siting and sizing of DERs based on minimization of line loss. Optimal siting is done here by the loss sensitivity index (LSI) method and optimal sizing by differential evolution (DE) algorithms, which is in turn compared with the particle swarm optimization (PSO) technique. Studies are conducted on 6-bus and 14-bus radial networks under islanded mode of operation with an electric demand profile. Islanding helps planning of the DER capacity of a microgrid, which is self-sufficient to cater to its own consumers without the utility's support.
Zanetti, Stefano Paolo; Boeri, Luca; Gallioli, Andrea; Talso, Michele; Montanari, Emanuele
2017-01-01
Miniaturized percutaneous nephrolithotomy (mini-PCNL) has increased in popularity in recent years and is now widely used to bridge the therapeutic gap between conventional PCNL and less-invasive procedures such as shock wave lithotripsy (SWL) or flexible ureterorenoscopy (URS) for the treatment of renal stones. However, despite its minimally invasive nature, the superiority in terms of safety, as well as the similar efficacy of mini-PCNL compared to conventional procedures, is still under debate. The aim of this chapter is to present one of the most recent advancements in mini-PCNL: the Karl Storz "minimally invasive PCNL" (MIP). A literature search for original and review articles either published or e-published up to December 2016 was performed using Google and the PubMed database. Keywords included: minimally invasive PCNL; MIP. The retrieved articles were gathered and examined. The complete MIP set is composed of rigid metallic fiber-optic nephroscopes and metallic operating sheaths of different sizes, according to which the MIP is categorized into extra-small (XS), small (S), medium (M) and large (L). Dilation can be performed either in one step or with a progressive technique, as needed. The reusable devices of the MIP and the vacuum-cleaner effect make PCNL with this set a cheap procedure. The possibility to shift from a small to a larger instrument within the same set (Matrioska technique) makes MIP a very versatile technique suitable for the treatment of almost any stone. Studies in the literature have shown that MIP is as effective as conventional PCNL, with comparable rates of post-operative complications, independently of stone size. MIP does not represent a new technique, but rather a combination of the last ten years of PCNL improvements in a single system that can transversally cover all available techniques in the panorama of percutaneous stone treatment.
Minimal dispersion refractive index profiles.
Feit, M D
1979-09-01
The analogy between optics and quantum mechanics is exploited by considering a 2-D quantum system whose Schroedinger equation is closely related to the wave equation for light propagation in an optical fiber. From this viewpoint, Marcatili's condition for minimal-dispersion refractive-index profiles, and the Olshansky-Keck formula for rms pulse spreading in an alpha-profile fiber, may be derived without recourse to the WKB approximation. Besides affording physical insight into these results, the present approach points out a possible limitation in their application to real fibers.
Minimal sets of Reidemeister moves
Polyak, Michael
2009-01-01
It is well known that any two diagrams representing the same oriented link are related by a finite sequence of Reidemeister moves O1, O2 and O3. Depending on orientations of fragments involved in the moves, one may distinguish 4 different versions of each of the O1 and O2 moves, and 8 versions of the O3 move. We introduce a minimal generating set of oriented Reidemeister moves, which includes two moves of types O1 and O2, and only one move of type O3. We then consider other sets of moves and show that only a few of them generate all Reidemeister moves.
About the ZOOM minimization package
Energy Technology Data Exchange (ETDEWEB)
Fischler, M.; Sachs, D.; /Fermilab
2004-11-01
A new object-oriented Minimization package is available for distribution in the same manner as CLHEP. This package, designed for use in HEP applications, has all the capabilities of Minuit, but is a re-write from scratch, adhering to modern C++ design principles. A primary goal of this package is extensibility in several directions, so that its capabilities can be kept fresh with as little maintenance effort as possible. This package is distinguished by the priority that was assigned to C++ design issues, and the focus on producing an extensible system that will resist becoming obsolete.
Prepulse minimization in KALI-5000
Kumar, D. Durga Praveen; Mitra, S.; Senthil, K.; Sharma, Vishnu K.; Singh, S. K.; Roy, A.; Sharma, Archana; Nagesh, K. V.; Chakravarthy, D. P.
2009-07-01
A pulse power system (1 MV, 50 kA, and 100 ns) based on Marx generator and Blumlein pulse forming line has been built for generating high power microwaves. The Blumlein configuration poses a prepulse problem and hence the diode gap had to be increased to match the diode impedance to the Blumlein impedance during the main pulse. A simple method to eliminate prepulse voltage using a vacuum sparkgap and a resistor is given. Another fundamental approach of increasing the inductance of Marx generator to minimize the prepulse voltage is also presented. Experimental results for both of these configurations are given.
Risk minimization through portfolio replication
Ciliberti, S.; Mézard, M.
2007-05-01
We use a replica approach to deal with portfolio optimization problems. A given risk measure is minimized using empirical estimates of asset value correlations. We study the phase transition which happens when the time series is too short with respect to the size of the portfolio. We also study the noise sensitivity of portfolio allocation when this transition is approached. We consider explicitly the cases where the absolute deviation and the conditional value-at-risk are chosen as a risk measure. We show how the replica method can be used to study a wide range of risk measures, and to deal with various types of time series correlations, including realistic ones with volatility clustering.
Constructive Discrepancy Minimization by Walking on The Edges
Lovett, Shachar
2012-01-01
Minimizing the discrepancy of a set system is a fundamental problem in combinatorics. One of the cornerstones in this area is the celebrated six standard deviations result of Spencer (AMS 1985): In any system of n sets in a universe of size n, there always exists a coloring which achieves discrepancy 6\sqrt{n}. The original proof of Spencer was existential in nature, and did not give an efficient algorithm to find such a coloring. Recently, a breakthrough work of Bansal (FOCS 2010) gave an efficient algorithm which finds such a coloring. His algorithm was based on an SDP relaxation of the discrepancy problem and a clever rounding procedure. In this work we give a new randomized algorithm to find a coloring as in Spencer's result based on a restricted random walk we call "Edge-Walk". Our algorithm and its analysis use only basic linear algebra and are "truly" constructive in that they do not appeal to the existential arguments, giving a new proof of Spencer's theorem and the partial coloring lemma.
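Spencer's bound is easy to probe empirically. The sketch below is only an illustration of the discrepancy objective, not the Edge-Walk algorithm itself; the instance sizes and random seed are arbitrary choices.

```python
import random

def discrepancy(sets, coloring):
    # Discrepancy of a +/-1 coloring: max over sets of |sum of its colors|.
    return max(abs(sum(coloring[e] for e in s)) for s in sets)

random.seed(0)
n = 64  # arbitrary instance size
universe = list(range(n))
sets = [random.sample(universe, n // 2) for _ in range(n)]

# Best of a few random colorings, as a naive baseline.
best = min(
    discrepancy(sets, [random.choice([-1, 1]) for _ in range(n)])
    for _ in range(200)
)
# Spencer guarantees some coloring achieves discrepancy <= 6*sqrt(64) = 48;
# here every set has 32 elements, so no coloring can exceed 32 anyway.
```

Random colorings already land around sqrt(n log n) on such instances; the point of Spencer's theorem and of the constructive algorithms is the stronger O(sqrt(n)) guarantee.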
Evolutionary Graph Drawing Algorithms
Institute of Scientific and Technical Information of China (English)
Huang Jing-wei; Wei Wen-fang
2003-01-01
In this paper, graph drawing algorithms based on genetic algorithms are designed for general undirected graphs and directed graphs. As shown, graph drawing algorithms designed with genetic algorithms have the following advantages: the frameworks of the algorithms are unified, the method is simple, and different algorithms may be obtained by designing different objective functions, which enhances the reusability of the algorithms. Moreover, aesthetics or constraints may be added to satisfy different requirements.
Evolutionary algorithms for mobile ad hoc networks
Dorronsoro, Bernabé; Danoy, Grégoire; Pigné, Yoann; Bouvry, Pascal
2014-01-01
Describes how evolutionary algorithms (EAs) can be used to identify, model, and minimize day-to-day problems that arise for researchers in optimization and mobile networking. Mobile ad hoc networks (MANETs), vehicular networks (VANETs), sensor networks (SNs), and hybrid networks—each of these require a designer’s keen sense and knowledge of evolutionary algorithms in order to help with the common issues that plague professionals involved in optimization and mobile networking. This book introduces readers to both mobile ad hoc networks and evolutionary algorithms, presenting basic concepts as well as detailed descriptions of each. It demonstrates how metaheuristics and evolutionary algorithms (EAs) can be used to help provide low-cost operations in the optimization process—allowing designers to put some “intelligence” or sophistication into the design. It also offers efficient and accurate information on dissemination algorithms, topology management, and mobility models to address challenges in the ...
External-Memory Algorithms and Data Structures
DEFF Research Database (Denmark)
Arge, Lars; Zeh, Norbert
2010-01-01
The data sets involved in many modern applications are often too massive to fit in main memory of even the most powerful computers and must therefore reside on disk. Thus communication between internal and external memory, and not actual computation time, becomes the bottleneck in the computation. This is due to the huge difference in access time of fast internal memory and slower external memory such as disks. The goal of theoretical work in the area of external memory algorithms (also called I/O algorithms or out-of-core algorithms) has been to develop algorithms that minimize the Input/Output communication (or just I/O) performed when solving a given problem. The area was effectively started in the late eighties by Aggarwal and Vitter, and subsequently I/O algorithms have been developed for several problem domains. I/O performance can often be improved if many disks can efficiently be used...
MM Algorithms for Geometric and Signomial Programming.
Lange, Kenneth; Zhou, Hua
2014-02-01
This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
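The surrogate idea can be made concrete on a toy posynomial. The sketch below is an illustrative instance, not the paper's general algorithm: it minimizes f(x, y) = xy + 1/x + 1/y by majorizing the coupled term xy with the arithmetic-geometric mean inequality at the current iterate, which separates the variables and yields closed-form one-dimensional updates.

```python
def mm_minimize(x, y, iters=60):
    # MM for the posynomial f(x, y) = x*y + 1/x + 1/y (minimum 3 at x = y = 1).
    # At the iterate (xk, yk), AM-GM majorizes the coupled term:
    #   x*y <= (yk/(2*xk)) * x**2 + (xk/(2*yk)) * y**2,
    # so the surrogate separates into two 1-D problems, each with a
    # closed-form minimizer:
    #   argmin_x (yk/(2*xk)) x^2 + 1/x = (xk/yk)**(1/3), symmetrically for y.
    for _ in range(iters):
        x, y = (x / y) ** (1.0 / 3.0), (y / x) ** (1.0 / 3.0)
    return x, y

x_star, y_star = mm_minimize(2.0, 0.5)  # converges to (1, 1)
```

Each iteration decreases the surrogate and hence the objective; the coupled two-variable problem is reduced to a sequence of one-dimensional updates, exactly the mechanism the abstract describes.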
Bit Loading Algorithms for Cooperative OFDM Systems
Directory of Open Access Journals (Sweden)
Bo Gui
2007-12-01
We investigate the resource allocation problem for an OFDM cooperative network with a single source-destination pair and multiple relays. Assuming knowledge of the instantaneous channel gains for all links in the entire network, we propose several bit and power allocation schemes aiming at minimizing the total transmission power under a target rate constraint. First, an optimal and efficient bit loading algorithm is proposed when the relay node uses the same subchannel to relay the information transmitted by the source node. To further improve the performance gain, subchannel permutation, in which the subchannels are reallocated at relay nodes, is considered. An optimal subchannel permutation algorithm is first proposed and then an efficient suboptimal algorithm is considered to achieve a better complexity-performance tradeoff. A distributed bit loading algorithm is also proposed for ad hoc networks. Simulation results show that significant performance gains can be achieved by the proposed bit loading algorithms, especially when subchannel permutation is employed.
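The flavor of such bit loading can be sketched with the classical Hughes-Hartogs-style greedy algorithm, a textbook stand-in rather than the authors' exact scheme. It assumes the power needed to carry b bits on a subchannel of gain g is (2^b - 1)/g, and repeatedly adds one bit where the incremental power cost is smallest.

```python
import heapq

def greedy_bit_loading(gains, target_bits):
    # Greedy (Hughes-Hartogs-style) loading: to reach the target rate at
    # minimum total power, repeatedly grant one bit to the subchannel with
    # the smallest incremental power cost.
    bits = [0] * len(gains)

    def inc_cost(i):
        # Extra power to go from bits[i] to bits[i] + 1 on subchannel i.
        return (2 ** (bits[i] + 1) - 2 ** bits[i]) / gains[i]

    heap = [(inc_cost(i), i) for i in range(len(gains))]
    heapq.heapify(heap)
    for _ in range(target_bits):
        _, i = heapq.heappop(heap)
        bits[i] += 1
        heapq.heappush(heap, (inc_cost(i), i))
    total_power = sum((2 ** b - 1) / g for b, g in zip(bits, gains))
    return bits, total_power

# Two subchannels with unequal gains; load 4 bits in total.
bits, power = greedy_bit_loading([1.0, 0.25], 4)
```

Because the incremental costs are increasing in b, this greedy rule is optimal for the single-link discrete loading problem; the paper's contribution is extending such allocation to relays and subchannel permutation.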
Energy Technology Data Exchange (ETDEWEB)
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Risk minimization by experimental mechanics
Energy Technology Data Exchange (ETDEWEB)
1992-01-01
Experimental mechanics has to undertake new functions to confirm theoretical perceptions and to check results of numerical analysis, because such results depend on the validity of the assumptions and suppositions which are generally necessary to enable mathematical analysis of complex problems. Methods of experimental mechanics have become essential to design structures and products of a large variety, to save raw materials and energy, and to improve safety against failure and the reliability of products, structures and even complex technical systems. Experimental methods are applied in system identification of complex structures. They are also developed as tools for supervising operating systems, machinery and installations as well as engineering structures, in order to obtain reliable information on the lifetime, to guarantee a higher degree of safety and to minimize risks. Therefore, methods of experimental mechanics are quite essential in developing strategies of risk management. The contributions to this report deal with new theoretical perceptions, practical applications and experiences in line with the objectives of the international symposium on 'Risk Minimization by Experimental Mechanics'. (orig.).
Concentrator design to minimize LCOE
McDonald, Mark; Horne, Steve; Conley, Gary
2007-09-01
The Levelized Cost of Energy (LCOE) takes into account more than just the cost of power output. It encompasses product longevity, performance degradation and the costs associated with delivering energy to the grid tie point. Concentrator optical design is one of the key components to minimizing the LCOE, by affecting conversion efficiency, acceptance angle and the amount of energy concentrated on the receiver. Optical systems for concentrators, even those at high concentrations (>350X), can be designed by straightforward techniques, and will operate under most circumstances. Adding requirements for generous acceptance angles, non-destructive off-axis operation, safety and high efficiency, however, complicates the design. Furthermore, the demands of high volume manufacturing, efficient logistics, minimal field commissioning time and low cost lead to quite complicated, system level design trade-offs. The technology which we will discuss features an array of reflective optics, scaled to be fabricated by techniques used in the automotive industry. The design couples a two-element imaging system to a non-imaging total internal reflection tertiary in a very compact design, with generous tolerance margins. Several optical units are mounted in a housing, which protects the optics and assists with dissipating waste heat. This paper outlines the key elements in the design of SolFocus concentrator optics, and discusses tradeoffs and experience with various design approaches.
Neural Network Algorithm for Particle Loading
Energy Technology Data Exchange (ETDEWEB)
J. L. V. Lewandowski
2003-04-25
An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.
On the convergence of inexact Uzawa algorithms
Energy Technology Data Exchange (ETDEWEB)
Welfert, B.D. [Arizona State Univ., Tempe, AZ (United States)
1994-12-31
The author considers the solution of symmetric indefinite systems which can be cast in matrix block form, where the diagonal blocks A and C are symmetric positive definite and semi-definite, respectively. Systems of this type arise frequently in quadratic minimization problems, as well as in mixed finite element discretizations of fluid flow equations. The author uses the Uzawa algorithm to precondition the matrix equations.
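The (exact) Uzawa iteration alternates a solve with A and a dual ascent step on the constraint residual; the inexact variants analyzed in such work replace the exact A-solve with an approximation. A minimal sketch on a toy saddle-point system, with made-up data:

```python
def uzawa(A_inv_apply, B, f, g, omega=0.5, iters=200):
    # Exact Uzawa iteration for the saddle-point system
    #     A x + B lam = f,    B^T x = g,
    # with A symmetric positive definite (passed in as a solver for A).
    m = len(g)
    lam = [0.0] * m
    for _ in range(iters):
        # Primal step: solve A x = f - B lam.
        rhs = [f[i] - sum(B[i][j] * lam[j] for j in range(m))
               for i in range(len(f))]
        x = A_inv_apply(rhs)
        # Dual ascent on the constraint residual B^T x - g.
        lam = [lam[j] + omega * (sum(B[i][j] * x[i] for i in range(len(x))) - g[j])
               for j in range(m)]
    return x, lam

# Toy instance: A = 2*I, constraint x1 + x2 = 0, f = (1, 1).
# The exact solution is x = (0, 0), lam = 1.
x, lam = uzawa(lambda r: [v / 2.0 for v in r],
               B=[[1.0], [1.0]], f=[1.0, 1.0], g=[0.0])
```

For a fixed step size omega small enough relative to the Schur complement B^T A^{-1} B, the dual variable converges geometrically; the cited paper's concern is what survives of this when the inner A-solve is only approximate.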
An approximation algorithm for square packing
R. van Stee (Rob)
2004-01-01
We consider the problem of packing squares into bins which are unit squares, where the goal is to minimize the number of bins used. We present an algorithm for this problem with an absolute worst-case ratio of 2, which is optimal provided P != NP.
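A much simpler shelf heuristic conveys the setting; this is only a baseline illustration, not the paper's ratio-2 algorithm.

```python
def pack_squares(sides):
    # First-fit shelf heuristic for packing squares into unit bins:
    # sort squares by decreasing side, place each on the first shelf
    # with room, else open a new shelf, else open a new bin.
    bins = []  # each bin is a list of shelves [shelf_height, used_width]
    for s in sorted(sides, reverse=True):
        placed = False
        for shelves in bins:
            # Try an existing shelf that is tall and wide enough.
            for shelf in shelves:
                if shelf[0] >= s and shelf[1] + s <= 1.0:
                    shelf[1] += s
                    placed = True
                    break
            if placed:
                break
            # Otherwise try to open a new shelf in this bin.
            used_height = sum(sh[0] for sh in shelves)
            if used_height + s <= 1.0:
                shelves.append([s, s])
                placed = True
                break
        if not placed:
            bins.append([[s, s]])
    return len(bins)
```

Shelf heuristics like this are easy to analyze but give weaker constants; closing the gap to the absolute ratio of 2 is exactly what the cited algorithm achieves.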
Scheduling to Minimize Energy and Flow Time in Broadcast Scheduling
Moseley, Benjamin
2010-01-01
In this paper we initiate the study of minimizing power consumption in the broadcast scheduling model. In this setting there is a wireless transmitter. Over time requests arrive at the transmitter for pages of information. Multiple requests may be for the same page. When a page is transmitted, all requests for that page receive the transmission simultaneously. The speed at which the transmitter sends data can be dynamically scaled to conserve energy. We consider the problem of minimizing flow time plus energy, the most popular scheduling metric considered in the standard scheduling model when the scheduler is energy aware. We will assume that the power consumed is modeled by an arbitrary convex function. For this problem there is a $\Omega(n)$ lower bound. Due to the lower bound, we consider the resource augmentation model of Gupta \etal \cite{GuptaKP10}. Using resource augmentation, we give a scalable algorithm. Our result also gives a scalable non-clairvoyant algorithm for minimizing weighted flow time plus energ...
Optimal Path Planning for Minimizing Base Disturbance of Space Robot
Directory of Open Access Journals (Sweden)
Xiao-Peng Wei
2016-03-01
The path planning of free-floating space robots for on-orbit servicing has received increasing attention. The problem is complicated by the dynamic coupling between the space robot and its base. It is therefore necessary to minimize the disturbance to the base position and attitude when planning the path of a free-floating space robot, reducing the fuel consumed for position and attitude maintenance. In this paper, a reasonable path planning method to solve the problem is presented, which is feasible and relatively simple. First, the kinematic model of a 6-degree-of-freedom free-floating space robot is established. The joint angles are then parameterized using 7th-order polynomial sine functions. The fitness function is defined according to the objective of minimizing base position and attitude disturbance and the constraints of the space robot. Furthermore, an improved chaotic particle swarm optimization (ICPSO) is presented. The proposed algorithm is compared with the standard PSO and CPSO algorithms from the literature by experimental simulation. The simulation results demonstrate that the proposed algorithm is more effective than the two other approaches, for example in finding the optimal solution, and this method can provide a satisfactory path for the free-floating space robot.
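The baseline PSO loop such methods build on can be sketched as follows. This is a standard PSO on a toy sphere objective: the chaotic initialization and perturbation of ICPSO are not included, and all parameter values are conventional choices rather than the paper's.

```python
import random

def pso(f, dim, n_particles=20, iters=300, seed=1):
    # Minimal standard particle swarm optimization: each particle tracks
    # its personal best, the swarm tracks a global best, and velocities
    # blend inertia with attraction toward both bests.
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # conventional inertia/attraction weights
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, val = pso(lambda x: sum(v * v for v in x), dim=3)
```

In the path-planning setting, the decision variables would be the coefficients of the parameterized joint trajectories and f the base-disturbance fitness; the loop itself is unchanged.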
LENUS (Irish Health Repository)
Boyle, E
2008-11-01
Laparoscopic surgery for inflammatory bowel disease (IBD) is technically demanding but can offer improved short-term outcomes. The introduction of minimally invasive surgery (MIS) as the default operative approach for IBD, however, may have inherent learning curve-associated disadvantages. We hypothesise that the establishment of MIS as the standard operative approach does not increase patient morbidity as assessed in the initial period of its introduction into a specialised unit, and that it confers earlier postoperative gastrointestinal recovery and reduced hospitalisation compared with conventional open resection.
Integrating Genetic Algorithm, Tabu Search Approach for Job Shop Scheduling
Thamilselvan, R
2009-01-01
This paper presents a new algorithm that integrates Genetic Algorithms and Tabu Search to solve the Job Shop Scheduling problem. The idea of the proposed algorithm is derived from Genetic Algorithms. Most scheduling problems require either exponential time or space to generate an optimal answer. Job Shop Scheduling (JSS) is a general scheduling problem and is NP-complete, so finding an optimal solution is difficult. This paper applies Genetic Algorithms and Tabu Search to the Job Shop Scheduling problem and compares the results obtained by each. With our approach, the JSS problem reaches an optimal solution and the makespan is minimized.
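The tabu-search half of such a hybrid can be sketched on a toy single-machine problem; minimizing total weighted completion time here stands in for the job-shop makespan, and the tenure and iteration counts are arbitrary illustrative choices.

```python
from collections import deque

def tabu_search(proc, weights, iters=100, tenure=5):
    # Tabu search over job orders with an adjacent-swap neighborhood.
    # Recently used swaps are tabu; an aspiration rule admits a tabu
    # move whenever it improves on the best solution found so far.
    def cost(order):
        t, total = 0, 0
        for j in order:
            t += proc[j]
            total += weights[j] * t
        return total

    order = list(range(len(proc)))
    best, best_cost = order[:], cost(order)
    tabu = deque(maxlen=tenure)
    for _ in range(iters):
        candidates = []
        for i in range(len(order) - 1):
            new = order[:]
            new[i], new[i + 1] = new[i + 1], new[i]
            c = cost(new)
            move = (min(order[i], order[i + 1]), max(order[i], order[i + 1]))
            if move not in tabu or c < best_cost:  # aspiration criterion
                candidates.append((c, move, new))
        if not candidates:
            break
        c, move, order = min(candidates, key=lambda cand: cand[0])
        tabu.append(move)
        if c < best_cost:
            best, best_cost = order[:], c
    return best, best_cost
```

In the hybrid of the paper, a genetic algorithm supplies diverse starting schedules and a search of this shape intensifies around them; here the neighborhood and objective are deliberately simplified.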
A modified multitarget adaptive array algorithm for wireless CDMA system.
Liu, Yun-hui; Yang, Yu-hang
2004-11-01
The paper presents a modified least squares despread respread multitarget constant modulus algorithm (LS-DRMTCMA). The cost function of the original algorithm was modified by the minimum bit error rate (MBER) criterion. The novel algorithm tries to optimize weight vectors by directly minimizing the bit error rate (BER) of a code division multiple access (CDMA) mobile communication system. In order to achieve adaptive updating of the weight vectors, a stochastic gradient adaptive algorithm was developed using a kernel density estimator of the probability density function based on samples. Simulation results showed that the modified algorithm remarkably improves the BER performance, capacity and near-far effect resistance of a given CDMA communication system.
Davidon's optimally conditioned algorithms for unconstrained optimization
Energy Technology Data Exchange (ETDEWEB)
Nazareth, L.
1976-01-01
Recently, Davidon (Math. Prog., 9, 1-30) has published some new and very promising algorithms for minimizing unconstrained functionals. A particular perspective on these algorithms is presented, and extensions of some of the theory underlying them are developed in this paper.
A fast algorithm for nonnegative matrix factorization and its convergence.
Li, Li-Xin; Wu, Lin; Zhang, Hui-Sheng; Wu, Fang-Xiang
2014-10-01
Nonnegative matrix factorization (NMF) has recently become a very popular unsupervised learning method because of its representational properties of factors and simple multiplicative update algorithms for solving the NMF. However, for the common NMF approach of minimizing the Euclidean distance between approximate and true values, the convergence of multiplicative update algorithms has not been well resolved. This paper first discusses the convergence of existing multiplicative update algorithms. We then propose a new multiplicative update algorithm for minimizing the Euclidean distance between approximate and true values. Based on the optimization principle and the auxiliary function method, we prove that our new algorithm not only converges to a stationary point, but also does so faster than existing ones. To verify our theoretical results, experiments on three data sets have been conducted by comparing our proposed algorithm with other existing methods.
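The multiplicative updates in question are the classical Lee-Seung rules for the Euclidean cost, which serve as the baseline the paper analyzes; the authors' faster variant is not reproduced here. A minimal pure-Python sketch:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, k, iters=500, seed=0, eps=1e-9):
    # Lee-Seung multiplicative updates for min ||V - W H||_F^2 with
    # nonnegative factors: each entry is rescaled by a ratio of the
    # gradient's positive and negative parts, so nonnegativity is kept
    # automatically and the objective never increases.
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(k)]
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(n)]
    return W, H

V = [[1, 2, 3], [2, 4, 6]]  # rank-1, so k = 1 can fit it exactly
W, H = nmf(V, k=1)
R = matmul(W, H)
err = sum((V[i][j] - R[i][j]) ** 2 for i in range(2) for j in range(3))
```

Monotone descent of these updates is classical; the open question the paper addresses is convergence of the iterates to a stationary point, and how to accelerate it.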
Rituximab in Minimal Change Disease
Directory of Open Access Journals (Sweden)
Nima Madanchi
2017-03-01
Treatment with rituximab, a monoclonal antibody against the B-lymphocyte surface protein CD20, leads to the depletion of B cells. Recently, rituximab was reported to effectively prevent relapses of glucocorticoid-dependent or frequently relapsing minimal change disease (MCD). MCD is thought to be T-cell mediated; how rituximab controls MCD is not understood. In this review, we summarize key clinical studies demonstrating the efficacy of rituximab in idiopathic nephrotic syndrome, mainly MCD. We then discuss immunological features of this disease and potential mechanisms of action of rituximab in its treatment based on what is known about the therapeutic action of rituximab in other immune-mediated disorders. We believe that studies aimed at understanding the mechanisms of action of rituximab in MCD will provide a novel approach to resolve the elusive immune pathophysiology of MCD.
Minimally packed phases in holography
Donos, Aristomenis
2015-01-01
We numerically construct asymptotically AdS black brane solutions of $D=4$ Einstein-Maxwell theory coupled to a pseudoscalar. The solutions are holographically dual to $d=3$ CFTs held at constant chemical potential and magnetic field that spontaneously break translation invariance leading to the spontaneous formation of abelian and momentum magnetisation currents flowing around the plaquettes of a periodic Bravais lattice. We analyse the three-dimensional moduli space of lattice solutions, which are generically oblique, and show that the free energy is minimised by the triangular lattice, associated with minimal packing of circles in the plane. The triangular structure persists at low temperatures indicating the existence of novel crystalline ground states.
A minimally invasive smile enhancement.
Peck, Fred H
2014-01-01
Minimally invasive dentistry refers to a wide variety of dental treatments. On the restorative aspect of dental procedures, direct resin bonding can be a very conservative treatment option for the patient. When tooth structure does not need to be removed, the patient benefits. Proper treatment planning is essential to determine how conservative the restorative treatment will be. This article describes the diagnosis, treatment options, and procedural techniques in the restoration of 4 maxillary anterior teeth with direct composite resin. The procedural steps are reviewed with regard to placing the composite and the variety of colors needed to ensure a natural result. Finishing and polishing of the composite are critical to ending with a natural looking dentition that the patient will be pleased with for many years.
Determining Knots by Minimizing Energy
Institute of Scientific and Technical Information of China (English)
Cai-Ming Zhang; Hui-Jian Han; Fuhua Frank Cheng
2006-01-01
A new method for determining knots to construct polynomial curves is presented. At each data point, a quadratic curve which passes through three consecutive points is constructed. The knots for constructing the quadratic curve are determined by minimizing the internal strain energy, which can be regarded as a function of the angle. The function of the angle is expanded as a Taylor series with two terms, and the two knot intervals between the three consecutive points are then defined by a linear expression. Between two consecutive points there are two knot intervals, and the combination of the two knot intervals is used to define the final knot interval. A comparison of the new method with several existing methods is included.
[MINIMALLY INVASIVE AORTIC VALVE REPLACEMENT].
Tabata, Minoru
2016-03-01
Minimally invasive aortic valve replacement (MIAVR) is defined as aortic valve replacement avoiding full sternotomy. Common approaches include a partial sternotomy, right thoracotomy, and a parasternal approach. MIAVR has been shown to have advantages over conventional AVR such as shorter length of stay, smaller amount of blood transfusion, and better cosmesis. However, it is also known to have disadvantages such as longer cardiopulmonary bypass and aortic cross-clamp times and potential complications related to peripheral cannulation. Appropriate patient selection is very important. Since the procedure is more complex than conventional AVR, more intensive teamwork in the operating room is essential. Additionally, a team approach during postoperative management is critical to maximize the benefits of MIAVR.
Minimally invasive aortic valve replacement
DEFF Research Database (Denmark)
Foghsgaard, Signe; Schmidt, Thomas Andersen; Kjaergard, Henrik K
2009-01-01
In this descriptive prospective study, we evaluate the outcomes of surgery in 98 patients who were scheduled to undergo minimally invasive aortic valve replacement. These patients were compared with a group of 50 patients who underwent scheduled aortic valve replacement through a full sternotomy. The 30-day mortality rate for the 98 patients was zero, although 14 of the 98 mini-sternotomies had to be converted to complete sternotomies intraoperatively due to technical problems. Such conversion doubled the operative time over that of the planned full sternotomies. In the group of patients whose... Minimally invasive aortic valve replacement is an excellent operation in selected patients, but its true advantages over conventional aortic valve replacement (other than a smaller scar) await evaluation by means of a randomized clinical trial. The "extended mini-aortic valve replacement" operation, on the other hand, is a risky procedure that should...
Rapisarda, P.; Trentelman, H.L.; Minh, H.B.
2013-01-01
We illustrate an algorithm that, starting from the image representation of a strictly bounded-real system, computes a minimal balanced state variable, from which a minimal balanced state realization is readily obtained. The algorithm stems from an iterative procedure to compute a storage function, bas...
Constrained multiobjective biogeography optimization algorithm.
Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping
2014-01-01
Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on Pareto front. Infeasible individuals nearby feasible region are evolved to feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved by using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems and experimental results show that the CMBOA performs better than or similar to the classical NSGA-II and IS-MOEA.
The environmental cost of subsistence: Optimizing diets to minimize footprints.
Gephart, Jessica A; Davis, Kyle F; Emery, Kyle A; Leach, Allison M; Galloway, James N; Pace, Michael L
2016-05-15
The question of how to minimize monetary cost while meeting basic nutrient requirements (a subsistence diet) was posed by George Stigler in 1945. The problem, known as Stigler's diet problem, was famously solved using the simplex algorithm. Today, we are not only concerned with the monetary cost of food, but also the environmental cost. Efforts to quantify environmental impacts led to the development of footprint (FP) indicators. The environmental footprints of food production span multiple dimensions, including greenhouse gas emissions (carbon footprint), nitrogen release (nitrogen footprint), water use (blue and green water footprint) and land use (land footprint), and a diet minimizing one of these impacts could result in higher impacts in another dimension. In this study, based on nutritional and population data for the United States, we identify diets that minimize each of these four footprints subject to nutrient constraints. We then calculate tradeoffs by taking the composition of each footprint's minimum diet and calculating the other three footprints. We find that diets for the minimized footprints tend to be similar for the four footprints, suggesting there are generally synergies, rather than tradeoffs, among low footprint diets. Plant-based food and seafood (fish and other aquatic foods) commonly appear in minimized diets and tend to most efficiently supply macronutrients and micronutrients, respectively. Livestock products rarely appear in minimized diets, suggesting these foods tend to be less efficient from an environmental perspective, even when nutrient content is considered. The results' emphasis on seafood is complicated by the differing environmental impacts of aquaculture versus capture fisheries, the growing share of aquaculture, and the shifting composition of aquaculture feeds. While this analysis does not make specific diet recommendations, our approach demonstrates potential environmental synergies of plant- and seafood-based diets. As a result, this study
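The footprint-minimizing diets above are found by linear programming, just as in Stigler's original problem. The toy sketch below (all food and nutrient figures are hypothetical, with a footprint playing the role of the cost) solves a two-food instance by enumerating constraint-boundary intersections, which is where a bounded feasible LP attains its optimum:

```python
from itertools import combinations

# Hypothetical data (illustrative only): two foods, two nutrients.
# footprint[i]   = environmental cost per serving of food i
# nutrient[j][i] = amount of nutrient j per serving of food i
# minimum[j]     = required amount of nutrient j
footprint = [2.0, 3.0]          # e.g. kg CO2-eq per serving
nutrient = [[1.0, 3.0],         # "protein" per serving
            [4.0, 1.0]]         # "vitamin" per serving
minimum = [6.0, 8.0]

def solve_2d_diet(cost, A, b):
    """Minimize cost . x  s.t.  A x >= b, x >= 0, for 2 variables.
    The optimum of a feasible bounded LP lies at a vertex, i.e. the
    intersection of two constraint boundaries (including the axes)."""
    # Boundary lines as (a1, a2, rhs): a1*x1 + a2*x2 = rhs
    lines = [(row[0], row[1], bi) for row, bi in zip(A, b)]
    lines += [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # the axes x1=0, x2=0
    best = None
    for (a1, a2, r1), (b1, b2, r2) in combinations(lines, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel lines, no vertex
        x = ((r1 * b2 - r2 * a2) / det, (a1 * r2 - b1 * r1) / det)
        if x[0] < -1e-9 or x[1] < -1e-9:
            continue  # violates nonnegativity
        if any(row[0]*x[0] + row[1]*x[1] < bi - 1e-9 for row, bi in zip(A, b)):
            continue  # violates a nutrient minimum
        c = cost[0]*x[0] + cost[1]*x[1]
        if best is None or c < best[0]:
            best = (c, x)
    return best

best = solve_2d_diet(footprint, nutrient, minimum)
print(best)  # (minimal footprint, optimal servings of each food)
```

For problems with many foods and nutrients one would use a real LP solver, but the vertex-enumeration view makes the structure of the diet problem explicit.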
A TRUST-REGION ALGORITHM FOR NONLINEAR INEQUALITY CONSTRAINED OPTIMIZATION
Institute of Scientific and Technical Information of China (English)
Xiaojiao Tong; Shuzi Zhou
2003-01-01
This paper presents a new trust-region algorithm for n-dimensional nonlinear optimization subject to m nonlinear inequality constraints. Equivalent KKT conditions are derived, which form the basis for constructing the new algorithm. Global convergence of the algorithm to a first-order KKT point is established under mild conditions on the trial steps, and a local quadratic convergence theorem is proved for nondegenerate minimizers. Numerical experiments are presented to show the effectiveness of our approach.
Mathematical Framework for A Novel Database Replication Algorithm
Directory of Open Access Journals (Sweden)
Divakar Singh Yadav
2013-10-01
Full Text Available In this paper, a detailed overview of database replication is presented. Thereafter, the recently published PDDRA (pre-fetching based dynamic data replication algorithm) is detailed. Further modifications to this algorithm are suggested to minimize the delay in data replication. Finally, a mathematical framework is presented to evaluate the mean waiting time before data can be replicated on the requested site.
Algorithm for Compressing Time-Series Data
Hawkins, S. Edward, III; Darlington, Edward Hugo
2012-01-01
An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
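The block-fitting scheme described above can be sketched in a few lines. The block length and degree below are arbitrary illustrative choices, and `numpy`'s least-squares Chebyshev fit stands in for the article's fitting procedure (a least-squares Chebyshev fit is close to, but not exactly, the min-max fit):

```python
import numpy as np

# Sketch of Chebyshev-based block compression of a 1-D data stream.
def compress_block(block, degree):
    """Fit a Chebyshev series to one fitting interval; the coefficients
    are the compressed representation of the block."""
    x = np.linspace(-1.0, 1.0, len(block))  # map samples onto [-1, 1]
    return np.polynomial.chebyshev.chebfit(x, block, degree)

def decompress_block(coeffs, n):
    """Evaluate the Chebyshev series to reconstruct n samples."""
    x = np.linspace(-1.0, 1.0, n)
    return np.polynomial.chebyshev.chebval(x, coeffs)

# A smooth synthetic "instrument" signal: 64 samples per fitting
# interval, compressed to 8 coefficients (a factor-of-8 reduction).
t = np.linspace(0.0, 1.0, 64)
signal = np.sin(2 * np.pi * t) + 0.3 * t**2
coeffs = compress_block(signal, degree=7)
recon = decompress_block(coeffs, len(signal))
print(len(coeffs), np.max(np.abs(recon - signal)))
```

Because Chebyshev coefficients of smooth data decay rapidly, a low-degree series reproduces the block to within a small, nearly uniform error, which is exactly the property the algorithm exploits.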
Algorithm Animation with Galant.
Stallmann, Matthias F
2017-01-01
Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.
Stable Approximations of a Minimal Surface Problem with Variational Inequalities
Directory of Open Access Journals (Sweden)
M. Zuhair Nashed
1997-01-01
Full Text Available In this paper we develop a new approach for the stable approximation of a minimal surface problem associated with a relaxed Dirichlet problem in the space BV(Ω) of functions of bounded variation. The problem can be reformulated as an unconstrained minimization problem for a functional on BV(Ω) defined by J(u) = A(u) + ∫∂Ω |Tu − ϕ|, where A(u) is the "area integral" of u with respect to Ω, T is the "trace operator" from BV(Ω) into L¹(∂Ω), and ϕ is the prescribed data on the boundary of Ω. We establish convergence and stability of approximate regularized solutions, which are solutions of a family of variational inequalities. We also prove convergence of an iterative method based on Uzawa's algorithm for implementation of our regularization procedure.
Optimal experiment design revisited: fair, precise and minimal tomography
Nunn, J; Puentes, G; Lundeen, J S; Walmsley, I A
2009-01-01
Given an experimental set-up and a fixed number of measurements, how should one take data in order to optimally reconstruct the state of a quantum system? The problem of optimal experiment design (OED) for quantum state tomography was first broached by Kosut et al. [arXiv:quant-ph/0411093v1]. Here we provide efficient numerical algorithms for finding the optimal design, and analytic results for the case of 'minimal tomography'. We also introduce the average OED, which is independent of the state to be reconstructed, and the optimal design for tomography (ODT), which minimizes tomographic bias. We find that these two designs are generally similar. Monte-Carlo simulations confirm the utility of our results for qubits. Finally, we adapt our approach to deal with constrained techniques such as maximum likelihood estimation. We find that these are less amenable to optimization than cruder reconstruction methods, such as linear inversion.
Redundancy of minimal weight expansions in Pisot bases
Grabner, Peter J
2011-01-01
Motivated by multiplication algorithms based on redundant number representations, we study representations of an integer $n$ as a sum $n=\sum_k \epsilon_k U_k$, where the digits $\epsilon_k$ are taken from a finite alphabet $\Sigma$ and $(U_k)_k$ is a linear recurrent sequence of Pisot type with $U_0=1$. The most prominent example of a base sequence $(U_k)_k$ is the sequence of Fibonacci numbers. We prove that the representations of minimal weight $\sum_k|\epsilon_k|$ are recognised by a finite automaton and obtain an asymptotic formula for the average number of representations of minimal weight. Furthermore, we relate the maximal order of magnitude of the number of representations of a given integer to the joint spectral radius of a certain set of matrices.
Redundancy of minimal weight expansions in Pisot bases.
Grabner, Peter J; Steiner, Wolfgang
2011-10-21
Motivated by multiplication algorithms based on redundant number representations, we study representations of an integer n as a sum n=∑kεkUk, where the digits εk are taken from a finite alphabet Σ and (Uk)k is a linear recurrent sequence of Pisot type with U0=1. The most prominent example of a base sequence (Uk)k is the sequence of Fibonacci numbers. We prove that the representations of minimal weight ∑k|εk| are recognised by a finite automaton and obtain an asymptotic formula for the average number of representations of minimal weight. Furthermore, we relate the maximal number of representations of a given integer to the joint spectral radius of a certain set of matrices.
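For small integers, minimal-weight representations in the Fibonacci base can be found by brute force and compared against the greedy (Zeckendorf) representation. The sketch below uses digits in {−1, 0, 1} and a short base sequence purely for illustration:

```python
from itertools import product

# Fibonacci-type base sequence U_0=1, U_1=2, U_2=3, ... (Pisot type)
U = [1, 2]
while len(U) < 10:
    U.append(U[-1] + U[-2])

def min_weight(n):
    """Minimal weight sum(|eps_k|) over representations n = sum eps_k U_k
    with digits eps_k in {-1, 0, 1}, found by exhaustive search (only
    practical for the short base sequence used here)."""
    best = None
    for digits in product((-1, 0, 1), repeat=len(U)):
        if sum(e * u for e, u in zip(digits, U)) == n:
            w = sum(abs(e) for e in digits)
            if best is None or w < best:
                best = w
    return best

def zeckendorf_weight(n):
    """Weight of the greedy representation using digits 0/1 only."""
    w = 0
    for u in reversed(U):
        if u <= n:
            n -= u
            w += 1
    return w

for n in (4, 12, 16):
    print(n, min_weight(n), zeckendorf_weight(n))
```

For n = 12 the greedy expansion 8+3+1 has weight 3, while the signed expansion 13−1 has weight 2, which is the kind of saving that motivates redundant digit sets in multiplication algorithms.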
A convergent overlapping domain decomposition method for total variation minimization
Fornasier, Massimo
2010-06-22
In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. To our knowledge, this is the first successful attempt of addressing such a strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems, respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles. © 2010 Springer-Verlag.
Learn with SAT to Minimize Büchi Automata
Directory of Open Access Journals (Sweden)
Stephan Barth
2012-10-01
Full Text Available We describe a minimization procedure for nondeterministic Büchi automata (NBA). For an automaton A, another automaton A_min with the minimal number of states is learned with the help of a SAT solver. This is done by successively computing automata A' that approximate A in the sense that they accept a given finite set of positive examples and reject a given finite set of negative examples. In the course of the procedure these example sets are successively increased. Thus, our method can be seen as an instance of a generic learning algorithm based on a "minimally adequate teacher" in the sense of Angluin. We use a SAT solver to find an NBA for given sets of positive and negative examples. We use complementation via construction of deterministic parity automata to check candidates computed in this manner for equivalence with A. Failure of equivalence yields new positive or negative examples. Our method proved successful on complete samplings of small automata and on a good number of larger examples. We successfully ran the minimization on over ten thousand automata with mostly up to ten states, including the complements of all possible automata with two states and alphabet size three, and discuss results and runtimes; single examples had over 100 states.
Minimizing communication cost among distributed controllers in software defined networks
Arlimatti, Shivaleela; Elbreiki, Walid; Hassan, Suhaidi; Habbal, Adib; Elshaikh, Mohamed
2016-08-01
Software Defined Networking (SDN) is a new paradigm that increases the flexibility of today's networks by promising a programmable network. The fundamental idea behind this new architecture is to simplify network complexity by decoupling the control plane and data plane of the network devices, and by making the control plane centralized. Recently, controllers have been distributed to solve the problem of a single point of failure, and to increase scalability and flexibility during workload distribution. Even though controllers are flexible and scalable enough to accommodate a larger number of network switches, the intercommunication cost between distributed controllers is still a challenging issue in the Software Defined Network environment. This paper aims to fill the gap by proposing a new mechanism that minimizes intercommunication cost using a graph partitioning algorithm, an NP-hard problem. The methodology proposed in this paper swaps network elements between controller domains to minimize communication cost by calculating communication gain. The swapping of elements minimizes inter- and intra-domain communication cost among network domains. We validate our work with the OMNeT++ simulation environment. Simulation results show that the proposed mechanism minimizes the inter-domain communication cost among controllers compared to traditional distributed controllers.
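The gain-based swapping idea can be sketched as a simple local-search loop. Node names, traffic weights, and the single-move variant below are illustrative simplifications of the proposed mechanism:

```python
# Sketch: switches are graph nodes, edge weights are communication
# volumes, and each controller owns one side of a two-way partition.
# Moving a node to the other domain changes the inter-domain cost by
# its "gain" = external traffic - internal traffic.
def cut_cost(edges, side):
    """Total communication volume crossing the domain boundary."""
    return sum(w for (u, v), w in edges.items() if side[u] != side[v])

def gain(edges, side, node):
    """Reduction of inter-domain cost if `node` switches domains."""
    ext = sum(w for (u, v), w in edges.items()
              if node in (u, v) and side[u] != side[v])
    internal = sum(w for (u, v), w in edges.items()
                   if node in (u, v) and side[u] == side[v])
    return ext - internal

def minimize_interdomain(edges, side):
    """Greedily apply positive-gain moves until none remain."""
    improved = True
    while improved:
        improved = False
        for node in list(side):
            if gain(edges, side, node) > 0:
                side[node] = 1 - side[node]  # move to the other domain
                improved = True
    return side

edges = {("a", "b"): 5, ("b", "c"): 1, ("c", "d"): 4, ("a", "c"): 1}
side = {"a": 0, "b": 1, "c": 1, "d": 0}   # initial controller assignment
side = minimize_interdomain(edges, side)
print(cut_cost(edges, side))
```

Each accepted move strictly lowers the cut cost, so the loop terminates; a production partitioner would also balance domain sizes, which this sketch ignores.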
Pnev, A B; Dvoretskiy, D A; Zhirnov, A A; Nesterov, E T; Sazonkin, S G; Chernutsky, A O; Shelestov, D A; Fedorov, A K; Svelto, C; Karasik, V E
2016-01-01
We propose a novel scheme for laser phase noise measurements with minimized sensitivity to external fluctuations including interferometer vibration, temperature instability, other low-frequency noise, and relative intensity noise. In order to minimize the effect of these external fluctuations, we employ simultaneous measurement of two spectrally separated channels in the scheme. We present an algorithm for selection of the desired signal to extract the phase noise. Experimental results demonstrate potential of the suggested scheme for a wide range of technological applications.
Liu, Taoming; Çavuşoğlu, M. Cenk
2015-01-01
This paper presents algorithms for optimal selection of needle grasp, for autonomous robotic execution of the minimally invasive surgical suturing task. In order to minimize the tissue trauma during the suturing motion, the best practices of needle path planning that are used by surgeons are applied for autonomous robotic surgical suturing tasks. Once an optimal needle trajectory in a well-defined suturing scenario is chosen, another critical issue for suturing is the choice of needle grasp f...
F[x]-lattice basis reduction algorithm and multisequence synthesis
Institute of Scientific and Technical Information of China (English)
王丽萍; 祝跃飞
2001-01-01
By means of the F[x]-lattice basis reduction algorithm, a new algorithm is presented for synthesizing minimum-length linear feedback shift registers (or minimal polynomials) for given multiple sequences over a field F. Its computational complexity is O(N²) operations in F, where N is the length of each sequence. A necessary and sufficient condition for the uniqueness of minimal polynomials is given. The set and exact number of all minimal polynomials are also described when F is a finite field.
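As a point of reference for the multisequence synthesis problem, the classical single-sequence case over GF(2) is solved by the Berlekamp–Massey algorithm, sketched below (this is the standard algorithm, not the F[x]-lattice method of the paper):

```python
def berlekamp_massey(s):
    """Return the linear complexity (length of the shortest LFSR)
    of a binary sequence s over GF(2)."""
    n = len(s)
    c = [0] * n  # current connection polynomial, c[0] = 1
    b = [0] * n  # previous connection polynomial before last length change
    c[0] = b[0] = 1
    L, m = 0, -1  # current LFSR length, index of last length change
    for i in range(n):
        # discrepancy between s[i] and the current LFSR's prediction
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]  # c(x) += x^shift * b(x)
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# A sequence obeying s[i] = s[i-1] XOR s[i-2] has linear complexity 2.
print(berlekamp_massey([0, 1, 1, 0, 1, 1, 0, 1]))
```

The multisequence algorithm of the abstract solves the analogous problem for several sequences simultaneously, at O(N²) cost per synthesis.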
Speech/Nonspeech Detection Using Minimal Walsh Basis Functions
Directory of Open Access Journals (Sweden)
Pwint Moe
2007-01-01
Full Text Available This paper presents a new method to detect speech/nonspeech components of a given noisy signal. Employing the combination of binary Walsh basis functions and an analysis-synthesis scheme, the original noisy speech signal is modified first. From the modified signals, the speech components are distinguished from the nonspeech components by using a simple decision scheme. The minimal number of Walsh basis functions to be applied is determined using singular value decomposition (SVD). The main advantages of the proposed method are low computational complexity, fewer parameters to be adjusted, and simple implementation. It is observed that the use of Walsh basis functions makes the proposed algorithm efficiently applicable in real-world situations where processing time is crucial. Simulation results indicate that the proposed algorithm achieves high speech and nonspeech detection rates while maintaining a low error rate under different noise conditions.
Rationally reduced libraries for combinatorial pathway optimization minimizing experimental effort.
Jeschek, Markus; Gerngross, Daniel; Panke, Sven
2016-01-01
Rational flux design in metabolic engineering approaches remains difficult since important pathway information is frequently not available. Therefore empirical methods are applied that randomly change absolute and relative pathway enzyme levels and subsequently screen for variants with improved performance. However, screening is often limited on the analytical side, generating a strong incentive to construct small but smart libraries. Here we introduce RedLibs (Reduced Libraries), an algorithm that allows for the rational design of smart combinatorial libraries for pathway optimization thereby minimizing the use of experimental resources. We demonstrate the utility of RedLibs for the design of ribosome-binding site libraries by in silico and in vivo screening with fluorescent proteins and perform a simple two-step optimization of the product selectivity in the branched multistep pathway for violacein biosynthesis, indicating a general applicability for the algorithm and the proposed heuristics. We expect that RedLibs will substantially simplify the refactoring of synthetic metabolic pathways.
Minimizing Flow Time in the Wireless Gathering Problem
Bonifaci, Vincenzo; Marchetti-Spaccamela, Alberto; Stougie, Leen
2008-01-01
We address the problem of efficient data gathering in a wireless network through multi-hop communication. We focus on the objective of minimizing the maximum flow time of a data packet. We prove that no polynomial time algorithm for this problem can have approximation ratio less than $\Omega(m^{1/3})$ when $m$ packets have to be transmitted, unless $P = NP$. We then use resource augmentation to assess the performance of a FIFO-like strategy. We prove that this strategy is 5-speed optimal, i.e., its cost remains within the optimal cost if we allow the algorithm to transmit data at a speed 5 times higher than that of the optimal solution we compare to.
Communication Synthesis for Interconnect Minimization in Multicycle Communication Architecture
Huang, Ya-Shih; Hong, Yu-Ju; Huang, Juinn-Dar
In deep-submicron technology, several state-of-the-art architectural synthesis flows have already adopted the distributed register architecture to cope with increasing wire delay by allowing multicycle communication. In this article, we regard communication synthesis targeting a refined regular distributed register architecture, named RDR-GRS, as a problem of simultaneous data-transfer routing and scheduling for global interconnect resource minimization. We also present an innovative algorithm that takes both spatial and temporal perspectives into account. It features both a concentration-oriented path router gathering wire-sharable data transfers and a channel-based time scheduler resolving contentions for wires in a channel, operating in the spatial and temporal domains, respectively. The experimental results show that the proposed algorithm can significantly outperform existing related works.
Reasoning about Minimal Belief and Negation as Failure
Rosati, R
2011-01-01
We investigate the problem of reasoning in the propositional fragment of MBNF, the logic of minimal belief and negation as failure introduced by Lifschitz, which can be considered as a unifying framework for several nonmonotonic formalisms, including default logic, autoepistemic logic, circumscription, epistemic queries, and logic programming. We characterize the complexity and provide algorithms for reasoning in propositional MBNF. In particular, we show that entailment in propositional MBNF lies at the third level of the polynomial hierarchy, hence it is harder than reasoning in all the above mentioned propositional formalisms for nonmonotonic reasoning. We also prove the exact correspondence between negation as failure in MBNF and negative introspection in Moore's autoepistemic logic.
MINIMIZING JOB SHOP INVENTORY WITH ON-TIME DELIVERY GUARANTEES
Institute of Scientific and Technical Information of China (English)
Leyuan SHI; Yunpeng PAN
2003-01-01
In this paper, we introduce a new job shop model that minimizes a well-motivated inventory measure while assuring on-time job deliveries. For this new problem, we introduce precise notation and formalization. A decomposition scheme is discussed in detail, which is subsequently utilized in a new shifting bottleneck procedure (SBP) for the problem. In addition to SBP, we propose another heuristic method based on successive insertion of operations. Algorithms are fine tuned through experimentation. Moreover, the two heuristic procedures are compared in terms of computation time and solution quality, using disguised actual factory data.
ONLINE MINIMIZATION OF VERTICAL BEAM SIZES AT APS
Energy Technology Data Exchange (ETDEWEB)
Sun, Yipeng
2017-06-25
In this paper, online minimization of vertical beam sizes along the APS (Advanced Photon Source) storage ring is presented. A genetic algorithm (GA) was developed and employed for the online optimization in the APS storage ring. A total of 59 families of skew quadrupole magnets were employed as knobs to adjust the coupling and the vertical dispersion in the APS storage ring. Starting from initially zero current skew quadrupoles, small vertical beam sizes along the APS storage ring were achieved in a short optimization time of one hour. The optimization results from this method are briefly compared with the one from LOCO (Linear Optics from Closed Orbits) response matrix correction.
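A genetic-algorithm loop of the kind used for this online optimization is easy to sketch. The objective below is a stand-in quadratic "beam size" in five hypothetical knob currents, not the actual 59 skew-quadrupole families or a real machine model:

```python
import random

random.seed(0)

def beam_size(knobs):
    """Stand-in objective: a quadratic with minimum at knob = 0.3."""
    return sum((k - 0.3) ** 2 for k in knobs)

def evolve(pop, n_gen=40, elite=4):
    """Elitist GA: keep the best individuals, breed the rest by
    averaging crossover plus Gaussian mutation."""
    for _ in range(n_gen):
        pop.sort(key=beam_size)
        parents = pop[:elite]
        children = []
        while len(children) < len(pop) - elite:
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 + random.gauss(0, 0.05)
                     for x, y in zip(a, b)]
            children.append(child)
        pop = parents + children
    return min(pop, key=beam_size)

# 20 random knob settings, 5 knobs each, drawn from [-1, 1]
pop = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(20)]
best = evolve(pop)
print(beam_size(best))
```

In the online setting, each `beam_size` evaluation would be a measurement on the storage ring rather than a function call, which is why a derivative-free optimizer such as a GA is a natural choice.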
Minimal parameter solution of the quaternion differential equation
Liu, Ruihua
2005-11-01
Efficient calculation of the attitude matrix is an important subject in the strap-down inertial navigation system (SINS), because the precision of its solution directly affects system performance. A new method of third-order minimal parameter solution for the orthogonal matrix differential equation is used to solve the quaternion differential equation of SINS in this paper, and a numerical simulation is carried out as well. From the simulation results, we can see that when the new algorithm is used, the precision of the solved attitude angles is two orders of magnitude higher than with the classical method, while requiring only about half the floating-point operations.
Local orbitals by minimizing powers of the orbital variance
DEFF Research Database (Denmark)
Jansik, Branislav; Høst, Stinne; Kristensen, Kasper;
2011-01-01
It is demonstrated that a set of local orthonormal Hartree–Fock (HF) molecular orbitals can be obtained for both the occupied and virtual orbital spaces by minimizing powers of the orbital variance using the trust-region algorithm. For a power exponent equal to one, the Boys localization function is obtained. For increasing power exponents, the penalty for delocalized orbitals is increased and smaller maximum orbital spreads are encountered. Calculations on superbenzene, C60, and a fragment of the titin protein show that for a power exponent equal to one, delocalized outlier orbitals may
Computational Algorithm for Orbit and Mass Determination of Visual Binaries
Sharaf, Mohamed; Saad, Abdel Naby; Elkhateeb, Magdy; Saad, Somaya
2014-01-01
In this paper we introduce an algorithm for determining the orbital elements and individual masses of visual binaries. The algorithm uses an optimal point, which minimizes a specific function describing the average length between the least-squares solution and the exact solution. The objective function to be minimized is exact, without any approximation. The algorithm is applied to Kowalsky's method for orbital parameter computation, and to Reed's method for the determination of the dynamical parallax and individual masses. The procedure is applied to A 1145 and ADS 15182.
A Robust Algorithm for Blind Total Variation Restoration
Institute of Scientific and Technical Information of China (English)
Jing Xu; Qian-shun Chang
2008-01-01
Image restoration is a fundamental problem in image processing. Blind image restoration has great value in practical applications; however, it is not an easy problem to solve due to its complexity and difficulty. In this paper, we combine our robust algorithm for a known blur operator with an alternating minimization implicit iterative scheme to deal with the blind deconvolution problem, recover the image, and identify the point spread function (PSF). The only assumption needed is that the PSF makes practical physical sense. Numerical experiments demonstrate that this minimization algorithm is efficient and robust over a wide range of PSFs and achieves almost the same results as the known-PSF algorithm.
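The alternating-minimization idea can be illustrated on a 1-D toy problem: with either the signal or the PSF held fixed, convolution is linear in the other factor, so each half-step is a least-squares solve and the residual cannot increase. The sketch below omits the paper's total-variation regularization and works on a synthetic noiseless signal:

```python
import numpy as np

def conv_matrix(v, n):
    """Matrix C with C @ u == np.convolve(v, u) for u of length n."""
    m = len(v) + n - 1
    C = np.zeros((m, n))
    for j in range(n):
        C[j:j + len(v), j] = v
    return C

rng = np.random.default_rng(1)
x_true = np.zeros(20); x_true[5:9] = 1.0      # piecewise-constant signal
h_true = np.array([0.25, 0.5, 0.25])          # unknown blur (PSF)
y = np.convolve(h_true, x_true)               # observed blurred data

x = rng.standard_normal(20)                   # rough initial guesses
h = np.array([0.3, 0.4, 0.3])
res = [np.linalg.norm(np.convolve(h, x) - y)]
for _ in range(20):
    # x-step: least squares in x with h fixed
    x = np.linalg.lstsq(conv_matrix(h, len(x)), y, rcond=None)[0]
    # h-step: least squares in h with x fixed
    h = np.linalg.lstsq(conv_matrix(x, len(h)), y, rcond=None)[0]
    res.append(np.linalg.norm(np.convolve(h, x) - y))
print(res[0], res[-1])
```

Because each half-step exactly minimizes the shared residual over its own block of variables, the residual sequence is monotonically nonincreasing, which is the basic convergence mechanism behind alternating minimization.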
Mini-Med School Planning Guide
National Institutes of Health, Office of Science Education, 2008
2008-01-01
Mini-Med Schools are public education programs now offered by more than 70 medical schools, universities, research institutions, and hospitals across the nation. There are even Mini-Med Schools in Ireland, Malta, and Canada! The program is typically a lecture series that meets once a week and provides "mini-med students" information on some of the…
Robotic-assisted minimally invasive liver resection
Directory of Open Access Journals (Sweden)
Yao-Ming Wu
2014-04-01
Conclusion: Robotic assistance increased the percentage of minimally invasive liver resections and the percentage of major minimally invasive liver resections with comparable perioperative results. Robotic-assisted minimally invasive liver resection is feasible, but its role needs more accumulated experience to clarify.
Optimizing Ship Speed to Minimize Total Fuel Consumption with Multiple Time Windows
Directory of Open Access Journals (Sweden)
Jae-Gon Kim
2016-01-01
Full Text Available We study the ship speed optimization problem with the objective of minimizing the total fuel consumption. We consider multiple time windows for each port call as constraints and formulate the problem as a nonlinear mixed integer program. We derive intrinsic properties of the problem and develop an exact algorithm based on the properties. Computational experiments show that the suggested algorithm is very efficient in finding an optimal solution.
A multiobjective optimization algorithm is applied to a groundwater quality management problem involving remediation by pump-and-treat (PAT). The multiobjective optimization framework uses the niched Pareto genetic algorithm (NPGA) and is applied to simultaneously minimize the...
Algorithmic aspects of topology control problems for ad hoc networks
Energy Technology Data Exchange (ETDEWEB)
Liu, R. (Rui); Lloyd, E. L. (Errol L.); Marathe, M. V. (Madhav V.); Ramanathan, R. (Ram); Ravi, S. S.
2002-01-01
Topology control problems are concerned with the assignment of power values to nodes of an ad hoc network so that the power assignment leads to a graph topology satisfying some specified properties. This paper considers such problems under several optimization objectives, including minimizing the maximum power and minimizing the total power. A general approach leading to a polynomial algorithm is presented for minimizing maximum power for a class of graph properties, called monotone properties. The difficulty of generalizing the approach to properties that are not monotone is pointed out. Problems involving the minimization of total power are known to be NP-complete even for simple graph properties. A general approach that leads to an approximation algorithm for minimizing the total power for some monotone properties is presented. Using this approach, a new approximation algorithm for the problem of minimizing the total power for obtaining a 2-node-connected graph is obtained. It is shown that this algorithm provides a constant performance guarantee. Experimental results from an implementation of the approximation algorithm are also presented.
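For a monotone property such as connectivity, minimizing the maximum power reduces to finding the smallest power threshold at which the property holds: monotonicity guarantees that once the property holds at some threshold, it holds at every larger one, so a sorted scan (or binary search) over candidate thresholds is valid. A sketch with hypothetical edge costs:

```python
def connected(n, edges, p):
    """Is the graph on nodes 0..n-1 connected using only edges whose
    power requirement is at most p?  (Simple DFS from node 0.)"""
    adj = {i: [] for i in range(n)}
    for u, v, c in edges:
        if c <= p:
            adj[u].append(v)
            adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n

def min_max_power(n, edges):
    """Smallest symmetric power threshold achieving connectivity.
    The optimum is always one of the edge costs, so scanning them in
    increasing order suffices."""
    for p in sorted({c for _, _, c in edges}):
        if connected(n, edges, p):
            return p
    return None  # property unachievable even with all edges

# Hypothetical 4-node network: (u, v, power needed for link u-v)
edges = [(0, 1, 3), (1, 2, 7), (0, 2, 9), (2, 3, 2)]
print(min_max_power(4, edges))
```

The same skeleton works for any monotone property by swapping out the `connected` check, which is the essence of the general polynomial-time approach for min-max power.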
Institute of Scientific and Technical Information of China (English)
Zhongxiao Jia; Yuquan Sun
2007-01-01
Based on the generalized minimal residual (GMRES) principle, Hu and Reichel proposed a minimal residual algorithm for the Sylvester equation. The algorithm requires the solution of a structured least squares problem. They form the normal equations of the least squares problem and then solve it by a direct solver, so it is susceptible to instability. In this paper, by exploiting the special structure of the least squares problem and working on the problem directly, a numerically stable QR decomposition based algorithm is presented for the problem. The new algorithm is more stable than the normal equations algorithm of Hu and Reichel. Numerical experiments are reported to confirm the superior stability of the new algorithm.
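The stability issue can be illustrated on a generic ill-conditioned least-squares problem (this is not the Hu–Reichel structured problem, just the standard QR-versus-normal-equations comparison): forming A^T A squares the condition number, losing roughly half the significant digits, while a QR-based solve works with A directly.

```python
import numpy as np

# Build a deliberately ill-conditioned 20x5 matrix by scaling columns.
rng = np.random.default_rng(0)
m, n = 20, 5
A = rng.standard_normal((m, n)) @ np.diag([1, 1e-1, 1e-2, 1e-3, 1e-4])
x_true = np.ones(n)
b = A @ x_true  # consistent right-hand side

# QR approach: A = QR, then solve the triangular system R x = Q^T b.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# Normal equations: (A^T A) x = A^T b; cond(A^T A) = cond(A)^2.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

print(np.linalg.norm(x_qr - x_true), np.linalg.norm(x_ne - x_true))
```

On this example the QR solution recovers x_true to near machine precision, while the normal-equations solution is visibly less accurate, mirroring the instability the paper avoids.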
Medical waste: a minimal hazard.
Keene, J H
1991-11-01
Medical waste is a subset of municipal waste, and regulated medical waste comprises less than 1% of the total municipal waste volume in the United States. As part of the overall waste stream, medical waste does contribute in a relative way to the aesthetic damage of the environment. Likewise, some small portion of the total release of hazardous chemicals and radioactive materials is derived from medical wastes. These comments can be made about any generated waste, regulated or unregulated. Healthcare professionals, including infection control personnel, microbiologists, public health officials, and others, have unsuccessfully argued that there is no evidence that past methods of treatment and disposal of regulated medical waste constitute any public health hazard. Historically, discovery of environmental contamination by toxic chemical disposal has followed assurances that the material was being disposed of in a safe manner. Therefore, a cynical public and its elected officials have demanded proof that the treatment and disposal of medical waste (i.e., infectious waste) do not constitute a public health hazard. Existent studies on municipal waste provide that proof. In order to argue that the results of these municipal waste studies are demonstrative of the minimal potential infectious environmental impact and lack of public health hazard associated with medical waste, we must accept the following: that the pathogens are the same whether they come from the hospital or the community, and that the municipal waste studied contained waste materials we now define as regulated medical waste.(ABSTRACT TRUNCATED AT 250 WORDS)
Robotic assisted minimally invasive surgery
Directory of Open Access Journals (Sweden)
Palep Jaydeep
2009-01-01
Full Text Available The term "robot" was coined by the Czech playwright Karel Capek in 1921 in his play Rossum's Universal Robots. The word "robot" comes from the Czech word robota, which means forced labor. The era of robots in surgery commenced in 1994, when the first AESOP (voice-controlled camera holder) prototype robot, used clinically in 1993, was marketed as the first surgical robot ever by the US FDA. Since then many robot prototypes, like the Endoassist (Armstrong Healthcare Ltd., High Wycombe, Bucks, UK) and the FIPS endoarm (Karlsruhe Research Center, Karlsruhe, Germany), have been developed to add to the functions of the robot and to try to increase its utility. Integrated Surgical Systems (now Intuitive Surgical, Inc.) redesigned the SRI Green Telepresence Surgery system and created the daVinci Surgical System®, classified as a master-slave surgical system. It uses true 3-D visualization and EndoWrist®. It was approved by the FDA in July 2000 for general laparoscopic surgery and in November 2002 for mitral valve repair surgery. The da Vinci robot is currently being used in various fields such as urology, general surgery, gynecology, cardio-thoracic, pediatric and ENT surgery. It provides several advantages over conventional laparoscopy, such as 3D vision, motion scaling, intuitive movements, visual immersion and tremor filtration. The advent of robotics has increased the use of minimally invasive surgery among laparoscopically naïve surgeons and expanded the repertoire of experienced surgeons to include more advanced and complex reconstructions.
Against Explanatory Minimalism in Psychiatry.
Thornton, Tim
2015-01-01
The idea that psychiatry contains, in principle, a series of levels of explanation has been criticized not only as empirically false but also, by Campbell, as unintelligible because it presupposes a discredited pre-Humean view of causation. Campbell's criticism is based on an interventionist-inspired denial that mechanisms and rational connections underpin physical and mental causation, respectively, and hence underpin levels of explanation. These claims echo some superficially similar remarks in Wittgenstein's Zettel. But attention to the context of Wittgenstein's remarks suggests a reason to reject explanatory minimalism in psychiatry and reinstate a Wittgensteinian notion of levels of explanation. Only in a context broader than the one provided by interventionism is the ascription of propositional attitudes, even in the puzzling case of delusions, justified. Such a view, informed by Wittgenstein, can reconcile the idea that the ascription of mental phenomena presupposes a particular level of explanation with the rejection of an a priori claim about its connection to a neurological level of explanation.
Minimal hepatic encephalopathy: A review.
Nardone, Raffaele; Taylor, Alexandra C; Höller, Yvonne; Brigo, Francesco; Lochner, Piergiorgio; Trinka, Eugen
2016-10-01
Minimal hepatic encephalopathy (MHE) is the earliest form of hepatic encephalopathy and can affect up to 80% of patients with liver cirrhosis. By definition, MHE is characterized by cognitive impairment in the domains of attention, vigilance and integrative function, but obvious clinical manifestations are lacking. MHE has been shown to affect daily functioning, quality of life, driving and overall mortality. The diagnosis can be achieved through neuropsychological testing, recently developed computerized psychometric tests such as the critical flicker frequency and the inhibitory control tests, as well as neurophysiological procedures. Event-related potentials can reveal subtle changes in patients with normal neuropsychological performance. Spectral analysis of the electroencephalogram (EEG) and quantitative analysis of sleep EEG provide early markers of cerebral dysfunction in cirrhotic patients with MHE. Neuroimaging, in particular MRI, also increasingly reveals diffuse abnormalities in intrinsic brain activity and altered organization of functional connectivity networks. Medical treatment for MHE to date has focused on reducing serum ammonia levels and includes non-absorbable disaccharides, probiotics and rifaximin. Liver transplantation may not reverse the cognitive deficits associated with MHE. We present here an updated review of the epidemiology, burden and quality of life, neuropsychological testing, neuroimaging, neurophysiology and therapy in subjects with MHE.
Unoriented Minimal Type 0 Strings
Carlisle, James E; Johnson, Clifford V
2004-01-01
We define a family of string equations with perturbative expansions that admit an interpretation as an unoriented minimal string theory with background D-branes and R-R fluxes. The theory also has a well-defined non-perturbative sector and we expect it to have a continuum interpretation as an orientifold projection of the non-critical type 0A string for \hat{c}=0, the (2,4) model. There is a second perturbative region which is consistent with an interpretation in terms of background R-R fluxes. We identify a natural parameter in the formulation that we speculate may have an interpretation as characterizing the contribution of a new type of background D-brane. There is a non-perturbative map to a family of string equations which we expect to be the \hat{c}=0 type 0B string. The map exchanges D-branes and R-R fluxes. We present the general structure of the string equations for the (2,4k) type 0A models.
Cosine tuning minimizes motor errors.
Todorov, Emanuel
2002-06-01
Cosine tuning is ubiquitous in the motor system, yet a satisfying explanation of its origin is lacking. Here we argue that cosine tuning minimizes expected errors in force production, which makes it a natural choice for activating muscles and neurons in the final stages of motor processing. Our results are based on the empirically observed scaling of neuromotor noise, whose standard deviation is a linear function of the mean. Such scaling predicts a reduction of net force errors when redundant actuators pull in the same direction. We confirm this prediction by comparing forces produced with one versus two hands and generalize it across directions. Under the resulting neuromotor noise model, we prove that the optimal activation profile is a (possibly truncated) cosine--for arbitrary dimensionality of the workspace, distribution of force directions, correlated or uncorrelated noise, with or without a separate cocontraction command. The model predicts a negative force bias, truncated cosine tuning at low muscle cocontraction levels, and misalignment of preferred directions and lines of action for nonuniform muscle distributions. All predictions are supported by experimental data.
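The abstract's key empirical premise, that neuromotor noise has a standard deviation proportional to the mean force, directly implies the claimed error reduction from redundant actuators. A quick simulation makes this concrete; the scaling constant, force value and variable names below are illustrative assumptions, not the authors' values:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 0.1    # assumed noise constant: std = k * mean force
f = 10.0   # desired net force (illustrative)
n = 200_000

# one actuator producing the whole force
one = rng.normal(f, k * f, n)

# two redundant actuators, each producing half the force, noise independent
two = rng.normal(f / 2, k * f / 2, (n, 2)).sum(axis=1)

# with signal-dependent noise, splitting the force across two actuators
# shrinks the net error std by a factor of sqrt(2)
print(np.std(one), np.std(two))
```

Under additive Gaussian noise with std proportional to the mean, the two-actuator net error is smaller by a factor of sqrt(2), matching the paper's one-hand versus two-hand comparison.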
Study of constrained minimal supersymmetry
Kane, G L; Kolda, Chris; Roszkowski, Leszek; Wells, James D
1994-01-01
Taking seriously phenomenological indications for supersymmetry, we have made a detailed study of unified minimal SUSY, including effects at the few percent level in a consistent fashion. We report here a general analysis without choosing a particular unification gauge group. We find that the encouraging SUSY unification results of recent years do survive the challenge of a more complete and accurate analysis. Taking into account effects at the 5-10% level leads to several improvements of previous results, and allows us to sharpen our predictions for SUSY in the light of unification. We perform a thorough study of the parameter space. The results form a well-defined basis for comparing the physics potential of different facilities. Very little of the acceptable parameter space has been excluded by LEP or FNAL so far, but a significant fraction can be covered when these accelerators are upgraded. A number of initial applications to the understanding of the SUSY spectrum, detectability of SUSY at LEP II or FNAL...
Computing a Clique Tree with the Algorithm Maximal Label Search
Directory of Open Access Journals (Sweden)
Anne Berry
2017-01-01
Full Text Available The algorithm MLS (Maximal Label Search) is a graph search algorithm that generalizes the algorithms Maximum Cardinality Search (MCS), Lexicographic Breadth-First Search (LexBFS), Lexicographic Depth-First Search (LexDFS) and Maximal Neighborhood Search (MNS). On a chordal graph, MLS computes a PEO (perfect elimination ordering) of the graph. We show how the algorithm MLS can be modified to compute a PMO (perfect moplex ordering), as well as a clique tree and the minimal separators of a chordal graph. We give a necessary and sufficient condition on the labeling structure of MLS for the beginning of a new clique in the clique tree to be detected by a condition on labels. MLS is also used to compute a clique tree of the complement graph, and new cliques in the complement graph can be detected by a condition on labels for any labeling structure. We provide a linear time algorithm computing a PMO and the corresponding generators of the maximal cliques and minimal separators of the complement graph. On a non-chordal graph, the algorithm MLSM, a graph search algorithm computing an MEO and a minimal triangulation of the graph, is used to compute an atom tree of the clique minimal separator decomposition of any graph.
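As a rough illustration of the family of searches that MLS generalizes, here is a minimal sketch of Maximum Cardinality Search (MCS); the graph representation and function name are our own, not the paper's. On a chordal graph, the reverse of the visit order produced by MCS is a perfect elimination ordering:

```python
def mcs(adj):
    """Maximum Cardinality Search sketch.
    adj: dict mapping each vertex to the set of its neighbours.
    Returns the visit order; on a chordal graph its reverse is a PEO."""
    unvisited = set(adj)
    label = {v: 0 for v in adj}   # number of already-visited neighbours
    order = []
    while unvisited:
        # visit an unvisited vertex with the most visited neighbours
        v = max(unvisited, key=lambda u: label[u])
        order.append(v)
        unvisited.remove(v)
        for w in adj[v]:
            if w in unvisited:
                label[w] += 1
    return order
```

MLS replaces the integer counters here with a general labeling structure, which is what lets one algorithm subsume MCS, LexBFS, LexDFS and MNS.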
Global regularity for minimal sets and counterexamples
Liang, Xiangyu
2011-01-01
We discuss the global regularity for 2 dimensional minimal sets, that is, whether all global minimal sets in $\R^n$ are cones or not. Every minimal set looks like a minimal cone $C$ at infinity, hence the main point is to use the topological properties of a minimal set at large scale to control its topology at smaller scales. Such arguments depend on the cone $C$, thus we have to discuss them one by one. Recall that this is the idea to prove that 1-dimensional Almgren-minimal sets in $\R^n$, and 2-dimensional Mumford-Shah minimal sets in $\R^3$ are cones. In this article we discuss three types of 2-dimensional minimal sets: Almgren-minimal sets in $\R^3$ whose blow-in limit is a $\T$ set; topological minimal sets in $\R^4$ whose blow-in limit is a $\T$ set; and Almgren minimal sets in $\R^4$ whose blow-in limit is the union $P_\theta$ of two almost orthogonal planes. For the first one we eliminate an existing potential counterexample that was proposed by several people, and show that a real counterexample shou...
Directory of Open Access Journals (Sweden)
Deptuła A.
2014-08-01
Full Text Available The method of minimization of complex partial multi-valued logical functions determines the degree of importance of construction and exploitation parameters playing the role of logical decision variables. Logical functions are taken into consideration in the issues of modelling machine sets. For multi-valued logical functions with weighting products, it is possible to use a modified Quine-McCluskey algorithm for multi-valued function minimization. Taking weighting coefficients into account in the logical tree minimization reflects a physical model of the object being analysed much better.
Generalized Weiszfeld Algorithms for Lq Optimization.
Aftab, Khurrum; Hartley, Richard; Trumpf, Jochen
2015-04-01
In many computer vision applications, a desired model of some type is computed by minimizing a cost function based on several measurements. Typically, one may compute the model that minimizes the L2 cost, that is, the sum of squares of measurement errors with respect to the model. However, the Lq solution, which minimizes the sum of the qth power of errors, usually gives more robust results in the presence of outliers for some values of q, for example, q = 1. The Weiszfeld algorithm is a classic algorithm for finding the geometric L1 mean of a set of points in Euclidean space. It is provably optimal and requires neither differentiation nor line search. The Weiszfeld algorithm has also been generalized to find the L1 mean of a set of points on a Riemannian manifold of non-negative curvature. This paper shows that the Weiszfeld approach may be extended to a wide variety of problems to find an Lq mean for 1 ≤ q < 2, including single rotation averaging (for which the algorithm provably finds the global Lq optimum) and multiple rotation averaging (for which no such proof exists). Experimental results of Lq optimization for rotations show improved reliability and robustness compared to L2 optimization.
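The classic Weiszfeld iteration that the paper generalizes is short enough to sketch: re-average the points with weights inversely proportional to their current distance from the estimate. The function name, iteration count and zero-distance guard below are illustrative choices, not the paper's:

```python
import numpy as np

def weiszfeld(points, iters=200, eps=1e-9):
    """Weiszfeld iteration for the geometric L1 mean (geometric median)
    of points in Euclidean space. points: array of shape (n, d)."""
    y = points.mean(axis=0)                     # initial guess: L2 mean
    for _ in range(iters):
        d = np.linalg.norm(points - y, axis=1)
        d = np.maximum(d, eps)                  # guard against division by zero
        w = 1.0 / d                             # L1 reweighting
        y = (points * w[:, None]).sum(axis=0) / w.sum()
    return y
```

Each step is a weighted mean, so no differentiation or line search is needed; the Lq extension in the paper replaces the 1/d weights with d^(q-2).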
Minimal models of multidimensional computations.
Directory of Open Access Journals (Sweden)
Jeffrey D Fitzgerald
2011-03-01
Full Text Available The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.
Hybrid Genetic Algorithms with Fuzzy Logic Controller
Institute of Scientific and Technical Information of China (English)
无
2001-01-01
In this paper, a new implementation of genetic algorithms (GAs) is developed for the machine scheduling problem, which is common in modern manufacturing systems. The performance measure of early and tardy completion of jobs is very natural, as the aim is usually to minimize simultaneously both the earliness and tardiness of all jobs. As the problem is NP-hard and no effective algorithms exist, we propose a hybrid genetic algorithm approach to deal with it. We adjust the crossover and mutation probabilities by a fuzzy logic controller, so the hybrid genetic algorithm does not require preliminary experiments to determine probabilities for the genetic operators. The experimental results show the effectiveness of the proposed GA method.
Robust location algorithm for NLOS environments
Institute of Scientific and Technical Information of China (English)
Huang Jiyan; Wan Qun
2008-01-01
One of the main problems facing accurate location in wireless communication systems is non-line-of-sight (NLOS) propagation. Traditional location algorithms are based on classical techniques that minimize a least-squares objective function, and they lose optimality when the NLOS error distribution deviates from the Gaussian distribution. An effective location algorithm based on a robust objective function is proposed to mitigate NLOS errors. The proposed method does not require prior knowledge of the NLOS error distribution and gives a closed-form solution. A comparison is performed in different NLOS environments between the proposed algorithm and two others (the LS method and Chan's method with an NLOS correction). The proposed algorithm clearly outperforms the other two.
Skiena, Steven S
2008-01-01
Explaining the design of algorithms, and analyzing their efficacy and efficiency, this book covers combinatorial algorithm technology, stressing design over analysis. It presents instruction on methods for designing and analyzing computer algorithms, and contains a catalog of algorithmic resources, implementations and a bibliography.
Hamiltonian Algorithm Sound Synthesis
大矢, 健一
2013-01-01
Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.
Ha, Jeongmok; Jeong, Hong
2016-07-01
This study investigates the directed acyclic subgraph (DAS) algorithm, which solves discrete labeling problems much more rapidly than other Markov-random-field-based inference methods at competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between the DAS and message passing algorithms. Moreover, the performances of DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.
DEFF Research Database (Denmark)
Bucher, Taina
2017-01-01
of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops… the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself…
Energy Technology Data Exchange (ETDEWEB)
Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
A Hybrid Genetic Algorithm for Reduct of Attributes in Decision System Based on Rough Set Theory
Institute of Scientific and Technical Information of China (English)
无
2002-01-01
Knowledge reduction is an important issue when dealing with huge amounts of data, and it has been proved that computing the minimal reduct of a decision system is NP-complete. By introducing heuristic information into a genetic algorithm, we propose a heuristic genetic algorithm. In the genetic algorithm, we construct a new operator to maintain the classification ability. The experiments show that our algorithm is efficient and effective for finding a minimal reduct, even for the special example where the simple heuristic algorithm cannot obtain the right result.
Minimizing flowtime subject to optimal makespan on two identical parallel machines
Directory of Open Access Journals (Sweden)
Jatinder N. D. Gupta
2000-06-01
Full Text Available We consider the problem of scheduling jobs on two parallel identical machines where an optimal schedule is defined as one that gives the smallest total flowtime (the sum of the completion times of all jobs) among the set of schedules with optimal makespan (the completion time of the latest job). Utilizing an existing optimization algorithm for the minimization of makespan, we propose an algorithm to determine optimal schedules for this problem. We empirically show that the proposed algorithm can quickly find optimal schedules for problems containing a large number of jobs.
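A building block behind objectives of this kind is the classical single-machine fact that sequencing jobs in shortest-processing-time (SPT) order minimizes flowtime. The helper below evaluates the total flowtime of a given two-machine job assignment under SPT sequencing; it is an illustrative sketch, not the authors' algorithm, and the function name is our own:

```python
def flowtime_spt(assignment):
    """Given jobs already partitioned between machines (e.g. by a
    makespan-optimal assignment), sequence each machine's jobs in SPT
    order and return the resulting total flowtime.
    assignment: list of lists of processing times, one list per machine."""
    total = 0
    for jobs in assignment:
        t = 0
        for p in sorted(jobs):   # SPT minimizes flowtime on each machine
            t += p               # completion time of this job
            total += t           # accumulate flowtime
    return total
```

The hard part the paper addresses is choosing the partition itself so that flowtime is minimized without leaving the set of makespan-optimal schedules.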
A Two-Stage Assembly-Type Flowshop Scheduling Problem for Minimizing Total Tardiness
Directory of Open Access Journals (Sweden)
Ju-Yong Lee
2016-01-01
Full Text Available This research considers a two-stage assembly-type flowshop scheduling problem with the objective of minimizing the total tardiness. The first stage consists of two independent machines, and the second stage consists of a single machine. Two types of components are fabricated in the first stage, and then they are assembled in the second stage. Dominance properties and lower bounds are developed, and a branch and bound algorithm is presented that uses these properties and lower bounds as well as an upper bound obtained from a heuristic algorithm. The algorithm performance is evaluated using a series of computational experiments on randomly generated instances and the results are reported.
Single-machine scheduling to minimize total completion time and tardiness with two competing agents.
Lee, Wen-Chiung; Shiau, Yau-Ren; Chung, Yu-Hsiang; Ding, Lawson
2014-01-01
We consider a single-machine two-agent problem where the objective is to minimize a weighted combination of the total completion time and the total tardiness of jobs from the first agent given that no tardy jobs are allowed for the second agent. A branch-and-bound algorithm is developed to derive the optimal sequence and two simulated annealing heuristic algorithms are proposed to search for the near-optimal solutions. Computational experiments are also conducted to evaluate the proposed branch-and-bound and simulated annealing algorithms.
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster
A hybrid algorithm for unrelated parallel machines scheduling
Directory of Open Access Journals (Sweden)
Mohsen Shafiei Nikabadi
2016-09-01
Full Text Available In this paper, a new hybrid algorithm based on a multi-objective genetic algorithm (MOGA) using simulated annealing (SA) is proposed for scheduling unrelated parallel machines with sequence-dependent setup times, varying due dates, ready times and precedence relations among jobs. Our objective is to minimize makespan (the maximum completion time over all machines), the number of tardy jobs, total tardiness and total earliness at the same time, which can be more advantageous in real environments than considering each objective separately. For obtaining an optimal solution, a hybrid algorithm based on MOGA and SA has been proposed in order to gain both good global and local search abilities. Simulation results and four well-known multi-objective performance metrics indicate that the proposed hybrid algorithm outperforms the genetic algorithm (GA) and SA in terms of each objective and significantly in minimizing the total cost of the weighted function.
Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV
Fahringer, Timothy W.; Thurow, Brian S.
2016-09-01
A new algorithm for reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing with the addition of a post reconstruction filter to remove the out of focus particles. This new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of the reconstructed particle position accuracy, but produces more elongated particles. The major advantage to the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume. It is shown that the new algorithm takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.
Damped Arrow-Hurwicz algorithm for sphere packing
Degond, Pierre; Ferreira, Marina A.; Motsch, Sebastien
2017-03-01
We consider algorithms that, from an arbitrarily sampling of N spheres (possibly overlapping), find a close packed configuration without overlapping. These problems can be formulated as minimization problems with non-convex constraints. For such packing problems, we observe that the classical iterative Arrow-Hurwicz algorithm does not converge. We derive a novel algorithm from a multi-step variant of the Arrow-Hurwicz scheme with damping. We compare this algorithm with classical algorithms belonging to the class of linearly constrained Lagrangian methods and show that it performs better. We provide an analysis of the convergence of these algorithms in the simple case of two spheres in one spatial dimension. Finally, we investigate the behaviour of our algorithm when the number of spheres is large in two and three spatial dimensions.
Three penalized EM-type algorithms for PET image reconstruction.
Teng, Yueyang; Zhang, Tie
2012-06-01
Based on Bayes theory, Green introduced the maximum a posteriori (MAP) algorithm to obtain a smooth reconstruction for positron emission tomography. This algorithm is flexible and convenient for most penalties, but it is hard to guarantee its convergence. For a common goal, Fessler penalized a weighted least squares (WLS) estimator with a quadratic penalty and then solved it with the successive over-relaxation (SOR) algorithm; however, the algorithm was time-consuming and difficult to parallelize. Anderson proposed another WLS estimator for faster convergence, for which few regularization methods have been studied. For the three regularized estimators above, we develop three new expectation maximization (EM) type algorithms to solve them. Unlike MAP and SOR, the proposed algorithms yield update rules by minimizing auxiliary functions constructed on the previous iterations, which ensures that the cost functions decrease monotonically. Experimental results demonstrate the robustness and effectiveness of the proposed algorithms.
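The baseline that these penalized EM-type algorithms build on is the classic unpenalized ML-EM update for emission tomography, x <- x * A^T(y / Ax) / A^T 1. A minimal sketch of that baseline (not the paper's penalized variants; the function name and iteration count are illustrative):

```python
import numpy as np

def mlem(A, y, iters=50):
    """Unpenalized ML-EM iteration for emission tomography.
    A: system matrix (detectors x pixels), y: measured counts.
    The multiplicative update keeps the image nonnegative and
    monotonically increases the Poisson log-likelihood."""
    x = np.ones(A.shape[1])                   # flat initial image
    sens = A.sum(axis=0)                      # A^T 1, per-pixel sensitivity
    for _ in range(iters):
        proj = A @ x                          # forward projection Ax
        proj = np.maximum(proj, 1e-12)        # avoid division by zero
        x *= (A.T @ (y / proj)) / sens        # EM multiplicative update
    return x
```

The paper's algorithms keep this auxiliary-function structure but fold a penalty term into the minimization at each step, preserving the monotone decrease of the penalized cost.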
Approximation algorithms for some vehicle routing problems
Bazgan, Cristina; Hassin, Refael; Monnot, Jérôme
2005-01-01
We study vehicle routing problems with constraints on the distance traveled by each vehicle or on the number of vehicles. The objective is either to minimize the total distance traveled by vehicles or to minimize the number of vehicles used. We design constant differential approximation algorithms for kVRP. Note that, using the differential bound for METRIC 3VRP, we obtain the randomized standard ratio . This is an improvement of the best-known bound of 2 given by Haimovich et al. (Vehicle Ro...
Combinatorial Multiobjective Optimization Using Genetic Algorithms
Crossley, William A.; Martin, Eric T.
2002-01-01
The research proposed in this document investigated multiobjective optimization approaches based upon the Genetic Algorithm (GA). Several versions of the GA have been adopted for multiobjective design, but, prior to this research, there had not been significant comparisons of the most popular strategies. The research effort first generalized the two-branch tournament genetic algorithm into an N-branch genetic algorithm, then the N-branch GA was compared with a version of the popular Multi-Objective Genetic Algorithm (MOGA). Because the genetic algorithm is well suited to combinatorial (mixed discrete/continuous) optimization problems, the GA can be used in the conceptual phase of design to combine selection (discrete variable) and sizing (continuous variable) tasks. Using a multiobjective formulation for the design of a 50-passenger aircraft to meet the competing objectives of minimizing takeoff gross weight and minimizing trip time, the GA generated a range of tradeoff designs that illustrate which aircraft features change from a low-weight, slow trip-time aircraft design to a heavy-weight, short trip-time aircraft design. Given the objective formulation and analysis methods used, the results of this study identify where turboprop-powered aircraft and turbofan-powered aircraft become more desirable for the 50-passenger application. This aircraft design application also begins to suggest how a combinatorial multiobjective optimization technique could be used to assist in the design of morphing aircraft.
A Frequent Closed Itemsets Lattice-based Approach for Mining Minimal Non-Redundant Association Rules
Vo, Bay
2011-01-01
Many algorithms have been developed to improve the time of mining frequent itemsets (FI) or frequent closed itemsets (FCI). However, algorithms that deal with the time of generating association rules have not been studied in depth. In reality, when a database contains many FI/FCI (from tens of thousands up to millions), the time of generating association rules is much larger than that of mining FI/FCI. Therefore, this paper presents an application of the frequent closed itemsets lattice (FCIL) for mining minimal non-redundant association rules (MNAR), greatly reducing the time for generating rules. Firstly, we use CHARM-L for building the FCIL. After that, based on the FCIL, an algorithm for fast generation of MNAR is proposed. Experimental results show that the proposed algorithm is much faster than the frequent itemsets lattice-based algorithm in mining time.
LCD motion blur: modeling, analysis, and algorithm.
Chan, Stanley H; Nguyen, Truong Q
2011-08-01
Liquid crystal display (LCD) devices are well known for their slow responses due to the physical limitations of liquid crystals. Therefore, fast moving objects in a scene are often perceived as blurred. This effect is known as the LCD motion blur. In order to reduce LCD motion blur, an accurate LCD model and an efficient deblurring algorithm are needed. However, existing LCD motion blur models are insufficient to reflect the limitation of human-eye-tracking system. Also, the spatiotemporal equivalence in LCD motion blur models has not been proven directly in the discrete 2-D spatial domain, although it is widely used. There are three main contributions of this paper: modeling, analysis, and algorithm. First, a comprehensive LCD motion blur model is presented, in which human-eye-tracking limits are taken into consideration. Second, a complete analysis of spatiotemporal equivalence is provided and verified using real video sequences. Third, an LCD motion blur reduction algorithm is proposed. The proposed algorithm solves an l(1)-norm regularized least-squares minimization problem using a subgradient projection method. Numerical results show that the proposed algorithm gives higher peak SNR, lower temporal error, and lower spatial error than motion-compensated inverse filtering and Lucy-Richardson deconvolution algorithm, which are two state-of-the-art LCD deblurring algorithms.
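The deblurring step above reduces to an l1-regularized least-squares problem, min_x 0.5||Ax - b||^2 + lam*||x||_1. The paper solves it with a subgradient projection method; as a compact generic illustration of the same objective (not the authors' solver), here is the standard proximal-gradient (ISTA) iteration, with illustrative parameter names:

```python
import numpy as np

def ista(A, b, lam, iters=200):
    """Proximal-gradient (ISTA) sketch for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)                  # gradient of the smooth term
        z = x - g / L                          # gradient step
        # soft-thresholding: proximal operator of the l1 penalty
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

The l1 penalty is what preserves sharp edges in the deblurred frames, since it tolerates large isolated differences better than a quadratic penalty does.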
A NEW METHOD FOR THE CONSTRUCTION OF MULTIVARIATE MINIMAL INTERPOLATION POLYNOMIAL
Institute of Scientific and Technical Information of China (English)
Zhang Chuanlin
2001-01-01
The extended Hermite interpolation problem on a segment point set over n-dimensional Euclidean space is considered. Based on an algorithm to compute the Gröbner basis of the ideal given by a dual basis, a new method to construct the minimal multivariate polynomial which satisfies the interpolation conditions is given.
Optimal Hops-Based Adaptive Clustering Algorithm
Xuan, Xin; Chen, Jian; Zhen, Shanshan; Kuo, Yonghong
This paper proposes an optimal hops-based adaptive clustering algorithm (OHACA). The algorithm sets an energy selection threshold before the cluster forms, so that nodes with less energy are more likely to go to sleep immediately. In the setup phase, OHACA introduces an adaptive mechanism to adjust cluster heads and balance load, and optimal distance theory is applied to discover the practical optimal routing path that minimizes the total transmission energy. Simulation results show that OHACA prolongs the life of the network, improves the utilization rate and transmits more data because of its energy balance.
Margolis, C Z
1983-02-04
The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five clinical education and patient care uses of algorithms are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared with decision analysis with respect to their clinical usefulness. Three objections to clinical algorithms are answered, including the one that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better.
Algorithmic approach to quantum physics
Ozhigov, Y
2004-01-01
The algorithmic approach is based on the assumption that any quantum evolution of a many-particle system can be simulated on a classical computer with polynomial time and memory cost. Algorithms, rather than analysis, play the central role here: a simulation yields a "film" that visualizes many-particle quantum dynamics and is shown to a user of the model. Restrictions following from algorithm theory are considered at the level of fundamental physical laws. The Born rule for calculating quantum probability, as well as decoherence, is derived from the existence of a nonzero minimal value of the amplitude modulus - a grain of amplitude. The limitation on classical computational resources gives a unified description of quantum dynamics that is not divided into unitary dynamics and measurements and does not depend on the existence of an observer. A description of states based on the nesting of particles in each other is proposed, which permits accounting for the effects of all levels in the same mode...
Logistics distribution centers location problem and algorithm under fuzzy environment
Yang, Lixing; Ji, Xiaoyu; Gao, Ziyou; Li, Keping
2007-11-01
The distribution centers location problem is concerned with how to select distribution centers from a potential set so that the total relevant cost is minimized. This paper mainly investigates this problem under a fuzzy environment. Consequently, a chance-constrained programming model for the problem is designed and some properties of the model are investigated. A tabu search algorithm, a genetic algorithm, and a fuzzy simulation algorithm are integrated to seek an approximate best solution of the model. A numerical example is also given to show the application of the algorithm.
Low complexity bit loading algorithm for OFDM system
Institute of Scientific and Technical Information of China (English)
Yang Yu; Sha Xuejun; Zhang Zhonghua
2006-01-01
A new approach to bit loading for orthogonal frequency division multiplexing (OFDM) systems is proposed. This bit-loading algorithm assigns bits to different subchannels in order to minimize the transmit energy. In the algorithm, most bits are first allocated to the subchannels according to the channel conditions, the Shannon formula, and the QoS requirements of the user; the residual bits are then allocated to the subchannels one bit at a time. In this way the algorithm is efficient while its computation remains simple. This is the first time bits have been loaded on a scale following the Shannon formula, and the algorithm is of O(4N) complexity.
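The residual-bit stage of such a loading scheme can be sketched as a greedy loop that always adds the next bit to the subchannel with the smallest incremental transmit energy. This is an illustrative sketch, not the authors' exact algorithm; the energy model E(b) = (2^b - 1)/g for a subchannel with power gain g is an assumption.

```python
def greedy_bit_loading(gains, total_bits):
    """Greedy residual-bit allocation: repeatedly add one bit to the
    subchannel where the incremental transmit energy is smallest.
    Assumed energy model: E(b) = (2**b - 1) / g for power gain g."""
    n = len(gains)
    bits = [0] * n
    for _ in range(total_bits):
        # incremental energy of adding one more bit to subchannel i
        inc = [(2 ** (bits[i] + 1) - 2 ** bits[i]) / gains[i] for i in range(n)]
        k = min(range(n), key=lambda i: inc[i])
        bits[k] += 1
    return bits
```

As expected, subchannels with better gains end up carrying more bits.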
A Minimum Cost Handover Algorithm for Mobile Satellite Networks
Institute of Scientific and Technical Information of China (English)
Zhang Tao; Zhang Jun
2008-01-01
For mobile satellite networks, an appropriate handover scheme should be devised to shorten handover delay with optimized use of network resources. By introducing a handover cost model of service, this article proposes a rerouting triggering scheme for path optimization after handover and a new minimum cost handover algorithm for mobile satellite networks. This algorithm ensures the quality of service (QoS) parameters, such as delay, during the handover and minimizes the handover costs. Simulation indicates that this algorithm is superior to other current algorithms in guaranteeing QoS and decreasing handover costs.
An Efficient Algorithm for Capacitated Multifacility Location Problems
Directory of Open Access Journals (Sweden)
Chansiri Singhtaun
2007-01-01
Full Text Available In this paper, a squared-Euclidean distance multifacility location problem with inseparable demands under balanced transportation constraints is analyzed. Using calculus to project the problem onto the space of allocation variables, the problem becomes a concave quadratic integer programming minimization problem. An algorithm based on the extreme point ranking method combined with logical techniques is developed. Numerical experiments are randomly generated to test the efficiency of the proposed algorithm against a linearization algorithm. The results show that the proposed algorithm provides a better solution on average with less processing time for all various problem sizes.
How to Make the Quantum Adiabatic Algorithm Fail
Farhi, E; Gutmann, S; Nagaj, D; Farhi, Edward; Goldstone, Jeffrey; Gutmann, Sam; Nagaj, Daniel
2005-01-01
The quantum adiabatic algorithm is a Hamiltonian based quantum algorithm designed to find the minimum of a classical cost function whose domain has size N. We show that poor choices for the Hamiltonian can guarantee that the algorithm will not find the minimum if the run time grows more slowly than square root of N. These poor choices are nonlocal and wash out any structure in the cost function to be minimized and the best that can be hoped for is Grover speedup. These failures tell us what not to do when designing quantum adiabatic algorithms.
Baro, Elias
2008-01-01
We work over an o-minimal expansion of a real closed field. The o-minimal homotopy groups of a definable set are defined naturally using definable continuous maps. We prove that any two semialgebraic maps which are definably homotopic are also semialgebraically homotopic. This result together with the study of semialgebraic homotopy done by H. Delfs and M. Knebusch allows us to develop an o-minimal homotopy theory. In particular, we obtain o-minimal versions of the Hurewicz theorems and the Whitehead theorem.
Institute of Scientific and Technical Information of China (English)
Tian-qi WU; Min YAO; Jian-hua YANG
2016-01-01
By adopting the distributed problem-solving strategy, swarm intelligence algorithms have been successfully applied to many optimization problems that are difficult to deal with using traditional methods. At present, there are many well-implemented algorithms, such as particle swarm optimization, genetic algorithm, artificial bee colony algorithm, and ant colony optimization. These algorithms have already shown favorable performances. However, with the objects becoming increasingly complex, it is becoming gradually more difficult for these algorithms to meet demands in terms of accuracy and time. Designing a new algorithm to seek better solutions for optimization problems is becoming increasingly essential. Dolphins have many noteworthy biological characteristics and living habits such as echolocation, information exchanges, cooperation, and division of labor. Combining these biological characteristics and living habits with swarm intelligence and bringing them into optimization problems, we propose a brand new algorithm named the 'dolphin swarm algorithm' in this paper. We also provide the definitions of the algorithm and specific descriptions of the four pivotal phases in the algorithm, which are the search phase, call phase, reception phase, and predation phase. Ten benchmark functions with different properties are tested using the dolphin swarm algorithm, particle swarm optimization, genetic algorithm, and artificial bee colony algorithm. The convergence rates and benchmark function results of these four algorithms are compared to verify the effectiveness of the dolphin swarm algorithm. The results show that in most cases, the dolphin swarm algorithm performs better. The dolphin swarm algorithm possesses some great features, such as first-slow-then-fast convergence, periodic convergence, local-optimum-free behavior, and no specific demand on benchmark functions. Moreover, the dolphin swarm algorithm is particularly appropriate to optimization problems, with more
Principal component analysis of minimal excitatory postsynaptic potentials.
Astrelin, A V; Sokolov, M V; Behnisch, T; Reymann, K G; Voronin, L L
1998-02-20
'Minimal' excitatory postsynaptic potentials (EPSPs) are often recorded from central neurones, specifically for quantal analysis. However, the EPSPs may emerge from activation of several fibres or transmission sites, so that formal quantal analysis may give false results. Here we extended the application of principal component analysis (PCA) to minimal EPSPs. We tested a PCA algorithm and a new graphical 'alignment' procedure against both simulated data and hippocampal EPSPs. Minimal EPSPs were recorded before and up to 3.5 h following induction of long-term potentiation (LTP) in CA1 neurones. In 29 out of 45 EPSPs, two (N=22) or three (N=7) components were detected which differed in latency, rise time (Trise), or both. The detected differences ranged from 0.6 to 7.8 ms for the latency and from 1.6 to 9 ms for Trise. Different components behaved differently following LTP induction. Cases were found where one component was potentiated immediately after tetanus whereas the other was potentiated with a delay of 15-60 min. The immediately potentiated component could decline within 1-2 h, so that the two components contributed differently into early [...] reflections of synchronized quantal releases. In general, the results demonstrate the applicability of PCA for separating EPSPs into different components and its usefulness for precise analysis of synaptic transmission.
Subspace weighted ℓ2,1 minimization for sparse signal recovery
Zheng, Chundi; Li, Gang; Liu, Yimin; Wang, Xiqin
2012-12-01
In this article, we propose a weighted ℓ2,1 minimization algorithm for the jointly-sparse signal recovery problem. The proposed algorithm exploits the relationship between the noise subspace and the overcomplete basis matrix for designing weights: large weights are assigned to the entries whose indices are more likely to be outside the row support of the jointly sparse signals, so that their indices are expelled from the row support in the solution, and small weights are assigned to the entries whose indices correspond to the row support of the jointly sparse signals, so that the solution prefers to retain their indices. Compared with regular ℓ2,1 minimization, the proposed algorithm can not only further enhance the sparseness of the solution but also reduce the requirements on both the number of snapshots and the signal-to-noise ratio (SNR) for stable recovery. Both simulations and experiments on real data demonstrate that the proposed algorithm outperforms the ℓ1-SVD algorithm, which straightforwardly exploits ℓ2,1 minimization, for both deterministic and random basis matrices.
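The effect of the subspace-derived weights can be illustrated with the proximal operator of the weighted ℓ2,1 norm, which shrinks each row of the solution by an amount proportional to its weight. This is a sketch of one building block under that interpretation, not the authors' full recovery algorithm.

```python
import math

def prox_weighted_l21(X, weights, tau):
    """Row-wise shrinkage: the proximal operator of tau * sum_i w_i * ||row_i||_2.
    Rows with large weights are driven to zero, mimicking how subspace-based
    weights expel indices from the estimated row support."""
    out = []
    for row, w in zip(X, weights):
        norm = math.sqrt(sum(v * v for v in row))
        scale = max(0.0, 1.0 - tau * w / norm) if norm > 0 else 0.0
        out.append([scale * v for v in row])
    return out
```

A row with a large weight (likely outside the row support) is zeroed, while a row with a small weight is only slightly shrunk.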
New focused crawling algorithm
Institute of Scientific and Technical Information of China (English)
Su Guiyang; Li Jianhua; Ma Yinghua; Li Shenghong; Song Juping
2005-01-01
Focused crawling is a new research direction for search engines. It restricts information retrieval to, and provides search services within, a specific topic area. The focused crawling search algorithm is a key technique of a focused crawler and directly affects the search quality. This paper first introduces several traditional topic-specific crawling algorithms, and then puts forward an inverse-link-based topic-specific crawling algorithm. Comparison experiments prove that this algorithm has good recall performance, clearly better than the traditional Breadth-First and Shark-Search algorithms. The experiments also prove that this algorithm has good precision.
Symplectic algebraic dynamics algorithm
Institute of Scientific and Technical Information of China (English)
2007-01-01
Based on the algebraic dynamics solution of ordinary differential equations and their integration, the symplectic algebraic dynamics algorithm sn is designed, which preserves the local symplectic geometric structure of a Hamiltonian system and possesses the same precision as the naïve algebraic dynamics algorithm n. Computer experiments for the 4th-order algorithms are made for five test models and the numerical results are compared with the conventional symplectic geometric algorithm, indicating that sn has higher precision, that the algorithm-induced phase shift of the conventional symplectic geometric algorithm can be reduced, and that the dynamical fidelity can be improved by one order of magnitude.
Adaptive cockroach swarm algorithm
Obagbuwa, Ibidun C.; Abidoye, Ademola P.
2017-07-01
An adaptive cockroach swarm optimization (ACSO) algorithm is proposed in this paper to strengthen the existing cockroach swarm optimization (CSO) algorithm. The ruthless component of the CSO algorithm is modified by employing a blend-crossover predator-prey evolution method, which helps the algorithm prevent any possible population collapse, maintain population diversity, and create an adaptive search in each iteration. The performance of the proposed algorithm on 16 global optimization benchmark function problems was evaluated and compared with the existing CSO, cuckoo search, differential evolution, particle swarm optimization, and artificial bee colony algorithms.
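The blend-crossover operator mentioned above is commonly the BLX-α operator. A minimal sketch, assuming the standard BLX-α form rather than the authors' exact variant:

```python
import random

def blend_crossover(p1, p2, alpha=0.5, rng=None):
    """BLX-alpha blend crossover: each child gene is drawn uniformly from
    the parents' interval extended by alpha * span on both sides."""
    rng = rng or random.Random()
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        child.append(rng.uniform(lo - alpha * span, hi + alpha * span))
    return child
```

The extension beyond the parents' interval is what lets the operator keep injecting diversity into the population.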
Decoherence in Search Algorithms
Abal, G; Marquezino, F L; Oliveira, A C; Portugal, R
2009-01-01
Recently several quantum search algorithms based on quantum walks were proposed. Those algorithms differ from Grover's algorithm in many aspects. The goal is to find a marked vertex in a graph faster than classical algorithms. Since the implementation of those new algorithms in quantum computers or in other quantum devices is error-prone, it is important to analyze their robustness under decoherence. In this work we analyze the impact of decoherence on quantum search algorithms implemented on two-dimensional grids and on hypercubes.
Real-Time Adaptive Least-Squares Drag Minimization for Performance Adaptive Aeroelastic Wing
Ferrier, Yvonne L.; Nguyen, Nhan T.; Ting, Eric
2016-01-01
This paper contains a simulation study of a real-time adaptive least-squares drag minimization algorithm for an aeroelastic model of a flexible wing aircraft. The aircraft model is based on the NASA Generic Transport Model (GTM). The wing structures incorporate a novel aerodynamic control surface known as the Variable Camber Continuous Trailing Edge Flap (VCCTEF). The drag minimization algorithm uses the Newton-Raphson method to find the optimal VCCTEF deflections for minimum drag in the context of an altitude-hold flight control mode at cruise conditions. The aerodynamic coefficient parameters used in this optimization method are identified in real-time using Recursive Least Squares (RLS). The results demonstrate the potential of the VCCTEF to improve aerodynamic efficiency for drag minimization for transport aircraft.
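The combination of recursive least squares identification with a Newton step can be sketched on a scalar quadratic drag model D(d) = c0 + c1*d + c2*d^2; the model coefficients and data below are hypothetical, not the GTM/VCCTEF values.

```python
def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares (RLS) step with forgetting factor lam."""
    n = len(theta)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
    gain = [v / denom for v in Pphi]
    err = y - sum(phi[i] * theta[i] for i in range(n))
    theta = [theta[i] + gain[i] * err for i in range(n)]
    P = [[(P[i][j] - gain[i] * Pphi[j]) / lam for j in range(n)] for i in range(n)]
    return theta, P

def newton_optimal_deflection(theta):
    """For the quadratic model D(d) = c0 + c1*d + c2*d**2, Newton-Raphson
    reaches the minimum in one step: d* = -c1 / (2*c2)."""
    _, c1, c2 = theta
    return -c1 / (2.0 * c2)

# Identify a hypothetical drag model D(d) = 2.0 - 0.8*d + 0.1*d**2 online,
# then take the Newton step to the estimated minimum-drag deflection.
theta = [0.0, 0.0, 0.0]
P = [[1e9 if i == j else 0.0 for j in range(3)] for i in range(3)]
for k in range(20):
    d = 0.5 * k
    y = 2.0 - 0.8 * d + 0.1 * d * d          # simulated drag measurement
    theta, P = rls_update(theta, P, [1.0, d, d * d], y)
d_star = newton_optimal_deflection(theta)     # close to 4.0 for this model
```

With noiseless data the RLS estimate converges to the true coefficients, so the Newton step lands essentially at the true minimum.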
Contemporary review of minimally invasive pancreaticoduodenectomy.
Dai, Rui; Turley, Ryan S; Blazer, Dan G
2016-12-27
To assess the current literature describing various minimally invasive techniques for, and to review short-term outcomes after, minimally invasive pancreaticoduodenectomy (PD). PD remains the only potentially curative treatment for periampullary malignancies, including, most commonly, pancreatic adenocarcinoma. Minimally invasive approaches to this complex operation have begun to be increasingly reported in the literature and are purported by some to reduce the historically high morbidity of PD associated with the open technique. In this systematic review, we searched the literature for high-quality publications describing minimally invasive techniques for PD, including laparoscopic, robotic, and laparoscopic-assisted robotic (hybrid) approaches. We identified publications with the largest operative experiences from well-known centers of excellence for this complex procedure. We report primarily short-term operative and perioperative results and some short-term oncologic endpoints. Minimally invasive techniques include laparoscopic, robotic, and hybrid approaches, and each of these techniques has strong advocates. Consistently, across all minimally invasive modalities, these techniques are associated with less intraoperative blood loss than traditional open PD (OPD), but in exchange for longer operating times. These techniques are relatively equivalent in terms of perioperative morbidity and short-term oncologic outcomes. Importantly, the pancreatic fistula rate appears to be comparable in most minimally invasive series compared to the open technique. The impact of minimally invasive technique on length of stay is mixed compared to some traditional open series. A few series have suggested that initiation of and time to adjuvant therapy may be improved with minimally invasive techniques; however, this assertion remains controversial. In terms of short-term costs, minimally invasive PD is significantly more expensive than OPD. Minimally invasive approaches to PD show
Minimizing size of decision trees for multi-label decision tables
Azad, Mohammad
2014-09-29
We used a decision tree as a model to discover knowledge from multi-label decision tables, where each row has a set of decisions attached to it and our goal is to find one arbitrary decision from the set attached to a row. The size of the decision tree can be small as well as very large. We study here different greedy as well as dynamic programming algorithms to minimize the size of the decision trees. When we compared against the optimal results from the dynamic programming algorithm, we found that some greedy algorithms produce results close to the optimal for the minimization of the number of nodes (at most 18.92% difference), the number of nonterminal nodes (at most 20.76% difference), and the number of terminal nodes (at most 18.71% difference).
Directory of Open Access Journals (Sweden)
Shih-Wei Lin
2014-01-01
Full Text Available Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set.
Lin, Shih-Wei; Ying, Kuo-Ching; Wan, Shu-Yen
2014-01-01
Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set.
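A destruction/reconstruction IG loop of this kind can be sketched on a simplified DBAP in which each ship has an arrival time and a handling time and all berths are identical; this simplification is an assumption, since the benchmark DBAP is richer.

```python
import random

def total_service_time(seq, arrivals, handling, berths):
    """Dispatch ships in the given sequence to the earliest-free berth;
    a ship's service time = completion time - arrival time."""
    free = [0.0] * berths
    total = 0.0
    for i in seq:
        b = min(range(berths), key=lambda j: free[j])
        start = max(free[b], arrivals[i])
        free[b] = start + handling[i]
        total += free[b] - arrivals[i]
    return total

def iterated_greedy(arrivals, handling, berths, d=2, iters=100, seed=0):
    """IG sketch: repeatedly remove d ships from the dispatch sequence,
    greedily reinsert each at its best position, and keep the new sequence
    whenever total service time does not worsen."""
    rng = random.Random(seed)
    seq = sorted(range(len(arrivals)), key=lambda i: arrivals[i])
    best = total_service_time(seq, arrivals, handling, berths)
    for _ in range(iters):
        partial = list(seq)
        removed = rng.sample(partial, min(d, len(partial)))
        for r in removed:                      # destruction phase
            partial.remove(r)
        for r in removed:                      # greedy best-position reinsertion
            costs = []
            for pos in range(len(partial) + 1):
                cand = partial[:pos] + [r] + partial[pos:]
                costs.append((total_service_time(cand, arrivals, handling, berths), pos))
            _, pos = min(costs)
            partial.insert(pos, r)
        c = total_service_time(partial, arrivals, handling, berths)
        if c <= best:
            seq, best = partial, c
    return seq, best
```

The accept-if-not-worse rule keeps the search moving across plateaus, which is typical of IG implementations.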
Zhang, Jun; Gu, Zhenghui; Yu, Zhu Liang; Li, Yuanqing
2015-03-01
Low energy consumption is crucial for body area networks (BANs). In BAN-enabled ECG monitoring, continuous monitoring requires the sensor nodes to transmit a huge amount of data to the sink node, which leads to excessive energy consumption. To reduce airtime over energy-hungry wireless links, this paper presents an energy-efficient compressed sensing (CS)-based approach for on-node ECG compression. First, an algorithm called minimal mutual coherence pursuit is proposed to construct sparse binary measurement matrices, which can be used to encode the ECG signals with superior performance and extremely low complexity. Second, in order to minimize the data rate required for faithful reconstruction, a weighted ℓ1 minimization model is derived by exploiting multisource prior knowledge in the wavelet domain. Experimental results on the MIT-BIH arrhythmia database reveal that the proposed approach can obtain a higher compression ratio than state-of-the-art CS-based methods. Together with its low encoding complexity, our approach can achieve significant energy savings in both the encoding process and wireless transmission.
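A naive stand-in for the measurement-matrix construction is to draw sparse binary matrices with a fixed number of ones per column and keep the draw with the lowest mutual coherence. The pursuit in the paper is more refined, so treat this purely as an illustration of the objective being minimized.

```python
import math
import random

def mutual_coherence(cols):
    """Maximum absolute normalized inner product between distinct columns."""
    best = 0.0
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            dot = sum(a * b for a, b in zip(cols[i], cols[j]))
            ni = math.sqrt(sum(a * a for a in cols[i]))
            nj = math.sqrt(sum(b * b for b in cols[j]))
            best = max(best, abs(dot) / (ni * nj))
    return best

def random_sparse_binary(m, n, d, trials=20, seed=0):
    """Keep the lowest-coherence matrix over several random draws of an
    m x n binary matrix with exactly d ones per column -- a naive stand-in
    for a minimal-mutual-coherence pursuit."""
    rng = random.Random(seed)
    best_cols, best_mu = None, float("inf")
    for _ in range(trials):
        cols = []
        for _ in range(n):
            col = [0] * m
            for k in rng.sample(range(m), d):
                col[k] = 1
            cols.append(col)
        mu = mutual_coherence(cols)
        if mu < best_mu:
            best_cols, best_mu = cols, mu
    return best_cols, best_mu
```

Sparse binary columns are attractive on-node because the encoder reduces to a handful of additions per measurement.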
Directory of Open Access Journals (Sweden)
Kristian M. Lien
1990-01-01
Full Text Available This paper presents a new algorithm based on the heuristic tearing algorithm by Gundersen and Hertzberg (1983). The basic idea in both the original and the proposed algorithm is sequential tearing of strong components which have been identified by an algorithm proposed by Tarjan (1972). The new algorithm has two alternative options for selection of tear streams, and alternative precedence orderings may be generated for the selected set of tear streams. The algorithm has been tested on several problems. It has identified minimal (optimal) tear sets for all of them, including the four problems presented in Gundersen and Hertzberg (1983) where the original algorithm could not find a minimal tear set. A Lisp implementation of the algorithm is described, and example problems are presented.
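The strong-component identification step is Tarjan's algorithm. Below is a sketch of it together with a simple sequential tearing loop; the tear-stream selection rule here is illustrative, not Gundersen and Hertzberg's actual rule.

```python
def strongly_connected_components(graph):
    """Tarjan's algorithm (1972): strongly connected components of a
    directed graph given as {node: [successors]}."""
    index, low, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

def tear(graph):
    """Sequential tearing sketch: while some strong component still contains
    a cycle, tear one intra-component edge and re-identify the components."""
    g = {v: list(ws) for v, ws in graph.items()}
    torn = []
    while True:
        cyclic = [c for c in strongly_connected_components(g)
                  if len(c) > 1 or c[0] in g.get(c[0], [])]
        if not cyclic:
            return torn
        comp = set(cyclic[0])
        # tear an edge leaving the node with the most internal successors
        v = max(comp, key=lambda u: sum(1 for w in g.get(u, []) if w in comp))
        w = next(x for x in g[v] if x in comp)
        g[v].remove(w)
        torn.append((v, w))
```

On a flowsheet graph, the torn edges correspond to tear streams whose values must be guessed and iterated.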
The relative volume growth of minimal submanifolds
DEFF Research Database (Denmark)
Markvorsen, Steen; Palmer, V.
2002-01-01
The volume growth of certain well-defined subsets of minimal submanifolds in riemannian spaces is compared with the volume growth of balls and spheres in space forms of constant curvature.
Minimal Exit Trajectories with Optimum Correctional Manoeuvres
Directory of Open Access Journals (Sweden)
T. N. Srivastava
1980-10-01
Full Text Available Minimal exit trajectories with optimum correctional manoeuvres for a rocket between two coplanar, noncoaxial elliptic orbits in an inverse square gravitational field have been investigated. The case of trajectories with no correctional manoeuvres has been analysed. Finally, minimal exit trajectories through specified orbital terminals are discussed, and the problem of ref. (2) is derived as a particular case.
Software For Genetic Algorithms
Wang, Lui; Bayer, Steve E.
1992-01-01
SPLICER computer program is genetic-algorithm software tool used to solve search and optimization problems. Provides underlying framework and structure for building genetic-algorithm application program. Written in Think C.
Progressive geometric algorithms
Directory of Open Access Journals (Sweden)
Sander P.A. Alewijnse
2015-01-01
Full Text Available Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms for two geometric problems: computing the convex hull of a planar point set, and finding popular places in a set of trajectories.
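For the convex hull example, a progressive algorithm can be sketched by running an exact hull algorithm on geometrically growing prefixes of the input; this doubling scheme is a simple stand-in for the paper's more careful construction.

```python
def convex_hull(points):
    """Andrew's monotone chain: exact 2D convex hull in O(n log n)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half_hull(pts):
        h = []
        for p in pts:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h

    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower = half_hull(pts)
    upper = half_hull(pts[::-1])
    return lower[:-1] + upper[:-1]

def progressive_hull(points):
    """Progressive variant: yield hulls of geometrically growing prefixes,
    ending with the exact hull of the whole set. Each intermediate hull
    approximates the final one increasingly well."""
    k = 4
    while k < len(points):
        yield convex_hull(points[:k])
        k *= 2
    yield convex_hull(points)
```

A consumer can render or act on each intermediate hull while the final one is still being computed, which is the point of the progressive model.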
Approximate iterative algorithms
Almudevar, Anthony Louis
2014-01-01
Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a
Borbely, Eva
2007-01-01
A quantum algorithm is a set of instructions for a quantum computer; however, unlike algorithms in classical computer science, their results cannot be guaranteed. A quantum system can undergo two types of operation, measurement and quantum state transformation, and the transformations themselves must be unitary (reversible). Most quantum algorithms involve a series of quantum state transformations followed by a measurement. Currently very few quantum algorithms are known and no general design methodology e...
Competing Sudakov Veto Algorithms
Kleiss, Ronald
2016-01-01
We present a way to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance and show that there are significantly faster alternatives to the commonly used algorithms.
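The basic veto algorithm with a constant overestimate can be sketched as follows; this is a generic formulation, and the paper's variants add cutoffs, extra variables, and channel competition on top of it.

```python
import math
import random

def sudakov_veto(f, c, t_start, t_cut, rng):
    """Sudakov veto algorithm with a constant overestimate c >= f(t):
    evolve downward in t, propose the next candidate scale from the
    exponential implied by c, and accept it with probability f(t)/c.
    Rejected candidates restart the evolution from the rejected scale,
    which is the step that makes the final distribution exact."""
    t = t_start
    while True:
        t += math.log(1.0 - rng.random()) / c   # next candidate scale (t decreases)
        if t < t_cut:
            return t_cut                        # no emission above the cutoff
        if rng.random() < f(t) / c:
            return t                            # candidate accepted: emission at t
```

With f identically zero the evolution always runs down to the cutoff, and any returned emission scale lies between the cutoff and the starting scale.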
Autonomous Star Tracker Algorithms
DEFF Research Database (Denmark)
Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren
1998-01-01
Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.
Selecting materialized views using random algorithm
Zhou, Lijuan; Hao, Zhongxiao; Liu, Chi
2007-04-01
The data warehouse is a repository of information collected from multiple, possibly heterogeneous, autonomous distributed databases. The information stored at the data warehouse is in the form of views, referred to as materialized views. The selection of the materialized views is one of the most important decisions in designing a data warehouse. Materialized views are stored in the data warehouse for the purpose of efficiently implementing on-line analytical processing queries. The first issue for the user to consider is query response time. In this paper, we therefore develop algorithms to select a set of views to materialize in a data warehouse in order to minimize the total view maintenance cost under the constraint of a given query response time. We call it the query-cost view-selection problem. First, the cost graph and cost model of the query-cost view-selection problem are presented. Second, methods for selecting materialized views using randomized algorithms are presented. The genetic algorithm is applied to the materialized view selection problem, but as the genetic process develops, legal solutions become harder and harder to produce, so many solutions are eliminated and the time to produce solutions lengthens. Therefore, an improved algorithm is presented in this paper, which combines simulated annealing with the genetic algorithm in order to solve the query-cost view-selection problem. Finally, experimental simulation is adopted to test the function and efficiency of our algorithms. The experiments show that the given methods can provide near-optimal solutions in limited time and work better in practical cases. Randomized algorithms will become invaluable tools for data warehouse evolution.
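The simulated-annealing half of such a hybrid can be sketched as a walk over subsets of candidate views, minimizing maintenance cost subject to the query-response-time constraint. The cost functions and the toy instance below are hypothetical, not the paper's model.

```python
import math
import random

def anneal_view_selection(views, query_cost, maint_cost, max_query_cost,
                          iters=2000, t0=10.0, seed=0):
    """Simulated annealing over subsets of candidate views: minimize total
    maintenance cost subject to a query-response-time constraint.
    Neighbours flip one view in or out; infeasible neighbours (query cost
    too high) are rejected outright, avoiding the illegal solutions that
    hamper the plain genetic algorithm."""
    rng = random.Random(seed)
    current = set(views)               # materializing everything is assumed feasible
    cost = lambda s: sum(maint_cost[v] for v in s)
    best = set(current)
    for k in range(iters):
        temp = t0 * (1.0 - k / iters) + 1e-9
        cand = set(current)
        cand.symmetric_difference_update({rng.choice(views)})
        if query_cost(cand) > max_query_cost:
            continue
        delta = cost(cand) - cost(current)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            current = cand
            if cost(current) < cost(best):
                best = set(current)
    return best

# hypothetical toy instance: view 0 is expensive to maintain,
# and query cost drops as more views are materialized
chosen = anneal_view_selection([0, 1, 2],
                               lambda s: 10.0 - 3.0 * len(s),
                               {0: 5.0, 1: 1.0, 2: 1.0},
                               max_query_cost=4.0, seed=0)
```

In the hybrid of the paper, moves like this would be interleaved with genetic crossover; here only the annealing walk is shown.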
Husfeldt, Thore
2015-01-01
This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available techniques and is organized by algorithmic paradigm.
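The simplest sequential vertex-colouring algorithm with a worst-case guarantee is greedy colouring, which uses at most Delta + 1 colours for a graph of maximum degree Delta:

```python
def greedy_colouring(graph, order=None):
    """Greedy sequential vertex colouring: give each vertex the smallest
    colour not used by its already-coloured neighbours. Uses at most
    Delta + 1 colours, where Delta is the maximum degree.
    graph: {vertex: [neighbours]} adjacency lists."""
    colour = {}
    for v in (order or list(graph)):
        used = {colour[w] for w in graph[v] if w in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour
```

The choice of vertex order matters: a bad order can force far more colours than the chromatic number, which is one of the themes such surveys explore.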