WorldWideScience

Sample records for hard constraint algorithm

  1. A Hard Constraint Algorithm to Model Particle Interactions in DNA-laden Flows

    Energy Technology Data Exchange (ETDEWEB)

    Trebotich, D; Miller, G H; Bybee, M D

    2006-08-01

    We present a new method for particle interactions in polymer models of DNA. The DNA is represented by a bead-rod polymer model and is fully coupled to the fluid. The main objective of this work is to implement short-range forces that properly model polymer-polymer and polymer-surface interactions, specifically rod-rod and rod-surface uncrossing. Our new method is based on a rigid constraint algorithm whereby rods elastically bounce off one another to prevent crossing, similar to our previous algorithm used to model polymer-surface interactions. We compare this model to a classical (smooth) potential which acts as a repulsive force between rods, and between rods and surfaces.
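
    The uncrossing constraint can be illustrated with a minimal 2D sketch (the paper works with constrained 3D bead-rod dynamics coupled to a fluid; everything below, including the equal-mass elastic exchange and the sign convention, is an illustrative assumption):

        import numpy as np

        def segments_cross(p1, p2, q1, q2):
            """Orientation test: do segments p1-p2 and q1-q2 intersect?"""
            def orient(a, b, c):
                return np.sign((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
            return (orient(p1, p2, q1) != orient(p1, p2, q2) and
                    orient(q1, q2, p1) != orient(q1, q2, p2))

        def bounce(v1, v2, rod1):
            """Elastic bounce along rod 1's unit normal (equal masses).
            Sign convention: the normal points from rod 1 toward rod 2."""
            t = rod1[1] - rod1[0]
            n = np.array([-t[1], t[0]]) / np.linalg.norm(t)
            vr = (v2 - v1) @ n                 # relative normal speed
            if vr < 0:                         # rods are approaching
                v1, v2 = v1 + vr * n, v2 - vr * n
            return v1, v2

    In a time step one would tentatively advance the rods, test segments_cross, and on a hit rewind and apply bounce instead of letting the rods pass through each other.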

  2. Constraint satisfaction problems with isolated solutions are hard

    International Nuclear Information System (INIS)

    Zdeborová, Lenka; Mézard, Marc

    2008-01-01

    We study the phase diagram and the algorithmic hardness of random 'locked' constraint satisfaction problems, and compare them to commonly studied 'non-locked' problems like satisfiability of Boolean formulae or graph coloring. The special property of the locked problems is that clusters of solutions are isolated points. This significantly simplifies the determination of the phase diagram, which makes the locked problems particularly appealing from the mathematical point of view. On the other hand, we show empirically that the clustered phase of these problems is extremely hard from the algorithmic point of view: the best known algorithms all fail to find solutions. Our results suggest that the easy/hard transition (for currently known algorithms) in the locked problems coincides with the clustering transition. These should thus be regarded as new benchmarks of really hard constraint satisfaction problems.

  3. Nonnegative Matrix Factorization with Rank Regularization and Hard Constraint.

    Science.gov (United States)

    Shang, Ronghua; Liu, Chiyang; Meng, Yang; Jiao, Licheng; Stolkin, Rustam

    2017-09-01

    Nonnegative matrix factorization (NMF) is well known to be an effective tool for dimensionality reduction in problems involving big data. For this reason, it frequently appears in many areas of scientific and engineering literature. This letter proposes a novel semisupervised NMF algorithm for overcoming a variety of problems associated with NMF algorithms, including poor use of prior information, the negative impact of sparse constraints on manifold structure, and inaccurate graph construction. Our proposed algorithm, nonnegative matrix factorization with rank regularization and hard constraint (NMFRC), incorporates label information into the data representation as a hard constraint, which makes full use of prior information. NMFRC also measures pairwise similarity according to geodesic distance rather than Euclidean distance, giving a more accurate measurement of pairwise relationships and hence more effective manifold information. Furthermore, NMFRC adopts a rank constraint instead of norm constraints for regularization to balance the sparseness and smoothness of the data. In this way, the new data representation is more faithful and easier to interpret. Experiments on real data sets suggest that NMFRC outperforms four other state-of-the-art algorithms in terms of clustering accuracy.

  4. Learning With Mixed Hard/Soft Pointwise Constraints.

    Science.gov (United States)

    Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello

    2015-09-01

    A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function), play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the door to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.
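
    The flavor of mixing hard and soft pointwise constraints can be seen in a linear special case (a sketch only; the paper works in a far more general variational and kernel setting, and all names here are illustrative): squared-loss soft examples plus hard equality constraints, solved through the KKT system.

        import numpy as np

        def fit_hard_soft(X_soft, y_soft, X_hard, y_hard, lam=1e-2):
            """Linear model with soft (squared-loss) examples and hard
            pointwise equality constraints, solved via the KKT system."""
            d = X_soft.shape[1]
            m = X_hard.shape[0]
            A = 2 * (lam * np.eye(d) + X_soft.T @ X_soft)
            K = np.block([[A, X_hard.T],
                          [X_hard, np.zeros((m, m))]])
            rhs = np.concatenate([2 * X_soft.T @ y_soft, y_hard])
            sol = np.linalg.solve(K, rhs)
            return sol[:d]          # weights; sol[d:] are the multipliers

    Here the hard examples are interpolated exactly while the soft ones are only approximated, mirroring the hard/soft split described above.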

  5. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    Science.gov (United States)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.

  6. Firefly Algorithm for Cardinality Constrained Mean-Variance Portfolio Optimization Problem with Entropy Diversity Constraint

    Science.gov (United States)

    2014-01-01

    The portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with entropy constraint. The firefly algorithm is a recent and very successful swarm intelligence algorithm; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved the results. PMID:24991645

  7. Firefly algorithm for cardinality constrained mean-variance portfolio optimization problem with entropy diversity constraint.

    Science.gov (United States)

    Bacanin, Nebojsa; Tuba, Milan

    2014-01-01

    The portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with entropy constraint. The firefly algorithm is a recent and very successful swarm intelligence algorithm; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved the results.
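
    A minimal sketch of the standard firefly move plus a cardinality repair step (the paper's modified exploration mechanism and its entropy constraint handling are not reproduced; the repair heuristic and all names are illustrative):

        import numpy as np

        def repair(w, K):
            """Keep the K largest weights, then renormalise to sum to 1."""
            w = np.clip(w, 0.0, 1.0)
            keep = np.argsort(w)[-K:]
            out = np.zeros_like(w)
            out[keep] = w[keep]
            if out.sum() == 0.0:
                out[keep] = 1.0 / K
            return out / out.sum()

        def firefly_move(w_i, w_j, j_brighter, K, beta0=1.0, gamma=1.0,
                         alpha=0.1, rng=None):
            """Move firefly i toward j if j is brighter, then repair."""
            rng = rng or np.random.default_rng()
            if j_brighter:
                beta = beta0 * np.exp(-gamma * np.sum((w_i - w_j) ** 2))
                w_i = w_i + beta * (w_j - w_i) \
                      + alpha * (rng.random(w_i.size) - 0.5)
            return repair(w_i, K)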

  8. Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis

    Science.gov (United States)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-01-01

    To resolve the problems of slow computation and low matching accuracy in image registration, a new image registration algorithm based on a parallax constraint and clustering analysis is proposed. Firstly, the Harris corner detection algorithm is used to extract feature points from the two images. Secondly, a Normalized Cross Correlation (NCC) function is used to perform approximate matching of the feature points, yielding initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes feature point pairs with obvious errors introduced in the approximate matching step. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final matching result, realizing fast and accurate image registration. Experimental results show that the proposed algorithm improves matching accuracy while preserving the real-time performance of registration.
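
    The parallax-constraint filtering step can be sketched as follows (illustrative: the paper's clustering criterion may differ, keeping only the dominant cluster is a simplification, and the cluster count and RANSAC threshold are assumed values):

        import numpy as np
        import cv2
        from sklearn.cluster import KMeans

        def filter_and_register(pts1, pts2, n_clusters=3):
            """pts1, pts2: (N, 2) float32 arrays of matched coordinates."""
            parallax = pts2 - pts1                  # displacement vectors
            labels = KMeans(n_clusters=n_clusters,
                            n_init=10).fit_predict(parallax)
            keep = labels == np.bincount(labels).argmax()  # consistent parallax
            H, inliers = cv2.findHomography(pts1[keep], pts2[keep],
                                            cv2.RANSAC, 3.0)
            return H, keep, inliers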

  9. A synthetic dataset for evaluating soft and hard fusion algorithms

    Science.gov (United States)

    Graham, Jacob L.; Hall, David L.; Rimland, Jeffrey

    2011-06-01

    There is an emerging demand for the development of data fusion techniques and algorithms that are capable of combining conventional "hard" sensor inputs such as video, radar, and multispectral sensor data with "soft" data including textual situation reports, open-source web information, and "hard/soft" data such as image or video data that includes human-generated annotations. New techniques that assist in sense-making over a wide range of vastly heterogeneous sources are critical to improving tactical situational awareness in counterinsurgency (COIN) and other asymmetric warfare situations. A major challenge in this area is the lack of realistic datasets available for test and evaluation of such algorithms. While "soft" message sets exist, they tend to be of limited use for data fusion applications due to the lack of critical message pedigree and other metadata. They also lack corresponding hard sensor data that presents reasonable "fusion opportunities" to evaluate the ability to make connections and inferences that span the soft and hard data sets. This paper outlines the design methodologies, content, and some potential use cases of a COIN-based synthetic soft and hard dataset created under a United States Multi-disciplinary University Research Initiative (MURI) program funded by the U.S. Army Research Office (ARO). The dataset includes realistic synthetic reports from a variety of sources, corresponding synthetic hard data, and an extensive supporting database that maintains "ground truth" through logical grouping of related data into "vignettes." The supporting database also maintains the pedigree of messages and other critical metadata.

  10. On the Convergence of Iterative Receiver Algorithms Utilizing Hard Decisions

    Directory of Open Access Journals (Sweden)

    Jürgen F. Rößler

    2009-01-01

    The convergence of receivers performing iterative hard decision interference cancellation (IHDIC) is analyzed in a general framework for ASK, PSK, and QAM constellations. We first give an overview of IHDIC algorithms known from the literature applied to linear modulation and DS-CDMA-based transmission systems and show the relation to Hopfield neural network theory. It is proven analytically that IHDIC with a serial update scheme always converges to a stable state in the estimated values over the iterations, and that IHDIC with a parallel update scheme converges to cycles of length 2. Additionally, we visualize the convergence behavior with the aid of convergence charts. In doing so, we give insight into possible errors occurring in IHDIC, which turn out to be caused by locked error situations. The derived results can be applied directly to those iterative soft decision interference cancellation (ISDIC) receivers whose soft decision functions approach hard decision functions in the course of the iterations.
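
    The serial/parallel distinction is easy to reproduce in a toy BPSK model (a sketch; the interference matrix and names are illustrative). Per the result above, the serial loop settles into a fixed point while the parallel update can oscillate with period 2:

        import numpy as np

        def hard(v):
            return np.where(v >= 0, 1.0, -1.0)

        def ihdic(R, y, serial=True, max_iter=100):
            """R: interference (cross-correlation) matrix, y: matched-filter
            outputs. Returns the estimate and the detected cycle length."""
            x = hard(y)
            seen = {}
            for it in range(max_iter):
                if serial:                     # symbols updated one at a time
                    for i in range(len(y)):
                        x[i] = hard(y[i] - R[i] @ x + R[i, i] * x[i])
                else:                          # all symbols updated at once
                    x = hard(y - R @ x + np.diag(R) * x)
                key = x.tobytes()
                if key in seen:
                    return x, it - seen[key]   # 1 = fixed point, 2 = cycle
                seen[key] = it
            return x, None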

  11. Scheduling of Fault-Tolerant Embedded Systems with Soft and Hard Timing Constraints

    DEFF Research Database (Denmark)

    Izosimov, Viacheslav; Pop, Paul; Eles, Petru

    2008-01-01

    In this paper we present an approach to the synthesis of fault-tolerant schedules for embedded applications with soft and hard real-time constraints. We aim to guarantee the deadlines for the hard processes even in the case of faults, while maximizing the overall utility. We use time/utility functions to capture the utility of soft processes. Process re-execution is employed to recover from multiple faults. A single static schedule computed off-line is not fault tolerant and is pessimistic in terms of utility, while a purely online approach, which computes a new schedule every time a process...

  12. Data Reduction Algorithm Using Nonnegative Matrix Factorization with Nonlinear Constraints

    Science.gov (United States)

    Sembiring, Pasukat

    2017-12-01

    Processing of data with very large dimensions has been a hot topic in recent decades. Various techniques have been proposed to extract the desired information or structure. Non-Negative Matrix Factorization (NMF), based on non-negative data, has become one of the popular methods for reducing dimensions. The main strength of this method is its non-negativity: an object is modeled as a combination of basic non-negative parts, which provides a physical interpretation of the object's construction. NMF is a dimension reduction method that has been used widely for numerous applications, including computer vision, text mining, pattern recognition, and bioinformatics. The mathematical formulation of NMF is not a convex optimization problem, and various types of algorithms have been proposed to solve it. The Alternating Nonnegative Least Squares (ANLS) framework is a block-coordinate approach that has been proven theoretically reliable and empirically efficient. This paper proposes a new algorithm to solve the NMF problem based on the ANLS framework. The algorithm inherits the convergence property of the ANLS framework and extends it to NMF formulations with nonlinear constraints.
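
    The plain ANLS scheme the paper builds on alternates two exactly solvable nonnegative least-squares blocks (a sketch of the baseline framework, not of the paper's nonlinearly constrained variant):

        import numpy as np
        from scipy.optimize import nnls

        def anls_nmf(X, r, n_iter=50, seed=0):
            """Factor X ~= W @ H with W, H >= 0 by alternating NNLS."""
            rng = np.random.default_rng(seed)
            m, n = X.shape
            W = rng.random((m, r))
            H = rng.random((r, n))
            for _ in range(n_iter):
                for j in range(n):             # W fixed: solve each H column
                    H[:, j], _ = nnls(W, X[:, j])
                for i in range(m):             # H fixed: solve each W row
                    W[i, :], _ = nnls(H.T, X[i, :])
            return W, H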

  13. Iterative Nonlinear Tikhonov Algorithm with Constraints for Electromagnetic Tomography

    Science.gov (United States)

    Xu, Feng; Deshpande, Manohar

    2012-01-01

    Low frequency electromagnetic tomography, such as electrical capacitance tomography (ECT), has been proposed for monitoring and mass-gauging of gas-liquid two-phase systems under microgravity conditions in NASA's future long-term space missions. Due to the ill-posed inverse problem of ECT, images reconstructed using conventional linear algorithms often suffer from limitations such as low resolution and blurred edges. Hence, new efficient high resolution nonlinear imaging algorithms are needed for accurate two-phase imaging. The proposed Iterative Nonlinear Tikhonov Regularized Algorithm with Constraints (INTAC) is based on an efficient finite element method (FEM) forward model of the quasi-static electromagnetic problem. It iteratively minimizes the discrepancy between FEM-simulated and actually measured capacitances by adjusting the reconstructed image using the Tikhonov regularized method. More importantly, in each iteration it enforces the known permittivities of the two phases on the unknown pixels which exceed the physically reasonable range of permittivity. This strategy not only stabilizes the convergence process but also produces sharper images. Simulations show that a resolution improvement of over 2 times can be achieved by INTAC with respect to conventional approaches. Strategies to further improve spatial imaging resolution are suggested, as well as techniques to accelerate the nonlinear forward model and thus increase the temporal resolution.
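
    The update-then-clamp structure can be sketched generically (an assumed regularized Gauss-Newton form; the paper's FEM forward model, Jacobian, and exact update rule are not reproduced):

        import numpy as np

        def intac_step(x, forward, jacobian, y_meas, lam, eps_lo, eps_hi):
            """One Tikhonov-regularized update followed by the hard
            permittivity constraint: out-of-range pixels are clamped
            to the known phase permittivities."""
            J = jacobian(x)                    # FEM sensitivity matrix
            r = y_meas - forward(x)            # capacitance residual
            dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), J.T @ r)
            return np.clip(x + dx, eps_lo, eps_hi)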

  14. Efficient algorithms for extracting biological key pathways with global constraints

    DEFF Research Database (Denmark)

    Baumbach, Jan; Friedrich, T.; Kötzing, T.

    2012-01-01

    The integrated analysis of data of different types and with various interdependencies is one of the major challenges in computational biology. Recently, we developed KeyPathwayMiner, a method that combines biological networks modeled as graphs with disease-specific genetic expression data gained... Here we present an alternative approach that avoids a certain bias towards hub nodes: we now aim for extracting all maximal connected sub-networks where all but at most K nodes are expressed in all cases but in total (!) at most L, i.e. accumulated over all cases and all nodes in a solution. We call this strategy GLONE (global node exceptions); the previous problem we call INES (individual node exceptions). Since finding GLONE-components is computationally hard, we developed an Ant Colony Optimization algorithm and implemented it with the KeyPathwayMiner Cytoscape framework as an alternative to the INES...

  15. Firefly Algorithm for Cardinality Constrained Mean-Variance Portfolio Optimization Problem with Entropy Diversity Constraint

    Directory of Open Access Journals (Sweden)

    Nebojsa Bacanin

    2014-01-01

    This paper introduces a modified firefly algorithm (FA) for the cardinality constrained mean-variance (CCMV) portfolio model with entropy constraint. The firefly algorithm is a recent and very successful swarm intelligence algorithm; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved the results.

  16. Branch and bound algorithms to solve semiring constraint satisfaction problems

    CSIR Research Space (South Africa)

    Leenen, L

    2008-12-01

    The Semiring Constraint Satisfaction Problem (SCSP) framework is a popular approach for the representation of partial constraint satisfaction problems. Considerable research has been done in solving SCSPs, but limited work has been done in building...

  17. Flexible Job-Shop Scheduling with Dual-Resource Constraints to Minimize Tardiness Using Genetic Algorithm

    Science.gov (United States)

    Paksi, A. B. N.; Ma'ruf, A.

    2016-02-01

    In general, both machines and human resources are needed to process a job on the production floor. However, most classical scheduling problems have ignored the possible constraint caused by the availability of workers and have considered only machines as a limited resource. In addition, along with the development of production technology, routing flexibility appears as a consequence of high product variety and medium demand for each product. Routing flexibility arises from the capability of machines to offer more than one machining process. This paper presents a method to address a scheduling problem constrained by both machines and workers, considering routing flexibility. Scheduling in a Dual-Resource Constrained shop is categorized as an NP-hard problem that needs long computational times. A meta-heuristic approach based on a Genetic Algorithm is used due to its practical implementation in industry. The developed Genetic Algorithm uses an indirect chromosome representation and a procedure to transform chromosomes into Gantt charts. Genetic operators, namely selection, elitism, crossover, and mutation, are developed to search for the best fitness value until a steady-state condition is achieved. A case study in a manufacturing SME is used to minimize tardiness as the objective function. The algorithm achieved a 25.6% reduction in tardiness, equal to 43.5 hours.

  18. A Selfish Constraint Satisfaction Genetic Algorithm for Planning a Long-Distance Transportation Network

    Science.gov (United States)

    Onoyama, Takashi; Maekawa, Takuya; Kubota, Sen; Tsuruta, Setsuo; Komoda, Norihisa

    To build a cooperative logistics network covering multiple enterprises, a planning method that can build a long-distance transportation network is required. Many strict constraints are imposed on this type of problem. To solve these strictly constrained problems, a selfish constraint satisfaction genetic algorithm (GA) is proposed. In this GA, each gene of an individual satisfies only its own constraint selfishly, disregarding the constraints of other genes in the same individual. Moreover, a constraint pre-checking method is also applied to improve the GA convergence speed. Experimental results show the proposed method can obtain an accurate solution in a practical response time.

  19. The Viterbi Algorithm expressed in Constraint Handling Rules

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2010-01-01

    The Viterbi algorithm is a classical example of a dynamic programming algorithm, in which pruning reduces the search space drastically, so that an otherwise exponential time complexity is reduced to linearity. The central steps of the algorithm, expansion and pruning, can be expressed in a concise...
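
    For reference, the expansion-and-pruning core reads as follows in a compact imperative sketch (the record's Constraint Handling Rules encoding is not reproduced here; names are illustrative):

        import numpy as np

        def viterbi(obs, pi, A, B):
            """Most probable HMM state path, log domain.
            pi: (S,) initial, A: (S, S) transition, B: (S, O) emission."""
            T = len(obs)
            delta = np.log(pi) + np.log(B[:, obs[0]])
            back = np.zeros((T, len(pi)), dtype=int)
            for t in range(1, T):
                scores = delta[:, None] + np.log(A)   # expansion
                back[t] = scores.argmax(axis=0)       # pruning: keep best
                delta = scores.max(axis=0) + np.log(B[:, obs[t]])
            path = [int(delta.argmax())]
            for t in range(T - 1, 0, -1):
                path.append(int(back[t, path[-1]]))
            return path[::-1]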

  20. Algorithms and ordering heuristics for distributed constraint satisfaction problems

    CERN Document Server

    Wahbi, Mohamed

    2013-01-01

    DisCSP (Distributed Constraint Satisfaction Problem) is a general framework for solving distributed problems arising in Distributed Artificial Intelligence. A wide variety of problems in artificial intelligence are solved using the constraint satisfaction problem paradigm. However, there are several applications in multi-agent coordination that are of a distributed nature. In this type of application, the knowledge about the problem, that is, variables and constraints, may be logically or geographically distributed among physical distributed agents. This distribution is mainly due to p...

  1. Exponential-Time Algorithms and Complexity of NP-Hard Graph Problems

    DEFF Research Database (Denmark)

    Taslaman, Nina Sofia

    NP-hard problems are deemed highly unlikely to be solvable in polynomial time. Still, one can often find algorithms that are substantially faster than brute force solutions. This thesis concerns such algorithms for problems from graph theory: techniques for constructing and improving this type of algorithm, as well as investigations into how far such improvements can get under reasonable assumptions. The first part is concerned with detection of cycles in graphs, especially parameterized generalizations of Hamiltonian cycles. A remarkably simple Monte Carlo algorithm is presented..., and with high probability any found solution is shortest possible. Moreover, the algorithm can be used to find a cycle of given parity through the specified elements. The second part concerns the hardness of problems encoded as evaluations of the Tutte polynomial at some fixed point in the rational plane...

  2. A linear programming algorithm to test for jamming in hard-sphere packings

    International Nuclear Information System (INIS)

    Donev, Aleksandar; Torquato, Salvatore; Stillinger, Frank H.; Connelly, Robert

    2004-01-01

    Jamming in hard-particle packings has been the subject of considerable interest in recent years. In a paper by Torquato and Stillinger [J. Phys. Chem. B 105 (2001)], a classification scheme of jammed packings into hierarchical categories of locally, collectively and strictly jammed configurations has been proposed. They suggest that these jamming categories can be tested using numerical algorithms that analyze an equivalent contact network of the packing under applied displacements, but leave the design of such algorithms as a future task. In this work, we present a rigorous and practical algorithm to assess whether an ideal hard-sphere packing in two or three dimensions is jammed according to the aforementioned categories. The algorithm is based on linear programming and is applicable to regular as well as random packings of finite size with hard-wall and periodic boundary conditions. If the packing is not jammed, the algorithm yields representative multi-particle unjamming motions. Furthermore, we extend the jamming categories and the testing algorithm to packings with significant interparticle gaps. We describe in detail two variants of the proposed randomized linear programming approach to test for jamming in hard-sphere packings. The first algorithm treats ideal packings in which particles form perfect contacts. Another algorithm treats the case of jamming in packings with significant interparticle gaps. This extended algorithm allows one to explore more fully the nature of the feasible particle displacements. We have implemented the algorithms and applied them to ordered as well as random packings of circular disks and spheres with periodic boundary conditions. Some representative results for large disordered disk and sphere packings are given, but more robust and efficient implementations as well as further applications (e.g., non-spherical particles) are anticipated for the future.
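
    A stripped-down version of the first-order LP test (a sketch only: fixed hard walls, no periodic images or rattler handling, and a fixed sum-of-gaps objective in place of the paper's randomized objectives):

        import numpy as np
        from scipy.optimize import linprog

        def is_jammed(pos, contacts, tol=1e-9):
            """pos: (N, d) centers; contacts: list of touching pairs (i, j).
            Linearized non-overlap: (dx_i - dx_j) . n_ij <= 0 per contact."""
            N, d = pos.shape
            rows = []
            for i, j in contacts:
                n = pos[j] - pos[i]
                n /= np.linalg.norm(n)         # unit normal pointing i -> j
                row = np.zeros(N * d)
                row[i*d:(i+1)*d] = n
                row[j*d:(j+1)*d] = -n
                rows.append(row)
            A = np.array(rows)
            c = A.sum(axis=0)                  # minimizing opens gaps maximally
            res = linprog(c, A_ub=A, b_ub=np.zeros(len(rows)),
                          bounds=[(-1, 1)] * (N * d), method="highs")
            return res.fun > -tol              # no gap-opening motion exists

    If the optimum is strictly negative, res.x encodes a representative multi-particle unjamming motion, matching the behavior described in the abstract.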

  3. How Do Severe Constraints Affect the Search Ability of Multiobjective Evolutionary Algorithms in Water Resources?

    Science.gov (United States)

    Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.

    2015-12-01

    This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels will be compared with a problem with several severe constraints, and a problem with constraints that have less severe values on the constraint thresholds. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single year severe drought. But to date, it is unclear whether or not the constraints are negatively affecting MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first category uses control maps of algorithm performance to determine if the algorithm's performance is sensitive to user-defined parameters. The second category uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity on the solution sets. Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or...

  4. Application of multiple tabu search algorithm to solve dynamic economic dispatch considering generator constraints

    International Nuclear Information System (INIS)

    Pothiya, Saravuth; Ngamroo, Issarachai; Kongprawechnon, Waree

    2008-01-01

    This paper presents a new optimization technique based on a multiple tabu search (MTS) algorithm to solve the dynamic economic dispatch (ED) problem with generator constraints. In the constrained dynamic ED problem, the load demand and spinning reserve capacity as well as some practical operating constraints of generators, such as ramp rate limits and prohibited operating zones, are taken into consideration. The MTS algorithm introduces additional mechanisms such as initialization, adaptive searches, multiple searches, crossover, and a restarting process. To show its efficiency, the MTS algorithm is applied to solve constrained dynamic ED problems for power systems with 6 and 15 units. The results obtained from the MTS algorithm are compared to those achieved by conventional approaches such as simulated annealing (SA), genetic algorithms (GA), tabu search (TS), and particle swarm optimization (PSO). The experimental results show that the proposed MTS algorithm is able to obtain higher quality solutions efficiently and with less computational time than the conventional approaches.
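
    The generator constraints mentioned above reduce to a simple per-unit feasibility test that any candidate dispatch must pass before its cost is evaluated (a sketch; the names and the repair policy are illustrative):

        def feasible(p_new, p_prev, p_min, p_max, ramp_up, ramp_down,
                     prohibited_zones):
            """Output limits, ramp-rate limits between dispatch intervals,
            and prohibited operating zones given as (lo, hi) MW pairs."""
            if not p_min <= p_new <= p_max:
                return False
            if not p_prev - ramp_down <= p_new <= p_prev + ramp_up:
                return False
            return all(not lo < p_new < hi for lo, hi in prohibited_zones)

    Infeasible candidates are then repaired or rejected inside the tabu search loop.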

  5. Statistical mechanics of fluids under internal constraints: Rigorous results for the one-dimensional hard rod fluid

    International Nuclear Information System (INIS)

    Corti, D.S.; Debenedetti, P.G.

    1998-01-01

    The rigorous statistical mechanics of metastability requires the imposition of internal constraints that prevent access to regions of phase space corresponding to inhomogeneous states. We derive exactly the Helmholtz energy and equation of state of the one-dimensional hard rod fluid under the influence of an internal constraint that places an upper bound on the distance between nearest-neighbor rods. This type of constraint is relevant to the suppression of boiling in a superheated liquid. We determine the effects of this constraint upon the thermophysical properties and internal structure of the hard rod fluid. By adding an infinitely weak and infinitely long-ranged attractive potential to the hard core, the fluid exhibits a first-order vapor-liquid transition. We determine exactly the equation of state of the one-dimensional superheated liquid and show that it exhibits metastable phase equilibrium. We also derive statistical mechanical relations for the equation of state of a fluid under the action of arbitrary constraints, and show the connection between the statistical mechanics of constrained and unconstrained ensembles.
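
    For orientation, a minimal worked version of the unconstrained limit (a sketch of one standard route, not the paper's derivation): the Tonks hard-rod fluid of rods of length \sigma at number density \rho has the exact equation of state

        \beta P = \frac{\rho}{1 - \rho\sigma},

    and in the constant-pressure ensemble the nearest-neighbor gap g is exponentially distributed, p(g) = \beta P\, e^{-\beta P g}. An upper bound g \le \ell on nearest-neighbor separations, as imposed by the internal constraint above, truncates and renormalizes this distribution,

        p_\ell(g) = \frac{\beta P\, e^{-\beta P g}}{1 - e^{-\beta P \ell}}, \qquad 0 \le g \le \ell,

    after which the pressure is fixed implicitly by the mean-gap condition \langle g \rangle = 1/\rho - \sigma.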

  6. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection

    Science.gov (United States)

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-01

    In order to improve the performance of non-binary low-density parity check codes (LDPC) hard decision decoding algorithm and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bit corresponding to the error code word is flipped multiple times, before this is searched in the order of most likely error probability to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at the bit error rate (BER) of 10⁻⁵ over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963
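
    For intuition, the binary ancestor of such hard-decision decoders is Gallager's bit-flipping algorithm; the sketch below is that binary analogue, not the paper's non-binary, reliability-weighted scheme:

        import numpy as np

        def bit_flip_decode(H, y, max_iter=50):
            """H: (m, n) parity-check matrix over GF(2) as a 0/1 int array;
            y: received hard-decision bits. Returns (estimate, success)."""
            x = y.copy()
            for _ in range(max_iter):
                syndrome = H @ x % 2
                if not syndrome.any():
                    return x, True             # all parity checks satisfied
                fails = H.T @ syndrome         # unsatisfied checks per bit
                x = np.where(fails == fails.max(), x ^ 1, x)  # flip worst bits
            return x, False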

  7. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.

    Science.gov (United States)

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-15

    In order to improve the performance of non-binary low-density parity check codes (LDPC) hard decision decoding algorithm and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bit corresponding to the error code word is flipped multiple times, before this is searched in the order of most likely error probability to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at the bit error rate (BER) of 10⁻⁵ over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.

  8. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection

    Directory of Open Access Journals (Sweden)

    Jiahui Meng

    2018-01-01

    In order to improve the performance of non-binary low-density parity check codes (LDPC) hard decision decoding algorithm and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bit corresponding to the error code word is flipped multiple times, before this is searched in the order of most likely error probability to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at the bit error rate (BER) of 10⁻⁵ over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.

  9. Constrained Optimization Based on Hybrid Evolutionary Algorithm and Adaptive Constraint-Handling Technique

    DEFF Research Database (Denmark)

    Wang, Yong; Cai, Zixing; Zhou, Yuren

    2009-01-01

    A novel approach to deal with numerical and engineering constrained optimization problems, which incorporates a hybrid evolutionary algorithm and an adaptive constraint-handling technique, is presented in this paper. The hybrid evolutionary algorithm simultaneously uses simplex crossover and two mutation operators to generate the offspring population. Additionally, the adaptive constraint-handling technique consists of three main situations. In detail, at each situation, one constraint-handling mechanism is designed based on current population state. Experiments on 13 benchmark test functions and four well-known constrained design problems verify the effectiveness and efficiency of the proposed method. The experimental results show that integrating the hybrid evolutionary algorithm with the adaptive constraint-handling technique is beneficial, and the proposed method achieves competitive...
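
    One widely used constraint-handling building block in this family is the feasibility rule for pairwise comparison (a generic sketch; the paper's adaptive technique switches among several such mechanisms depending on population state):

        def better(a, b):
            """a, b: candidates with .fitness (minimized) and .violation
            (total constraint violation, 0 when feasible)."""
            if a.violation == 0 and b.violation == 0:
                return a if a.fitness < b.fitness else b   # compare fitness
            if a.violation == 0 or b.violation == 0:
                return a if a.violation == 0 else b        # feasible wins
            return a if a.violation < b.violation else b   # less violation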

  10. Comparison of a constraint directed search to a genetic algorithm in a scheduling application

    International Nuclear Information System (INIS)

    Abbott, L.

    1993-01-01

    Scheduling plutonium containers for blending is a time-intensive operation. Several constraints must be taken into account, including the number of containers in a dissolver run, the size of each dissolver run, and the size and target purity of the blended mixture formed from these runs. Two types of algorithms have been used to solve this problem: a constraint directed search and a genetic algorithm. This paper discusses the implementation of these two approaches and the strengths and weaknesses of each algorithm.

  11. An Improved Hierarchical Genetic Algorithm for Sheet Cutting Scheduling with Process Constraints

    OpenAIRE

    Yunqing Rao; Dezhong Qi; Jinling Li

    2013-01-01

    For the first time, an improved hierarchical genetic algorithm is proposed, within an integrated cutting stock model, for the sheet cutting problem involving n cutting patterns for m non-identical parallel machines with process constraints. The objective of the cutting scheduling problem is to minimize the weighted completion time. A mathematical model for this problem is presented, and an improved hierarchical genetic algorithm (ant colony-hierarchical genetic algorithm) is developed for better ...

  12. Principal distance constraint error diffusion algorithm for homogeneous dot distribution

    Science.gov (United States)

    Kang, Ki-Min; Kim, Choon-Woo

    1999-12-01

    The perceived quality of a halftoned image strongly depends on the spatial distribution of the binary dots. Various error diffusion algorithms have been proposed for realizing a homogeneous dot distribution in the highlight and shadow regions. However, they are computationally expensive and/or require large memory space. This paper presents a new threshold modulated error diffusion algorithm for homogeneous dot distribution. The proposed method is applied exactly as the Floyd-Steinberg algorithm except for the thresholding process. The threshold value is modulated based on the difference between the distance to the nearest minor pixel, the 'minor pixel distance', and the principal distance. To do so, the minor pixel distance must be calculated for every pixel, which is quite time consuming and requires large memory resources. In order to alleviate this problem, the 'minor pixel offset array', which transforms the 2D history of minor pixels into 1D codes, is proposed. The proposed algorithm drastically reduces the computational load and memory needed to calculate the minor pixel distance.
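
    A sketch of the threshold-modulation idea (illustrative only: the modulation function, the principal-distance formula, and the brute-force nearest-minor-pixel search below are assumptions; the paper replaces that search with its minor pixel offset array):

        import numpy as np

        def halftone(img, k=0.5):
            """img in [0, 1] (1 = full ink). Floyd-Steinberg with a threshold
            nudged by (minor-pixel distance - principal distance)."""
            h, w = img.shape
            out = np.zeros((h, w))
            err = img.astype(float).copy()
            minors = []                        # minor pixels placed so far
            for y in range(h):
                for x in range(w):
                    g = min(max(err[y, x], 0.0), 1.0)
                    gm = g if g <= 0.5 else 1.0 - g
                    principal = 1.0 / np.sqrt(gm) if gm > 0 else np.inf
                    d = min((np.hypot(y - my, x - mx) for my, mx in minors),
                            default=np.inf)
                    if np.isfinite(d) and np.isfinite(principal):
                        t = 0.5 + k * np.tanh(d - principal)   # assumed form
                    else:
                        t = 0.5
                    out[y, x] = 1.0 if err[y, x] >= t else 0.0
                    if (out[y, x] == 1.0) == (g <= 0.5):
                        minors.append((y, x))  # the sparser output level
                    e = err[y, x] - out[y, x]  # standard FS error weights
                    if x + 1 < w:
                        err[y, x + 1] += e * 7 / 16
                    if y + 1 < h:
                        if x > 0:
                            err[y + 1, x - 1] += e * 3 / 16
                        err[y + 1, x] += e * 5 / 16
                        if x + 1 < w:
                            err[y + 1, x + 1] += e * 1 / 16
            return out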

  13. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    Science.gov (United States)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. To treat the governing equations of motion, a two-stage staggered explicit-implicit numerical algorithm is developed which takes advantage of a partitioned solution procedure. This robust and parallelizable integration algorithm uses a two-stage staggered central difference scheme to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained with an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out: the DAEs and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.

  14. Algorithmic Construction of Schedules under Timing Constraints

    Directory of Open Access Journals (Sweden)

    Alexey S. Dobrynin

    2014-01-01

    Tasks of time-schedule construction (JSSP) in various fields of human activity have important theoretical and practical significance. The main feature of these tasks is a timing requirement describing the allowed planning time periods and periods of downtime. This article describes implementation variants of a work-scheduling algorithm under timing requirements for industrial time-schedule construction and service activities.

  15. Application of fermionic marginal constraints to hybrid quantum algorithms

    Science.gov (United States)

    Rubin, Nicholas C.; Babbush, Ryan; McClean, Jarrod

    2018-05-01

    Many quantum algorithms, including recently proposed hybrid classical/quantum algorithms, make use of restricted tomography of the quantum state that measures the reduced density matrices, or marginals, of the full state. The most straightforward approach to this algorithmic step estimates each component of the marginal independently without making use of the algebraic and geometric structure of the marginals. Within the field of quantum chemistry, this structure is termed the fermionic n-representability conditions, and is supported by a vast amount of literature on both theoretical and practical results related to their approximations. In this work, we introduce these conditions in the language of quantum computation, and utilize them to develop several techniques to accelerate and improve practical applications for quantum chemistry on quantum computers. As a general result, we demonstrate how these marginals concentrate to diagonal quantities when measured on random quantum states. We also show that one can use fermionic n-representability conditions to reduce the total number of measurements required by more than an order of magnitude for medium sized systems in chemistry. As a practical demonstration, we simulate an efficient restoration of the physicality of energy curves for the dilation of a four qubit diatomic hydrogen system in the presence of three distinct one qubit error channels, providing evidence these techniques are useful for pre-fault tolerant quantum chemistry experiments.

  16. Exact and Heuristic Algorithms for Routing AGV on Path with Precedence Constraints

    Directory of Open Access Journals (Sweden)

    Liang Xu

    2016-01-01

    A new problem arises when an automated guided vehicle (AGV) is dispatched to visit a set of customers, which are usually located along a fixed wire transmitting the signal used to navigate the AGV. An optimal visiting sequence is desired, with the objective of minimizing the total travelling distance (or time). When precedence constraints are imposed on the customers, the problem is referred to as the traveling salesman problem on a path with precedence constraints (TSPP-PC). Whether or not it is NP-complete is an open question in the literature. In this paper, we design a dynamic programming algorithm for the TSPP-PC, which is the first polynomial-time exact algorithm when the number of precedence constraints is a constant. For the problem in which the number of precedence constraints is part of the input, and can thus be arbitrarily large, we provide an efficient heuristic based on the exact algorithm.
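
    For the general (non-path-structured) case, a Held-Karp-style bitmask DP shows how precedence constraints prune the state space (a small-n sketch; the paper's polynomial-time algorithm exploits the path structure and is not reproduced here):

        def shortest_route(dist, prec, start=0):
            """dist: n x n matrix; prec: list of pairs (a, b), a before b.
            Returns the length of a shortest feasible route from start."""
            n = len(dist)
            INF = float("inf")
            dp = {(1 << start, start): 0.0}    # (visited mask, last) -> cost
            for mask in range(1, 1 << n):
                for last in range(n):
                    if (mask, last) not in dp:
                        continue
                    for nxt in range(n):
                        if mask >> nxt & 1:
                            continue           # already visited
                        if any(b == nxt and not mask >> a & 1
                               for a, b in prec):
                            continue           # a predecessor still unvisited
                        key = (mask | 1 << nxt, nxt)
                        cost = dp[(mask, last)] + dist[last][nxt]
                        if cost < dp.get(key, INF):
                            dp[key] = cost
            full = (1 << n) - 1
            return min(dp.get((full, v), INF) for v in range(n))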

  17. Parallelized event chain algorithm for dense hard sphere and polymer systems

    International Nuclear Information System (INIS)

    Kampmann, Tobias A.; Boltz, Horst-Holger; Kierfeld, Jan

    2015-01-01

    We combine parallelization and cluster Monte Carlo for hard sphere systems and present a parallelized event chain algorithm for the hard disk system in two dimensions. For parallelization we use a spatial partitioning approach into simulation cells. We find that it is crucial for correctness to ensure detailed balance on the level of Monte Carlo sweeps by drawing the starting sphere of event chains within each simulation cell with replacement. We analyze the performance gains for the parallelized event chain and find a criterion for an optimal degree of parallelization. Because of the cluster nature of event chain moves, massive parallelization will not be optimal. Finally, we discuss first applications of the event chain algorithm to dense polymer systems, i.e., bundle-forming solutions of attractive semiflexible polymers.

  18. A Strongly and Superlinearly Convergent SQP Algorithm for Optimization Problems with Linear Complementarity Constraints

    International Nuclear Information System (INIS)

    Jian Jinbao; Li Jianling; Mo Xingde

    2006-01-01

    This paper discusses a kind of optimization problem with linear complementarity constraints and presents a sequential quadratic programming (SQP) algorithm for finding a stationary point of the problem. The algorithm is a modification of the SQP algorithm proposed by Fukushima et al. [Computational Optimization and Applications, 10 (1998), 5-34], and is based on a reformulation of the complementarity condition as a system of linear equations. At each iteration, one quadratic program and one system of equations need to be solved, and a curve search is used to yield the step size. Under some appropriate assumptions, including lower-level strict complementarity but without upper-level strict complementarity for the inequality constraints, the algorithm is proved to possess strong convergence and superlinear convergence. Some preliminary numerical results are reported.

  19. Hard Real-Time Task Scheduling in Cloud Computing Using an Adaptive Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Amjad Mahmood

    2017-04-01

    In the Infrastructure-as-a-Service cloud computing model, virtualized computing resources in the form of virtual machines are provided over the Internet. A user can rent an arbitrary number of computing resources to meet their requirements, making cloud computing an attractive choice for executing real-time tasks. Economical task allocation and scheduling on a set of leased virtual machines is an important problem in the cloud computing environment. This paper proposes a greedy algorithm and a genetic algorithm with adaptive selection of suitable crossover and mutation operations (named AGA) to allocate and schedule real-time tasks with precedence constraints on heterogeneous virtual machines. A comprehensive simulation study has been done to evaluate the performance of the proposed algorithms in terms of solution quality and efficiency. The simulation results show that AGA outperforms the greedy algorithm and a non-adaptive genetic algorithm in terms of solution quality.
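
    Adaptive operator selection of this kind is often implemented as a success-proportional roulette wheel (a generic sketch; the abstract does not specify AGA's exact adaptation rule, so this form is an assumption):

        import random

        def adaptive_pick(ops, scores, eps=0.1):
            """ops: operator callables (crossovers/mutations); scores: running
            success counts per operator, updated by the caller each generation."""
            if random.random() < eps or sum(scores) == 0:
                return random.choice(ops)        # keep exploring all operators
            r = random.uniform(0, sum(scores))   # roulette wheel on scores
            acc = 0.0
            for op, s in zip(ops, scores):
                acc += s
                if r <= acc:
                    return op
            return ops[-1]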

  20. Bayesian Network Constraint-Based Structure Learning Algorithms: Parallel and Optimized Implementations in the bnlearn R Package

    Directory of Open Access Journals (Sweden)

    Marco Scutari

    2017-03-01

    It is well known in the literature that the problem of learning the structure of Bayesian networks is very hard to tackle: its computational complexity is super-exponential in the number of nodes in the worst case and polynomial in most real-world scenarios. Efficient implementations of score-based structure learning benefit from past and current research in optimization theory, which can be adapted to the task by using the network score as the objective function to maximize. This is not true for approaches based on conditional independence tests, called constraint-based learning algorithms. The only optimization in widespread use, backtracking, leverages the symmetries implied by the definitions of neighborhood and Markov blanket. In this paper we illustrate how backtracking is implemented in recent versions of the bnlearn R package, and how it degrades the stability of Bayesian network structure learning for little gain in terms of speed. As an alternative, we describe a software architecture and framework that can be used to parallelize constraint-based structure learning algorithms (also implemented in bnlearn), and we demonstrate its performance using four reference networks and two real-world data sets from genetics and systems biology. We show that on modern multi-core or multiprocessor hardware parallel implementations are preferable over backtracking, which was developed when single-processor machines were the norm.

  1. Improved differential evolution algorithms for handling economic dispatch optimization with generator constraints

    International Nuclear Information System (INIS)

    Coelho, Leandro dos Santos; Mariani, Viviana Cocco

    2007-01-01

    Global optimization based on evolutionary algorithms can be an important component of many engineering optimization problems. Evolutionary algorithms have yielded promising results for solving nonlinear, non-differentiable and multi-modal optimization problems in the power systems area. Differential evolution (DE) is a simple and efficient evolutionary algorithm for function optimization over continuous spaces. It has reportedly outperformed search heuristics when tested over both benchmark and real world problems. This paper proposes improved DE algorithms for solving economic load dispatch problems that take into account nonlinear generator features such as ramp rate limits and prohibited operating zones in power system operation. The DE algorithms and their variants are validated for two test systems consisting of 6 and 15 thermal units. The various DE approaches outperform other state-of-the-art algorithms reported in the literature in solving load dispatch problems with generator constraints.
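
    The baseline these variants improve on is classic DE/rand/1/bin, one generation of which looks like this (a sketch; the paper's improvements and its constraint handling are not reproduced):

        import numpy as np

        def de_step(pop, fit, f, F=0.5, CR=0.9, rng=None):
            """pop: (NP, D) population; fit: objective values; f: objective
            function to minimise. Updates pop and fit in place."""
            rng = rng or np.random.default_rng()
            NP, D = pop.shape
            for i in range(NP):
                a, b, c = rng.choice([k for k in range(NP) if k != i],
                                     3, replace=False)
                mutant = pop[a] + F * (pop[b] - pop[c])   # differential mutation
                cross = rng.random(D) < CR
                cross[rng.integers(D)] = True             # at least one gene
                trial = np.where(cross, mutant, pop[i])
                ft = f(trial)
                if ft <= fit[i]:                          # greedy selection
                    pop[i], fit[i] = trial, ft
            return pop, fit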

  2. Constraints on the optical afterglow emission of the short/hard burst GRB 010119

    DEFF Research Database (Denmark)

    Gorosabel, J.; Andersen, M.I.; Hjorth, J.

    2002-01-01

    We report optical observations of the short/hard burst GRB 010119 error box, one of the smallest error boxes reported to date for short/hard GRBs. Limits of R > 22.3 and I > 21.2 are imposed by observations carried out 20.31 and 20.58 hours after the gamma-ray event, respectively. They represent the...

  3. Improved Genetic and Simulated Annealing Algorithms to Solve the Traveling Salesman Problem Using Constraint Programming

    Directory of Open Access Journals (Sweden)

    M. Abdul-Niby

    2016-04-01

    The Traveling Salesman Problem (TSP) is an integer programming problem that falls into the category of NP-hard problems. As the problem becomes larger, there is no guarantee that optimal tours will be found within reasonable computation time. Heuristic techniques, like genetic algorithms and simulated annealing, can solve TSP instances with different levels of accuracy. Choosing which algorithm to use in order to get the best solution remains a hard choice. This paper suggests domain reduction as a tool to be combined with any meta-heuristic so that the obtained results will be almost the same. The hybrid approach of combining domain reduction with a meta-heuristic addresses the challenge of choosing an algorithm that matches the TSP instance in order to get the best results.

  4. A Novel Energy Saving Algorithm with Frame Response Delay Constraint in IEEE 802.16e

    Science.gov (United States)

    Nga, Dinh Thi Thuy; Kim, Mingon; Kang, Minho

    Sleep-mode operation of a Mobile Subscriber Station (MSS) in IEEE 802.16e effectively saves energy; however, it induces frame response delay. In this letter, we propose an algorithm that quickly finds the optimal value of the final sleep interval in sleep-mode so as to minimize energy consumption subject to a given frame response delay constraint. Validation through analysis and simulation suggests that our algorithm provides practical guidance for energy saving.

  5. C-mixture and multi-constraints based genetic algorithm for collaborative data publishing

    Directory of Open Access Journals (Sweden)

    Yogesh R. Kulkarni

    2018-04-01

    Due to the increasing use of distributed databases, there is high demand for sharing data so that useful information can be updated and accessed without interruption. Sharing distributed databases raises a serious information-security issue, since the databases contain sensitive personal information. To preserve the sensitive information while still releasing the useful information, significant effort has been made by researchers in privacy-preserving data publishing, which has received considerable attention in recent years. In this work, a new privacy measure, called c-mixture, is introduced to maintain the privacy constraint without affecting the utility of the database. To apply the proposed privacy measure to privacy-preserving data publishing, a new algorithm called CPGEN is developed using a genetic algorithm with multi-objective constraints. The proposed multi-objective optimization considers multiple privacy constraints along with a utility measurement. Also, CPGEN is adapted to handle the cold-start problem, which commonly occurs in distributed databases. The proposed algorithm is evaluated on the Adult dataset, and quantitative performance is analyzed using generalized information loss and the average equivalence class size metric. The experiments show that the proposed algorithm maintains privacy and utility better than the existing algorithm. Keywords: Privacy, Utility, Distributed databases, Data publishing, Optimization, Sensitive information

  6. Hybrid Discrete Differential Evolution Algorithm for Lot Splitting with Capacity Constraints in Flexible Job Scheduling

    Directory of Open Access Journals (Sweden)

    Xinli Xu

    2013-01-01

    Full Text Available A two-level batch chromosome coding scheme is proposed to solve the lot splitting problem with equipment capacity constraints in flexible job shop scheduling; it includes a lot splitting chromosome and a lot scheduling chromosome. To balance the global search and local exploration of the differential evolution algorithm, a hybrid discrete differential evolution algorithm (HDDE) is presented, in which a local strategy with dynamic random searching based on the critical path and a random mutation operator is developed. The performance of HDDE was evaluated on 14 benchmark problems and a practical dye vat scheduling problem. The simulation results show that the proposed algorithm has strong global search capability and can effectively solve practical lot splitting problems with equipment capacity constraints.

  7. A triangle voting algorithm based on double feature constraints for star sensors

    Science.gov (United States)

    Fan, Qiaoyun; Zhong, Xuyang

    2018-02-01

    A novel autonomous star identification algorithm is presented in this study. In the proposed algorithm, each sensor star constructs multiple triangles with its bright neighboring stars and obtains its candidates through a triangle voting process, in which the triangle is treated as the basic voting element. In order to accelerate the algorithm and reduce the memory required for the star database, feature extraction is carried out to reduce the dimension of the triangles, so that each triangle is described by its base and height. During the identification period, a voting scheme based on double feature constraints is proposed to implement the triangle voting. This scheme guarantees that only catalog stars satisfying both features can vote for a sensor star, which improves robustness towards false stars. Simulations and a real star image test demonstrate that, compared with the other two algorithms, the proposed algorithm is more robust towards position noise, magnitude noise and false stars.
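
    The dimension reduction described, representing each triangle by its base and height, can be sketched directly in Python. The choice of the longest side as the base is an assumption (the abstract does not specify it), and the double-feature vote simply requires a catalog triangle to match both features within tolerances.

```python
import math

def triangle_features(p1, p2, p3):
    """Reduce a star triangle to a (base, height) pair. Assumption: the
    base is taken as the longest side."""
    def d(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    sides = sorted([d(p1, p2), d(p2, p3), d(p3, p1)])
    base = sides[-1]
    s = sum(sides) / 2.0                                  # Heron's formula
    area = math.sqrt(max(s * (s - sides[0]) * (s - sides[1]) * (s - sides[2]), 0.0))
    return base, 2.0 * area / base

def double_feature_votes(sensor_feat, catalog, eps_base, eps_height):
    """A catalog triangle may vote only if BOTH features match within
    tolerance, which is the double-constraint idea in the abstract."""
    b, h = sensor_feat
    return [i for i, (cb, ch) in enumerate(catalog)
            if abs(cb - b) <= eps_base and abs(ch - h) <= eps_height]
```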

  8. Efficient Constraint Handling in Electromagnetism-Like Algorithm for Traveling Salesman Problem with Time Windows

    Science.gov (United States)

    Yurtkuran, Alkın

    2014-01-01

    The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particles' boundary constraints, associated with the corresponding time windows of customers, are introduced and combined with the penalty approach to eliminate infeasibilities regarding time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing them to those of state-of-the-art metaheuristics using several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times than the test algorithms. PMID:24723834
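
    The penalty approach for time-window violations can be illustrated with a short evaluation function: travel cost accumulates along the tour, early arrivals wait, and each late arrival adds a large penalty. The penalty weight and the waiting rule here are assumptions for the sketch, not the paper's exact formulation.

```python
def tsptw_cost(tour, travel, windows, penalty=1e4):
    """Tour cost plus penalties for time-window violations (a generic
    penalty-approach sketch; the weight 1e4 is an assumption).
    windows[i] = (earliest, latest); arriving early means waiting."""
    t = 0.0
    cost = 0.0
    violations = 0
    for prev, city in zip(tour, tour[1:]):
        cost += travel[prev][city]
        t += travel[prev][city]
        earliest, latest = windows[city]
        if t < earliest:
            t = earliest            # wait for the window to open
        elif t > latest:
            violations += 1         # infeasible arrival, penalized below
    return cost + penalty * violations
```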

  9. A Constraint programming-based genetic algorithm for capacity output optimization

    Directory of Open Access Journals (Sweden)

    Kate Ean Nee Goh

    2014-10-01

    Full Text Available Purpose: The manuscript presents an investigation into a constraint programming-based genetic algorithm (CPGA) for capacity output optimization in a back-end semiconductor manufacturing company. Design/methodology/approach: In the first stage, constraint programming defining the relationships between variables was formulated into the objective function. A genetic algorithm model was created in the second stage to optimize capacity output. Three demand scenarios were applied to test the robustness of the proposed algorithm. Findings: CPGA improved both the machine utilization and capacity output once the minimum requirements of a demand scenario were fulfilled. Capacity outputs of the three scenarios were improved by 157%, 7%, and 69%, respectively. Research limitations/implications: The work relates to aggregate planning of machine capacity in a single case study. The constraints and constructed scenarios were therefore industry-specific. Practical implications: Capacity planning in a semiconductor manufacturing facility needs to consider multiple mutually influencing constraints in resource availability, process flow and product demand. The findings prove that CPGA is a practical and efficient alternative for optimizing capacity output, allowing the company to review its capacity with quick feedback. Originality/value: The work integrates two contemporary computational methods for a real industry application conventionally reliant on human judgement.

  10. Efficient Constraint Handling in Electromagnetism-Like Algorithm for Traveling Salesman Problem with Time Windows

    Directory of Open Access Journals (Sweden)

    Alkın Yurtkuran

    2014-01-01

    Full Text Available The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particles' boundary constraints, associated with the corresponding time windows of customers, are introduced and combined with the penalty approach to eliminate infeasibilities regarding time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing them to those of state-of-the-art metaheuristics using several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times than the test algorithms.

  11. Multiobjective anatomy-based dose optimization for HDR-brachytherapy with constraint free deterministic algorithms

    International Nuclear Information System (INIS)

    Milickovic, N.; Lahanas, M.; Papagiannopoulou, M.; Zamboglou, N.; Baltas, D.

    2002-01-01

    In high dose rate (HDR) brachytherapy, conventional dose optimization algorithms consider multiple objectives in the form of an aggregate function that transforms the multiobjective problem into a single-objective problem. As a result, there is a loss of information on the available alternative solutions. This method assumes that the treatment planner exactly understands the correlation between competing objectives and knows the physical constraints. This knowledge is provided by the Pareto trade-off set, obtained with single-objective optimization algorithms by repeated optimization with different importance vectors. A mapping technique avoids non-feasible solutions with negative dwell weights and allows the use of constraint-free gradient-based deterministic algorithms. We compare various such algorithms and methods which could improve their performance. This finally allows us to generate a large number of solutions in a few minutes. We use objectives expressed in terms of dose variances obtained from a few hundred sampling points in the planning target volume (PTV) and in organs at risk (OAR). We compare two- to four-dimensional Pareto fronts obtained with the deterministic algorithms and with a fast simulated annealing algorithm. For PTV-based objectives, due to the convex objective functions, the obtained solutions are globally optimal. If OARs are included, the solutions found are also globally optimal, although local minima may be present, as our results suggest. (author)

  12. A new algorithm for combined dynamic economic emission dispatch with security constraints

    International Nuclear Information System (INIS)

    Arul, R.; Velusami, S.; Ravi, G.

    2015-01-01

    The primary objective of the CDEED (combined dynamic economic emission dispatch) problem is to determine the optimal power generation schedule for the online generating units over the time horizon considered, simultaneously minimizing the emission level and satisfying the generator and system constraints. The CDEED problem is a bi-objective optimization problem, where generation cost and emission are considered as two competing objective functions. This bi-objective CDEED problem is represented as a single-objective optimization problem by assigning different weights to each objective function. The weights are varied in steps, and for each variation one compromise solution is generated; finally, a fuzzy-based selection method is used to select the best compromise solution from the set of compromise solutions obtained. In order to make the test systems reflect a real power system model, security constraints are also taken into account. Three new versions of DHS (differential harmony search) algorithms have been proposed to solve the CDEED problems. The feasibility of the proposed algorithms is demonstrated on IEEE 26-bus and IEEE 39-bus systems. The result obtained by the proposed CSADHS (chaotic self-adaptive differential harmony search) algorithm is found to be better than EP (evolutionary programming), DHS, and the other proposed algorithms in terms of solution quality, convergence speed and computation time. - Highlights: • In this paper, three new algorithms CDHS, SADHS and CSADHS are proposed. • To solve DED with emission, poz's, spinning reserve and security constraints. • Results obtained by the proposed CSADHS algorithm are better than the others. • The proposed CSADHS algorithm has faster convergence characteristics than the others
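
    The weighted-sum sweep plus fuzzy selection described above can be sketched compactly. In this minimal Python illustration, each run of the single-objective optimizer is assumed to return one (cost, emission) compromise point; the membership function is the standard fuzzy min-max form, which may differ in detail from the paper's.

```python
def fuzzy_best_compromise(solutions):
    """solutions: list of (cost, emission) compromise points, one per weight
    setting. Standard fuzzy membership: 1 at an objective's best value on
    the trade-off set, 0 at its worst; the best compromise maximizes the
    normalized total membership."""
    costs = [s[0] for s in solutions]
    emis = [s[1] for s in solutions]
    def mu(v, lo, hi):
        return 1.0 if v <= lo else 0.0 if v >= hi else (hi - v) / (hi - lo)
    scores = [mu(c, min(costs), max(costs)) + mu(e, min(emis), max(emis))
              for c, e in solutions]
    total = sum(scores)
    norm = [s / total for s in scores]
    return solutions[norm.index(max(norm))]

# Example trade-off set from a weight sweep (illustrative numbers only).
print(fuzzy_best_compromise([(100.0, 9.0), (110.0, 6.0), (130.0, 5.0)]))
```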

  13. An Improved Hierarchical Genetic Algorithm for Sheet Cutting Scheduling with Process Constraints

    Directory of Open Access Journals (Sweden)

    Yunqing Rao

    2013-01-01

    Full Text Available For the first time, an improved hierarchical genetic algorithm for the sheet cutting problem involving n cutting patterns for m non-identical parallel machines with process constraints has been proposed in the integrated cutting stock model. The objective of the cutting scheduling problem is to minimize the weighted completion time. A mathematical model for this problem is presented, an improved hierarchical genetic algorithm (ant colony-hierarchical genetic algorithm) is developed for better solutions, and a hierarchical coding method is used based on the characteristics of the problem. Furthermore, to speed up convergence and resolve local convergence issues, adaptive crossover and mutation probabilities are used in this algorithm. The computational results and comparison prove that the presented approach is quite effective for the considered problem.
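
    A small sketch of the kind of adaptive probabilities mentioned above, in the spirit of the classic Srinivas-Patnaik scheme: fitter individuals receive lower disruption rates. The constants and the exact formula are assumptions, not this paper's specification.

```python
def adaptive_rates(f_parent_best, f_avg, f_max, k1=1.0, k2=0.5, k3=1.0, k4=0.5):
    """Adaptive crossover (pc) and mutation (pm) probabilities. Assumes a
    maximization fitness: individuals above the population average are
    protected (rates shrink toward 0 near f_max), others get full rates."""
    if f_max == f_avg:                      # degenerate population
        return k3, k4
    if f_parent_best >= f_avg:
        scale = (f_max - f_parent_best) / (f_max - f_avg)
        return k1 * scale, k2 * scale
    return k3, k4
```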

  14. An improved hierarchical genetic algorithm for sheet cutting scheduling with process constraints.

    Science.gov (United States)

    Rao, Yunqing; Qi, Dezhong; Li, Jinling

    2013-01-01

    For the first time, an improved hierarchical genetic algorithm for the sheet cutting problem involving n cutting patterns for m non-identical parallel machines with process constraints has been proposed in the integrated cutting stock model. The objective of the cutting scheduling problem is to minimize the weighted completion time. A mathematical model for this problem is presented, an improved hierarchical genetic algorithm (ant colony-hierarchical genetic algorithm) is developed for better solutions, and a hierarchical coding method is used based on the characteristics of the problem. Furthermore, to speed up convergence and resolve local convergence issues, adaptive crossover and mutation probabilities are used in this algorithm. The computational results and comparison prove that the presented approach is quite effective for the considered problem.

  15. An Adaptive Evolutionary Algorithm for Traveling Salesman Problem with Precedence Constraints

    Directory of Open Access Journals (Sweden)

    Jinmo Sung

    2014-01-01

    Full Text Available The traveling salesman problem with precedence constraints is one of the most notorious problems in terms of the efficiency of its solution approaches, even though it has a very wide range of industrial applications. We propose a new evolutionary algorithm to efficiently obtain good solutions by improving the search process. Our genetic operators guarantee the feasibility of solutions over the generations of the population, which significantly improves the computational efficiency even when combined with our flexible adaptive searching strategy. The efficiency of the algorithm is investigated through computational experiments.
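
    One classical way to make a genetic operator feasibility-preserving under precedence constraints is precedence-preserving crossover (PPX), sketched below. The paper's own operators are not specified in the abstract, so this is an illustrative stand-in from the same family.

```python
import random

def ppx(parent1, parent2):
    """Precedence-preserving crossover: repeatedly draw the next job from
    the front of a randomly chosen parent. Any ordering constraint that
    holds in both parents also holds in the child, so feasible parents
    always produce a feasible child."""
    p = [list(parent1), list(parent2)]
    child = []
    while p[0]:
        job = p[random.randint(0, 1)][0]   # front of a random parent
        child.append(job)
        for seq in p:
            seq.remove(job)                # keep both parents in sync
    return child

# Example: both parents respect 0 -> 1 and 2 -> 3, so the child does too.
print(ppx([0, 2, 1, 3], [2, 0, 3, 1]))
```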

  16. An Improved Algorithm Research on the PrefixSpan Based on the Server Session Constraint

    Directory of Open Access Journals (Sweden)

    Cai Hong-Guo

    2017-01-01

    Full Text Available When mining long sequential patterns and discovering knowledge with the PrefixSpan algorithm in Web Usage Mining (WUM), the large number of elements and suffix sequences can cause computational problems such as space explosion. To address this problem more effectively, a server session-based log file format is first proposed. Then an improved PrefixSpan algorithm based on a server session constraint is discussed for mining frequent sequential patterns on a website. Finally, the validity and superiority of the method are demonstrated by the experiments in the paper.

  17. An analysis dictionary learning algorithm under a noisy data model with orthogonality constraint.

    Science.gov (United States)

    Zhang, Ye; Yu, Tenglong; Wang, Wenwu

    2014-01-01

    Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.
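
    To illustrate the role of the orthogonality constraint, here is a minimal numpy sketch: the dictionary update takes a gradient step and is then projected back onto the (semi-)orthogonal matrices via SVD, which rules out the trivial null dictionary. The thresholding update and step size are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def project_orthogonal(W):
    """Nearest (semi-)orthogonal matrix in Frobenius norm, via SVD; this
    projection enforces the orthogonality constraint and excludes the
    trivial null-dictionary solution."""
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ Vt

def adl_step(Omega, Y, lam=0.1, step=0.01):
    """One illustrative alternating update (assumed, not the paper's exact
    scheme): hard-threshold the analysis coefficients computed directly
    from the observed data Y, take a gradient step on Omega, then project
    back onto the constraint set."""
    Z = Omega @ Y
    Z[np.abs(Z) < lam] = 0.0                  # sparse analysis representation
    grad = (Omega @ Y - Z) @ Y.T              # grad of 0.5 * ||Omega Y - Z||_F^2
    return project_orthogonal(Omega - step * grad)
```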

  18. An Analysis Dictionary Learning Algorithm under a Noisy Data Model with Orthogonality Constraint

    Directory of Open Access Journals (Sweden)

    Ye Zhang

    2014-01-01

    Full Text Available Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.

  19. A Hybrid Genetic Algorithm to Minimize Total Tardiness for Unrelated Parallel Machine Scheduling with Precedence Constraints

    Directory of Open Access Journals (Sweden)

    Chunfeng Liu

    2013-01-01

    Full Text Available The paper presents a novel hybrid genetic algorithm (HGA) for a deterministic scheduling problem where multiple jobs with arbitrary precedence constraints are processed on multiple unrelated parallel machines. The objective is to minimize total tardiness, since delays of the jobs may lead to punishment costs or cancellation of orders by clients in many situations. A priority rule-based heuristic algorithm, which schedules a prior job on a prior machine according to the priority rule at each iteration, is suggested and embedded in the HGA to provide initial feasible schedules that can be improved in later stages. Computational experiments show that the proposed HGA performs well with respect to accuracy and efficiency of solutions for small-sized problems and gets better results than the conventional genetic algorithm within the same runtime for large-sized problems.
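
    A hedged sketch of a priority rule-based construction heuristic of the kind described: jobs become ready when their predecessors finish, the highest-priority ready job is chosen (earliest due date here, as an assumed rule), and it is placed on the unrelated machine that completes it earliest. The paper's actual priority rule is not given in the abstract.

```python
def priority_schedule(proc, due, preds):
    """Greedy priority-rule heuristic for unrelated parallel machines.
    proc[j][m] = processing time of job j on machine m;
    preds[j] = list of jobs that must finish before j starts."""
    n, m = len(proc), len(proc[0])
    machine_free = [0.0] * m
    finish = {}
    scheduled = set()
    plan = []
    while len(scheduled) < n:
        ready = [j for j in range(n) if j not in scheduled
                 and all(p in scheduled for p in preds.get(j, []))]
        j = min(ready, key=lambda x: due[x])         # EDD priority (assumed)
        est = max([finish[p] for p in preds.get(j, [])], default=0.0)
        # machine with the earliest completion time for this job
        best_m = min(range(m),
                     key=lambda k: max(machine_free[k], est) + proc[j][k])
        start = max(machine_free[best_m], est)
        finish[j] = start + proc[j][best_m]
        machine_free[best_m] = finish[j]
        scheduled.add(j)
        plan.append((j, best_m, start, finish[j]))
    return plan

def total_tardiness(plan, due):
    return sum(max(0.0, fin - due[j]) for j, _, _, fin in plan)
```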

  20. Multi-AGV path planning with double-path constraints by using an improved genetic algorithm.

    Directory of Open Access Journals (Sweden)

    Zengliang Han

    Full Text Available This paper investigates an improved genetic algorithm for multiple automated guided vehicle (multi-AGV) path planning. The innovations lie in two aspects. First, three-exchange crossover heuristic operators are used to produce better offspring, obtaining more information than the traditional two-exchange crossover heuristic operators in the improved genetic algorithm. Second, double-path constraints, minimizing both the total path distance of all AGVs and the single path distance of each AGV, are imposed, yielding the optimal shortest total path distance. The simulation results show that the total path distance of all AGVs and the longest single AGV path distance are shortened by using the improved genetic algorithm.

  1. A novel artificial immune algorithm for spatial clustering with obstacle constraint and its applications.

    Science.gov (United States)

    Sun, Liping; Luo, Yonglong; Ding, Xintao; Zhang, Ji

    2014-01-01

    An important component of a spatial clustering algorithm is the distance measure between sample points in object space. In this paper, the traditional Euclidean distance measure is replaced with an innovative obstacle distance measure for spatial clustering under obstacle constraints. Firstly, we present a path searching algorithm to approximate the obstacle distance between two points while dealing with obstacles and facilitators. Taking the obstacle distance as the similarity metric, we subsequently propose the artificial immune clustering with obstacle entity (AICOE) algorithm for clustering spatial point data in the presence of obstacles and facilitators. Finally, the paper presents a comparative analysis of the AICOE algorithm and classical clustering algorithms. Our clustering model based on the artificial immune system is also applied to a public facility location problem in order to establish the practical applicability of our approach. By using the clone selection principle and updating the cluster centers based on the elite antibodies, the AICOE algorithm is able to achieve the global optimum and a better clustering effect.
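
    The obstacle distance can be approximated by path search over a discretized map. A minimal Python stand-in using breadth-first search on an occupancy grid follows; the paper's own path-searching algorithm is not detailed in the abstract, so this is only illustrative.

```python
from collections import deque

def obstacle_distance(grid, start, goal):
    """Approximate the obstacle distance between two points by BFS on an
    occupancy grid (1 = obstacle, 0 = free). Returns the shortest
    4-connected path length, or infinity if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == goal:
            return dist[(r, c)]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return float('inf')

# Example: a wall forces the path around the obstacle.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(obstacle_distance(grid, (0, 0), (0, 2)))   # 6, not the Euclidean 2
```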

  2. A Novel Artificial Immune Algorithm for Spatial Clustering with Obstacle Constraint and Its Applications

    Directory of Open Access Journals (Sweden)

    Liping Sun

    2014-01-01

    Full Text Available An important component of a spatial clustering algorithm is the distance measure between sample points in object space. In this paper, the traditional Euclidean distance measure is replaced with an innovative obstacle distance measure for spatial clustering under obstacle constraints. Firstly, we present a path searching algorithm to approximate the obstacle distance between two points while dealing with obstacles and facilitators. Taking the obstacle distance as the similarity metric, we subsequently propose the artificial immune clustering with obstacle entity (AICOE) algorithm for clustering spatial point data in the presence of obstacles and facilitators. Finally, the paper presents a comparative analysis of the AICOE algorithm and classical clustering algorithms. Our clustering model based on the artificial immune system is also applied to a public facility location problem in order to establish the practical applicability of our approach. By using the clone selection principle and updating the cluster centers based on the elite antibodies, the AICOE algorithm is able to achieve the global optimum and a better clustering effect.

  3. constNJ: an algorithm to reconstruct sets of phylogenetic trees satisfying pairwise topological constraints.

    Science.gov (United States)

    Matsen, Frederick A

    2010-06-01

    This article introduces constNJ (constrained neighbor-joining), an algorithm for phylogenetic reconstruction of sets of trees with constrained pairwise rooted subtree-prune-regraft (rSPR) distance. We are motivated by the problem of constructing sets of trees that must fit into a recombination, hybridization, or similar network. Rather than first finding a set of trees that are optimal according to a phylogenetic criterion (e.g., likelihood or parsimony) and then attempting to fit them into a network, constNJ estimates the trees while enforcing specified rSPR distance constraints. The primary input for constNJ is a collection of distance matrices derived from sequence blocks which are assumed to have evolved in a tree-like manner, such as blocks of an alignment which do not contain any recombination breakpoints. The other input is a set of rSPR constraint inequalities for any set of pairs of trees. constNJ is consistent and a strict generalization of the neighbor-joining algorithm; it uses the new notion of maximum agreement partitions (MAPs) to assure that the resulting trees satisfy the given rSPR distance constraints.

  4. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    Science.gov (United States)

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical models.

  5. Shrimp Feed Formulation via Evolutionary Algorithm with Power Heuristics for Handling Constraints

    Directory of Open Access Journals (Sweden)

    Rosshairy Abd. Rahman

    2017-01-01

    Full Text Available Formulating feed for shrimps represents a challenge to farmers and industry partners. Most previous studies selected from only a small number of ingredients due to cost pressures, even though hundreds of potential ingredients could be used in the shrimp feed mix. Even with a limited number of ingredients, the best combination of the most appropriate ingredients is still difficult to obtain due to various constraint requirements, such as nutrition value and cost. This paper proposes a new operator, which we call Power Heuristics, as part of an Evolutionary Algorithm (EA), acting as a constraint handling technique for shrimp feed or diet formulation. The operator is able to choose and discard certain ingredients by utilising a specialized search mechanism. The aim is to achieve the most appropriate combination of ingredients. Power Heuristics are embedded in the EA at the early stage of a semirandom initialization procedure. The resulting combination of ingredients, after fulfilling all the necessary constraints, shows that this operator is useful in discarding inappropriate ingredients when a crucial constraint is violated.

  6. Fast and Easy 3D Reconstruction with the Help of Geometric Constraints and Genetic Algorithms

    Science.gov (United States)

    Annich, Afafe; El Abderrahmani, Abdellatif; Satori, Khalid

    2017-09-01

    The purpose of the work presented in this paper is to describe a new method of 3D reconstruction from one or more uncalibrated images. This method is based on two important concepts: geometric constraints and genetic algorithms (GAs). At first, we discuss the combination of bundle adjustment and GAs that we have proposed in order to improve 3D reconstruction efficiency and success. We use GAs to improve the fitness quality of the initial values used in the optimization problem, which reliably increases the convergence rate. Extracted geometric constraints are used first to obtain an estimated value of the focal length, which helps in the initialization step. Matching homologous points and constraints are used to estimate the 3D model. In fact, our new method offers several advantages: it reduces the number of estimated parameters in the optimization step, decreases the number of images required, saves time, and stabilizes the quality of the 3D results. In the end, without any prior information about the 3D scene, we obtain an accurate calibration of the cameras and a realistic 3D model that strictly respects the geometric constraints defined beforehand. Various data and examples are used to highlight the efficiency and competitiveness of our approach.

  7. A probabilistic multi objective CLSC model with Genetic algorithm-ε_Constraint approach

    Directory of Open Access Journals (Sweden)

    Alireza TaheriMoghadam

    2014-05-01

    Full Text Available In this paper an uncertain multi-objective closed-loop supply chain model is developed. The first objective function is maximizing the total profit. The second objective function is minimizing the use of raw materials; in other words, it is maximizing the amount of remanufacturing and recycling. A genetic algorithm is used for optimization, and the epsilon-constraint method is used to find the Pareto-optimal front. Finally, a numerical example is solved with the proposed approach and the performance of the model is evaluated for different problem sizes. The results show that this approach is effective and useful for managerial decisions.
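
    The epsilon-constraint step can be sketched in a few lines: one objective is optimized while the other is capped at a sweep of epsilon levels, tracing out the Pareto front. The enumeration below stands in for the genetic algorithm's search and is only illustrative.

```python
def epsilon_constraint(candidates, f1, f2, eps_levels):
    """Approximate the Pareto front of (maximize f1, minimize f2): for each
    epsilon, maximize f1 subject to f2(x) <= epsilon. `candidates` stands in
    for the GA population; a real solver would optimize, not enumerate."""
    front = []
    for eps in eps_levels:
        feasible = [x for x in candidates if f2(x) <= eps]
        if feasible:
            best = max(feasible, key=f1)
            front.append((f1(best), f2(best)))
    return sorted(set(front))        # drop duplicate points

# Toy example: profit vs. raw-material use over integer decisions.
xs = range(11)
print(epsilon_constraint(xs, f1=lambda x: 3 * x, f2=lambda x: x * x,
                         eps_levels=[1, 4, 16, 64, 100]))
```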

  8. VLSI Architecture for Configurable and Low-Complexity Design of Hard-Decision Viterbi Decoding Algorithm

    Directory of Open Access Journals (Sweden)

    Rachmad Vidya Wicaksana Putra

    2016-06-01

    Full Text Available Convolutional encoding and data decoding are fundamental processes in convolutional error correction. One of the most popular error correction methods for decoding is the Viterbi algorithm, which is extensively implemented in many digital communication applications. Its VLSI design challenges concern area, speed, power, complexity and configurability. In this research, we propose a VLSI architecture for a configurable and low-complexity design of a hard-decision Viterbi decoding algorithm. The configurable and low-complexity design is achieved by designing a generic VLSI architecture, optimizing each processing element (PE) at the logical operation level and designing a conditional adapter. The proposed design can be configured for any predefined number of trace-backs, simply by changing the trace-back parameter value. Its computational process needs only N + 2 clock cycles of latency, where N is the number of trace-backs. Its configurability has been verified for N = 8, N = 16, N = 32 and N = 64. Furthermore, the proposed design was synthesized and evaluated on Xilinx and Altera FPGA target boards for area consumption and speed performance.
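
    As background for the data path such an architecture implements, here is a minimal software sketch of hard-decision Viterbi decoding for the common rate-1/2, K=3 code with generators (7, 5) octal. This is an assumed example code, not the paper's HDL design; a hardware decoder would keep only N trace-back steps, whereas this sketch keeps full survivor paths for clarity.

```python
def viterbi_decode(received):
    """Hard-decision Viterbi decoder (Hamming branch metric) for the
    rate-1/2, K=3 convolutional code with generators (7, 5) octal."""
    G = [(1, 1, 1), (1, 0, 1)]                    # generator taps
    states = [(s >> 1 & 1, s & 1) for s in range(4)]

    def step(state, bit):
        reg = (bit,) + state                      # shift register contents
        out = tuple(sum(r * g for r, g in zip(reg, gen)) % 2 for gen in G)
        return out, (bit, state[0])               # outputs, next state

    INF = float('inf')
    metrics = [0.0, INF, INF, INF]                # start in the zero state
    paths = [[], [], [], []]
    for i in range(0, len(received) - 1, 2):
        r = received[i:i + 2]
        new_metrics = [INF] * 4
        new_paths = [[]] * 4
        for s, st in enumerate(states):
            if metrics[s] == INF:
                continue
            for b in (0, 1):
                out, nxt = step(st, b)
                ns = states.index(nxt)
                m = metrics[s] + sum(x != y for x, y in zip(out, r))
                if m < new_metrics[ns]:           # add-compare-select
                    new_metrics[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metrics, paths = new_metrics, new_paths
    return paths[metrics.index(min(metrics))]

# Input bits [1, 0, 1] encode to [1,1, 1,0, 0,0]; decoding recovers them.
print(viterbi_decode([1, 1, 1, 0, 0, 0]))
```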

  9. Comparison of Multiobjective Evolutionary Algorithms for Operations Scheduling under Machine Availability Constraints

    Directory of Open Access Journals (Sweden)

    M. Frutos

    2013-01-01

    Full Text Available Many of the problems that arise in production systems can be handled with multiobjective techniques. One such problem is that of scheduling operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective evolutionary algorithms (MOEAs) for this kind of problem. We consider an experimental framework in which we schedule production operations for four real-world job-shop contexts using three algorithms: NSGA-II, SPEA2, and IBEA. Using two performance indexes, hypervolume and R2, we found that SPEA2 and IBEA are the most efficient for the tasks at hand. On the other hand, IBEA seems to be the better choice of tool, since it yields more solutions on the approximate Pareto frontier.
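
    Of the two performance indexes mentioned, the hypervolume has a particularly simple form in two dimensions; a small sketch for a minimization front follows (the R2 index is omitted). All points are assumed to dominate the reference point.

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D minimization front w.r.t. reference point `ref`:
    sweep the points in increasing first objective and sum the newly
    dominated rectangles. Dominated points contribute nothing."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        if f2 < prev_f2:
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Example: a three-point front against reference (4, 4).
print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], ref=(4, 4)))   # 6.0
```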

  10. Constraints on Short, Hard Gamma-Ray Burst Beaming Angles from Gravitational Wave Observations

    Science.gov (United States)

    Williams, D.; Clark, J. A.; Williamson, A. R.; Heng, I. S.

    2018-05-01

    The first detection of a binary neutron star merger, GW170817, and an associated short gamma-ray burst confirmed that neutron star mergers are responsible for at least some of these bursts. The prompt gamma-ray emission from these events is thought to be highly relativistically beamed. We present a method for inferring limits on the extent of this beaming by comparing the number of short gamma-ray bursts (SGRBs) observed electromagnetically with the number of neutron star binary mergers detected in gravitational waves. We demonstrate that an observing run comparable to the expected Advanced LIGO (aLIGO) 2016–2017 run would be capable of placing limits on the beaming angle of approximately θ ∈ (2.88°, 14.15°), given one binary neutron star detection, under the assumption that all mergers produce a gamma-ray burst, and that SGRBs occur at an illustrative rate of R_grb = 10 Gpc^-3 yr^-1. We anticipate that after a year of observations with aLIGO at design sensitivity in 2020, these constraints will improve to θ ∈ (8.10°, 14.95°), under the same efficiency and SGRB rate assumptions.

  11. Real-time algorithms for JET hard X-ray and gamma-ray profile monitor

    International Nuclear Information System (INIS)

    Fernandes, A.; Pereira, R.C.; Valcárcel, D.F.; Alves, D.; Carvalho, B.B.; Sousa, J.; Kiptily, V.; Correia, C.M.B.A.; Gonçalves, B.

    2014-01-01

    Highlights: • Real-time tools and mechanisms are required for data handling and machine control. • A new DAQ system, ATCA based, with embedded FPGAs, was installed at JET. • Different real-time algorithms were developed for FPGAs and MARTe application. • MARTe provides the interface to CODAS and to the JET real-time network. • The new DAQ system is capable to process and deliver data in real-time. - Abstract: The steady state operation with high energy content foreseen for future generation of fusion devices will necessarily demand dedicated real-time tools and mechanisms for data handling and machine control. Consequently, the real-time systems for those devices should be carefully selected and their capabilities previously established. The Joint European Torus (JET) is undertaking an enhancement program, which includes tests of relevant real-time tools for the International Thermonuclear Experimental Reactor (ITER), a key experiment for future fusion devices. In these enhancements a new Data AcQuisition (DAQ) system is included, with real-time processing capabilities, for the JET hard X-ray and gamma-ray profile monitor. The DAQ system is composed of dedicated digitizer modules with embedded Field Programmable Gate Array (FPGA) devices. The interface between the DAQ system, the JET control and data acquisition system and the JET real-time data network is provided by the Multithreaded Application Real-Time executor (MARTe). This paper describes the real-time algorithms, developed for both digitizers’ FPGAs and MARTe application, capable of meeting the DAQ real-time requirements. The new DAQ system, including the embedded real-time features, was commissioned during the 2012 experiments. Results achieved with these real-time algorithms during experiments are presented

  12. Real-time algorithms for JET hard X-ray and gamma-ray profile monitor

    Energy Technology Data Exchange (ETDEWEB)

    Fernandes, A., E-mail: anaf@ipfn.ist.utl.pt [Associação EURATOM/IST, Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa, 1049-001 Lisboa (Portugal); Pereira, R.C.; Valcárcel, D.F.; Alves, D.; Carvalho, B.B.; Sousa, J. [Associação EURATOM/IST, Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa, 1049-001 Lisboa (Portugal); Kiptily, V. [EURATOM/CCFE Fusion Association, Culham Centre for Fusion Energy, Culham Science Centre, Abingdon OX14 3DB (United Kingdom); Correia, C.M.B.A. [Centro de Instrumentação, Dept. de Física, Universidade de Coimbra, 3004-516 Coimbra (Portugal); Gonçalves, B. [Associação EURATOM/IST, Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa, 1049-001 Lisboa (Portugal)

    2014-03-15

    Highlights: • Real-time tools and mechanisms are required for data handling and machine control. • A new DAQ system, ATCA based, with embedded FPGAs, was installed at JET. • Different real-time algorithms were developed for FPGAs and MARTe application. • MARTe provides the interface to CODAS and to the JET real-time network. • The new DAQ system is capable to process and deliver data in real-time. - Abstract: The steady state operation with high energy content foreseen for future generation of fusion devices will necessarily demand dedicated real-time tools and mechanisms for data handling and machine control. Consequently, the real-time systems for those devices should be carefully selected and their capabilities previously established. The Joint European Torus (JET) is undertaking an enhancement program, which includes tests of relevant real-time tools for the International Thermonuclear Experimental Reactor (ITER), a key experiment for future fusion devices. In these enhancements a new Data AcQuisition (DAQ) system is included, with real-time processing capabilities, for the JET hard X-ray and gamma-ray profile monitor. The DAQ system is composed of dedicated digitizer modules with embedded Field Programmable Gate Array (FPGA) devices. The interface between the DAQ system, the JET control and data acquisition system and the JET real-time data network is provided by the Multithreaded Application Real-Time executor (MARTe). This paper describes the real-time algorithms, developed for both digitizers’ FPGAs and MARTe application, capable of meeting the DAQ real-time requirements. The new DAQ system, including the embedded real-time features, was commissioned during the 2012 experiments. Results achieved with these real-time algorithms during experiments are presented.

  13. Model-based minimization algorithm of a supercritical helium loop consumption subject to operational constraints

    Science.gov (United States)

    Bonne, F.; Bonnay, P.; Girard, A.; Hoa, C.; Lacroix, B.; Le Coz, Q.; Nicollet, S.; Poncet, J.-M.; Zani, L.

    2017-12-01

    Supercritical helium loops at 4.2 K are the baseline cooling strategy of tokamak superconducting magnets (JT-60SA, ITER, DEMO, etc.). These loops work with cryogenic circulators that force a supercritical helium flow through the superconducting magnets so that the temperature stays within the working range along their entire length. This paper shows that a supercritical helium loop associated with a saturated liquid helium bath can satisfy the temperature constraints in different ways (playing on the bath temperature and on the supercritical flow), but that only one is optimal from an energy point of view (every watt consumed at 4.2 K requires at least 220 W of electrical power). To find the optimal operational conditions, an algorithm capable of minimizing an objective function (energy consumption at 5 bar, 5 K) subject to constraints has been written. This algorithm works with a supercritical loop model realized with the Simcryogenics [2] library. This article describes the model used and the results of the constrained optimization. It will be seen that changes in the operating point of the magnet temperature (e.g. in case of a change in the plasma configuration) involve large changes in the optimal operating point of the cryodistribution. Recommendations are made to ensure that the energy consumption is kept as low as possible despite the changing operating point. This work is partially supported by the EUROfusion Consortium through the Euratom Research and Training Programme 2014-2018 under Grant 633053.

  14. A Foot-Mounted Inertial Measurement Unit (IMU) Positioning Algorithm Based on Magnetic Constraint.

    Science.gov (United States)

    Wang, Yan; Li, Xin; Zou, Jiaheng

    2018-03-01

    With the development of related applications, indoor positioning techniques have become more and more widely developed. Based on Wi-Fi, Bluetooth low energy (BLE) and geomagnetism, indoor positioning techniques often rely on the physical location of fingerprint information. The focus and difficulty of establishing the fingerprint database lie in obtaining a relatively accurate physical location with as little given information as possible. This paper presents a foot-mounted inertial measurement unit (IMU) positioning algorithm under a loop closure constraint based on magnetic information. It can provide relatively reliable position information without maps or geomagnetic information, and it provides relatively accurate coordinates for the collection of a fingerprint database. In the experiment, the features extracted by the multi-level Fourier transform method proposed in this paper are validated, and the validity of loop closure matching is tested with a RANSAC-based method. Moreover, the loop closure detection results show that the cumulative error of the trajectory processed by the graph optimization algorithm is significantly suppressed, showing good accuracy. The average error of the trajectory under the loop closure constraint is kept below 2.15 m.

  15. A Foot-Mounted Inertial Measurement Unit (IMU Positioning Algorithm Based on Magnetic Constraint

    Directory of Open Access Journals (Sweden)

    Yan Wang

    2018-03-01

    Full Text Available With the development of related applications, indoor positioning techniques have become more and more widely developed. Based on Wi-Fi, Bluetooth low energy (BLE) and geomagnetism, indoor positioning techniques often rely on the physical location of fingerprint information. The focus and difficulty of establishing the fingerprint database lie in obtaining a relatively accurate physical location with as little given information as possible. This paper presents a foot-mounted inertial measurement unit (IMU) positioning algorithm under a loop closure constraint based on magnetic information. It can provide relatively reliable position information without maps or geomagnetic information, and it provides relatively accurate coordinates for the collection of a fingerprint database. In the experiment, the features extracted by the multi-level Fourier transform method proposed in this paper are validated, and the validity of loop closure matching is tested with a RANSAC-based method. Moreover, the loop closure detection results show that the cumulative error of the trajectory processed by the graph optimization algorithm is significantly suppressed, showing good accuracy. The average error of the trajectory under the loop closure constraint is kept below 2.15 m.

  16. Practical Constraint K-Segment Principal Curve Algorithms for Generating Railway GPS Digital Map

    Directory of Open Access Journals (Sweden)

    Dewang Chen

    2013-01-01

    Full Text Available In order to obtain a decent trade-off between low-cost, low-accuracy Global Positioning System (GPS) receivers and the requirements of high-precision digital maps for modern railways, using the concept of constraint K-segment principal curves (CKPCS) and expert knowledge on railways, we propose three practical CKPCS generation algorithms with reduced computational complexity, which are therefore more suitable for engineering applications. The three algorithms are named ALLopt, MPMopt, and DCopt; ALLopt exploits global optimization, while MPMopt and DCopt apply local optimization with different initial solutions. We compare the three practical algorithms according to their performance on average projection error, stability, and fitness for simple and complex simulated trajectories with noisy data. It is found that ALLopt only works well for simple curves and small data sets. The other two algorithms work better for complex curves and large data sets. Moreover, MPMopt runs faster than DCopt, but DCopt can work better for some curves with cross points. The three algorithms are also applied to generating GPS digital maps for two railway GPS data sets measured on the Qinghai-Tibet Railway (QTR). Results similar to those for the synthetic data are obtained. Because the trajectory of a railway is relatively simple and straight, we conclude that MPMopt works best according to comprehensive considerations of computation speed and the quality of the generated CKPCS. MPMopt can be used to obtain a set of key points to represent a large amount of GPS data. Hence, it can greatly reduce data storage requirements and increase positioning speed for real-time digital map applications.

  17. Homotopy Algorithm for Optimal Control Problems with a Second-order State Constraint

    International Nuclear Information System (INIS)

    Hermant, Audrey

    2010-01-01

    This paper deals with optimal control problems with a regular second-order state constraint and a scalar control, satisfying the strengthened Legendre-Clebsch condition. We study the stability of the structure of stationary points. It is shown that under a uniform strict complementarity assumption, boundary arcs are stable under sufficiently smooth perturbations of the data. On the contrary, nonreducible touch points are not stable under perturbations. We show that under some reasonable conditions, either a boundary arc or a second touch point may appear. Those results allow us to design a homotopy algorithm which automatically detects the structure of the trajectory and initializes the shooting parameters associated with boundary arcs and touch points.

  18. Improved Coarray Interpolation Algorithms with Additional Orthogonal Constraint for Cyclostationary Signals

    Directory of Open Access Journals (Sweden)

    Jinyang Song

    2018-01-01

    Full Text Available Many modulated signals exhibit a cyclostationarity property, which can be exploited in direction-of-arrival (DOA) estimation to effectively eliminate interference and noise. In this paper, our aim is to integrate cyclostationarity with the spatial domain and enable the algorithm to estimate more sources than sensors. However, DOA estimation with a sparse array is performed in the coarray domain, and the holes within the coarray limit the usage of the complete coarray information. In order to use the complete coarray information to increase the degrees of freedom (DOFs), sparsity-aware methods and difference coarray interpolation methods have been proposed. In this paper, the coarray interpolation technique is further explored with cyclostationary signals. Besides the difference coarray model and its corresponding Toeplitz completion formulation, we build up a sum coarray model and formulate a Hankel completion problem. In order to further improve the performance of the structured matrix completion, we define the spatial spectrum sampling operations and the derivative (conjugate) correlation subspaces, which can be exploited to construct orthogonal constraints for the autocorrelation vectors in the coarray interpolation problem. Prior knowledge of the source interval can also be incorporated into the problem. Simulation results demonstrate that the additional constraints contribute to a remarkable performance improvement.

  19. Preventive Maintenance Scheduling for Multicogeneration Plants with Production Constraints Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Khaled Alhamad

    2015-01-01

    Full Text Available This paper describes a method developed to schedule the preventive maintenance tasks of the generation and desalination units in separate and linked cogeneration plants, provided that all the necessary maintenance and production constraints are satisfied. The proposed methodology is used to generate two preventive maintenance schedules, one for electricity and the other for the distiller. Two types of crossover operators were adopted, 2-point and 4-point. The objective function of the model is to maximize the available number of operational units in each plant. The results obtained satisfied the problem constraints, with the 4-point crossover producing slightly better solutions than the 2-point one for both electricity and the water distiller. The performance and effectiveness of the genetic algorithm in solving preventive maintenance scheduling are tested on a real system of 21 units for electricity and 21 units for water. The results presented here show great potential for utility applications for effective energy management over a time horizon of 52 weeks. The model presented is an effective decision tool that optimizes the solution of the maintenance scheduling problem for cogeneration plants under maintenance and production constraints.
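
    The 2-point and 4-point crossovers compared above are both instances of k-point crossover; a generic sketch over an integer-coded schedule follows. The actual plant encoding is not given in the abstract, so the gene meaning here (say, the week in which a unit's maintenance starts) is assumed.

```python
import random

def k_point_crossover(parent1, parent2, k):
    """k-point crossover over two equal-length schedule strings: cut both
    parents at k random positions and alternate the source of each
    segment. k=2 and k=4 give the operators compared in the paper."""
    n = len(parent1)
    cuts = sorted(random.sample(range(1, n), k))
    child1, child2 = [], []
    src, prev = 0, 0
    for cut in cuts + [n]:
        a, b = (parent1, parent2) if src == 0 else (parent2, parent1)
        child1.extend(a[prev:cut])
        child2.extend(b[prev:cut])
        src ^= 1                      # alternate the donor parent
        prev = cut
    return child1, child2

# Example: 4-point crossover of two 10-gene schedules.
print(k_point_crossover(list(range(10)), list(range(10, 20)), k=4))
```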

  20. A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Louis A; Mason, John J.

    2018-04-01

    We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint, with a specified noise level, just as the TOA measurements are handled with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns: the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm; it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution and potentially three other low-residual solutions, if they exist. In the lightly overdetermined cases, where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for the baseline (iterative) approach and the proposed approach are given in tables.

  1. Real-Time Attitude Control Algorithm for Fast Tumbling Objects under Torque Constraint

    Science.gov (United States)

    Tsuda, Yuichi; Nakasuka, Shinichi

    This paper describes a new control algorithm for achieving any arbitrary attitude and angular velocity state of a rigid body, even fast and complicated tumbling rotations, under some practical constraints. This technique is expected to be applied to attitude motion synchronization for capturing a non-cooperative, tumbling object in missions such as removal of debris from orbit, servicing broken-down satellites for repair or inspection, rescue of manned vehicles, etc. For this objective, we introduced a novel control algorithm called the Free Motion Path Method (FMPM) in the previous paper, which was formulated as an open-loop controller. The next step of this consecutive work is to derive a closed-loop FMPM controller, and as a preliminary step toward that objective, this paper attempts to derive a conservative state-variable representation of rigid-body dynamics. 6-dimensional conservative state variables are introduced in place of the general angular velocity-attitude angle representation, and how to convert between the two representations is shown in this paper.

  2. A ROBUST METHOD FOR STEREO VISUAL ODOMETRY BASED ON MULTIPLE EUCLIDEAN DISTANCE CONSTRAINT AND RANSAC ALGORITHM

    Directory of Open Access Journals (Sweden)

    Q. Zhou

    2017-07-01

    Full Text Available Visual Odometry (VO) is a critical component of planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps in robotic motion estimation, which largely influences precision and robustness. In this work, we choose Oriented FAST and Rotated BRIEF (ORB) features, considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on the Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find the correspondences between the two images for the space intersection. Then the EDC and RANSAC algorithms are carried out to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when the left image at the next time step is matched against the current left image, EDC and RANSAC are performed iteratively. After these steps, a few mismatched points may remain in some cases, so RANSAC is applied a third time to eliminate the effect of those outliers on the estimation of the ego-motion parameters (interior orientation and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset, and the results demonstrate its high robustness.
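
    A Euclidean distance constraint of this kind exploits the fact that rigid motion preserves pairwise 3-D distances between correctly matched points. A numpy sketch of such a rigidity filter follows, with assumed thresholds; the paper's exact formulation may differ.

```python
import numpy as np

def edc_filter(pts_a, pts_b, tol=0.05, min_support=0.5):
    """Euclidean Distance Constraint filter. pts_a, pts_b: (n, 3) arrays of
    matched 3-D points from two frames. A correct match (p_i -> q_i) should
    satisfy |d(p_i, p_j) - d(q_i, q_j)| < tol for most other matches j;
    matches supported by fewer than `min_support` of the others are
    discarded as outliers. Thresholds here are illustrative assumptions."""
    n = len(pts_a)
    da = np.linalg.norm(pts_a[:, None] - pts_a[None, :], axis=2)
    db = np.linalg.norm(pts_b[:, None] - pts_b[None, :], axis=2)
    consistent = np.abs(da - db) < tol
    support = (consistent.sum(axis=1) - 1) / max(n - 1, 1)  # exclude self
    return np.where(support >= min_support)[0]              # inlier indices
```

    In a full pipeline, the surviving matches would then be passed to RANSAC for the final motion estimation, as the abstract describes.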

  3. Development of Quadratic Programming Algorithm Based on Interior Point Method with Estimation Mechanism of Active Constraints

    Science.gov (United States)

    Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka

    Instability of the calculation process and increase of calculation time caused by the increasing size of continuous optimization problems remain the major issues to be solved in applying the technique to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes the variables getting close to their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be considered an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe some numerical results on the commonly-used benchmark problems called “CUTEr” to show the effectiveness of the proposed method. Furthermore, test results on a large-sized ELD problem (Economic Load Dispatching problem in electric power supply scheduling) are also described as a practical industrial application.

  4. An Evolutionary Algorithm for Feature Subset Selection in Hard Disk Drive Failure Prediction

    Science.gov (United States)

    Bhasin, Harpreet

    2011-01-01

    Hard disk drives are used in everyday life to store critical data. Although they are reliable, failure of a hard disk drive can be catastrophic, especially in applications like medicine, banking, air traffic control systems, missile guidance systems, computer numerical controlled machines, and more. The use of Self-Monitoring, Analysis and…

  5. Music algorithm for imaging of a sound-hard arc in limited-view inverse scattering problem

    Science.gov (United States)

    Park, Won-Kwang

    2017-07-01

    The MUltiple SIgnal Classification (MUSIC) algorithm for the non-iterative imaging of a sound-hard arc in the limited-view inverse scattering problem is considered. In order to discover the mathematical structure of MUSIC, we derive a relationship between MUSIC and an infinite series of Bessel functions of integer order. This structure enables us to examine some properties of MUSIC in the limited-view problem. Numerical simulations are performed to support the identified structure of MUSIC.

  6. [Multispectral Radiation Algorithm Based on Emissivity Model Constraints for True Temperature Measurement].

    Science.gov (United States)

    Liang, Mei; Sun, Xiao-gang; Luan, Mei-sheng

    2015-10-01

    Temperature measurement is one of the important factors for ensuring product quality, reducing production cost and ensuring experimental safety in industrial manufacturing and scientific experiments. Radiation thermometry is the main method for non-contact temperature measurement. The second measurement (SM) method is one of the common methods in multispectral radiation thermometry; however, it cannot be applied to on-line data processing. To solve this problem, a rapid inversion method for multispectral radiation true temperature measurement is proposed, in which constraint conditions on the emissivity model are introduced based on the multispectral brightness temperature model. For a non-blackbody, it follows from the relationship between brightness temperatures at different wavelengths that emissivity is an increasing function on an interval if the brightness temperature is increasing or constant there, and that emissivity satisfies an inequality relating emissivity and wavelength on an interval if the brightness temperature is decreasing there. With these emissivity model constraints built on brightness temperature information, the construction of candidate emissivity values is reduced from many classes to one class, avoiding unnecessary emissivity constructions. Simulation experiments and comparisons for two different temperature points are carried out on five measured targets with five representative variation trends of real emissivity: decreasing monotonically, increasing monotonically, first decreasing then increasing with wavelength, first increasing then decreasing, and fluctuating randomly with wavelength. The simulation results show that, compared with the SM method, for the same target under the same initial temperature and emissivity search range, the processing speed of the proposed algorithm is increased by 19.16%-43.45% with the same precision and the same calculation results.

  7. Using Coevolution Genetic Algorithm with Pareto Principles to Solve Project Scheduling Problem under Duration and Cost Constraints

    Directory of Open Access Journals (Sweden)

    Alexandr Victorovich Budylskiy

    2014-06-01

    Full Text Available This article considers a multicriteria optimization approach that uses a modified genetic algorithm to solve the project-scheduling problem under duration and cost constraints. The work lists the available options for solving this problem and justifies the multicriteria optimization approach. The study describes the Pareto principles used in the modified genetic algorithm, identifies the mathematical model of the project-scheduling problem, and introduces the modified genetic algorithm with its ranking strategies and elitism approaches. The article concludes with an example.

  8. Optimal Power Flow Using Gbest-Guided Cuckoo Search Algorithm with Feedback Control Strategy and Constraint Domination Rule

    Directory of Open Access Journals (Sweden)

    Gonggui Chen

    2017-01-01

    Full Text Available The optimal power flow (OPF) is well known as a significant optimization tool for the secure and economic operation of power systems, and the OPF problem is a complex nonlinear, nondifferentiable programming problem. This paper therefore proposes a Gbest-guided cuckoo search algorithm with a feedback control strategy and constraint domination rule, named the FCGCS algorithm, for solving the OPF problem and obtaining an optimal solution. The FCGCS algorithm is guided by the global best solution to strengthen its exploitation ability. The feedback control strategy dynamically regulates the control parameters according to specific feedback values observed during the simulation process, and the constraint domination rule can efficiently handle inequality constraints on state variables, which is superior to the traditional penalty function method. The performance of the FCGCS algorithm is tested and validated on the IEEE 30-bus and IEEE 57-bus example systems, and simulation results are compared with those of different methods recently reported in the literature. The comparison indicates that the FCGCS algorithm can provide high-quality feasible solutions for different OPF problems.

  9. Hardware Implementation of the Diamond Search Algorithm for Motion Estimation and Object Tracking

    International Nuclear Information System (INIS)

    Hashimaa, S.M.; Mahmoud, I.I.; Elazm, A.A.

    2009-01-01

    Object tracking is a very important task in computer vision. Fast search algorithms have emerged as an important technique for achieving real-time tracking. To enhance the performance of these algorithms, we advocate their hardware implementation. Diamond search block matching motion estimation has been proposed to reduce the complexity of motion estimation. In this paper we select the diamond search (DS) algorithm for implementation on an FPGA, owing to its fundamental role in all fast search patterns. The proposed architecture is simulated and synthesized using the Xilinx and ModelSim software tools. The results agree with the algorithm's implementation in the Matlab environment.

  10. The MarCon Algorithm: A Systematic Market Approach to Distributed Constraint Problems

    National Research Council Canada - National Science Library

    Parunak, H. Van Dyke

    1998-01-01

    .... Each variable integrates this information from the constraints interested in it and provides feedback that enables the constraints to shrink their sets of assignments until they converge on a solution...

  11. Hardness Optimization for Al6061-MWCNT Nanocomposite Prepared by Mechanical Alloying Using Artificial Neural Networks and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Mehrdad Mahdavi Jafari

    2017-06-01

    Full Text Available Among artificial intelligence approaches, artificial neural networks (ANNs) and genetic algorithms (GAs) are widely applied for modifying material properties in engineering science and for large-scale modeling. In this work an artificial neural network (ANN) and a genetic algorithm (GA) were applied to find the optimal conditions for achieving the maximum hardness of Al6061 reinforced by multiwall carbon nanotubes (MWCNTs) through modeling of the nanocomposite's characteristics. After examining different ANN architectures, an optimal model structure, i.e. 6-18-1, was obtained with 1.52% mean absolute error and R2 = 0.987. The proposed structure was used as the fitness function for the genetic algorithm. The GA simulation predicted that the combination of sintering temperature 346 °C, sintering time 0.33 h, compact pressure 284.82 MPa, milling time 19.66 h and vial speed 310.5 rpm gives the optimum hardness (i.e., 87.5 micro-Vickers) in the composite with 0.53 wt% CNT. Sensitivity analysis shows that the sintering time, milling time, compact pressure, vial speed and amount of MWCNT are all significant parameters, with sintering time the most important. Comparison of the predicted values with the experimental data revealed that the GA-ANN model is a powerful method for finding the optimal conditions for preparing Al6061-MWCNT nanocomposites.
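
    The workflow the abstract describes, an ANN surrogate used as the fitness function of a GA, can be sketched compactly. Everything below is illustrative: the training data are a synthetic stand-in for the paper's measurements, and the GA is a generic real-coded one, not the authors' exact implementation.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)

        # Bounds for the six inputs: sintering T (deg C), sintering time (h),
        # compact pressure (MPa), milling time (h), vial speed (rpm), CNT (wt%).
        lo = np.array([300.0, 0.2, 200.0, 5.0, 200.0, 0.1])
        hi = np.array([500.0, 2.0, 400.0, 30.0, 400.0, 2.0])

        # Synthetic stand-in for the measured hardness data: a smooth toy
        # response with an interior optimum.
        X = rng.uniform(lo, hi, size=(200, 6))
        y = 100.0 - (((X - (lo + hi) / 2) / (hi - lo)) ** 2).sum(axis=1) * 40.0

        surrogate = MLPRegressor(hidden_layer_sizes=(18,), max_iter=5000,
                                 random_state=0).fit(X, y)   # 6-18-1 architecture

        def ga_maximize(fitness, lo, hi, pop=60, gens=100, pc=0.8, pm=0.1):
            """Generic real-coded GA: tournament selection, arithmetic
            crossover, uniform mutation.  Maximizes fitness(P) row-wise."""
            P = rng.uniform(lo, hi, size=(pop, len(lo)))
            for _ in range(gens):
                f = fitness(P)
                pairs = rng.integers(0, pop, size=(pop, 2))   # binary tournaments
                parents = P[np.where(f[pairs[:, 0]] > f[pairs[:, 1]],
                                     pairs[:, 0], pairs[:, 1])]
                a = rng.random((pop, 1))
                children = np.where(rng.random((pop, 1)) < pc,
                                    a * parents + (1 - a) * parents[::-1],
                                    parents)
                mutate = rng.random(children.shape) < pm      # uniform mutation
                children[mutate] = rng.uniform(
                    np.broadcast_to(lo, children.shape),
                    np.broadcast_to(hi, children.shape))[mutate]
                P = children
            f = fitness(P)
            return P[np.argmax(f)], f.max()

        best_x, best_hardness = ga_maximize(surrogate.predict, lo, hi)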

  12. Multi-objective optimization in the presence of practical constraints using non-dominated sorting hybrid cuckoo search algorithm

    Directory of Open Access Journals (Sweden)

    M. Balasubbareddy

    2015-12-01

    Full Text Available A novel optimization algorithm is proposed to solve single and multi-objective optimization problems with generation fuel cost, emission, and total power losses as objectives. The proposed method is a hybridization of the conventional cuckoo search algorithm and arithmetic crossover operations. Thus, the non-linear, non-convex objective function can be solved under practical constraints. The effectiveness of the proposed algorithm is analyzed for various cases to illustrate the effect of practical constraints on the objectives' optimization. Two and three objective multi-objective optimization problems are formulated and solved using the proposed non-dominated sorting-based hybrid cuckoo search algorithm. The effectiveness of the proposed method in confining the Pareto front solutions in the solution region is analyzed. The results for single and multi-objective optimization problems are physically interpreted on standard test functions as well as the IEEE-30 bus test system with supporting numerical and graphical results and also validated against existing methods.

  13. Improved adaptive genetic algorithm with sparsity constraint applied to thermal neutron CT reconstruction of two-phase flow

    Science.gov (United States)

    Yan, Mingfei; Hu, Huasi; Otake, Yoshie; Taketani, Atsushi; Wakabayashi, Yasuo; Yanagimachi, Shinzo; Wang, Sheng; Pan, Ziheng; Hu, Guang

    2018-05-01

    Thermal neutron computed tomography (CT) is a useful tool for visualizing two-phase flow owing to its high imaging contrast and the strong penetration of neutrons through metallic tube walls. A novel approach for two-phase flow CT reconstruction based on an improved adaptive genetic algorithm with sparsity constraint (IAGA-SC) is proposed in this paper. In the algorithm, the neighborhood mutation operator is used to ensure the continuity of the reconstructed object, and the adaptive crossover probability Pc and mutation probability Pm are improved to help the adaptive genetic algorithm (AGA) reach the global optimum. The reconstructed results for projection data obtained from Monte Carlo simulation indicate that the comprehensive performance of the IAGA-SC algorithm exceeds that of the adaptive steepest descent-projection onto convex sets (ASD-POCS) algorithm in restoring typical and complex flow regimes. It shows particular advantages in restoring simply connected flow regimes and the shapes of objects. In addition, a CT experiment on two-phase flow phantoms was conducted on an accelerator-driven neutron source to verify the performance of the developed IAGA-SC algorithm.
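
    The abstract does not give the improved formulas for Pc and Pm, so the sketch below shows only the classic Srinivas-Patnaik adaptive rule from which such schemes start; the IAGA-SC improvements would replace these expressions, and all parameter values are illustrative.

        def adaptive_rates(f_mut, f_cross, f_max, f_avg,
                           k1=1.0, k2=0.5, k3=1.0, k4=0.5):
            # Srinivas-Patnaik rule: individuals near the current best get
            # small Pc/Pm (protect good solutions); below-average ones get
            # large Pc/Pm (promote exploration).  f_cross is the better
            # parent's fitness, f_mut the fitness of the mutation candidate.
            span = max(f_max - f_avg, 1e-12)   # guard for a converged population
            pc = k1 * (f_max - f_cross) / span if f_cross >= f_avg else k3
            pm = k2 * (f_max - f_mut) / span if f_mut >= f_avg else k4
            return min(pc, 1.0), min(pm, 1.0)

        print(adaptive_rates(f_mut=0.9, f_cross=0.95, f_max=1.0, f_avg=0.7))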

  14. Precise algorithm to generate random sequential adsorption of hard polygons at saturation

    Science.gov (United States)

    Zhang, G.

    2018-04-01

    Random sequential adsorption (RSA) is a time-dependent packing process, in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe this limit by extrapolation. We have previously found an algorithm to reach this limit using finite computational time for spherical particles and could thus determine the saturation density of spheres with high accuracy. In this paper, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides and obtain results that are consistent with previous, extrapolation-based studies.
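
    For orientation, the basic RSA process is easy to state in code. The naive finite-time sketch below, for equal hard disks in the unit square (our own toy, not the paper's algorithm, which terminates exactly at the saturation limit), shows why plain rejection sampling can only approach saturation by extrapolation.

        import numpy as np

        def rsa_disks(radius=0.03, attempts=50_000, seed=0):
            # Naive RSA: propose uniform random centers, accept when the new
            # disk overlaps no previously accepted disk.
            rng = np.random.default_rng(seed)
            centers = []
            for _ in range(attempts):
                p = rng.random(2)
                if all(np.hypot(*(p - q)) >= 2 * radius for q in centers):
                    centers.append(p)
            # Fraction of the unit square covered by the accepted disks.
            return np.array(centers), len(centers) * np.pi * radius**2

        centers, phi = rsa_disks()
        print(len(centers), phi)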

  15. Multilevel Balancing Domain Decomposition by Constraints Deluxe Algorithms with Adaptive Coarse Spaces for Flow in Porous Media

    KAUST Repository

    Zampini, Stefano; Tu, Xuemin

    2017-01-01

    Multilevel balancing domain decomposition by constraints (BDDC) deluxe algorithms are developed for the saddle point problems arising from mixed formulations of Darcy flow in porous media. In addition to the standard no-net-flux constraints on each face, adaptive primal constraints obtained from the solutions of local generalized eigenvalue problems are included to control the condition number. Special deluxe scaling and local generalized eigenvalue problems are designed in order to make sure that these additional primal variables lie in a benign subspace in which the preconditioned operator is positive definite. The current multilevel theory for BDDC methods for porous media flow is complemented with an efficient algorithm for the computation of the so-called malign part of the solution, which is needed to make sure the rest of the solution can be obtained using the conjugate gradient iterates lying in the benign subspace. We also propose a new technique, based on the Sherman--Morrison formula, that lets us preserve the complexity of the subdomain local solvers. Condition number estimates are provided under certain standard assumptions. Extensive numerical experiments confirm the theoretical estimates; additional numerical results prove the effectiveness of the method with higher order elements and high-contrast problems from real-world applications.

  16. A novel optimization algorithm based on epsilon constraint-RBF neural network for tuning PID controller in decoupled HVAC system

    International Nuclear Information System (INIS)

    Attaran, Seyed Mohammad; Yusof, Rubiyah; Selamat, Hazlina

    2016-01-01

    Highlights: • Decoupling of a heating, ventilation, and air conditioning system is presented. • RBF models were identified by the epsilon constraint method for temperature and humidity. • Control settings were derived from optimization of the decoupled model. • An epsilon constraint-RBF based PID controller was implemented to maintain thermal comfort and minimize energy. • Enhancements of the controller parameters of the HVAC system are desired. - Abstract: The energy efficiency of a heating, ventilating and air conditioning (HVAC) system optimized using a radial basis function neural network (RBFNN) combined with the epsilon constraint (EC) method is reported. The new method adopts the RBFNN algorithm for the HVAC system to estimate residual errors, increase the control signal and reduce the resulting error. The objective of this study is to develop and simulate the EC-RBFNN for a self-tuning PID controller for a decoupled bilinear HVAC system to control the temperature and relative humidity (RH) produced by the system. A case study indicates that the EC-RBFNN algorithm achieves much better accuracy than both the optimized PID controller alone and the PID-RBFNN.

  17. Multilevel Balancing Domain Decomposition by Constraints Deluxe Algorithms with Adaptive Coarse Spaces for Flow in Porous Media

    KAUST Repository

    Zampini, Stefano

    2017-08-03

    Multilevel balancing domain decomposition by constraints (BDDC) deluxe algorithms are developed for the saddle point problems arising from mixed formulations of Darcy flow in porous media. In addition to the standard no-net-flux constraints on each face, adaptive primal constraints obtained from the solutions of local generalized eigenvalue problems are included to control the condition number. Special deluxe scaling and local generalized eigenvalue problems are designed in order to make sure that these additional primal variables lie in a benign subspace in which the preconditioned operator is positive definite. The current multilevel theory for BDDC methods for porous media flow is complemented with an efficient algorithm for the computation of the so-called malign part of the solution, which is needed to make sure the rest of the solution can be obtained using the conjugate gradient iterates lying in the benign subspace. We also propose a new technique, based on the Sherman--Morrison formula, that lets us preserve the complexity of the subdomain local solvers. Condition number estimates are provided under certain standard assumptions. Extensive numerical experiments confirm the theoretical estimates; additional numerical results prove the effectiveness of the method with higher order elements and high-contrast problems from real-world applications.

  18. Evolutionary Hybrid Particle Swarm Optimization Algorithm for Solving NP-Hard No-Wait Flow Shop Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Laxmi A. Bewoor

    2017-10-01

    Full Text Available The no-wait flow shop is a flow shop in which jobs are processed continuously and without interruption through all machines, with no waiting between consecutive machines. Scheduling a no-wait flow shop requires finding an appropriate sequence of jobs, which in turn reduces total processing time. The classical brute-force exploration of schedules for improving resource utilization may become trapped in local optima, and the problem is a typical NP-hard combinatorial optimization problem that calls for near-optimal solutions obtained with heuristic and metaheuristic techniques. This paper proposes an effective hybrid Particle Swarm Optimization (PSO) metaheuristic algorithm for solving no-wait flow shop scheduling problems with the objective of minimizing the total flow time of jobs. The Proposed Hybrid Particle Swarm Optimization (PHPSO) algorithm uses the random key representation rule to convert the continuous position values of particles into a discrete job permutation, as sketched below. The proposed algorithm initializes the population efficiently with the Nawaz-Enscore-Ham (NEH) heuristic and uses an evolutionary search guided by the mechanism of PSO, together with simulated annealing based on a local neighborhood search, to avoid getting stuck in local optima and to balance global exploration and local exploitation. Extensive computational experiments are carried out on Taillard's benchmark suite. Computational results and comparisons with existing metaheuristics show that the PHPSO algorithm outperforms the existing methods in terms of search quality and robustness for the problem considered. The improvement in solution quality is confirmed by statistical tests of significance.
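
    The random-key decoding step mentioned above, together with the no-wait flow-time objective, can be sketched briefly. The code below is our illustration: the decoding is a plain argsort of a particle's position vector, the evaluation uses the standard minimum start-to-start delay formula for no-wait flow shops, and the data are invented.

        import numpy as np

        def positions_to_permutation(position):
            """Random-key decoding: jobs are ordered by ranking the continuous
            position values, smallest key first."""
            return np.argsort(position)

        def nowait_total_flowtime(perm, p):
            """Total flow time of a no-wait flow shop; p[j, k] is the
            processing time of job j on machine k.  Consecutive jobs are
            separated by the classical minimum start-to-start delay that keeps
            every job waiting-free on all machines."""
            m = p.shape[1]
            start, total = 0.0, 0.0
            for idx, j in enumerate(perm):
                if idx > 0:
                    i = perm[idx - 1]
                    start += max(p[i, :k + 1].sum() - p[j, :k].sum()
                                 for k in range(m))
                total += start + p[j].sum()
            return total

        p = np.array([[3, 2, 4], [2, 5, 1], [4, 1, 3]], dtype=float)  # 3 jobs, 3 machines
        perm = positions_to_permutation(np.array([0.7, 0.1, 0.4]))    # -> jobs 1, 2, 0
        print(perm, nowait_total_flowtime(perm, p))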

  19. TH-CD-209-01: A Greedy Reassignment Algorithm for the PBS Minimum Monitor Unit Constraint

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Y; Kooy, H; Craft, D; Depauw, N; Flanz, J; Clasie, B [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States)

    2016-06-15

    Purpose: To investigate a Greedy Reassignment algorithm to mitigate the effects of low-weight spots in proton pencil beam scanning (PBS) treatment plans. Methods: To convert a plan from the treatment planning system (TPS) into a deliverable plan, post-processing methods can be used to adjust the spot maps to meet the minimum MU constraint. Existing methods include deleting low-weight spots (Cut method), or rounding spots with weight above/below half the limit up/down to the limit/zero (Round method). An alternative method called Greedy Reassignment was developed in this work, in which the lowest-weight spot in the field is removed and its weight reassigned equally among its nearest neighbors. The process is repeated with the next lowest-weight spot until all spots in the field are above the MU constraint. The algorithm's performance was evaluated using plans collected from 190 patients (496 fields) treated at our facility. The evaluation criterion was the γ-index pass rate comparing the pre-processed and post-processed dose distributions. A planning metric was further developed to predict the impact of post-processing on treatment plans for various treatment planning, machine, and dose tolerance parameters. Results: For fields with a gamma pass rate of 90±1%, the metric has a standard deviation equal to 18% of the centroid value, showing that the metric and γ-index pass rate are correlated for the Greedy Reassignment algorithm. Using a 3rd-order polynomial fit to the data, the Greedy Reassignment method had a 1.8 times better metric at the 90% pass rate than the other post-processing methods. Conclusion: We showed that the Greedy Reassignment method yields deliverable plans that are closest to the optimized-without-MU-constraint plan from the TPS. The metric developed in this work could help design the minimum MU threshold with the goal of keeping the γ-index pass rate above an acceptable value.
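
    The Methods paragraph describes the procedure precisely enough for a sketch. In the code below, treating spots as 2-D points and reassigning to the four nearest neighbors are our assumptions (the abstract says only "nearest neighbors").

        import numpy as np

        def greedy_reassignment(spots, weights, mu_min, n_neighbors=4):
            """Repeatedly remove the lowest-weight spot and split its weight
            equally among its nearest surviving neighbors, until every
            remaining spot meets the minimum-MU constraint."""
            spots = np.asarray(spots, dtype=float)
            weights = np.asarray(weights, dtype=float)
            alive = np.ones(len(weights), dtype=bool)
            while True:
                idx = np.flatnonzero(alive)
                low = idx[np.argmin(weights[idx])]
                k = min(n_neighbors, len(idx) - 1)
                if weights[low] >= mu_min or k == 0:
                    break
                d = np.linalg.norm(spots[idx] - spots[low], axis=1)
                neighbors = idx[np.argsort(d)[1:k + 1]]   # skip the spot itself
                weights[neighbors] += weights[low] / k
                weights[low] = 0.0
                alive[low] = False
            return spots[alive], weights[alive]

        spots = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 1], [2, 0]], dtype=float)
        w = np.array([5.0, 0.5, 4.0, 0.8, 6.0, 3.0])
        print(greedy_reassignment(spots, w, mu_min=2.0))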

  20. CCM Continuity Constraint Method: A finite-element computational fluid dynamics algorithm for incompressible Navier-Stokes fluid flows

    Energy Technology Data Exchange (ETDEWEB)

    Williams, P. T. [Univ. of Tennessee, Knoxville, TN (United States)

    1993-09-01

    As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible, CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state-variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H1 Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.

  1. A conservative quaternion-based time integration algorithm for rigid body rotations with implicit constraints

    DEFF Research Database (Denmark)

    Nielsen, Martin Bjerre; Krenk, Steen

    2012-01-01

    A conservative time integration algorithm for rigid body rotations is presented in a purely algebraic form in terms of the four quaternions components and the four conjugate momentum variables via Hamilton’s equations. The introduction of an extended mass matrix leads to a symmetric set of eight...

  2. SU-F-T-340: Direct Editing of Dose Volume Histograms: Algorithms and a Unified Convex Formulation for Treatment Planning with Dose Constraints

    Energy Technology Data Exchange (ETDEWEB)

    Ungun, B [Stanford University, Stanford, CA (United States); Stanford University School of Medicine, Stanford, CA (United States); Fu, A; Xing, L [Stanford University School of Medicine, Stanford, CA (United States); Boyd, S [Stanford University, Stanford, CA (United States)

    2016-06-15

    Purpose: To develop a procedure for including dose constraints in convex programming-based approaches to treatment planning, and to support dynamic modification of such constraints during planning. Methods: We present a mathematical approach that allows mean dose, maximum dose, minimum dose and dose volume (i.e., percentile) constraints to be appended to any convex formulation of an inverse planning problem. The first three constraint types are convex and readily incorporated. Dose volume constraints are not convex, however, so we introduce a convex restriction that is related to CVaR-based approaches previously proposed in the literature. To compensate for the conservatism of this restriction, we propose a new two-pass algorithm that solves the restricted problem on a first pass and uses this solution to form exact constraints on a second pass. In another variant, we introduce slack variables for each dose constraint to prevent the problem from becoming infeasible when the user specifies an incompatible set of constraints. We implement the proposed methods in Python using the convex programming package cvxpy in conjunction with the open source convex solvers SCS and ECOS. Results: We show, for several cases taken from the clinic, that our proposed method meets specified constraints (often with margin) when they are feasible. Constraints are met exactly when we use the two-pass method, and infeasible constraints are replaced with the nearest feasible constraint when slacks are used. Finally, we introduce ConRad, a Python-embedded free software package for convex radiation therapy planning. ConRad implements the methods described above and offers a simple interface for specifying prescriptions and dose constraints. Conclusion: This work demonstrates the feasibility of using modifiable dose constraints in a convex formulation, making it practical to guide the treatment planning process with interactively specified dose constraints. This work was supported by the
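
    Since the abstract names cvxpy, ECOS and SCS explicitly, the constraint types it lists can be illustrated directly. The problem data below are a random toy stand-in (sizes, matrices and limits are ours and do not reflect ConRad's API), and the last constraint is a CVaR-style convex restriction of a dose-volume constraint of the kind the Methods describe.

        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)
        n_beamlets, n_target, n_oar = 40, 100, 80
        A_t = rng.uniform(0.5, 1.0, (n_target, n_beamlets))  # target dose-influence
        A_o = rng.uniform(0.0, 0.3, (n_oar, n_beamlets))     # OAR dose-influence
        Rx = 60.0                                            # prescription dose

        x = cp.Variable(n_beamlets, nonneg=True)             # beamlet weights
        d_t, d_o = A_t @ x, A_o @ x

        alpha = cp.Variable()                                # CVaR auxiliary variable
        constraints = [
            d_t >= 0.90 * Rx,                                # minimum target dose
            d_t <= 1.15 * Rx,                                # maximum target dose
            cp.sum(d_o) / n_oar <= 20.0,                     # mean OAR dose
            # Convex (CVaR-style) restriction of the dose-volume constraint
            # "at most 30% of OAR voxels receive more than 25 Gy":
            alpha + cp.sum(cp.pos(d_o - alpha)) / (0.3 * n_oar) <= 25.0,
        ]
        prob = cp.Problem(cp.Minimize(cp.sum_squares(d_t - Rx)), constraints)
        prob.solve(solver=cp.ECOS)
        print(prob.status, prob.value)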

  3. SU-F-T-340: Direct Editing of Dose Volume Histograms: Algorithms and a Unified Convex Formulation for Treatment Planning with Dose Constraints

    International Nuclear Information System (INIS)

    Ungun, B; Fu, A; Xing, L; Boyd, S

    2016-01-01

    Purpose: To develop a procedure for including dose constraints in convex programming-based approaches to treatment planning, and to support dynamic modification of such constraints during planning. Methods: We present a mathematical approach that allows mean dose, maximum dose, minimum dose and dose volume (i.e., percentile) constraints to be appended to any convex formulation of an inverse planning problem. The first three constraint types are convex and readily incorporated. Dose volume constraints are not convex, however, so we introduce a convex restriction that is related to CVaR-based approaches previously proposed in the literature. To compensate for the conservatism of this restriction, we propose a new two-pass algorithm that solves the restricted problem on a first pass and uses this solution to form exact constraints on a second pass. In another variant, we introduce slack variables for each dose constraint to prevent the problem from becoming infeasible when the user specifies an incompatible set of constraints. We implement the proposed methods in Python using the convex programming package cvxpy in conjunction with the open source convex solvers SCS and ECOS. Results: We show, for several cases taken from the clinic, that our proposed method meets specified constraints (often with margin) when they are feasible. Constraints are met exactly when we use the two-pass method, and infeasible constraints are replaced with the nearest feasible constraint when slacks are used. Finally, we introduce ConRad, a Python-embedded free software package for convex radiation therapy planning. ConRad implements the methods described above and offers a simple interface for specifying prescriptions and dose constraints. Conclusion: This work demonstrates the feasibility of using modifiable dose constraints in a convex formulation, making it practical to guide the treatment planning process with interactively specified dose constraints. This work was supported by the

  4. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Shengliang Zong

    2017-01-01

    Full Text Available We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval between two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically nonincreasing and that the repair time is exponentially distributed with a geometrically increasing mean. Our objective is to minimize the expected average cost under an availability requirement. Based on this average cost function, we propose a genetic algorithm to locate the optimal replacement policy N that minimizes the average cost rate. The results show that the GA is effective and efficient in finding optimal solutions. The availability of equipment has a significant effect on the optimal replacement policy. Many practical systems fit the model developed in this paper.

  5. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

    OpenAIRE

    Zong, Shengliang; Chai, Guorong; Su, Yana

    2017-01-01

    We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval of two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically nonincreasing and the repair time is exponentially distributed with a geometric increasing mean. Our objective is to minimize the expected average cost under an availability requi...

  6. Application of constraint satisfaction algorithms for conditioning and packing activated control rod assemblies in MOSAIK® casks. Application of constraint satisfaction algorithms for conditioning and packaging 160 control rod assemblies

    International Nuclear Information System (INIS)

    Harding, P.J.

    2017-01-01

    In the wake of the decommissioning of numerous nuclear power plants in Germany, techniques to reduce the number of costly waste casks or containers are sought after. The large bandwidth of limits (dose rate, mass, individual nuclide activities, chemical composition...) the waste packages have to comply with, for both interim storage facilities and the Konrad repository, renders the manual planning of packaging concepts prohibitive. Nevertheless, in the past, packaging planning has been performed this way, albeit on the basis of several facilitating assumptions. Surprisingly, to the best of our knowledge, the automated computer-assisted generation of packaging plans for radioactive waste has not been demonstrated previously. In this talk we demonstrate how the conditioning and packing of 160 control rod assemblies was optimised using constraint satisfaction algorithms. These algorithms can be executed by a computer in a few minutes, thus considerably accelerating the generation of packaging plans, while optimising the utilisation of the waste casks and containers with respect to mass, activity, dose rate, etc. This automated and computer-assisted procedure took into account complex logistical boundary conditions present during decommissioning, such as space requirements, the sequence of the waste and the (lack of) availability of suitable waste casks. In addition, packaging concepts based on several scenarios (cask availability, space requirements,...) were easily and automatically generated once the packaging rules had been coded. We demonstrate the successful application of these algorithms to a real packaging campaign of control rod assemblies of a boiling water reactor, for which excellent results were achieved. We also present an outlook on a much larger scale project, in which the logistics and storage of radioactive waste packages is mathematically optimised. Finally, we give prospects on applying these techniques to other, similar logistical problems currently
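
    The campaign's actual rule set is not public, but the core assignment problem can be sketched as a tiny backtracking constraint solver. All limits and item data below are invented for illustration; a real campaign adds dose rate, nuclide vectors, geometry and sequencing constraints.

        def pack_assemblies(items, n_casks, mass_limit, activity_limit):
            """Assign each assembly (mass, activity) to one of n_casks without
            exceeding per-cask mass and activity limits, by backtracking."""
            mass = [0.0] * n_casks
            act = [0.0] * n_casks
            assignment = [None] * len(items)

            def backtrack(i):
                if i == len(items):
                    return True
                m, a = items[i]
                for c in range(n_casks):
                    if mass[c] + m <= mass_limit and act[c] + a <= activity_limit:
                        mass[c] += m; act[c] += a; assignment[i] = c
                        if backtrack(i + 1):
                            return True
                        mass[c] -= m; act[c] -= a; assignment[i] = None
                return False

            return assignment if backtrack(0) else None

        # Example: 8 assemblies into 3 casks (mass in kg, activity in arbitrary units).
        items = [(120, 3.0), (100, 2.5), (90, 4.0), (110, 1.5),
                 (95, 2.0), (105, 3.5), (80, 1.0), (115, 2.8)]
        print(pack_assemblies(items, n_casks=3, mass_limit=300, activity_limit=8.0))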

  7. Linear time algorithms to construct populations fitting multiple constraint distributions at genomic scales.

    Science.gov (United States)

    Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi

    2017-10-09

    Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes. For example, in plant populations, the outcome of breeding operations can be studied using simulations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear-time Simulation using Best-fit Algorithms (SimBA) for two classes of problems, each co-fitting two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is here demonstrated to accurately fit the target distributions, allowing efficient large-scale simulations. SimBA-hap's accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes; we evaluate both a linear-time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available on http://researcher.watson.ibm.com/project/5669.

  8. Constraints on silicates formation in the Si-Al-Fe system: Application to hard deposits in steam generators of PWR nuclear reactors

    Science.gov (United States)

    Berger, Gilles; Million-Picallion, Lisa; Lefevre, Grégory; Delaunay, Sophie

    2015-04-01

    Introduction: The hydrothermal crystallization of silicate phases in the Si-Al-Fe system may create industrial constraints encountered in the nuclear industry in at least two contexts: the geological repository for nuclear wastes and the formation of hard sludges in the steam generators of PWR nuclear plants. In the first situation, the chemical reactions between the Fe-canister and the surrounding clays have been extensively studied in laboratory [1-7] and pilot experiments [8]. These studies demonstrated that the high reactivity of metallic iron leads to the formation of berthierine-like Fe-silicates over a wide range of temperatures. By contrast, the formation of deposits in the steam generators of PWR plants, called hard sludges, is a newer and less studied issue which can affect reactor performance. Experiments: We present here a preliminary set of experiments reproducing the formation of hard sludges under conditions representative of the steam generator of a PWR power plant: 275°C, diluted solutions maintained at low potential by hydrazine addition and at alkaline pH by low concentrations of amines and ammonia. Magnetite, a corrosion by-product of the secondary circuit, is the source of iron, while aqueous Si and Al, the major impurities in this system, are supplied either as trace elements in the circulating solution or by addition of amorphous silica and alumina when considering confined zones. The fluid chemistry is monitored by sampling aliquots of the solution. Eh and pH are continuously measured by hydrothermal Cormet© electrodes implanted in a titanium hydrothermal reactor. The transformation, or not, of the solid fraction was examined post-mortem. These experiments evidenced the role of Al colloids as precursors of cements composed of kaolinite and boehmite, and the passivation of amorphous silica (rendering it unreactive), likely by sorption of aqueous iron. However, no Fe-bearing phase was formed, by contrast to many published studies on the Fe

  9. Improved Iterative Hard- and Soft-Reliability Based Majority-Logic Decoding Algorithms for Non-Binary Low-Density Parity-Check Codes

    Science.gov (United States)

    Xiong, Chenrong; Yan, Zhiyuan

    2014-10-01

    Non-binary low-density parity-check (LDPC) codes have some advantages over their binary counterparts, but unfortunately their decoding complexity is a significant challenge. The iterative hard- and soft-reliability based majority-logic decoding algorithms are attractive for non-binary LDPC codes, since they involve only finite field additions and multiplications as well as integer operations and hence have significantly lower complexity than other algorithms. In this paper, we propose two improvements to the majority-logic decoding algorithms. Instead of the accumulation of reliability information in the existing majority-logic decoding algorithms, our first improvement is a new reliability information update. The new update not only results in better error performance and fewer iterations on average, but also further reduces computational complexity. Since existing majority-logic decoding algorithms tend to have a high error floor for codes whose parity check matrices have low column weights, our second improvement is a re-selection scheme, which leads to much lower error floors, at the expense of more finite field operations and integer operations, by identifying periodic points, re-selecting intermediate hard decisions, and changing reliability information.

  10. Emergency Diesel Generation System Surveillance Test Policy Optimization Through Genetic Algorithms Using Non-Periodic Intervention Frequencies and Seasonal Constraints

    International Nuclear Information System (INIS)

    Lapa, Celso M.F.; Pereira, Claudio M.N.A.; Frutuoso e Melo, P.F.

    2002-01-01

    Nuclear standby safety systems must frequently be submitted to periodic surveillance tests, mainly to detect, as soon as possible, the occurrence of unrevealed failure states. Such interventions may, however, affect overall system availability due to component outages; moreover, as the components are demanded, deterioration by aging may occur, again penalizing system performance. For these reasons, planning a good surveillance test policy implies a trade-off between the gains and the overheads of the test interventions. In order to maximize a system's average availability during a given period of time, a non-periodic surveillance test optimization methodology based on genetic algorithms (GA) has recently been developed. Allowing non-periodic tests makes the solution space much more flexible, so schedules can be better adjusted, providing gains in overall system average availability compared with those obtained by an optimized periodic test scheme. The optimization problem becomes, however, more complex; hence the use of a powerful optimization technique, such as GAs, is required. Particular features of certain systems can make it advisable to introduce other specific constraints into the optimization problem. The Emergency Diesel Generation System (EDGS) of a Nuclear Power Plant (NPP) is a good example for demonstrating the introduction of seasonal constraints, since this system is responsible for power supply during an external blackout. It is therefore desirable to keep the system availability as high as possible during periods of high blackout probability. Previous applications have demonstrated the robustness and effectiveness of the methodology, but no seasonal constraints have ever been imposed. This work investigates the application of the methodology to the surveillance test policy optimization of the EDGS of the Brazilian NPP Angra-II, considering the blackout probability

  11. Using heuristic algorithms for capacity leasing and task allocation issues in telecommunication networks under fuzzy quality of service constraints

    Science.gov (United States)

    Huseyin Turan, Hasan; Kasap, Nihat; Savran, Huseyin

    2014-03-01

    Nowadays, every firm uses telecommunication networks in different amounts and ways to complete its daily operations. In this article, we investigate the optimisation problem that a firm faces when acquiring network capacity from a market in which several network providers offer different pricing and quality of service (QoS) schemes. The QoS level guaranteed by network providers and the minimum quality level of service needed to accomplish the operations are denoted as fuzzy numbers in order to handle the non-deterministic nature of the telecommunication network environment. Interestingly, the mathematical formulation of the aforementioned problem leads to a special case of the well-known two-dimensional bin packing problem, which is famous for its computational complexity. We propose two different heuristic solution procedures capable of solving the resulting nonlinear mixed-integer programming model with fuzzy constraints. In conclusion, the efficiency of each algorithm is tested on several test instances to demonstrate the applicability of the methodology.

  12. A Hybrid Tabu Search Algorithm for a Real-World Open Vehicle Routing Problem Involving Fuel Consumption Constraints

    Directory of Open Access Journals (Sweden)

    Yunyun Niu

    2018-01-01

    Full Text Available Outsourcing logistics operations to third-party logistics providers has attracted more attention in the past several years. However, very few papers have analyzed fuel consumption models in the context of outsourced logistics. This problem involves more complexity than the traditional open vehicle routing problem (OVRP), because the calculation of fuel emissions depends on many factors, such as vehicle speed, road angle, total load, engine friction, and engine displacement. Our paper proposes a green open vehicle routing problem (GOVRP) model with fuel consumption constraints for outsourced logistics operations, together with a hybrid tabu search algorithm for this problem. Experiments were conducted on instances based on realistic road data from Beijing, China, considering that outsourced logistics plays an increasingly important role in China's freight transportation. Open routes were compared with closed routes through statistical analysis of the cost components. Compared with closed routes, open routes reduce the total cost by 18.5%, with the fuel emissions cost down by nearly 29.1% and the driver cost down by 13.8%. The effect of different vehicle types was also studied: over all the 60- and 120-node instances, the mean total cost is lowest when light-duty vehicles are used.

  13. On the use of genetic algorithm to optimize industrial assets lifecycle management under safety and budget constraints

    Energy Technology Data Exchange (ETDEWEB)

    Lonchampt, J.; Fessart, K. [EDF R and D, Departement MRI, 6, quai Watier, 78401 Chatou cedex (France)

    2013-07-01

    The purpose of this paper is to describe a method and tool dedicated to optimizing investment planning for industrial assets. These investments may be preventive maintenance tasks, asset enhancements or logistic investments such as spare parts purchases. The three methodological points to investigate in such an issue are: 1. the measure of the profitability of a portfolio of investments; 2. the selection and planning of an optimal set of investments; 3. the measure of the risk of a portfolio of investments. The measure of the profitability of a set of investments in the IPOP tool is synthesised in the Net Present Value (NPV) indicator. The NPV is the sum of the differences of discounted cash flows (direct costs, forced outages...) between the situations with and without a given investment. These cash flows are calculated through a pseudo-Markov reliability model representing the components of the industrial asset and the spare parts inventories independently. The component model has been widely discussed over the years, but the spare parts model is a new one based on some approximations that will be discussed. This model, referred to as the NPV function, takes an investment portfolio as input and gives its NPV. The second issue is to optimize the NPV. If all investments were independent, this optimization would be an easy calculation; unfortunately there are two sources of dependency. The first is introduced by the spare parts model: while components are indeed independent in their reliability model, the fact that several components use the same inventory induces a dependency. The second dependency comes from economic, technical or logistic constraints, such as a global maintenance budget limit or a safety requirement limiting the residual risk of failure of a component or group of components, making the aggregation of individual optima not necessarily feasible. The algorithm used to solve such a difficult optimization problem is a genetic algorithm. After a description
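
    The NPV indicator described here is simple to state in code. A minimal sketch with invented cash flows and discount rate follows; cash flows are taken as positive costs, so a positive NPV means the investment reduces the discounted cost.

        def portfolio_npv(costs_with, costs_without, rate):
            # NPV = sum over years of the discounted difference in cash flows
            # (here: costs) between the situations without and with the investment.
            return sum((c_wo - c_w) / (1.0 + rate) ** t
                       for t, (c_w, c_wo) in enumerate(
                           zip(costs_with, costs_without), start=1))

        # Year-by-year costs: the investment costs 5 up front but avoids forced
        # outages worth 2 per year afterwards (all figures invented).
        print(portfolio_npv([5.0, 1.0, 1.0, 1.0], [0.0, 3.0, 3.0, 3.0], rate=0.07))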

  14. On the use of genetic algorithm to optimize industrial assets lifecycle management under safety and budget constraints

    International Nuclear Information System (INIS)

    Lonchampt, J.; Fessart, K.

    2013-01-01

    The purpose of this paper is to describe a method and tool dedicated to optimizing investment planning for industrial assets. These investments may be preventive maintenance tasks, asset enhancements or logistic investments such as spare parts purchases. The three methodological points to investigate in such an issue are: 1. the measure of the profitability of a portfolio of investments; 2. the selection and planning of an optimal set of investments; 3. the measure of the risk of a portfolio of investments. The measure of the profitability of a set of investments in the IPOP tool is synthesised in the Net Present Value (NPV) indicator. The NPV is the sum of the differences of discounted cash flows (direct costs, forced outages...) between the situations with and without a given investment. These cash flows are calculated through a pseudo-Markov reliability model representing the components of the industrial asset and the spare parts inventories independently. The component model has been widely discussed over the years, but the spare parts model is a new one based on some approximations that will be discussed. This model, referred to as the NPV function, takes an investment portfolio as input and gives its NPV. The second issue is to optimize the NPV. If all investments were independent, this optimization would be an easy calculation; unfortunately there are two sources of dependency. The first is introduced by the spare parts model: while components are indeed independent in their reliability model, the fact that several components use the same inventory induces a dependency. The second dependency comes from economic, technical or logistic constraints, such as a global maintenance budget limit or a safety requirement limiting the residual risk of failure of a component or group of components, making the aggregation of individual optima not necessarily feasible. The algorithm used to solve such a difficult optimization problem is a genetic algorithm. After a description

  15. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  16. Efficient constraint-based Sequential Pattern Mining (SPM) algorithm to understand customers' buying behaviour from a time stamp-based sequence dataset

    Directory of Open Access Journals (Sweden)

    Niti Ashish Kumar Desai

    2015-12-01

    Full Text Available Business strategies are formulated based on an understanding of customer needs. This requires developing a strategy to understand customer behaviour and buying patterns, both current and future: first, how an organization currently understands customer needs, and second, predicting future trends to drive growth. This article focuses on the purchase trends of customers, where the timing of a purchase is more important than the association of items to be purchased, and these trends can be found with Sequential Pattern Mining (SPM) methods. Conventional SPM algorithms work purely on frequency, identifying the patterns that occur most often, but suffer from challenges such as the generation of a huge number of uninteresting patterns, the absence of patterns the user is actually interested in, the rare item problem, etc. The article attempts a solution through the development of an SPM algorithm based on various constraints, namely Gap, Compactness, Item, Recency, Profitability and Length, along with the Frequency constraint. The six additional constraints ensure that all patterns are recently active (Recency), active over a certain time span (Compactness), profitable, and indicative of the next time window for purchase (Length-Item-Gap). The article also throws light on how the proposed constraint-based PrefixSpan algorithm helps to understand the buying behaviour of customers, which is in a formative stage.
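
    Two of the listed constraints, Gap and Recency, are easy to illustrate on a single timestamped sequence. The sketch below uses a greedy first-match scan; a full miner such as the proposed constraint-based PrefixSpan explores alternative embeddings and aggregates support across sequences. Names and data are ours.

        from datetime import datetime, timedelta

        def occurs_with_constraints(pattern, sequence, max_gap, recency_cutoff):
            """Check one timestamped sequence for a pattern under a Gap
            constraint (consecutive matched events no further apart than
            max_gap) and a Recency constraint (the last matched event must be
            no older than recency_cutoff).
            sequence: list of (timestamp, item) sorted by time."""
            i, last_t = 0, None
            for t, item in sequence:
                if item == pattern[i] and (last_t is None or t - last_t <= max_gap):
                    last_t, i = t, i + 1
                    if i == len(pattern):
                        return last_t >= recency_cutoff
            return False

        seq = [(datetime(2015, 1, 5), "milk"), (datetime(2015, 1, 9), "bread"),
               (datetime(2015, 2, 1), "milk")]
        print(occurs_with_constraints(["milk", "bread"], seq,
                                      max_gap=timedelta(days=7),
                                      recency_cutoff=datetime(2015, 1, 1)))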

  17. Solving binary-state multi-objective reliability redundancy allocation series-parallel problem using efficient epsilon-constraint, multi-start partial bound enumeration algorithm, and DEA

    International Nuclear Information System (INIS)

    Khalili-Damghani, Kaveh; Amiri, Maghsoud

    2012-01-01

    In this paper, a procedure based on an efficient epsilon-constraint method and data envelopment analysis (DEA) is proposed for solving the binary-state multi-objective reliability redundancy allocation series-parallel problem (MORAP). In the first module, a set of qualified non-dominated solutions on the Pareto front of the binary-state MORAP is generated using an efficient epsilon-constraint method. To test the quality of the non-dominated solutions generated in this module, a multi-start partial bound enumeration algorithm is also proposed for MORAP. The performance of both procedures is compared using different metrics on a well-known benchmark instance. The statistical analysis shows that the proposed efficient epsilon-constraint method not only outperforms the multi-start partial bound enumeration algorithm but also improves the known upper bound of the benchmark instance. In the second module, a DEA model is applied to prune the non-dominated solutions generated by the efficient epsilon-constraint method. This reduces the number of non-dominated solutions in a systematic manner and eases the decision-making process for practical implementations. - Highlights: ► A procedure based on an efficient epsilon-constraint method and DEA was proposed for solving MORAP. ► The performance of the proposed procedure was compared with a multi-start PBEA. ► Methods were statistically compared using multi-objective metrics.
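
    The epsilon-constraint scalarization at the heart of the first module can be shown on a toy series-parallel system: for each bound on one objective (here, cost), optimize the other (reliability). All numbers below are invented, and brute-force enumeration stands in for the paper's efficient method.

        from itertools import product

        # Toy series system of 3 subsystems; each holds 1..4 redundant components.
        # Component reliability r and unit cost c per subsystem (values invented).
        r = [0.80, 0.85, 0.90]
        c = [3.0, 5.0, 4.0]

        def reliability(n):
            # Parallel redundancy inside each series stage.
            out = 1.0
            for ri, ni in zip(r, n):
                out *= 1.0 - (1.0 - ri) ** ni
            return out

        def cost(n):
            return sum(ci * ni for ci, ni in zip(c, n))

        def epsilon_constraint_front(eps_grid):
            # For each cost budget eps, keep the most reliable design with
            # cost <= eps: the epsilon-constraint scalarization.
            designs = list(product(range(1, 5), repeat=3))
            front = []
            for eps in eps_grid:
                feas = [n for n in designs if cost(n) <= eps]
                if feas:
                    best = max(feas, key=reliability)
                    front.append((eps, best, reliability(best), cost(best)))
            return front

        for eps, n, rel, cst in epsilon_constraint_front([15, 20, 25, 30, 40]):
            print(f"budget {eps:>3}: design {n}, R = {rel:.4f}, cost = {cst}")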

  18. Work Hard / Play Hard

    OpenAIRE

    Burrows, J.; Johnson, V.; Henckel, D.

    2016-01-01

    Work Hard / Play Hard was a participatory performance/workshop or CPD experience hosted by interdisciplinary arts atelier WeAreCodeX, in association with AntiUniversity.org. As a socially/economically engaged arts practice, Work Hard / Play Hard challenged employees/players to get playful, or go to work. 'The game changes you, you never change the game'. Employee PLAYER A 'The faster the better.' Employer PLAYER B

  19. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  20. Correlation-based decimation in constraint satisfaction problems

    International Nuclear Information System (INIS)

    Higuchi, Saburo; Mezard, Marc

    2010-01-01

    We study hard constraint satisfaction problems using decimation algorithms based on mean-field approximations. The message-passing approach is used to estimate, besides the usual one-variable marginals, the pair correlation functions. The identification of strongly correlated pairs allows the use of a new decimation procedure, in which the relative orientation of a pair of variables is fixed. We apply this novel decimation to locked occupation problems, a class of hard constraint satisfaction problems on which the usual belief-propagation-guided decimation performs poorly. The pair-decimation approach provides a significant improvement.

  1. Synthesis of Greedy Algorithms Using Dominance Relations

    Science.gov (United States)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
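
    Activity selection, one of the paper's examples, shows how a dominance relation certifies a greedy choice: among compatible activities, the one finishing earliest dominates the others because it rules out no future activity they would permit. A minimal sketch (ours, not the paper's synthesis framework):

        def select_activities(intervals):
            """Classic greedy activity selection: sort by finish time and
            repeatedly take the earliest-finishing activity compatible with
            the last one chosen."""
            chosen, last_finish = [], float("-inf")
            for start, finish in sorted(intervals, key=lambda iv: iv[1]):
                if start >= last_finish:
                    chosen.append((start, finish))
                    last_finish = finish
            return chosen

        print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7),
                                 (3, 9), (5, 9), (6, 10), (8, 11)]))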

  2. BDDC Algorithms with deluxe scaling and adaptive selection of primal constraints for Raviart-Thomas vector fields

    KAUST Repository

    Oh, Duk-Soon; Widlund, Olof B.; Zampini, Stefano; Dohrmann, Clark R.

    2017-01-01

    A BDDC domain decomposition preconditioner is defined by a coarse component, expressed in terms of primal constraints, a weighted average across the interface between the subdomains, and local components given in terms of solvers of local subdomain problems. BDDC methods for vector field problems discretized with Raviart-Thomas finite elements are introduced. The methods are based on a deluxe type of weighted average and an adaptive selection of primal constraints developed to deal with coefficients with high contrast even inside individual subdomains. For problems with very many subdomains, a third level of the preconditioner is introduced.

  3. BDDC Algorithms with deluxe scaling and adaptive selection of primal constraints for Raviart-Thomas vector fields

    KAUST Repository

    Oh, Duk-Soon

    2017-06-13

    A BDDC domain decomposition preconditioner is defined by a coarse component, expressed in terms of primal constraints, a weighted average across the interface between the subdomains, and local components given in terms of solvers of local subdomain problems. BDDC methods for vector field problems discretized with Raviart-Thomas finite elements are introduced. The methods are based on a deluxe type of weighted average and an adaptive selection of primal constraints developed to deal with coefficients with high contrast even inside individual subdomains. For problems with very many subdomains, a third level of the preconditioner is introduced.

  4. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  5. Topology Control Algorithms for Spacecraft Formation Flying Networks Under Connectivity and Time-Delay Constraints, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — SSCI is proposing to develop, test and deliver a set of topology control algorithms and software for a formation flying spacecraft that can be used to design and...

  6. Topology Control Algorithms for Spacecraft Formation Flying Networks Under Connectivity and Time-Delay Constraints, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — SSCI is proposing to develop a set of topology control algorithms for a formation flying spacecraft that can be used to design and evaluate candidate formation...

  7. Hard electronics; Hard electronics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    Hard material technologies were surveyed to establish the hard electronic technology which offers superior characteristics under hard operational or environmental conditions as compared with conventional Si devices. The following technologies were separately surveyed: (1) the device and integration technologies of wide-gap hard semiconductors such as SiC, diamond and nitride, (2) the technology of hard semiconductor devices for vacuum microelectronics technology, and (3) the technology of hard new material devices for oxides. The formation technology of oxide thin films made remarkable progress after the discovery of oxide superconductor materials, resulting in the development of an atomic layer growth method and a mist deposition method. This leading research is expected to solve issues that are difficult to realize with current Si technology, such as high-power, high-frequency and low-loss devices in power electronics, high-temperature-proof and radiation-proof devices in ultimate electronics, and high-speed and densely integrated devices in information electronics. 432 refs., 136 figs., 15 tabs.

  8. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  9. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3n/2 - 2 is the solution to the above ...

  10. Statistical physics of hard optimization problems

    International Nuclear Information System (INIS)

    Zdeborova, L.

    2009-01-01

    Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the non-deterministic polynomial (NP)-complete class are particularly difficult; it is believed that the number of operations required to minimize the cost function is in the most difficult cases exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this article is: How to recognize if an NP-complete constraint satisfaction problem is typically hard and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named "locked" constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability.

  11. Statistical physics of hard optimization problems

    International Nuclear Information System (INIS)

    Zdeborova, L.

    2009-01-01

    Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the non-deterministic polynomial-complete class are particularly difficult: it is believed that in the most difficult cases the number of operations required to minimize the cost function is exponential in the system size. However, even in a non-deterministic polynomial-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this article is: how to recognize if a non-deterministic polynomial-complete constraint satisfaction problem is typically hard, and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named 'locked' constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than canonical satisfiability (Authors)

  12. Statistical physics of hard optimization problems

    Science.gov (United States)

    Zdeborová, Lenka

    2009-06-01

    Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the non-deterministic polynomial (NP)-complete class are particularly difficult: it is believed that in the most difficult cases the number of operations required to minimize the cost function is exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this article is: how to recognize if an NP-complete constraint satisfaction problem is typically hard, and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named "locked" constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability.
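
    As a concrete illustration of such isolated solutions, the following brute-force sketch enumerates the satisfying assignments of a toy "locked" instance (positive 1-in-3 SAT, with every variable appearing in at least two clauses; the instance is made up for this example). Because flipping any single variable of a satisfying assignment violates some clause, distinct solutions sit at Hamming distance at least 2 from one another.

```python
from itertools import product, combinations

# A toy positive 1-in-3 SAT instance: each clause lists three variable
# indices, and exactly one of them must be True. Every variable appears
# in at least two clauses, which is the "locked" condition.
clauses = [(0, 1, 2), (2, 3, 4), (4, 5, 0), (1, 3, 5)]
n = 6

def satisfies(x):
    return all(x[i] + x[j] + x[k] == 1 for i, j, k in clauses)

solutions = [x for x in product((0, 1), repeat=n) if satisfies(x)]
print("solutions:", solutions)

# In a locked problem, flipping any single variable of a solution breaks
# some clause, so distinct solutions are at Hamming distance >= 2:
# clusters of solutions are isolated points.
dists = [sum(a != b for a, b in zip(s, t))
         for s, t in combinations(solutions, 2)]
print("minimum pairwise Hamming distance:", min(dists) if dists else None)
```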

  13. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple Logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to B using C as auxiliary rod. • move _disk (A, C);. (N0 + 1)th disk is moved from A to C directly ...

  14. Closing in on a Short-Hard Burst Progenitor: Constraints From Early-Time Optical Imaging and Spectroscopy of a Possible Host Galaxy of GRB 050509b

    Energy Technology Data Exchange (ETDEWEB)

    Bloom, Joshua S.; Prochaska, J.X.; Pooley, D.; Blake, C.W.; Foley, R.J.; Jha, S.; Ramirez-Ruiz, E.; Granot, J.; Filippenko, A.V.; Sigurdsson, S.; Barth, A.J.; Chen,; Cooper, M.C.; Falco, E.E.; Gal, R.R.; Gerke, B.F.; Gladders, M.D.; Greene, J.E.; Hennawi, J.; Ho, L.C.; Hurley, K.; /UC, Berkeley, Astron. Dept. /Lick Observ.

    2005-06-07

    The localization of the short-duration, hard-spectrum gamma-ray burst GRB 050509b by the Swift satellite was a watershed event. Never before had a member of this mysterious subclass of classic GRBs been rapidly and precisely positioned in a sky accessible to the bevy of ground-based follow-up facilities. Thanks to the nearly immediate relay of the GRB position by Swift, we began imaging the GRB field 8 minutes after the burst and have continued during the 8 days since. Though the Swift X-ray Telescope (XRT) discovered an X-ray afterglow of GRB 050509b, the first ever of a short-hard burst, thus far no convincing optical/infrared candidate afterglow or supernova has been found for the object. We present a re-analysis of the XRT afterglow and find an absolute position of R.A. = 12h36m13.59s, Decl. = +28°59'04.9'' (J2000), with a 1σ uncertainty of 3.68'' in R.A., 3.52'' in Decl.; this is about 4'' to the west of the XRT position reported previously. Close to this position is a bright elliptical galaxy with redshift z = 0.2248 ± 0.0002, about 1' from the center of a rich cluster of galaxies. This cluster has detectable diffuse emission, with a temperature of kT = 5.25 (+3.36/-1.68) keV. We also find several (~11) much fainter galaxies consistent with the XRT position from deep Keck imaging and have obtained Gemini spectra of several of these sources. Nevertheless we argue, based on positional coincidences, that the GRB and the bright elliptical are likely to be physically related. We thus have discovered reasonable evidence that at least some short-duration, hard-spectrum GRBs are at cosmological distances. We also explore the connection of the properties of the burst and the afterglow, finding that GRB 050509b was underluminous in both of these relative to long-duration GRBs. However, we also demonstrate that the ratio of the blast-wave energy to the γ-ray energy is consistent with that

  15. Design with Nonlinear Constraints

    KAUST Repository

    Tang, Chengcheng

    2015-12-10

    Most modern industrial and architectural designs need to satisfy the requirements of their targeted performance and respect the limitations of available fabrication technologies. At the same time, they should reflect the artistic considerations and personal taste of the designers, which cannot simply be formulated as optimization goals with single best solutions. This thesis aims at a general, flexible yet efficient computational framework for interactive creation, exploration and discovery of serviceable, constructible, and stylish designs. By reformulating nonlinear engineering considerations as linear or quadratic expressions through the introduction of auxiliary variables, the constrained space can be efficiently accessed by the proposed algorithm, Guided Projection, with the guidance of aesthetic formulations. The approach is introduced through applications in different scenarios, and its effectiveness is demonstrated by examples that were difficult or even impossible to design computationally before. The first application is the design of meshes under both geometric and static constraints, including self-supporting polyhedral meshes that are not height fields. Then, with a formulation bridging mesh-based and spline-based representations, the application is extended to developable surfaces including origami with curved creases. Finally, general approaches to extend hard constraints and soft energies are discussed, followed by a concluding remark outlining possible future studies.
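
    The thesis abstract gives no implementation details, but the guided-projection idea of handling quadratic constraints through regularized linearization can be sketched as follows; the single unit-distance constraint and the step regularizer eps are hypothetical stand-ins for the thesis' geometric constraints and guidance energies.

```python
import numpy as np

# Minimal sketch of a guided-projection-style step: constraints are written
# as quadratic functions phi(x) = 0, and each iteration solves a linearized
# least-squares system regularized so the update stays close to the current
# (designer-guided) state. The constraint below (unit distance between two
# 2D points, x = (p, q)) is an illustrative stand-in.

def phi(x):
    p, q = x[:2], x[2:]
    return np.array([np.dot(p - q, p - q) - 1.0])

def jacobian(x):
    p, q = x[:2], x[2:]
    g = 2.0 * (p - q)
    return np.array([np.concatenate([g, -g])])

def guided_projection(x, eps=0.05, iters=20):
    for _ in range(iters):
        J, r = jacobian(x), phi(x)
        # minimize |J d + r|^2 + eps^2 |d|^2  (regularizer keeps d small)
        A = J.T @ J + eps**2 * np.eye(len(x))
        d = np.linalg.solve(A, -J.T @ r)
        x = x + d
    return x

x = guided_projection(np.array([0.0, 0.0, 0.3, 0.1]))
print(x, phi(x))  # constraint residual should be near zero
```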

  16. Design and Validation of a Control Algorithm for a SAE J2954-Compliant Wireless Charger to Guarantee the Operational Electrical Constraints

    Directory of Open Access Journals (Sweden)

    José Manuel González-González

    2018-03-01

    Wireless power transfer is foreseen as a suitable technology for charging electric vehicles without cables. This technology mainly relies on two coupled coils, whose mutual inductance is sensitive to their relative positions. Variations in this coefficient greatly impact the electrical magnitudes of the wireless charger. The aim of this paper is the design and validation of a control algorithm for a Society of Automotive Engineers (SAE) J2954-compliant wireless charger that guarantees certain operational and electrical constraints. These constraints are designed to prevent components from being damaged by excessive voltage or current. The paper also presents the details of the design and implementation of the bidirectional charger topology in which the proposed controller is incorporated. The controller is installed on both the primary and the secondary side, since wireless communication with the other side is necessary. From its input data, the controller decides the phase shift to apply in the DC/AC converter. The experimental results demonstrate how the system regulates the output voltage of the DC/AC converter so that certain electrical magnitudes do not exceed predefined thresholds. The regulation, which has been tested under coil misalignment, is proven to be effective.
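
    The paper's controller is not reproduced here, but the general shape of such a threshold-guarding phase-shift rule can be sketched as below; the linearized plant model, gains and limits are all made up for illustration.

```python
# Sketch (not the authors' controller) of the rule the abstract describes:
# the phase shift applied to the DC/AC converter is reduced when a monitored
# electrical magnitude exceeds its threshold and cautiously increased
# otherwise. All names and gains are hypothetical.

def update_phase_shift(phase, measured, limit, step=0.5, max_phase=90.0):
    """Return the next phase-shift command (degrees)."""
    if measured > limit:
        phase -= step              # back off: protect components
    elif measured < 0.95 * limit:
        phase += step              # room left: raise transferred power
    return min(max(phase, 0.0), max_phase)

# toy closed loop: the 'plant' maps phase shift to an output voltage
phase, v_limit = 30.0, 400.0
for _ in range(50):
    v_out = 12.0 * phase          # hypothetical linearized plant
    phase = update_phase_shift(phase, v_out, v_limit)
print(phase, 12.0 * phase)        # settles just under the 400 V limit
```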

  17. Genetic algorithm metaheuristic to solve forest planning problem with integer constraints

    Directory of Open Access Journals (Sweden)

    Flávio Lopes Rodrigues

    2004-04-01

    The objectives of this work were to develop and test a genetic algorithm (GA) for solving forest management problems with integrity constraints. The GA was tested on four problems containing between 93 and 423 decision variables, subject to singularity constraints and to periodic minimum and maximum production constraints. All problems had the objective of maximizing the net present value. The GA was coded in Delphi 5.0 and the tests were run on an AMD K6-II 500 MHz microcomputer with 64 MB of RAM and a 15 GB hard disk. The performance of the GA was evaluated according to measures of efficacy and efficiency. The values or categories of the GA parameters were tested and compared with respect to their effects on the efficacy of the algorithm. The best parameter configuration was selected with the L&O test, at 1% probability, and the analyses were carried out using descriptive statistics. With the best parameter configuration, the GA achieved a mean efficacy of 94.28% of the mathematical optimum obtained by the exact branch and bound algorithm, with a minimum of 90.01%, a maximum of 98.48% and a coefficient of variation of 2.08%. For the largest problem, the efficiency of the GA was five times higher than that of the exact branch and bound algorithm. The GA proved to be a very attractive approach for solving important forest management problems.
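
    A minimal sketch of this kind of GA, with one harvest period per management unit enforced by the encoding (singularity) and penalized periodic production bounds, could look as follows; all data, penalty weights and GA settings are illustrative, not taken from the paper.

```python
import random

# Hedged sketch of a GA for forest planning: each gene assigns a management
# unit one harvest period, singularity is enforced by the encoding itself,
# and periodic min/max production constraints are handled with penalties.
random.seed(1)
n_units, n_periods = 30, 4
vol = [[random.uniform(80, 120) for _ in range(n_periods)] for _ in range(n_units)]
npv = [[random.uniform(10, 30) for _ in range(n_periods)] for _ in range(n_units)]
vmin, vmax = 500.0, 900.0

def fitness(ch):
    value = sum(npv[i][p] for i, p in enumerate(ch))
    for p in range(n_periods):
        v = sum(vol[i][p] for i, g in enumerate(ch) if g == p)
        value -= 5.0 * (max(0.0, vmin - v) + max(0.0, v - vmax))
    return value

def evolve(pop=40, gens=200):
    P = [[random.randrange(n_periods) for _ in range(n_units)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness, reverse=True)
        elite = P[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n_units)
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < 0.3:                 # mutation
                child[random.randrange(n_units)] = random.randrange(n_periods)
            children.append(child)
        P = elite + children
    return max(P, key=fitness)

print(fitness(evolve()))
```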

  18. The free energy of the metastable supersaturated vapor via restricted ensemble simulations. III. An extension to the Corti and Debenedetti subcell constraint algorithm

    International Nuclear Information System (INIS)

    Nie, Chu; Geng, Jun; Marlow, William H.

    2016-01-01

    In order to improve the sampling of restricted microstates in our previous work [C. Nie, J. Geng, and W. H. Marlow, J. Chem. Phys. 127, 154505 (2007); 128, 234310 (2008)] and quantitatively predict thermal properties of supersaturated vapors, an extension is made to the Corti and Debenedetti subcell constraint algorithm [D. S. Corti and P. Debenedetti, Chem. Eng. Sci. 49, 2717 (1994)], which restricts the maximum allowed local density at any point in a simulation box. The maximum allowed local density at a point in a simulation box is defined by the maximum number of particles N_m allowed to appear inside a sphere of radius R, with this point as the center of the sphere. Both N_m and R serve as extra thermodynamic variables for maintaining a certain degree of spatial homogeneity in a supersaturated system. In a restricted canonical ensemble, at a given temperature and an overall density, series of local minima on the Helmholtz free energy surface F(N_m, R) are found subject to different (N_m, R) pairs. The true equilibrium metastable state is identified through the analysis of the formation free energies of Stillinger clusters of various sizes obtained from these restricted states. The simulation results of a supersaturated Lennard-Jones vapor at reduced temperature 0.7, including the vapor pressure isotherm, formation free energies of critical nuclei, and chemical potential differences, are presented and analyzed. In addition, with slight modifications, the current algorithm can be applied to computing thermal properties of superheated liquids.
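
    A simplified sketch of how such a constraint can act as a Monte Carlo move filter is shown below; it checks local density only in spheres centered on particles, which is a simplification of the paper's any-point condition, and all parameters are illustrative.

```python
import numpy as np

# Simplified sketch of a restricted-ensemble move test: after a trial
# displacement, the local density around the moved particle (number of
# neighbors within radius R) may not exceed N_m. Checking spheres centered
# on particles only is a simplification of the "any point in the box"
# condition; all parameters are illustrative.
rng = np.random.default_rng(0)
L, R, N_m = 10.0, 1.5, 4
pos = rng.uniform(0.0, L, size=(60, 3))

def neighbors_within(pos, i, R, L):
    d = pos - pos[i]
    d -= L * np.round(d / L)            # minimum-image periodic boundaries
    r2 = np.einsum('ij,ij->i', d, d)
    return int(np.sum(r2 < R * R)) - 1  # exclude the particle itself

def trial_move(pos, i, delta=0.3):
    new = pos.copy()
    new[i] = (new[i] + rng.uniform(-delta, delta, 3)) % L
    # reject if the move violates the local-density constraint
    if neighbors_within(new, i, R, L) > N_m:
        return pos, False
    return new, True

pos, accepted = trial_move(pos, 0)
print("accepted:", accepted)
```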

  19. Minimizing Total Completion Time For Preemptive Scheduling With Release Dates And Deadline Constraints

    Directory of Open Access Journals (Sweden)

    He Cheng

    2014-02-01

    It is known that the single machine preemptive scheduling problem of minimizing total completion time with release date and deadline constraints is NP-hard. Du and Leung solved some special cases by the generalized Baker's algorithm and the generalized Smith's algorithm in O(n²) time. In this paper we give an O(n²) algorithm for the special case where the processing times and deadlines are agreeable. Moreover, for the case where the processing times and deadlines are disagreeable, we present two properties which enable us to reduce the range of the enumeration algorithm.

  20. Separation and Extension of Cover Inequalities for Conic Quadratic Knapsack Constraints with Generalized Upper Bounds

    DEFF Research Database (Denmark)

    Atamtürk, Alper; Muller, Laurent Flindt; Pisinger, David

    2013-01-01

    Motivated by addressing probabilistic 0-1 programs we study the conic quadratic knapsack polytope with generalized upper bound (GUB) constraints. In particular, we investigate separating and extending GUB cover inequalities. We show that, unlike in the linear case, determining whether a cover can be extended with a single variable is NP-hard. We describe and compare a number of exact and heuristic separation and extension algorithms which make use of the structure of the constraints. Computational experiments are performed for comparing the proposed separation and extension algorithms...

  1. Separation and extension of cover inequalities for second-order conic knapsack constraints with GUBs

    DEFF Research Database (Denmark)

    Atamtürk, Alper; Muller, Laurent Flindt; Pisinger, David

    We consider the second-order conic equivalent of the classic knapsack polytope where the variables are subject to generalized upper bound constraints. We describe and compare a number of separation and extension algorithms which make use of the extra structure implied by the generalized upper bound constraints in order to strengthen the second-order conic equivalent of the classic cover cuts. We show that determining whether a cover can be extended with a variable is NP-hard. Computational experiments are performed comparing the proposed separation and extension algorithms. These experiments show...

  2. New detection systems of bacteria using highly selective media designed by SMART: selective medium-design algorithm restricted by two constraints.

    Directory of Open Access Journals (Sweden)

    Takeshi Kawanishi

    Culturing is an indispensable technique in microbiological research, and culturing with selective media has played a crucial role in the detection of pathogenic microorganisms and the isolation of commercially useful microorganisms from environmental samples. Although numerous selective media have been developed in empirical studies, unintended microorganisms often grow on such media, probably due to the enormous numbers of microorganisms in the environment. Here, we present a novel strategy for designing highly selective media based on two selective agents, a carbon source and antimicrobials. We named our strategy SMART, for highly Selective Medium-design Algorithm Restricted by Two constraints. To test whether the SMART method is applicable to a wide range of microorganisms, we developed selective media for Burkholderia glumae, Acidovorax avenae, Pectobacterium carotovorum, Ralstonia solanacearum, and Xanthomonas campestris. The series of media developed by SMART specifically allowed growth of the targeted bacteria. Because these selective media exhibited high specificity for growth of the target bacteria compared to established selective media, we applied three notable detection technologies: paper-based, flow cytometry-based, and color change-based detection systems for the target bacteria species. SMART facilitates not only the development of novel techniques for detecting specific bacteria, but also our understanding of the ecology and epidemiology of the targeted bacteria.

  3. Designing a fuzzy scheduler for hard real-time systems

    Science.gov (United States)

    Yen, John; Lee, Jonathan; Pfluger, Nathan; Natarajan, Swami

    1992-01-01

    In hard real-time systems, tasks have to be performed not only correctly, but also in a timely fashion. If timing constraints are not met, there might be severe consequences. Task scheduling is the most important problem in designing a hard real-time system, because the scheduling algorithm ensures that tasks meet their deadlines. However, the inherent uncertainty in dynamic hard real-time systems compounds the difficulty of scheduling. In an effort to alleviate these difficulties, we have developed a fuzzy scheduler to facilitate the search for a feasible schedule. A set of fuzzy rules is proposed to guide the search. The situation we are trying to address is the performance of the system when no feasible solution can be found and, therefore, certain tasks will not be executed. We wish to limit the number of important tasks that are not scheduled.

  4. Hardness of Clustering

    Indian Academy of Sciences (India)

    Both k-means and k-medians are intractable (when n and d are both inputs, even for k = 2). The best known deterministic algorithms are based on Voronoi partitioning that takes about ... time. Need for approximation – "close" to optimal.

  5. Constraints on Dbar uplifts

    International Nuclear Information System (INIS)

    Alwis, S.P. de

    2016-01-01

    We discuss constraints on KKLT/KKLMMT and LVS scenarios that use anti-branes to get an uplift to a de Sitter vacuum, coming from requiring the validity of an effective field theory description of the physics. We find these are not always satisfied or are hard to satisfy.

  6. Lot Sizing Based on Stochastic Demand and Service Level Constraint

    Directory of Open Access Journals (Sweden)

    hajar shirneshan

    2012-06-01

    Considering its applications, stochastic lot sizing is a significant subject in production planning. Moreover, from a manager's viewpoint the concept of a service level is more applicable than a shortage cost. In this paper, the stochastic multi-period multi-item capacitated lot sizing problem is investigated under a service level constraint. First, the single-item model with a service level constraint and no capacity constraint is developed; it is solved using a dynamic programming algorithm and the optimal solution is derived. Then the model is generalized to the multi-item problem with a capacity constraint. The stochastic multi-period multi-item capacitated lot sizing problem is NP-hard, hence the model cannot be solved by exact optimization approaches. Therefore, simulated annealing has been applied for solving the problem. Finally, in order to evaluate the efficiency of the model, a lower-bound criterion has been used.
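
    The annealing loop described in the abstract can be sketched on a toy single-item instance as follows; the demand data, penalty weights and cooling schedule are invented for illustration, and the service level is crudely approximated by a safety-stock requirement.

```python
import math, random

# Skeleton of a simulated-annealing search for a toy single-item capacitated
# lot-sizing instance. Meeting the service level is approximated by requiring
# inventory to cover mean demand plus a safety stock; all numbers are
# illustrative, not from the paper.
random.seed(7)
T, cap = 8, 120.0
demand = [60, 80, 50, 90, 70, 40, 85, 65]      # mean demand per period
safety = 10.0                                   # proxy for the service level
setup, hold = 100.0, 1.0

def cost(x):
    c, inv = 0.0, 0.0
    for t in range(T):
        if x[t] > 0:
            c += setup
        c += 1e4 * max(0.0, x[t] - cap)         # capacity violation penalty
        inv += x[t] - demand[t]
        c += 1e4 * max(0.0, safety - inv)       # service-level shortfall penalty
        c += hold * max(inv, 0.0)
    return c

def neighbor(x):
    y = list(x)
    y[random.randrange(T)] = max(0.0, y[random.randrange(T)] + random.choice([-20.0, 20.0]))
    return y

x, best, temp = [cap] * T, [cap] * T, 500.0
while temp > 0.1:
    y = neighbor(x)
    if cost(y) < cost(x) or random.random() < math.exp((cost(x) - cost(y)) / temp):
        x = y
    if cost(x) < cost(best):
        best = list(x)
    temp *= 0.99
print(round(cost(best), 1), best)
```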

  7. Constraint-Based Local Search for Constrained Optimum Paths Problems

    Science.gov (United States)

    Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal

    Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have been traditionally approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees to find high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.

  8. Constraint-based scheduling applying constraint programming to scheduling problems

    CERN Document Server

    Baptiste, Philippe; Nuijten, Wim

    2001-01-01

    Constraint Programming is a problem-solving paradigm that establishes a clear distinction between two pivotal aspects of a problem: (1) a precise definition of the constraints that define the problem to be solved and (2) the algorithms and heuristics enabling the selection of decisions to solve the problem. It is because of these capabilities that Constraint Programming is increasingly being employed as a problem-solving tool to solve scheduling problems. Hence the development of Constraint-Based Scheduling as a field of study. The aim of this book is to provide an overview of the most widely used Constraint-Based Scheduling techniques. Following the principles of Constraint Programming, the book consists of three distinct parts: The first chapter introduces the basic principles of Constraint Programming and provides a model of the constraints that are the most often encountered in scheduling problems. Chapters 2, 3, 4, and 5 are focused on the propagation of resource constraints, which usually are responsibl...

  9. Parallel-Machine Scheduling with Time-Dependent and Machine Availability Constraints

    Directory of Open Access Journals (Sweden)

    Cuixia Miao

    2015-01-01

    We consider the parallel-machine scheduling problem in which the machines have availability constraints and the processing time of each job is a simple linear increasing function of its starting time. For the makespan minimization problem, which is NP-hard in the strong sense, we discuss the Longest Deteriorating Rate algorithm and the List Scheduling algorithm; we also provide a lower bound for any optimal schedule. For the total completion time minimization problem, we analyze the strong NP-hardness, and we present a dynamic programming algorithm and a fully polynomial time approximation scheme for the two-machine problem. Furthermore, we extend the dynamic programming algorithm to the total weighted completion time minimization problem.
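
    Under the simple linear deterioration model p_j = b_j * s_j, a job started at time s_j multiplies the machine's completion time by (1 + b_j), so the Longest Deteriorating Rate list-scheduling rule discussed in the abstract can be sketched compactly; modeling availability as a machine-specific earliest start time is a simplification made for this example.

```python
# Sketch of the Longest Deteriorating Rate (LDR) list-scheduling rule for
# jobs with processing time p_j = b_j * s_j: finishing time on a machine
# multiplies by (1 + b_j), so we assign each job (largest rate first) to the
# machine that currently finishes earliest. Availability is modeled, for
# illustration only, as a machine-specific earliest start time t0_i > 0.

def ldr_schedule(rates, t0):
    """rates: deteriorating rates b_j; t0: availability times per machine."""
    finish = list(t0)                       # current completion time per machine
    assignment = [[] for _ in t0]
    for j, b in sorted(enumerate(rates), key=lambda e: -e[1]):
        m = min(range(len(t0)), key=lambda i: finish[i])
        finish[m] *= (1.0 + b)              # job starts at finish[m], runs b*start
        assignment[m].append(j)
    return finish, assignment

finish, assignment = ldr_schedule([0.5, 0.3, 0.8, 0.2, 0.4], [1.0, 1.5])
print("makespan:", max(finish), assignment)
```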

  10. Portable Health Algorithms Test System

    Science.gov (United States)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment or modify the data stream (e.g., injecting simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  11. Constraint Differentiation

    DEFF Research Database (Denmark)

    Mödersheim, Sebastian Alexander; Basin, David; Viganò, Luca

    2010-01-01

    We introduce constraint differentiation, a powerful technique for reducing search when model-checking security protocols using constraint-based methods. Constraint differentiation works by eliminating certain kinds of redundancies that arise in the search space when using constraints to represent ... Results show that constraint differentiation substantially reduces search and considerably improves the performance of OFMC, enabling its application to a wider class of problems.

  12. Strict Constraint Feasibility in Analysis and Design of Uncertain Systems

    Science.gov (United States)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enables us to determine if the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
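
    For the hyper-rectangular uncertainty model, and for a constraint that is affine (hence monotonic) in the parameters, the worst case sits at a vertex of the box, so satisfaction of a hard constraint can be certified without sampling by checking the vertices. The sketch below illustrates this; the constraint function is a hypothetical example, and vertex enumeration is exponential in the number of parameters.

```python
import itertools

# A small sketch of the hard-constraint idea: a design is acceptable only if
# g(p) <= 0 for every parameter p in the uncertainty set. For a
# hyper-rectangular set and a constraint affine in p, the worst case sits at
# a vertex, so checking all vertices certifies feasibility exactly.

def worst_case_over_box(g, lo, hi):
    corners = itertools.product(*zip(lo, hi))
    return max(g(p) for p in corners)

g = lambda p: 2.0 * p[0] - 3.0 * p[1] + 0.5   # hypothetical affine constraint
lo, hi = [-1.0, -1.0], [1.0, 1.0]
margin = worst_case_over_box(g, lo, hi)
print("hard constraint satisfied for all p:", margin <= 0.0)  # here: False
```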

  13. A Recursive Decomposition Algorithm for 3D Assembly Geometric Constraint System with Closed-loops

    Institute of Scientific and Technical Information of China (English)

    黄学良; 李娜; 陈立平

    2013-01-01

    Numerical methods are usually employed to solve 3D assembly geometric constraint systems with closed loops, which cannot be decomposed by existing decomposition methods, but their inherent inefficiency and instability cannot be overcome. In this paper, based on an analysis of the structural constraints of serial kinematic chains and the topological structure of the geometric constraint closed-loop graph, a recursive decomposition algorithm for 3D geometric constraint systems with closed loops is proposed. The basic idea of the proposed algorithm is to introduce equivalent geometric constraint combinations to substitute the structural constraints of serial kinematic chains, and to separate geometric constraint subsystems that can be solved independently from the closed-loop system. The proposed method can decompose most 3D geometric constraint closed-loop systems, which previously had to be solved as a whole by numerical methods, into a series of geometric constraint subsystems between two rigid bodies that can be solved by analytical or reasoning methods, so that computational efficiency and stability are improved dramatically. Finally, a typical example is given to validate the correctness and effectiveness of the proposed method.

  14. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  15. Ant colony optimization and constraint programming

    CERN Document Server

    Solnon, Christine

    2013-01-01

    Ant colony optimization is a metaheuristic which has been successfully applied to a wide range of combinatorial optimization problems. The author describes this metaheuristic and studies its efficiency for solving some hard combinatorial problems, with a specific focus on constraint programming. The text is organized into three parts. The first part introduces constraint programming, which provides high level features to declaratively model problems by means of constraints. It describes the main existing approaches for solving constraint satisfaction problems, including complete tree search

  16. Development of constraint algorithm for the number of electrons in molecular orbitals consisting mainly 4f atomic orbitals of rare-earth elements and its introduction to tight-binding quantum chemical molecular dynamics method

    International Nuclear Information System (INIS)

    Endou, Akira; Onuma, Hiroaki; Jung, Sun-ho

    2007-01-01

    Our original tight-binding quantum chemical molecular dynamics code, 'Colors', has been successfully applied to the theoretical investigation of complex materials including rare-earth elements, e.g., metal catalysts supported on a CeO2 surface. To expand our code so as to obtain good convergence for the electronic structure of a calculation system including a rare-earth element, we developed a novel algorithm that provides a constraint condition on the number of electrons occupying the selected molecular orbitals that mainly consist of 4f atomic orbitals of the rare-earth element. This novel algorithm was introduced in Colors. Using Colors, we succeeded in obtaining the classified electronic configurations of the 4f atomic orbitals of Ce4+ and reduced Ce ions in a CeO2 bulk model with one oxygen defect, for which it is difficult to obtain good convergence using a conventional first-principles quantum chemical calculation code. (author)

  17. Research on an uplink carrier sense multiple access algorithm of large indoor visible light communication networks based on an optical hard core point process.

    Science.gov (United States)

    Nan, Zhufen; Chi, Xuefen

    2016-12-20

    The IEEE 802.15.7 protocol suggests that it could coordinate the channel access process based on the competitive method of carrier sensing. However, the directionality of light and randomness of diffuse reflection would give rise to a serious imperfect carrier sense (ICS) problem [e.g., hidden node (HN) problem and exposed node (EN) problem], which brings great challenges in realizing the optical carrier sense multiple access (CSMA) mechanism. In this paper, the carrier sense process implemented by diffuse reflection light is modeled as the choice of independent sets. We establish an ICS model with the presence of ENs and HNs for the multi-point to multi-point visible light communication (VLC) uplink communications system. Considering the severe optical ICS problem, an optical hard core point process (OHCPP) is developed, which characterizes the optical CSMA for the indoor VLC uplink communications system. Due to the limited coverage of the transmitted optical signal, in our OHCPP, the ENs within the transmitters' carrier sense region could be retained provided that they could not corrupt the ongoing communications. Moreover, because of the directionality of both light emitting diode (LED) transmitters and receivers, theoretical analysis of the HN problem becomes difficult. In this paper, we derive the closed-form expression for approximating the outage probability and transmission capacity of VLC networks with the presence of HNs and ENs. Simulation results validate the analysis and also show the existence of an optimal physical carrier-sensing threshold that maximizes the transmission capacity for a given emission angle of LED.
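
    The paper's OHCPP is not reproduced here, but the flavor of a hard core point process model of carrier sensing can be conveyed with a standard Matérn type-II thinning, in which a point is retained only if no contender within the hard-core distance holds a smaller random mark (the mark playing the role of a back-off timer); all parameters are illustrative.

```python
import numpy as np

# Sketch of Matern type-II hard-core thinning, a standard construction of a
# hard core point process: every transmitter gets a random mark, and a point
# is kept only if no other point within the hard-core distance d has a
# smaller mark -- an idealized picture of carrier sensing. Parameters are
# illustrative, not from the paper.
rng = np.random.default_rng(3)
L, lam, d = 10.0, 1.0, 1.0                  # room size, density, sense range
n = rng.poisson(lam * L * L)
pts = rng.uniform(0.0, L, size=(n, 2))
marks = rng.uniform(size=n)

keep = []
for i in range(n):
    dist = np.hypot(*(pts - pts[i]).T)
    rivals = (dist < d) & (marks < marks[i])
    if not rivals.any():
        keep.append(i)
print(f"{n} contenders, {len(keep)} concurrent transmitters")
```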

  18. Standard hardness conversion tables for metals relationship among brinell hardness, vickers hardness, rockwell hardness, superficial hardness, knoop hardness, and scleroscope hardness

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2007-01-01

    1.1 Conversion Table 1 presents data in the Rockwell C hardness range on the relationship among Brinell hardness, Vickers hardness, Rockwell hardness, Rockwell superficial hardness, Knoop hardness, and Scleroscope hardness of non-austenitic steels including carbon, alloy, and tool steels in the as-forged, annealed, normalized, and quenched and tempered conditions provided that they are homogeneous. 1.2 Conversion Table 2 presents data in the Rockwell B hardness range on the relationship among Brinell hardness, Vickers hardness, Rockwell hardness, Rockwell superficial hardness, Knoop hardness, and Scleroscope hardness of non-austenitic steels including carbon, alloy, and tool steels in the as-forged, annealed, normalized, and quenched and tempered conditions provided that they are homogeneous. 1.3 Conversion Table 3 presents data on the relationship among Brinell hardness, Vickers hardness, Rockwell hardness, Rockwell superficial hardness, and Knoop hardness of nickel and high-nickel alloys (nickel content o...

  19. Multi-dimensional Bin Packing Problems with Guillotine Constraints

    DEFF Research Database (Denmark)

    Amossen, Rasmus Resen; Pisinger, David

    2010-01-01

    The problem addressed in this paper is the decision problem of determining if a set of multi-dimensional rectangular boxes can be orthogonally packed into a rectangular bin while satisfying the requirement that the packing should be guillotine cuttable. That is, there should exist a series of face-parallel straight cuts that can recursively cut the bin into pieces so that each piece contains a box and no box has been intersected by a cut. The unrestricted problem is known to be NP-hard. In this paper we present a generalization of a constructive algorithm for the multi-dimensional bin packing problem, with and without the guillotine constraint, based on constraint programming.

  20. Hard coatings

    International Nuclear Information System (INIS)

    Dan, J.P.; Boving, H.J.; Hintermann, H.E.

    1993-01-01

    Hard, wear resistant and low friction coatings are presently produced on a world-wide basis, by different processes such as electrochemical or electroless methods, spray technologies, thermochemical, CVD and PVD. Some of the most advanced processes, especially those dedicated to thin film depositions, basically belong to CVD or PVD technologies, and will be looked at in more detail. The hard coatings mainly consist of oxides, nitrides, carbides, borides or carbon. Over the years, many processes have been developed which are variations and/or combinations of the basic CVD and PVD methods. The main difference between these two families of deposition techniques is that CVD is an elevated temperature process (≥ 700 C), while PVD, on the contrary, is rather a low temperature process (≤ 500 C); this of course influences the choice of substrates and the properties of the coating/substrate systems. Fundamental aspects of the vapor phase deposition techniques and some of their influences on coating properties will be discussed, as well as the very important interactions between deposit and substrate: diffusion, internal stress, etc. Advantages and limitations of CVD and PVD respectively will briefly be reviewed and examples of applications of the layers will be given. Parallel to the development and permanent updating of surface modification technologies, an effort was made to create novel characterisation methods. A close look will be given to coating adherence control by means of the scratch test, to coating hardness measurement by means of nanoindentation, to coating wear resistance by means of a pin-on-disc tribometer, and to surface quality evaluation by Atomic Force Microscopy (AFM). Finally, the main trends will be highlighted. (orig.)

  1. “Wood Already Touched by Fire is not Hard to Set Alight”; Comment on “Constraints to Applying Systems Thinking Concepts in Health Systems: A Regional Perspective from Surveying Stakeholders in Eastern Mediterranean Countries”

    Directory of Open Access Journals (Sweden)

    Irene Akua Agyepong

    2015-03-01

    A major constraint to the application of any form of knowledge and principles is the awareness, understanding and acceptance of the knowledge and principles. Systems Thinking (ST) is a way of understanding and thinking about the nature of health systems and how to make and implement decisions within health systems to maximize desired and minimize undesired effects. A major constraint to applying ST within health systems in Low- and Middle-Income Countries (LMICs) would appear to be an awareness and understanding of ST and how to apply it. This is a fundamental constraint; given the increasing desire to enable the application of ST concepts in health systems in LMICs and to understand and evaluate the effects, an essential first step is going to be enabling a widespread as well as deeper understanding of ST and how to apply this understanding.

  2. "Wood already touched by fire is not hard to set alight": Comment on "Constraints to applying systems thinking concepts in health systems: A regional perspective from surveying stakeholders in Eastern Mediterranean countries".

    Science.gov (United States)

    Agyepong, Irene Akua

    2015-03-01

    A major constraint to the application of any form of knowledge and principles is the awareness, understanding and acceptance of the knowledge and principles. Systems Thinking (ST) is a way of understanding and thinking about the nature of health systems and how to make and implement decisions within health systems to maximize desired and minimize undesired effects. A major constraint to applying ST within health systems in Low- and Middle-Income Countries (LMICs) would appear to be an awareness and understanding of ST and how to apply it. This is a fundamental constraint; given the increasing desire to enable the application of ST concepts in health systems in LMICs and to understand and evaluate the effects, an essential first step is going to be enabling a widespread as well as deeper understanding of ST and how to apply this understanding.

  3. Constraint Handling Rules with Binders, Patterns and Generic Quantification

    NARCIS (Netherlands)

    Serrano, Alejandro; Hage, J.

    2017-01-01

    Constraint Handling Rules provide descriptions for constraint solvers. However, they fall short when those constraints specify some binding structure, like higher-rank types in a constraint-based type inference algorithm. In this paper, the term syntax of constraints is replaced by λ-tree syntax, in

  4. Optimal dynamic voltage scaling for wireless sensor nodes with real-time constraints

    Science.gov (United States)

    Cassandras, Christos G.; Zhuang, Shixin

    2005-11-01

    Sensors are increasingly embedded in manufacturing systems and wirelessly networked to monitor and manage operations ranging from process and inventory control to tracking equipment and even post-manufacturing product monitoring. In building such sensor networks, a critical issue is the limited and hard-to-replenish energy in the devices involved. Dynamic voltage scaling is a technique that controls the operating voltage of a processor to provide the desired performance while conserving energy and prolonging the overall network's lifetime. We consider such power-limited devices processing time-critical tasks which are non-preemptive, aperiodic and have uncertain arrival times. We treat voltage scaling as a dynamic optimization problem whose objective is to minimize energy consumption subject to hard or soft real-time execution constraints. In the case of hard constraints, we build on prior work (which engages a voltage scaling controller at task completion times) by developing an intra-task controller that acts at all arrival times of incoming tasks. We show that this optimization problem can be decomposed into two simpler ones whose solution leads to an algorithm that does not actually require solving any nonlinear programming problems. In the case of soft constraints, this decomposition must be partly relaxed, but it still leads to a scalable (linear in the number of tasks) algorithm. Simulation results are provided to illustrate performance improvements in systems with intra-task controllers compared to uncontrolled systems or those using inter-task control.
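
    As a point of reference for such energy-optimal speed assignment (not the intra-task controller proposed in the paper), the classic offline YDS schedule can be sketched for the special case where all tasks are released at time 0: repeatedly run at the highest work intensity over the prefix deadlines, then recurse on the remaining tasks.

```python
# Sketch of the classic offline YDS speed schedule (Yao-Demers-Shenker),
# specialized to tasks all released at time 0: find the deadline with the
# highest work intensity (cumulative cycles / available time), run at that
# speed until it, and recurse on the remaining tasks. This is a textbook
# baseline for energy-minimal voltage/speed scaling, not the paper's method.

def yds_common_release(tasks):
    """tasks: list of (cycles, deadline); returns list of (speed, until)."""
    schedule, t0 = [], 0.0
    tasks = sorted(tasks, key=lambda w: w[1])
    while tasks:
        work, best_s, best_d = 0.0, 0.0, None
        for cycles, deadline in tasks:          # prefix intensities
            work += cycles
            s = work / (deadline - t0)
            if s >= best_s:
                best_s, best_d = s, deadline
        schedule.append((best_s, best_d))
        tasks = [w for w in tasks if w[1] > best_d]
        t0 = best_d
    return schedule

# highest intensity is 4 cycles by t=2 -> speed 2 until t=2, then speed 1
print(yds_common_release([(4.0, 2.0), (2.0, 6.0), (6.0, 10.0)]))
```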

  5. Genetic algorithms for protein threading.

    Science.gov (United States)

    Yadgari, J; Amir, A; Unger, R

    1998-01-01

    Despite many years of effort, a direct prediction of protein structure from sequence is still not possible. As a result, in the last few years researchers have started to address the "inverse folding problem": identifying and aligning a sequence to the fold with which it is most compatible, a process known as "threading". In two meetings in which protein folding predictions were objectively evaluated, it became clear that threading as a concept promises a real breakthrough, but that much improvement is still needed in the technique itself. Threading is an NP-hard problem, and thus no general polynomial solution can be expected. Still, a practical approach with demonstrated ability to find optimal solutions in many cases, and acceptable solutions in other cases, is needed. We applied the technique of Genetic Algorithms in order to significantly improve the ability of threading algorithms to find the optimal alignment of a sequence to a structure, i.e. the alignment with the minimum free energy. A major progress reported here is the design of a representation of the threading alignment as a string of fixed length. With this representation, validation of alignments and genetic operators are effectively implemented. Appropriate data structures and parameters have been selected. It is shown that Genetic Algorithm threading is effective and is able to find the optimal alignment in a few test cases. Furthermore, the described algorithm is shown to perform well even without pre-definition of core elements. Existing threading methods depend on such constraints to make their calculations feasible. But the concept of core elements is inherently arbitrary and should be avoided if possible. While a rigorous proof is hard to submit yet, we present indications that Genetic Algorithm threading is indeed capable of finding consistently good solutions of full alignments in search spaces of size up to 10^70.

  6. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,

  7. A genetic algorithm approach to optimization for the radiological worker allocation problem

    International Nuclear Information System (INIS)

    Yan Chen; Masakuni Narita; Masashi Tsuji; Sangduk Sa

    1996-01-01

    The worker allocation optimization problem in radiological facilities inevitably involves various types of requirements and constraints relevant to radiological protection and labor management. Some of these goals and constraints are not amenable to a rigorous mathematical formulation. Conventional methods for this problem rely heavily on sophisticated algebraic or numerical algorithms, which cause difficulties in the search for optimal solutions in the search space of worker allocation optimization problems. Genetic algorithms (GAs) are stochastic search algorithms introduced by J. Holland in the 1970s, based on ideas and techniques from genetic and evolutionary theories. The most striking characteristic of GAs is the large flexibility allowed in the formulation of the optimization problem and in the process of searching for the optimal solution. In the formulation, it is not necessary to define the problem in rigorous mathematical terms, as required in the conventional methods. Furthermore, by designing a model of evolution for the optimization problem, the optimal solution can be sought efficiently with computationally simple manipulations, without highly complex mathematical algorithms. We reported a GA approach to the worker allocation problem in radiological facilities in a previous study. In that study, two types of hard constraints were employed to reduce the huge search space, and the optimal solution was sought in such a way as to satisfy as many soft constraints as possible. It was demonstrated that the proposed evolutionary method could provide the optimal solution efficiently compared with conventional methods. However, although the employed hard constraints could localize the search space into a very small region, they brought some complexities into the designed genetic operators and demanded additional computational burdens. In this paper, we propose a simplified evolutionary model with less restrictive hard constraints and make comparisons between

  8. Using genetic algorithm to determine the optimal order quantities for multi-item multi-period under warehouse capacity constraints in kitchenware manufacturing

    Science.gov (United States)

    Saraswati, D.; Sari, D. K.; Johan, V.

    2017-11-01

    The study was conducted at a manufacturer producing various kinds of kitchenware, with the kitchen sink as the main product. Four types of steel sheet were selected as the raw materials for the kitchen sink. The problem was that the manufacturer wanted to determine how many steel sheets to order from a single supplier to meet the production requirements in a way that minimizes the total inventory cost. In this case, the economic order quantity (EOQ) model was developed using an all-unit discount for the price of the steel sheets and a limited warehouse capacity. A genetic algorithm (GA) was used to find the minimum total inventory cost, computed as the sum of the purchasing cost, ordering cost, holding cost and penalty cost.

  9. A Constrained Algorithm Based NMFα for Image Representation

    Directory of Open Access Journals (Sweden)

    Chenxue Yang

    2014-01-01

    Nonnegative matrix factorization (NMF) is a useful tool in learning a basic representation of image data. However, its performance and applicability in real scenarios are limited because of the lack of image information. In this paper, we propose a constrained matrix decomposition algorithm for image representation which contains parameters associated with the characteristics of image data sets. In particular, we impose label information as additional hard constraints on the α-divergence-NMF unsupervised learning algorithm. The resulting algorithm is derived by using the Karush-Kuhn-Tucker (KKT) conditions as well as the projected gradient, and its monotonic local convergence is proved by using auxiliary functions. In addition, we provide a method to select the parameters of our semisupervised matrix decomposition algorithm in the experiments. Compared with state-of-the-art approaches, our method with the selected parameters has the best classification accuracy on three image data sets.
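
    For orientation, a baseline NMF with the standard multiplicative updates (Euclidean loss, Lee and Seung) is sketched below; the paper's algorithm instead minimizes an α-divergence and imposes label information as hard constraints, but the alternating update structure is similar in spirit.

```python
import numpy as np

# Baseline sketch: standard multiplicative-update NMF with Euclidean loss.
# The paper's constrained alpha-divergence variant is not reproduced here;
# this only illustrates the alternating nonnegative updates. Data are random.
rng = np.random.default_rng(0)
V = rng.random((40, 25))            # data matrix: columns could be images
r = 5                               # factorization rank

W, H = rng.random((40, r)), rng.random((r, 25))
for _ in range(300):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # multiplicative updates keep
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # both factors nonnegative
print("reconstruction error:", np.linalg.norm(V - W @ H))
```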

  10. Solar constraints

    International Nuclear Information System (INIS)

    Provost, J.

    1984-01-01

    Accurate tests of the theory of stellar structure and evolution are available from observations of the Sun. The solar constraints are reviewed, with special attention to recent progress in observing global solar oscillations. Each constraint is sensitive to a given region of the Sun. The present solar models (standard, low Z, mixed) are discussed with respect to the neutrino flux, low- and high-degree five-minute oscillations and low-degree internal gravity modes. It appears that at present there exist no solar models able to fully account for all the observed quantities. (Auth.)

  11. An Aircraft Service Staff Rostering using a Hybrid GRASP Algorithm

    Directory of Open Access Journals (Sweden)

    W.H. Ip

    2009-10-01

    The aircraft ground service company is responsible for carrying out the regular aircraft maintenance tasks between an aircraft's arrival at and departure from the airport. This paper presents the application of a hybrid approach based upon the greedy randomized adaptive search procedure (GRASP) for rostering technical staff such that they are assigned predefined shift patterns. The rostering of staff is posed as an optimization problem with the aim of minimizing the violations of hard and soft constraints. The proposed algorithm iteratively constructs a set of solutions by GRASP. Furthermore, with multi-agent techniques, we efficiently identify an optimal roster that has minimal constraint violations and is fair to employees. Experimental results are included to demonstrate the effectiveness of the proposed algorithm.
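
    The skeleton of such a GRASP loop, randomized greedy construction from a restricted candidate list followed by local search, can be sketched on a toy rostering instance as follows; the constraint penalties and instance data are invented for illustration.

```python
import random

# Skeleton of a GRASP loop for rostering: each iteration greedily builds a
# roster choosing among the best few candidates at random (restricted
# candidate list), then local search repairs it; the best roster over all
# iterations is kept. The penalty weights and the toy "violations" function
# are illustrative stand-ins for real hard and soft constraints.
random.seed(2)
n_staff, n_shifts = 6, 14            # a fortnight of shifts to cover

def violations(roster):
    load = [roster.count(s) for s in range(n_staff)]
    hard = sum(max(0, l - 3) for l in load)          # nobody works > 3 shifts
    soft = max(load) - min(load)                     # unfairness
    return 100 * hard + soft

def construct(alpha=0.5):
    roster = []
    for _ in range(n_shifts):
        scored = sorted(range(n_staff), key=lambda s: violations(roster + [s]))
        rcl = scored[: max(1, int(alpha * n_staff))]  # restricted candidate list
        roster.append(random.choice(rcl))
    return roster

def local_search(roster):
    improved = True
    while improved:
        improved = False
        for i in range(n_shifts):
            for s in range(n_staff):
                trial = roster[:i] + [s] + roster[i + 1:]
                if violations(trial) < violations(roster):
                    roster, improved = trial, True
    return roster

best = min((local_search(construct()) for _ in range(20)), key=violations)
print(violations(best), best)
```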

  12. Two parameter-tuned metaheuristic algorithms for the multi-level lot sizing and scheduling problem

    Directory of Open Access Journals (Sweden)

    S.M.T. Fatemi Ghomi

    2012-10-01

    This paper addresses the lot sizing and scheduling problem for n products and m machines in a flow shop environment where setups among machines are sequence-dependent and can be carried over. Many products must be produced under capacity constraints, with backorders allowed. Since lot sizing and scheduling problems are well known to be strongly NP-hard, much attention has been given to heuristic and metaheuristic methods. This paper presents two metaheuristic algorithms, namely a Genetic Algorithm (GA) and an Imperialist Competitive Algorithm (ICA). Moreover, the Taguchi robust design methodology is employed to calibrate the parameters of the algorithms for different problem sizes. In addition, the parameter-tuned algorithms are compared against a presented lower bound on randomly generated problems. In the end, comprehensive numerical examples are presented to demonstrate the effectiveness of the proposed algorithms. The results show that the performance of both GA and ICA is very promising and that ICA statistically outperforms GA.

  13. Empirical tests of the Gradual Learning Algorithm

    NARCIS (Netherlands)

    Boersma, P.; Hayes, B.

    1999-01-01

    The Gradual Learning Algorithm (Boersma 1997) is a constraint ranking algorithm for learning Optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and

  14. Empirical tests of the Gradual Learning Algorithm

    NARCIS (Netherlands)

    Boersma, P.; Hayes, B.

    2001-01-01

    The Gradual Learning Algorithm (Boersma 1997) is a constraint-ranking algorithm for learning optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and

  15. Artificial immune algorithm for multi-depot vehicle scheduling problems

    Science.gov (United States)

    Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling

    2008-10-01

    In the fast-developing logistics and supply chain management fields, one of the key problems in decision support systems is how to arrange, for a lot of customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help in solving this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be considered as delivery cost or time consumption. The MDVSP is a nondeterministic polynomial-time hard (NP-hard) problem which cannot be solved to optimality within polynomially bounded computational time. Many different approaches have been developed to tackle the MDVSP, such as exact algorithms (EA), the one-stage approach (OSA), the two-phase heuristic method (TPHM), the tabu search algorithm (TSA), the genetic algorithm (GA) and the hierarchical multiplex structure (HIMS). Most of the methods mentioned above are time consuming and have a high risk of getting stuck in local optima. In this paper, a new search algorithm is proposed to solve the MDVSP based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving MDVSP problems.

  16. Constraint-based scheduling

    Science.gov (United States)

    Zweben, Monte

    1993-01-01

    The GERRY scheduling system, developed by NASA Ames with assistance from the Lockheed Space Operations Company and the Lockheed Artificial Intelligence Center, uses a method called constraint-based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle ground processing problem, which entails the scheduling of all the inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocation for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its application to the Space Shuttle problem. We also speculate as to how the system could be used for manufacturing, transportation, and military problems.
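
    Constraint-based iterative repair can be sketched in miniature as follows; the toy no-overlap scheduling instance and the repair move (relocate one task to the start time that minimizes total violations) are illustrative simplifications of what a system like GERRY does.

```python
import random

# Minimal sketch of constraint-based iterative repair: start from a complete
# (possibly violated) schedule, then repeatedly pick a task at random and
# move it to the start time that minimizes the total number of violated
# constraints. The only constraint here is that tasks sharing one resource
# must not overlap; this stands in for richer rules and preferences.
random.seed(4)
durations = [3, 2, 4, 2, 3]
horizon = 16

def conflicts(starts):
    c = 0
    for i in range(len(starts)):
        for j in range(i + 1, len(starts)):
            if starts[i] < starts[j] + durations[j] and \
               starts[j] < starts[i] + durations[i]:
                c += 1                       # overlap constraint violated
    return c

starts = [0] * len(durations)                # everything violated at first
for _ in range(500):
    if conflicts(starts) == 0:
        break
    i = random.randrange(len(starts))        # pick a task to repair
    starts[i] = min(range(horizon - durations[i] + 1),
                    key=lambda t: conflicts(starts[:i] + [t] + starts[i + 1:]))
print(conflicts(starts), starts)
```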

  17. Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem

    Science.gov (United States)

    Luo, Yabo; Waden, Yongo P.

    2017-06-01

    The Job Shop Scheduling Problem (JSSP) is an NP-hard problem whose uncertainty and complexity cannot be handled by linear methods, so current studies concentrate mainly on improved heuristics for optimizing the JSSP. However, obstacles to efficient optimization remain, namely low efficiency and poor reliability, which can easily trap the optimization process in local optima. To address this, this paper studies an Ant Colony Optimization (ACO) algorithm combined with constraint-handling tactics. The work is subdivided into three parts: (1) analysis of the processing time tolerance-based constraint features of the JSSP, performed with a constraint satisfaction model; (2) satisfaction of the constraints using consistency technology and a constraint-spreading algorithm to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; and (3) demonstration of the reliability and efficiency of the proposed method through comparative experiments on benchmark problems. The results obtained by the proposed method are better, and the technique can be used in optimizing the JSSP.
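
    As a reference point for the construct-and-reinforce loop underlying any ACO scheduler, the sketch below has ants build job sequences biased by a (position, job) pheromone matrix. It is a simplification: the paper's tolerance-based constraint spreading is not reproduced, and the toy cost function is invented.

```python
import random

def aco(jobs, cost, n_ants=10, iters=50, rho=0.1, q=1.0):
    """Generic ACO skeleton: ants sample sequences, pheromone rewards quality."""
    n = len(jobs)
    tau = [[1.0] * n for _ in range(n)]       # pheromone on (position, job)
    best_seq, best_cost = None, float("inf")
    for _ in range(iters):
        for _ant in range(n_ants):
            remaining, seq = list(range(n)), []
            for pos in range(n):
                weights = [tau[pos][j] for j in remaining]
                j = random.choices(remaining, weights=weights)[0]
                seq.append(j)
                remaining.remove(j)
            c = cost([jobs[j] for j in seq])
            if c < best_cost:
                best_seq, best_cost = seq, c
            for pos, j in enumerate(seq):     # evaporate, then deposit
                tau[pos][j] = (1 - rho) * tau[pos][j] + q / c
        # (A constraint-spreading step would prune infeasible choices here.)
    return best_seq, best_cost

# Toy demo: cost of a sequence = sum of position-weighted processing times.
random.seed(0)
jobs = [3, 1, 4, 2]
print(aco(jobs, lambda seq: sum((i + 1) * p for i, p in enumerate(seq))))
```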

  18. The constraints

    International Nuclear Information System (INIS)

    Jones, P.M.S.

    1987-01-01

    There are considerable incentives for the use of nuclear in preference to other sources for base-load electricity generation in most of the developed world. These are economic, strategic, environmental and climatic. However, there are two potential constraints which could hinder the development of nuclear power to its full economic potential. These are public opinion and financial regulations which distort the nuclear economic advantage. The concerns of the anti-nuclear lobby are over safety (especially following the Chernobyl accident), the management of radioactive waste, the potential effects of large-scale exposure of the population to radiation, and weapons proliferation. These are discussed. The financial constraint concerns two factors, the availability of funds and the perception of cost, both of which are discussed. (U.K.)

  19. Quadratic third-order tensor optimization problem with quadratic constraints

    Directory of Open Access Journals (Sweden)

    Lixing Yang

    2014-05-01

    Full Text Available Quadratically constrained quadratic programming (QQP) problems play an important modeling role in many diverse applications. These problems are in general NP-hard and numerically intractable. Semidefinite programming (SDP) relaxations often provide good approximate solutions to these hard problems. For several special cases of QQP, e.g., convex programs and trust region subproblems, the SDP relaxation provides the exact optimal value, i.e., there is a zero duality gap. However, this is not true for the general QQP, or even for a QQP with two convex constraints but a nonconvex objective. In this paper, we consider a certain QQP where the variable is neither a vector nor a matrix but a third-order tensor; the problem can be viewed as a generalization of the ordinary QQP with a vector or matrix as its variable. Under some mild conditions, we first show that the SDP relaxation provides exact optimal solutions for the original problem. We then focus on two classes of homogeneous quadratic tensor programming problems which place no requirement on the number of constraints. For one class, we provide an easily implementable polynomial-time algorithm to approximately solve the problem and discuss the approximation ratio; for the other, we show that there is no gap between the SDP relaxation and the original problem.
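
    For readers unfamiliar with the relaxation invoked above, the standard vector-variable construction, of which the record's tensor case is a generalization, can be written as follows (textbook notation, not the authors'):

```latex
% QQP over x in R^n and its Shor/SDP relaxation via the lift X = x x^T.
\min_{x \in \mathbb{R}^n} \; x^{\top} A_0 x + b_0^{\top} x
\quad \text{s.t.} \quad x^{\top} A_i x + b_i^{\top} x \le c_i, \quad i = 1, \dots, m;
\qquad \Longrightarrow \qquad
\min_{X,\, x} \; \langle A_0, X \rangle + b_0^{\top} x
\quad \text{s.t.} \quad \langle A_i, X \rangle + b_i^{\top} x \le c_i, \quad
\begin{pmatrix} 1 & x^{\top} \\ x & X \end{pmatrix} \succeq 0 .
```

    The relaxation drops only the rank-one condition X = xxᵀ; the zero-duality-gap results quoted in the abstract state when this drop costs nothing.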

  20. From physical dose constraints to equivalent uniform dose constraints in inverse radiotherapy planning

    International Nuclear Information System (INIS)

    Thieke, Christian; Bortfeld, Thomas; Niemierko, Andrzej; Nill, Simeon

    2003-01-01

    Optimization algorithms in inverse radiotherapy planning need information about the desired dose distribution. Usually the planner defines physical dose constraints for each structure of the treatment plan, either in the form of minimum and maximum doses or as dose-volume constraints. The concept of equivalent uniform dose (EUD) was designed to describe dose distributions with a higher clinical relevance. In this paper, we present a method to consider the EUD as an optimization constraint by using the method of projections onto convex sets (POCS). In each iteration of the optimization loop, for each organ whose dose distribution violates an EUD constraint, a new dose distribution is calculated that satisfies the EUD constraint, leading to voxel-based physical dose constraints. The new dose distribution is found by projecting the current one onto the convex set of all dose distributions fulfilling the EUD constraint. The algorithm is easy to integrate into existing inverse planning systems, and it allows the planner to choose between physical and EUD constraints separately for each structure. A clinical case of a head and neck tumor is optimized using three different sets of constraints: physical constraints for all structures, physical constraints for the target and EUD constraints for the organs at risk, and EUD constraints for all structures. The results show that the POCS method converges stably and that the given EUD constraints are closely met.
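
    The generalized EUD of a structure with voxel doses d_i is commonly written EUD = ((1/N) Σ_i d_i^a)^(1/a). Because this expression is positively homogeneous in the dose, uniformly rescaling a violating distribution lands exactly on the constraint boundary, which gives the simplified stand-in for the projection step sketched below (the authors' POCS operator is the exact Euclidean projection; all numbers here are invented):

```python
import numpy as np

def eud(d, a):
    """Equivalent uniform dose of voxel doses d with volume parameter a."""
    return np.mean(d ** a) ** (1.0 / a)

def pull_back_to_eud(d, a, eud_max):
    """Rescale the dose so the EUD constraint holds with equality.

    EUD is homogeneous of degree 1 in d, so uniform scaling hits the
    boundary exactly; this is a simplification of the true projection."""
    current = eud(d, a)
    if current <= eud_max:
        return d                        # already feasible: leave unchanged
    return d * (eud_max / current)      # shrink onto the constraint boundary

rng = np.random.default_rng(0)
d = rng.uniform(20.0, 60.0, size=1000)  # hypothetical organ-at-risk doses (Gy)
d_new = pull_back_to_eud(d, a=8.0, eud_max=30.0)
print(round(eud(d, 8.0), 2), "->", round(eud(d_new, 8.0), 2))
```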

  1. A Pareto Algorithm for Efficient De Novo Design of Multi-functional Molecules.

    Science.gov (United States)

    Daeyaert, Frits; Deem, Michael W

    2017-01-01

    We have introduced a Pareto sorting algorithm into Synopsis, a de novo design program that generates synthesizable molecules with desirable properties. We give a detailed description of the algorithm and illustrate its operation in two different de novo design settings: the design of putative dual and selective FGFR and VEGFR inhibitors, and the successful design of organic structure directing agents (OSDAs) for the synthesis of zeolites. We show that the introduction of Pareto sorting not only enables the simultaneous optimization of multiple properties but also greatly improves the ability of the algorithm to generate molecules with hard-to-meet constraints. This in turn allows us to suggest approaches to address the problem of false positive hits in de novo structure-based drug design by introducing structural and physicochemical constraints in the designed molecules, and by forcing essential interactions between these molecules and their target receptor. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
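
    The record does not spell out the sorting step itself. A minimal non-dominated (Pareto) sort for a maximization setting, the kind of routine such multi-objective design loops are built on, looks like this; the score vectors are invented for illustration:

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_fronts(scores):
    """Split score vectors (to be maximized) into successive Pareto fronts."""
    remaining = list(range(len(scores)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(scores[j], scores[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Hypothetical (potency, selectivity) scores for five candidate molecules.
scores = [(0.9, 0.2), (0.7, 0.7), (0.2, 0.9), (0.6, 0.6), (0.1, 0.1)]
print(pareto_fronts(scores))   # [[0, 1, 2], [3], [4]]
```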

  2. Constraint programming and decision making

    CERN Document Server

    Kreinovich, Vladik

    2014-01-01

    In many application areas, it is necessary to make effective decisions under constraints. Several area-specific techniques are known for such decision problems; however, because these techniques are area-specific, it is not easy to apply them to other application areas. Cross-fertilization between different application areas is one of the main objectives of the annual International Workshops on Constraint Programming and Decision Making. Those workshops, held in the US (El Paso, Texas), in Europe (Lyon, France), and in Asia (Novosibirsk, Russia) from 2008 to 2012, have attracted researchers and practitioners from all over the world. This volume presents extended versions of selected papers from those workshops. These papers deal with all stages of decision making under constraints: (1) formulating the problem of multi-criteria decision making in precise terms; (2) determining when the corresponding decision problem is algorithmically solvable; (3) finding the corresponding algorithms; and making...

  3. A message-passing approach to random constraint satisfaction problems with growing domains

    International Nuclear Information System (INIS)

    Zhao, Chunyan; Zheng, Zhiming; Zhou, Haijun; Xu, Ke

    2011-01-01

    Message-passing algorithms based on belief propagation (BP) are implemented on a random constraint satisfaction problem (CSP) referred to as model RB, which is a prototype of hard random CSPs with growing domain size. In model RB, the number of candidate discrete values (the domain size) of each variable increases polynomially with the variable number N of the problem formula. Although the satisfiability threshold of model RB is exactly known, finding solutions for a single problem formula is quite challenging, and attempts have been limited to cases of N ∼ 10². In this paper, we propose two different kinds of message-passing algorithms guided by BP for this problem. Numerical simulations demonstrate that these algorithms allow us to find a solution for random formulas of model RB with constraint tightness slightly less than p_cr, the threshold value for the satisfiability phase transition. To evaluate the performance of these algorithms, we also provide a local search algorithm (random walk) as a comparison. Besides this, the dependence of the simulation time on the problem size N and the entropy of the variables for growing domain size are discussed.

  4. Improved Monkey-King Genetic Algorithm for Solving Large Winner Determination in Combinatorial Auction

    Science.gov (United States)

    Li, Yuzhong

    When using a GA to solve the winner determination problem (WDP) with many bids and items, run under different distributions, the large search space, the complex constraints and the ease of producing infeasible solutions all affect the efficiency and quality of the algorithm. This paper presents an improved MKGA, including three operators (preprocessing, bid insertion and exchange recombination) together with a Monkey-King elite preservation strategy. Experimental results show that the improved MKGA is better than the SGA in required population size and computation. Problems that the traditional branch-and-bound algorithm finds hard to solve, the improved MKGA can solve with better results.

  5. Machine tongues. X. Constraint languages

    Energy Technology Data Exchange (ETDEWEB)

    Levitt, D.

    Constraint languages and programming environments will help the designer produce a lucid description of a problem domain, and then of particular situations and problems in it. Early versions of these languages were given descriptions of real-world domain constraints, like the operation of electrical and mechanical parts. More recently, the author has automated a vocabulary for describing musical jazz phrases, using a constraint language as a jazz improviser. General constraint languages will handle all of these domains. Once the model is in place, the system will connect built-in code fragments and algorithms to answer questions about situations; that is, to help solve problems. Bugs will surface not in code, but in designs themselves. 15 references.

  6. Introduction to Evolutionary Algorithms

    CERN Document Server

    Yu, Xinjie

    2010-01-01

    Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, and particle swarm optimization.

  7. A Trust-region-based Sequential Quadratic Programming Algorithm

    DEFF Research Database (Denmark)

    Henriksen, Lars Christian; Poulsen, Niels Kjølstad

    This technical note documents the trust-region-based sequential quadratic programming algorithm used in other works by the authors. The algorithm seeks to minimize a convex nonlinear cost function subject to linear inequality constraints and nonlinear equality constraints.
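
    For orientation, a generic trust-region SQP step of the kind the note documents solves a local quadratic model subject to linearized constraints and a trust-region bound (textbook notation; the authors' exact formulation may differ):

```latex
% QP subproblem around the iterate x_k with multiplier estimate \lambda_k:
\min_{p} \;\; \nabla f(x_k)^{\top} p
  + \tfrac{1}{2}\, p^{\top} \nabla_{xx}^{2} \mathcal{L}(x_k, \lambda_k)\, p
\quad \text{s.t.} \quad
A (x_k + p) \le b, \qquad
c(x_k) + \nabla c(x_k)^{\top} p = 0, \qquad
\| p \| \le \Delta_k .
```

    The step is then accepted or rejected, and the radius Δ_k enlarged or shrunk, according to the ratio of actual to predicted reduction in a merit function.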

  8. Data assimilation with inequality constraints

    Science.gov (United States)

    Thacker, W. C.

    If values of variables in a numerical model are limited to specified ranges, these restrictions should be enforced when data are assimilated. The simplest option is to assimilate without regard for constraints and then to correct any violations without worrying about additional corrections implied by correlated errors. This paper addresses the incorporation of inequality constraints into the standard variational framework of optimal interpolation with emphasis on our limited knowledge of the underlying probability distributions. Simple examples involving only two or three variables are used to illustrate graphically how active constraints can be treated as error-free data when background errors obey a truncated multi-normal distribution. Using Lagrange multipliers, the formalism is expanded to encompass the active constraints. Two algorithms are presented, both relying on a solution ignoring the inequality constraints to discover violations to be enforced. While explicitly enforcing a subset can, via correlations, correct the others, pragmatism based on our poor knowledge of the underlying probability distributions suggests the expedient of enforcing them all explicitly to avoid the computationally expensive task of determining the minimum active set. If additional violations are encountered with these solutions, the process can be repeated. Simple examples are used to illustrate the algorithms and to examine the nature of the corrections implied by correlated errors.
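
    As a numerical illustration of the expedient described above (enforcing every detected violation as error-free data), here is a hedged sketch for a toy problem with correlated background errors and nonnegativity bounds; the matrices and the pin-and-resolve loop are ours, not the paper's:

```python
import numpy as np

def assimilate_nonneg(b, B):
    """Minimize (x-b)^T B^{-1} (x-b) subject to x >= 0 by repeatedly pinning
    violated components to the bound as error-free data and re-solving."""
    W = np.linalg.inv(B)                 # inverse background covariance
    x, active = b.copy(), set()
    while True:
        violated = {i for i in range(len(b)) if x[i] < -1e-12}
        if not violated:
            return x
        active |= violated
        m, n = len(active), len(b)
        E = np.zeros((m, n))             # rows select the pinned components
        for r, i in enumerate(sorted(active)):
            E[r, i] = 1.0
        # KKT system of the equality-constrained quadratic minimization.
        K = np.block([[2 * W, E.T], [E, np.zeros((m, m))]])
        rhs = np.concatenate([2 * W @ b, np.zeros(m)])
        x = np.linalg.solve(K, rhs)[:n]

B = np.array([[1.0, 0.8], [0.8, 1.0]])   # correlated background errors
b = np.array([-0.5, 1.0])                # background violates x >= 0
print(assimilate_nonneg(b, B))           # correlation also lifts x[1] to 1.4
```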

  9. Heuristic Scheduling Algorithm Oriented Dynamic Tasks for Imaging Satellites

    Directory of Open Access Journals (Sweden)

    Maocai Wang

    2014-01-01

    Full Text Available Imaging satellite scheduling is an NP-hard problem with many complex constraints. This paper studies the scheduling problem for dynamic tasks oriented to emergency cases. After analyzing the dynamic properties of satellite scheduling, an optimization model is proposed. Based on the model, two heuristic algorithms are proposed to solve the problem. The first arranges new tasks by inserting or deleting them, then inserting them repeatedly according to priority from low to high; it is named the IDI algorithm. The second, called ISDR, adopts four steps: insert directly, insert by shifting, insert by deleting, and reinsert the deleted tasks. Moreover, two heuristic factors, the congestion degree of a time window and the overlapping degree of a task, are employed to improve the algorithm's performance. Finally, a case is given to test the algorithms. The results show that the IDI algorithm is better than ISDR in running time, while the ISDR algorithm with heuristic factors is more effective with regard to solution quality. Moreover, the results also show that our method performs well for larger numbers of dynamic tasks in comparison with the other two methods.

  10. MO-FG-CAMPUS-TeP2-01: A Graph Form ADMM Algorithm for Constrained Quadratic Radiation Treatment Planning

    Energy Technology Data Exchange (ETDEWEB)

    Liu, X; Belcher, AH; Wiersma, R [The University of Chicago, Chicago, IL (United States)

    2016-06-15

    Purpose: In radiation therapy optimization the constraints can be either hard constraints, which must be satisfied, or soft constraints, which are included but do not need to be satisfied exactly. Currently the voxel dose constraints are viewed as soft constraints, included as part of the objective function, and the problem is approximated as unconstrained. However, in some treatment planning cases the constraints should be specified as hard constraints and handled by constrained optimization. The goal of this work is to present a computationally efficient graph-form alternating direction method of multipliers (ADMM) algorithm for constrained quadratic treatment planning optimization and to compare it with several commonly used algorithms/toolboxes. Method: ADMM can be viewed as an attempt to blend the benefits of dual decomposition and augmented Lagrangian methods for constrained optimization. Various proximal operators applicable to quadratic IMRT constrained optimization were first constructed, and the problem was formulated in the graph form of ADMM. A pre-iteration operation for the projection of a point onto a graph was also proposed to further accelerate the computation. Result: The graph-form ADMM algorithm was tested on the Common Optimization for Radiation Therapy (CORT) dataset, including the TG119, prostate, liver, and head & neck cases. Both unconstrained and constrained optimization problems were formulated for comparison purposes. All optimizations were solved by LBFGS, IPOPT, the Matlab built-in toolbox, CVX (implementing SeDuMi) and Mosek solvers. For unconstrained optimization, it was found that LBFGS performs the best, and it was 3–5 times faster than graph-form ADMM. However, for constrained optimization, graph-form ADMM was 8–100 times faster than the other solvers. Conclusion: A graph-form ADMM can be applied to constrained quadratic IMRT optimization. It is more computationally efficient than several other commercial and noncommercial optimizers and it also
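
    For reference, a graph-form ADMM iteration of the kind referred to above has the following generic shape, following the usual graph-form splitting convention (the radiotherapy-specific proximal operators are in the paper, not reproduced here):

```latex
% Graph form: minimize f(y) + g(x) subject to y = Ax
% (x = beamlet weights, y = voxel doses; f, g encode objectives and constraints).
\begin{aligned}
(x^{k+1/2},\, y^{k+1/2}) &= \big(\operatorname{prox}_{g}(x^{k} - \tilde{x}^{k}),\;
                                 \operatorname{prox}_{f}(y^{k} - \tilde{y}^{k})\big),\\
(x^{k+1},\, y^{k+1})     &= \Pi_{A}\big(x^{k+1/2} + \tilde{x}^{k},\;
                                        y^{k+1/2} + \tilde{y}^{k}\big),\\
(\tilde{x}^{k+1},\, \tilde{y}^{k+1}) &= \big(\tilde{x}^{k} + x^{k+1/2} - x^{k+1},\;
                                             \tilde{y}^{k} + y^{k+1/2} - y^{k+1}\big),
\end{aligned}
```

    where Π_A is the Euclidean projection onto the graph {(x, y) : y = Ax}; pre-factorizing this projection once is plausibly what the "pre-iteration operation" mentioned in the abstract refers to.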

  11. On Maximizing the Throughput of Packet Transmission under Energy Constraints.

    Science.gov (United States)

    Wu, Weiwei; Dai, Guangli; Li, Yan; Shan, Feng

    2018-06-23

    More and more Internet of Things (IoT) wireless devices have been providing ubiquitous services over the recent years. Since most of these devices are powered by batteries, a fundamental trade-off to be addressed is that between the depleted energy and the achieved data throughput in wireless data transmission. By exploiting the rate-adaptive capacities of wireless devices, most existing works on energy-efficient data transmission try to design rate-adaptive transmission policies to maximize the amount of transmitted data bits under the energy constraints of devices. Such solutions, however, cannot apply to scenarios where data packets have respective deadlines and only integrally transmitted data packets contribute. Thus, this paper introduces a notion of weighted throughput, which measures the total value of the data packets that are successfully and integrally transmitted before their own deadlines. By designing efficient rate-adaptive transmission policies, this paper aims to make the best use of the available energy and maximize the weighted throughput. More challenging, but of practical significance, we consider the fading effect of wireless channels in both offline and online scenarios. In the offline scenario, we develop an optimal algorithm that computes the optimal solution in pseudo-polynomial time, which is the best possible solution as the problem undertaken is NP-hard. In the online scenario, we propose an efficient heuristic algorithm based on optimal properties derived for the optimal offline solution. Simulation results validate the efficiency of the proposed algorithm.

  12. Portfolios of quantum algorithms.

    Science.gov (United States)

    Maurer, S M; Hogg, T; Huberman, B A

    2001-12-17

    Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature, they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.

  13. Embedded System Synthesis under Memory Constraints

    DEFF Research Database (Denmark)

    Madsen, Jan; Bjørn-Jørgensen, Peter

    1999-01-01

    This paper presents a genetic algorithm to solve the system synthesis problem of mapping a time-constrained single-rate system specification onto a given heterogeneous architecture which may contain irregular interconnection structures. The synthesis is performed under memory constraints; that is, the algorithm takes into account the memory size of processors and the size of interface buffers of communication links, and in particular the complicated interplay of these. The presented algorithm is implemented as part of the LYCOS cosynthesis system.

  14. Hybrid Genetic Algorithm with Multiparents Crossover for Job Shop Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Noor Hasnah Moin

    2015-01-01

    Full Text Available The job shop scheduling problem (JSSP) is one of the well-known hard combinatorial scheduling problems. This paper proposes a hybrid genetic algorithm with multiparent crossover for the JSSP. The multiparent crossover operator, known as extended precedence preservative crossover (EPPX), is able to recombine more than two parents to generate a single new offspring, distinguishing it from common crossover operators that recombine only two parents. The algorithm also embeds a schedule generation procedure to generate full-active schedules that satisfy precedence constraints, in order to reduce the search space. Once a schedule is obtained, a neighborhood search is applied to exploit the search space for better solutions and to enhance the GA. The hybrid genetic algorithm is simulated on a set of benchmarks from the literature and the results are compared with other approaches to assess the viability of this algorithm in solving the JSSP. The results suggest that the implementation of multiparent crossover produces competitive results.
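
    EPPX itself is described above only in prose. Below is a hedged sketch of a multi-parent precedence-preservative crossover in the same spirit; the published operator may differ in detail, and the three parent permutations are invented:

```python
import random

def multi_parent_ppx(parents):
    """Precedence-preservative crossover generalized to several parents.

    For each offspring position, pick a parent at random and take its
    leftmost gene not yet used, so relative (precedence) order survives."""
    length = len(parents[0])
    pools = [list(p) for p in parents]   # working copies we can consume
    used, child = set(), []
    for _ in range(length):
        pool = random.choice(pools)
        while pool[0] in used:           # skip genes already placed
            pool.pop(0)
        gene = pool.pop(0)
        child.append(gene)
        used.add(gene)
    return child

random.seed(4)
p1, p2, p3 = [1, 2, 3, 4, 5], [3, 1, 4, 5, 2], [5, 4, 3, 2, 1]
print(multi_parent_ppx([p1, p2, p3]))    # one offspring from three parents
```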

  15. Foundations of genetic algorithms 1991

    CERN Document Server

    1991-01-01

    Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems.This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition

  16. Constraint Embedding for Multibody System Dynamics

    Science.gov (United States)

    Jain, Abhinandan

    2009-01-01

    This paper describes a constraint embedding approach for the handling of local closure constraints in multibody system dynamics. The approach uses spatial operator techniques to eliminate local-loop constraints from the system and effectively convert the system into tree-topology systems. This approach allows the direct derivation of recursive O(N) techniques for solving the system dynamics while avoiding the expensive steps that would otherwise be required for handling the closed-chain dynamics. The approach is very effective for systems where the constraints are confined to small subgraphs within the system topology. The paper provides background on the spatial operator O(N) algorithms and the extensions for handling embedded constraints, and concludes with some examples of such constraints.

  17. Minimizing total weighted tardiness for the single machine scheduling problem with dependent setup time and precedence constraints

    Directory of Open Access Journals (Sweden)

    Hamidreza Haddad

    2012-04-01

    Full Text Available This paper tackles the single machine scheduling problem with dependent setup times and precedence constraints. The primary objective is the minimization of total weighted tardiness. Since the resulting problem is NP-hard, we use a metaheuristic to solve the model: a genetic algorithm that solves the problem in a reasonable amount of time. Because of the high sensitivity of the GA to its initial parameter values, a Taguchi approach is presented to calibrate its parameters. Computational experiments validate the effectiveness and capability of the proposed method.

  18. University Course Timetabling using Constraint Programming

    Directory of Open Access Journals (Sweden)

    Hadi Shahmoradi

    2017-03-01

    Full Text Available The university course timetabling problem is a challenging and time-consuming task concerning the overall structure of the timetable in every academic environment. The problem deals with many factors such as the number of lessons, classes, teachers, students and working times, and these are influenced by hard and soft constraints. The aim of solving this problem is to assign courses and classes to teachers and students so that the restrictions hold. In this paper, a constraint programming method is proposed to satisfy the maximum number of constraints and expectations in order to address the university timetabling problem. For minimizing the penalty of soft constraints, a cost function is introduced, and the AHP method is used to calculate its coefficients. The proposed model is tested on a dataset from the Department of Management, University of Isfahan, using OPL on the IBM ILOG CPLEX Optimization Studio platform. A statistical analysis shows that the proposed approach satisfies all hard constraints and achieves a highly desirable degree of soft-constraint satisfaction. The running time of the model is less than 20 minutes, which is significantly better than the non-automated alternatives.

  19. A mixed integer programming model for a continuous move transportation problem with service constraints

    Directory of Open Access Journals (Sweden)

    J. Fabian Lopez

    2010-01-01

    Full Text Available We consider a Pickup and Delivery Vehicle Routing Problem (PDP) commonly encountered in real-world logistics operations. The problem involves a set of practical complications that have received little attention in the vehicle routing literature. In this problem, there are multiple vehicle types available to cover a set of pickup and delivery requests, each of which has pickup time windows and delivery time windows. Transportation orders and vehicle types must satisfy a set of compatibility constraints that specify which orders cannot be covered by which vehicle types. In addition we include dock service capacity constraints, as required in common real-world operations. The problem must be handled on large-scale instances (orders ≥ 500, vehicles ≥ 150). As a generalization of the traveling salesman problem, this problem is clearly NP-hard, and exact algorithms are too slow for large-scale instances. The PDP-TWDS is both a packing problem (assigning orders to vehicles) and a routing problem (finding the best route for each vehicle). We propose to solve the problem in three stages. The first stage constructs initial solutions at an aggregate level, relaxing some constraints of the original problem. The other two stages impose the time window and dock service constraints. Our results are favorable, finding good-quality solutions in relatively short computational times.

  20. Comprehensive hard materials

    CERN Document Server

    2014-01-01

    Comprehensive Hard Materials deals with the production, uses and properties of the carbides, nitrides and borides of these metals and those of titanium, as well as tools of ceramics, the superhard boron nitrides and diamond and related compounds. Articles include the technologies of powder production (including their precursor materials), milling, granulation, cold and hot compaction, sintering, hot isostatic pressing, hot-pressing, injection moulding, as well as on the coating technologies for refractory metals, hard metals and hard materials. The characterization, testing, quality assurance and applications are also covered. Comprehensive Hard Materials provides meaningful insights on materials at the leading edge of technology. It aids continued research and development of these materials and as such it is a critical information resource to academics and industry professionals facing the technological challenges of the future. Hard materials operate at the leading edge of technology, and continued res...

  1. Optimal Quadratic Programming Algorithms

    CERN Document Server

    Dostal, Zdenek

    2009-01-01

    Quadratic programming (QP) is one technique that allows for the optimization of a quadratic function in several variables in the presence of linear constraints. This title presents various algorithms for solving large QP problems. It is suitable as an introductory text on quadratic programming for graduate students and researchers

  2. An adaptive ES with a ranking based constraint handling strategy

    Directory of Open Access Journals (Sweden)

    Kusakci Ali Osman

    2014-01-01

    Full Text Available To solve a constrained optimization problem, equality constraints can be used to eliminate a problem variable. If that is not feasible, the relations imposed implicitly by the constraints can still be exploited. Most conventional constraint-handling methods in Evolutionary Algorithms (EAs) do not consider the correlations between problem variables imposed by the constraints. This paper relies on the idea that a proper search operator, which captures the mentioned implicit correlations, can improve the performance of evolutionary constrained optimization algorithms. To realize this, an Evolution Strategy (ES) along with a simplified Covariance Matrix Adaptation (CMA) based mutation operator is used with a ranking-based constraint-handling method. The proposed algorithm is tested on 13 benchmark problems as well as on a real-life design problem. The algorithm significantly outperforms conventional ES-based methods.

  3. Approximate Compilation of Constraints into Multivalued Decision Diagrams

    DEFF Research Database (Denmark)

    Hadzic, Tarik; Hooker, John N.; O’Sullivan, Barry

    2008-01-01

    We present an incremental refinement algorithm for approximate compilation of constraint satisfaction models into multivalued decision diagrams (MDDs). The algorithm uses a vertex splitting operation that relies on the detection of equivalent paths in the MDD. Although the algorithm is quite general...

  4. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  5. Availability allocation to repairable systems with genetic algorithms: a multi-objective formulation

    International Nuclear Information System (INIS)

    Elegbede, Charles; Adjallah, Kondo

    2003-01-01

    This paper describes a methodology based on genetic algorithms (GA) and design of experiments to optimize the availability and the cost of repairable parallel-series systems. It is an NP-hard problem of multi-objective combinatorial optimization, modeled with continuous and discrete variables. By using the weighting technique, the problem is transformed into a single-objective optimization problem whose constraints are then relaxed by the exterior penalty technique. We then propose a search for solutions through a GA whose parameters are tuned using design-of-experiments techniques. A numerical example is used to assess the method.

  6. Sampling from a polytope and hard-disk Monte Carlo

    International Nuclear Information System (INIS)

    Kapfer, Sebastian C; Krauth, Werner

    2013-01-01

    The hard-disk problem, the statics and the dynamics of equal two-dimensional hard spheres in a periodic box, has had a profound influence on statistical and computational physics. Markov-chain Monte Carlo and molecular dynamics were first discussed for this model. Here we reformulate hard-disk Monte Carlo algorithms in terms of another classic problem, namely the sampling from a polytope. Local Markov-chain Monte Carlo, as proposed by Metropolis et al. in 1953, appears as a sequence of random walks in high-dimensional polytopes, while the moves of the more powerful event-chain algorithm correspond to molecular dynamics evolution. We determine the convergence properties of Monte Carlo methods in a special invariant polytope associated with hard-disk configurations, and the implications for convergence of hard-disk sampling. Finally, we discuss parallelization strategies for event-chain Monte Carlo and present results for a multicore implementation

  7. Induced spherococcoid hard wheat

    International Nuclear Information System (INIS)

    Yanev, Sh.

    1981-01-01

    A mutant, a spherococcoid line, has been obtained through irradiation of hard wheat seed with fast neutrons. It is distinguished by semispherical glumes and smaller grain; the plants have a low stem with erect leaves, but shorter spikes and fewer spikelets than the initial cultivar. Good productive tillering and resistance to lodging contributed to a 23.5% higher yield. The line was superior to the standard and the initial cultivars by 14.2% in protein content, and by up to 22.8% in flour gluten. It has been successfully used in hybridization, producing high-yielding hard wheat lines resistant to lodging, with good technological and other indicators. The stated possibility is of obtaining a spherococcoid mutant in tetraploid (hard) wheat without the D-genome, as well as its suitability for hard wheat breeding to enhance protein content, resistance to lodging, etc. (author)

  8. Hard probes 2006 Asilomar

    CERN Multimedia

    2006-01-01

    "The second international conference on hard and electromagnetic probes of high-energy nuclear collisions was held June 9 to 16, 2006 at the Asilomar Conference grounds in Pacific Grove, California" (photo and 1/2 page)

  9. Hard coal; Steinkohle

    Energy Technology Data Exchange (ETDEWEB)

    Loo, Kai van de; Sitte, Andreas-Peter [Gesamtverband Steinkohle e.V., Herne (Germany)

    2013-04-01

    In 2012, the consumption of hard coal grew at both the national and the international level. Worldwide, hard coal is still the number one energy source for power generation, which leads to an increasing demand for power plant coal. The conversion of hard coal into electricity also increased in this year. In contrast, the demand for coking coal and for coke from the steel industry continued to decline with market conditions. The increased use of coal for domestic power generation is due to the reduction of nuclear power, a relatively bad year for wind power, as well as reduced import prices and low CO2 prices. Together these gave coal a significant price advantage over natural gas in power plants. This was mainly due to the price erosion of inexpensive US coal, which was partly displaced on its domestic market by the expansion of shale gas and consequently sought an outlet for sales in Europe. Domestic hard coal has continued the process of adaptation and phase-out as scheduled: two further hard coal mines were decommissioned in 2012. RAG Aktiengesellschaft (Herne, Germany), which runs hard coal mining in the country, is beginning preparations for the activities after the end of mining.

  10. Maximum-entropy clustering algorithm and its global convergence analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
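
    The abstract leaves the update equations implicit. In the usual entropy-regularized formulation the memberships take a Gibbs form and the algorithm reduces to hard C-means as the temperature goes to zero; the sketch below illustrates this, with the fixed temperature T and the toy data being our assumptions rather than the paper's:

```python
import numpy as np

def max_entropy_cluster(X, k, T=0.5, iters=100, seed=0):
    """Soft C-means with entropy regularization: membership of point x in
    cluster i is proportional to exp(-||x - c_i||^2 / T)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        u = np.exp(-d2 / T)
        u /= u.sum(axis=1, keepdims=True)             # Gibbs memberships
        centers = (u.T @ X) / u.sum(axis=0)[:, None]  # membership-weighted means
    return centers, u

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),   # two synthetic blobs
               rng.normal(3.0, 0.3, (50, 2))])
centers, _ = max_entropy_cluster(X, k=2)
print(np.round(centers, 2))
```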

  11. Hybrid particle swarm optimization with Cauchy distribution for solving reentrant flexible flow shop with blocking constraint

    Directory of Open Access Journals (Sweden)

    Chatnugrob Sangsawang

    2016-06-01

    Full Text Available This paper addresses a two-stage flexible flow shop problem with reentrant and blocking constraints in hard disk drive manufacturing, which can be formulated as a deterministic FFS|stage=2, rcrc, block|Cmax problem. In this study, an adaptive Hybrid Particle Swarm Optimization with the Cauchy distribution (HPSO) was developed to solve the problem. The objective of this research is to find sequences that minimize the makespan. To evaluate their performance, computational experiments were performed on a number of test problems and the results are reported. The experiments show that the proposed algorithm gives better solutions than classical Particle Swarm Optimization (PSO) for all test problems. Additionally, the relative improvement (RI) of the makespan solutions obtained by the proposed algorithms with respect to those of current practice is computed in order to measure the quality of the solutions generated. The RI results show that the HPSO algorithm can improve the makespan solution by an average of 14.78%.
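
    The Cauchy ingredient is a heavy-tailed jump grafted onto the standard PSO update, which helps the swarm escape local optima. The continuous-space sketch below shows the mechanism only; the paper's variant operates on job sequences, and the mutation scale gamma is an assumption:

```python
import math
import random

def hpso(f, dim, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, gamma=0.1):
    """PSO plus an occasional Cauchy-distributed mutation of the global best."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)[:]
        # Heavy-tailed jump: a standard Cauchy sample is tan(pi * (U - 1/2)).
        trial = [x + gamma * math.tan(math.pi * (random.random() - 0.5))
                 for x in gbest]
        if f(trial) < f(gbest):
            gbest = trial
    return gbest, f(gbest)

random.seed(0)
print(hpso(lambda x: sum(v * v for v in x), dim=3))   # minimize the sphere
```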

  12. Inclusive Flavour Tagging Algorithm

    International Nuclear Information System (INIS)

    Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex

    2016-01-01

    Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capacities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tagging the flavour of B mesons in any proton-proton experiment. (paper)

  13. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you've hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it's not the ocean, it's the internet we're talking about, and it's not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to 'tame the algorithmic tiger'. While this is a valuable and often inspiring approach, we...

  14. Ant system for reliability optimization of a series system with multiple-choice and budget constraints

    International Nuclear Information System (INIS)

    Nahas, Nabil; Nourelfath, Mustapha

    2005-01-01

    Many researchers have shown that insect colony behavior can be seen as a natural model of collective problem solving. The analogy between the way ants look for food and combinatorial optimization problems has given rise to a new computational paradigm called the ant system. This paper presents an application of the ant system to a reliability optimization problem for a series system with multiple-choice constraints incorporated at each subsystem, to maximize the system reliability subject to the system budget. The problem is formulated as a nonlinear binary integer programming problem and characterized as NP-hard. It is solved by developing and demonstrating a problem-specific ant system algorithm. In this algorithm, solutions of the reliability optimization problem are repeatedly constructed by considering the trace factor and the desirability factor. A local search is used to improve the quality of the solutions obtained by each ant, and a penalty factor is introduced to deal with the budget constraint. Simulations have shown that the proposed ant system is efficient with respect to the quality of solutions and the computing time.

  15. Using soft constraints to guide users in flexible business process management systems

    DEFF Research Database (Denmark)

    Stefansen, Christian; Borch, Signe Ellegård

    2008-01-01

    Current Business Process Management Systems (BPMS) allow designers to specify processes in highly expressive languages supporting numerous control flow constructs, exceptions, complex predicates, etc., but process specifications are expressed in terms of hard constraints, and this leads to an unfortunate trade-off: information about preferred practices must either be abandoned or promoted to hard constraints. If abandoned, the BPMS cannot guide its users; if promoted to hard constraints, it becomes a hindrance when unanticipated deviations occur. Soft constraints can make this trade-off less painful. Soft constraints specify which rules can be violated and by how much. With soft constraints, the BPMS knows what deviations it can permit, and it can guide the user through the process. The BPMS should allow designers to easily specify soft goals and allow its users to immediately see...

  16. Soft and hard pomerons

    International Nuclear Information System (INIS)

    Maor, Uri; Tel Aviv Univ.

    1995-09-01

    The role of s-channel unitarity screening corrections, calculated in the eikonal approximation, is investigated for soft Pomeron exchange responsible for elastic and diffractive hadron scattering in the high energy limit. We examine the differences between our results and those obtained from the supercritical Pomeron-Regge model with no such corrections. It is shown that screening saturation is attained at different scales for different channels. We then proceed to discuss the new HERA data on hard (PQCD) Pomeron diffractive channels and discuss the relationship between the soft and hard Pomerons and the relevance of our analysis to this problem. (author). 18 refs, 9 figs, 1 tab

  17. Hard exclusive QCD processes

    Energy Technology Data Exchange (ETDEWEB)

    Kugler, W.

    2007-01-15

    Hard exclusive processes in high-energy electron-proton scattering offer the opportunity to access a new generation of parton distributions, the so-called generalized parton distributions (GPDs). These functions provide more detailed information about the structure of the nucleon than the usual PDFs obtained from DIS. In this work we present a detailed analysis of exclusive processes, especially of hard exclusive meson production. We investigate the influence of exclusively produced mesons on the semi-inclusive production of mesons at fixed-target experiments like HERMES. Further, we give a detailed analysis of higher-order (NLO) corrections for the exclusive production of mesons in a very broad range of kinematics. (orig.)

  18. Hard-hat day

    CERN Multimedia

    2003-01-01

    CERN will be organizing a special information day on Friday, 27th June, designed to promote the wearing of hard hats and ensure that they are worn correctly. A new prevention campaign will also be launched.The event will take place in the hall of the Main Building from 11.30 a.m. to 2.00 p.m., when you will be able to come and try on various models of hard hat, including some of the very latest innovative designs, ask questions and pass on any comments and suggestions.

  19. A Constraint Model for Constrained Hidden Markov Models

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2009-01-01

    A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper we extend HMMs with constraints and show how the familiar Viterbi algorithm can be generalized, based on constraint solving ...
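
    One simple way to realize the generalization hinted at above is to keep the usual Viterbi dynamic program but veto transitions through a constraint predicate; the paper's constraint-solving integration is more general, and the model below is a toy:

```python
def constrained_viterbi(obs, states, start, trans, emit, allowed):
    """Viterbi DP where allowed(t, prev, cur) vetoes forbidden transitions."""
    V = [{s: start[s] * emit[s][obs[0]] for s in states}]
    back = []
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            cands = [(V[t - 1][p] * trans[p][s] * emit[s][obs[t]], p)
                     for p in states if allowed(t, p, s)]
            V[t][s], back[-1][s] = max(cands) if cands else (0.0, None)
    best = max(states, key=lambda s: V[-1][s])
    path = [best]
    for b in reversed(back):             # trace the best path backwards
        path.append(b[path[-1]])
    return list(reversed(path))

states = ("coding", "noncoding")
start = {s: 0.5 for s in states}
trans = {s: {t: 0.5 for t in states} for s in states}
emit = {"coding": {"a": 0.7, "g": 0.3}, "noncoding": {"a": 0.3, "g": 0.7}}
# Invented constraint: the state may only switch at positions divisible by 3.
allowed = lambda t, p, s: p == s or t % 3 == 0
print(constrained_viterbi("aggag", states, start, trans, emit, allowed))
```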

  20. Combining Constraint Types From Public Data in Aerial Image Segmentation

    DEFF Research Database (Denmark)

    Jacobsen, Thomas Stig; Jensen, Jacob Jon; Jensen, Daniel Rune

    2013-01-01

    We introduce a method for image segmentation that constrains the clustering with map and point data. The method is showcased by applying the spectral clustering algorithm to aerial images for building detection, with constraints built from a height map and address point data. We automatically det...

  1. A comparison of an algorithm for automated sequential beam orientation selection (Cycle) with simulated annealing

    International Nuclear Information System (INIS)

    Woudstra, Evert; Heijmen, Ben J M; Storchi, Pascal R M

    2008-01-01

    Some time ago we developed and published a new deterministic algorithm (called Cycle) for automatic selection of beam orientations in radiotherapy. This algorithm is a plan generation process aiming at the prescribed PTV dose within hard dose and dose-volume constraints. The algorithm allows a large number of input orientations to be used and selects only the most efficient orientations that survive the selection process. Efficiency is determined by a score function and is more or less equal to the extent of uninhibited access to the PTV for a specific beam during the selection process. In this paper we compare the capabilities of fast simulated annealing (FSA) and Cycle for cases where local optima are supposed to be present. Five pancreas and five oesophagus cases previously treated in our institute were selected for this comparison. Plans were generated with FSA and Cycle using the same hard dose and dose-volume constraints, and the largest achievable PTV doses obtained from the two algorithms were compared. The largest achieved PTV dose values were generally very similar for the two algorithms. In some cases FSA resulted in a slightly higher PTV dose than Cycle, at the cost of switching on substantially more beam orientations than Cycle. In other cases, when Cycle generated the solution with the highest PTV dose using only a limited number of non-zero-weight beams, FSA seemed to have some difficulty in switching off the unfavourable directions. Cycle was faster than FSA, especially for large-dimensional feasible spaces. In conclusion, for the cases studied in this paper, we have found that despite the inherent drawback of sequential search as used by Cycle (which could in principle get trapped in a local optimum), Cycle is nevertheless able to find comparable or sometimes slightly better treatment plans than FSA (which in theory finds the global optimum), especially in large-dimensional beam weight spaces.

  2. Hard times; Schwere Zeiten

    Energy Technology Data Exchange (ETDEWEB)

    Grunwald, Markus

    2012-10-02

    The prices of silicon and solar wafers keep dropping. According to the market research specialist IMS Research, this is the result of weak traditional solar markets and global overcapacities. While many manufacturers are facing hard times, big producers of silicon are continuing to expand.

  3. Rock-hard coatings

    OpenAIRE

    Muller, M.

    2007-01-01

    Aircraft jet engines have to be able to withstand infernal conditions. Extreme heat and bitter cold tax coatings to the limit. Materials expert Dr Ir. Wim Sloof fits atoms together to develop rock-hard coatings. The latest invention in this field is known as ceramic matrix composites. Sloof has signed an agreement with a number of parties to investigate this material further.

  4. Rock-hard coatings

    NARCIS (Netherlands)

    Muller, M.

    2007-01-01

    Aircraft jet engines have to be able to withstand infernal conditions. Extreme heat and bitter cold tax coatings to the limit. Materials expert Dr Ir. Wim Sloof fits atoms together to develop rock-hard coatings. The latest invention in this field is known as ceramic matrix composites. Sloof has

  5. Hardness and excitation energy

    Indian Academy of Sciences (India)

    It is shown that the first excitation energy can be given by the Kohn-Sham hardness (i.e. the energy difference of the ground-state lowest unoccupied and highest occupied levels) plus an extra term coming from the partial derivative of the ensemble exchange-correlation energy with respect to the weighting factor in the ...

  6. Randomized Filtering Algorithms

    DEFF Research Database (Denmark)

    Katriel, Irit; Van Hentenryck, Pascal

    2008-01-01

    of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed in the expected sense. The second scheme is a Las Vegas algorithm using filtering triggers: its effectiveness is the same as enforcing arc consistency after every domain event, while in the expected case it is faster by a factor of m/n, where n and m are, respectively, the number of nodes and edges...

  7. Application of Multiple-Population Genetic Algorithm in Optimizing the Train-Set Circulation Plan Problem

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2017-01-01

    Full Text Available The train-set circulation plan problem (TCPP) belongs to the rolling stock scheduling (RSS) problem and is similar to the aircraft routing problem (ARP) in airline operations and the vehicle routing problem (VRP) in the logistics field. However, the TCPP involves additional complexity due to the maintenance constraint on train-sets: train-sets must undergo maintenance after running for a certain time and distance. The TCPP is nondeterministic polynomial hard (NP-hard); no available algorithm can guarantee the globally optimal solution, and many factors such as the utilization mode and the maintenance mode impact the solution. This paper proposes a train-set circulation optimization model to minimize the total connection time and maintenance costs, and describes the design of an efficient multiple-population genetic algorithm (MPGA) to solve this model. A realistic high-speed railway (HSR) case is selected to verify our model and algorithm, and then a comparison of different algorithms is carried out. Furthermore, a new maintenance mode is proposed, and related implementation requirements are discussed.

  8. Multi-objective problem of the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints

    Science.gov (United States)

    Amallynda, I.; Santosa, B.

    2017-11-01

    This paper proposes a new generalization of the distributed parallel machine and assembly scheduling problem (DPMASP) with eligibility constraints, referred to as the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints. Within this generalization, we assume that there is a set of non-identical factories or production lines, each with a set of unrelated parallel machines of different speeds, feeding a single assembly machine in series. A set of different products is manufactured through an assembly program from a set of components (jobs) according to the requested demand, and each product requires several kinds of jobs of different sizes. We also consider the multi-objective problem (MOP) of simultaneously minimizing the mean flow time and the number of tardy products. The problem is known to be NP-hard and is important in practice, as these criteria reflect the customer's demand and the manufacturer's perspective. For this realistic and complex problem with a wide range of possible solutions, we propose four simple heuristics and two metaheuristics to solve it. Various parameters of the proposed metaheuristic algorithms are discussed and calibrated by means of the Taguchi technique. All proposed algorithms are tested in Matlab. Our computational experiments indicate that the proposed algorithms can be implemented and used to solve moderately sized instances, giving efficient solutions that are close to optimal in most cases.

  9. Financing Constraints and Entrepreneurship

    OpenAIRE

    William R. Kerr; Ramana Nanda

    2009-01-01

    Financing constraints are one of the biggest concerns impacting potential entrepreneurs around the world. Given the important role that entrepreneurship is believed to play in the process of economic growth, alleviating financing constraints for would-be entrepreneurs is also an important goal for policymakers worldwide. We review two major streams of research examining the relevance of financing constraints for entrepreneurship. We then introduce a framework that provides a unified perspective...

  10. Temporal Concurrent Constraint Programming

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Valencia Posso, Frank Dan

    2002-01-01

    The ntcc calculus is a model of non-deterministic temporal concurrent constraint programming. In this paper we study behavioral notions for this calculus. In the underlying computational model, concurrent constraint processes are executed in discrete time intervals. The behavioral notions studied...... reflect the reactive interactions between concurrent constraint processes and their environment, as well as internal interactions between individual processes. Relationships between the suggested notions are studied, and they are all proved to be decidable for a substantial fragment of the calculus...

  11. Faddeev-Jackiw quantization and constraints

    International Nuclear Information System (INIS)

    Barcelos-Neto, J.; Wotzasek, C.

    1992-01-01

    In a recent Letter, Faddeev and Jackiw have shown that the reduction of constrained systems into their canonical, first-order form can bring some new insight into research in this field. For symplectic manifolds the geometrical structure, called the Dirac or generalized bracket, is obtained directly from the inverse of the nonsingular symplectic two-form matrix. In the case of nonsymplectic manifolds, this two-form is degenerate and cannot be inverted to provide the generalized brackets. This singular behavior of the symplectic matrix is indicative of the presence of constraints that have to be carefully considered to yield consistent results. There are two possible routes to treat this problem: Dirac has taught us how to implement the constraints into the potential part (Hamiltonian) of the canonical Lagrangian, leading to the well-known Dirac brackets, which are consistent with the constraints and can be mapped into quantum commutators (modulo ordering terms). The second route, suggested by Faddeev and Jackiw and followed in this paper, is to implement the constraints directly into the canonical part of the first-order Lagrangian, using the fact that the consistency condition for the stability of the constrained manifold is linear in the time derivative. This algorithm may lead to an invertible symplectic two-form matrix from which the Dirac brackets are readily obtained. The algorithm is used in this paper to investigate some aspects of the quantization of constrained systems with first- and second-class constraints in the symplectic approach.
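
    In symbols, for a first-order Lagrangian in variables ξ^i the symplectic two-form and the generalized brackets discussed above read (standard notation from the Faddeev-Jackiw literature):

```latex
% First-order Lagrangian and its symplectic two-form:
L = a_i(\xi)\, \dot{\xi}^{i} - H(\xi), \qquad
f_{ij} = \frac{\partial a_j}{\partial \xi^{i}} - \frac{\partial a_i}{\partial \xi^{j}} .
% When f is nonsingular, the generalized (Dirac) brackets follow from its inverse,
\{\xi^{i}, \xi^{j}\} = (f^{-1})^{ij} ;
% a singular f signals constraints, to be fed back into the canonical sector.
```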

  12. Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems

    National Research Council Canada - National Science Library

    Abramson, Mark A; Audet, Charles; Dennis, Jr, J. E

    2004-01-01

    .... This class combines and extends the Audet-Dennis Generalized Pattern Search (GPS) algorithms for bound constrained mixed variable optimization, and their GPS-filter algorithms for general nonlinear constraints...

  13. Sparse Nonlinear Electromagnetic Imaging Accelerated With Projected Steepest Descent Algorithm

    KAUST Repository

    Desmal, Abdulla; Bagci, Hakan

    2017-01-01

    steepest descent algorithm. The algorithm uses a projection operator to enforce the sparsity constraint by thresholding the solution at every iteration. Thresholding level and iteration step are selected carefully to increase the efficiency without

  14. Hard Copy Market Overview

    Science.gov (United States)

    Testan, Peter R.

    1987-04-01

    A number of Color Hard Copy (CHC) market drivers are currently indicating strong growth in the use of CHC technologies for the business graphics marketplace. These market drivers relate to product, software, color monitors and color copiers. The use of color in business graphics allows more information to be relayed than is normally the case in a monochrome format. The communicative powers of full-color computer generated output in the business graphics application area will continue to induce end users to desire and require color in their future applications. A number of color hard copy technologies will be utilized in the presentation graphics arena. Thermal transfer, ink jet, photographic and electrophotographic technologies are all expected to be utilized in the business graphics presentation application area in the future. Since the end of 1984, the availability of color application software packages has grown significantly. Sales revenue generated by business graphics software is expected to grow at a compound annual growth rate of just over 40 percent to 1990. Increased availability of packages to allow the integration of text and graphics is expected. Currently, the latest versions of page description languages such as Postscript, Interpress and DDL all support color output. The use of color monitors will also drive the demand for color hard copy in the business graphics marketplace. The availability of higher-resolution screens is allowing color monitors to be easily used for both text and graphics applications in the office environment. During 1987, the sales of color monitors are expected to surpass the sales of monochrome monitors. Another major color hard copy market driver will be the color copier. In order to take advantage of the communications power of computer generated color output, multiple copies are required for distribution. Product introductions of a new generation of color copiers are now underway, with additional introductions expected

  15. About some types of constraints in problems of routing

    Science.gov (United States)

    Petunin, A. A.; Polishuk, E. G.; Chentsov, A. G.; Chentsov, P. A.; Ukolov, S. S.

    2016-12-01

    Many routing problems arising in different applications can be interpreted as discrete optimization problems with additional constraints. The latter include the generalized travelling salesman problem (GTSP), to which the task of tool routing for CNC thermal cutting machines is sometimes reduced. Technological requirements related to the distribution of thermal fields during the cutting process are of great importance when developing algorithms for solving this task. These requirements give rise to some specific constraints for the GTSP. This paper provides a mathematical formulation for the problem of calculating thermal fields during metal sheet thermal cutting. A corresponding algorithm and its software implementation are considered. A mathematical model allowing such constraints to be taken into account in other routing problems is also discussed.

  16. Simulating non-holonomic constraints within the LCP-based simulation framework

    DEFF Research Database (Denmark)

    Ellekilde, Lars-Peter; Petersen, Henrik Gordon

    2006-01-01

    In this paper, we will extend the linear complementarity problem-based rigid-body simulation framework with non-holonomic constraints. We consider three different types of such, namely equality, inequality and contact constraints. We show how non-holonomic equality and inequality constraints can be incorporated directly, and derive formalism for how the non-holonomic contact constraints can be modelled as a combination of non-holonomic equality constraints and ordinary contact constraints. For each of these three we are able to guarantee solvability, when using Lemke's algorithm. A number of examples are included to demonstrate the non-holonomic constraints.

  17. Orthology and paralogy constraints: satisfiability and consistency.

    Science.gov (United States)

    Lafond, Manuel; El-Mabrouk, Nadia

    2014-01-01

    A variety of methods based on sequence similarity, reconciliation, synteny or functional characteristics can be used to infer orthology and paralogy relations between genes of a given gene family G. But is a given set C of orthology/paralogy constraints possible, i.e., can they simultaneously co-exist in an evolutionary history for G? While previous studies have focused on full sets of constraints, here we consider the general case where C does not necessarily involve a constraint for each pair of genes. The problem is subdivided into two parts: (1) Is C satisfiable, i.e. can we find an event-labeled gene tree G inducing C? (2) Is there such a G which is consistent, i.e., such that all displayed triplet phylogenies are included in a species tree? Previous results on the graph sandwich problem can be used to answer (1), and we provide polynomial-time algorithms for satisfiability and consistency with a given species tree. We also describe a new polynomial-time algorithm for the case of consistency with an unknown species tree and full knowledge of pairwise orthology/paralogy relationships, as well as a branch-and-bound algorithm in the case when unknown relations are present. We show that our algorithms can be used in combination with ProteinOrtho, a sequence similarity-based orthology detection tool, to extract a set of robust orthology/paralogy relationships.

  18. Quiet planting in the locked constraints satisfaction problems

    Energy Technology Data Exchange (ETDEWEB)

    Zdeborova, Lenka [Los Alamos National Laboratory]; Krzakala, Florent [Los Alamos National Laboratory]

    2009-01-01

    We study the planted ensemble of locked constraint satisfaction problems. We describe the connection between the random and planted ensembles. The use of the cavity method is combined with arguments from reconstruction on trees and first and second moment considerations; in particular the connection with the reconstruction on trees appears to be crucial. Our main result is the location of the hard region in the planted ensemble, thus providing hard satisfiable benchmarks. In a part of that hard region instances have with high probability a single satisfying assignment.

  19. Hard Electromagnetic Processes

    International Nuclear Information System (INIS)

    Richard, F.

    1987-09-01

    Among hard electromagnetic processes, I will use the most recent data and focus on quantitative tests of QCD. More specifically, I will retain two items: hadroproduction of direct photons, and Drell-Yan. In addition, I will briefly discuss a recent analysis of ISR data obtained with the AFS (Axial Field Spectrometer) which sheds new light on the e/π puzzle at low p_T

  20. An overview on polynomial approximation of NP-hard problems

    Directory of Open Access Journals (Sweden)

    Paschos Vangelis Th.

    2009-01-01

    The fact that a polynomial-time algorithm is very unlikely to be devised for optimally solving NP-hard problems strongly motivates both researchers and practitioners to try to solve such problems heuristically, by making a trade-off between computational time and solution quality. In other words, heuristic computation consists of trying to find not the best solution but one solution which is 'close to' the optimal one in reasonable time. Among the classes of heuristic methods for NP-hard problems, the polynomial approximation algorithms aim at solving a given NP-hard problem in polynomial time by computing feasible solutions that are, under some predefined criterion, as near to the optimal ones as possible. The polynomial approximation theory deals with the study of such algorithms. This survey first presents and analyzes polynomial-time approximation algorithms for some classical examples of NP-hard problems. Secondly, it shows how classical notions and tools of complexity theory, such as polynomial reductions, can be matched with polynomial approximation in order to devise structural results for NP-hard optimization problems. Finally, it presents a quick description of what is commonly called inapproximability results. Such results provide limits on the approximability of the problems tackled.
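
    As a concrete instance of the kind of algorithm this survey studies (our illustration, not an example drawn from the survey itself), the sketch below is the textbook maximal-matching 2-approximation for Minimum Vertex Cover: it runs in linear time and returns a cover at most twice the size of an optimal one.

        # Classic 2-approximation for Minimum Vertex Cover: take both endpoints
        # of every edge of a greedily built maximal matching. The returned cover
        # is at most twice the size of an optimal one.
        def vertex_cover_2approx(edges):
            cover = set()
            for u, v in edges:
                if u not in cover and v not in cover:  # edge still uncovered
                    cover.update((u, v))               # add both endpoints
            return cover

        edges = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
        print(vertex_cover_2approx(edges))  # {1, 2, 3, 4}; optimum here is {2, 4}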

  1. Temporal Concurrent Constraint Programming

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Palamidessi, Catuscia; Valencia, Frank Dan

    2002-01-01

    The ntcc calculus is a model of non-deterministic temporal concurrent constraint programming. In this paper we study behavioral notions for this calculus. In the underlying computational model, concurrent constraint processes are executed in discrete time intervals. The behavioral notions studied...

  2. Evaluating Distributed Timing Constraints

    DEFF Research Database (Denmark)

    Kristensen, C.H.; Drejer, N.

    1994-01-01

    In this paper we describe a solution to the problem of implementing time-optimal evaluation of timing constraints in distributed real-time systems.

  3. Theory of Constraints (TOC)

    DEFF Research Database (Denmark)

    Michelsen, Aage U.

    2004-01-01

    The thinking behind the Theory of Constraints, together with the planning principle Drum-Buffer-Rope. In addition, a sketch of The Thinking Process.

  4. Multi-Objective Constraint Satisfaction for Mobile Robot Area Defense

    Science.gov (United States)

    2010-03-01


  5. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with the digital sound signal. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  6. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  7. A formal analysis of a dynamic distributed spanning tree algorithm

    NARCIS (Netherlands)

    Mooij, A.J.; Wesselink, J.W.

    2003-01-01

    We analyze the spanning tree algorithm in the IEEE 1394.1 draft standard, whose correctness has not previously been proved. This algorithm is a fully-dynamic distributed graph algorithm, which, in general, is hard to develop. The approach we use is to formally develop an algorithm that is

  8. Block Pickard Models for Two-Dimensional Constraints

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Justesen, Jørn

    2009-01-01

    In Pickard random fields (PRF), the probabilities of finite configurations and the entropy of the field can be calculated explicitly, but only very simple structures can be incorporated into such a field. Given two Markov chains describing a boundary, an algorithm is presented which determines...... for the domino tiling constraint represented by a quaternary alphabet. PRF models are also presented for higher order constraints, including the no isolated bits (n.i.b.) constraint, and a minimum distance 3 constraint by defining super symbols on blocks of binary symbols....

  9. Parallel Execution of Multi Set Constraint Rewrite Rules

    DEFF Research Database (Denmark)

    Sulzmann, Martin; Lam, Edmund Soon Lee

    2008-01-01

    Multi-set constraint rewriting allows for a highly parallel computational model and has been used in a multitude of application domains such as constraint solving, agent specification etc. Rewriting steps can be applied simultaneously as long as they do not interfere with each other. We wish that the underlying constraint rewrite implementation executes rewrite steps in parallel on the increasingly popular multi-core architectures. We design and implement efficient algorithms which allow for the parallel execution of multi-set constraint rewrite rules. Our experiments show that we obtain some...

  10. Handbook of Memetic Algorithms

    CERN Document Server

    Cotta, Carlos; Moscato, Pablo

    2012-01-01

    Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems.  The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes.   “Handbook of Memetic Algorithms” organizes, in a structured way, all the most important results in the field of MAs since their earliest definition until now.  A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, and uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...

  11. Iterative Schemes for Convex Minimization Problems with Constraints

    Directory of Open Access Journals (Sweden)

    Lu-Chuan Ceng

    2014-01-01

    We first introduce and analyze one implicit iterative algorithm for finding a solution of the minimization problem for a convex and continuously Fréchet differentiable functional, with constraints of several problems: the generalized mixed equilibrium problem, the system of generalized equilibrium problems, and finitely many variational inclusions in a real Hilbert space. We prove a strong convergence theorem for the iterative algorithm under suitable conditions. On the other hand, we also propose another implicit iterative algorithm for finding a fixed point of infinitely many nonexpansive mappings with the same constraints, and derive its strong convergence under mild assumptions.
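
    The Hilbert-space machinery above has a simple finite-dimensional analogue that conveys the shape of such schemes. The sketch below (our toy example, not the paper's implicit algorithm) minimizes a convex differentiable functional over a closed convex set by explicit projected gradient descent.

        import numpy as np

        # Toy analogue: projected gradient descent for the convex functional
        # f(x) = 0.5 * ||A x - b||^2 over the box 0 <= x <= 1. This is not the
        # paper's implicit Hilbert-space scheme, just the simplest explicit
        # method with the same "minimize subject to constraints" shape.
        rng = np.random.default_rng(0)
        A = rng.normal(size=(20, 5))
        b = rng.normal(size=20)

        x = np.zeros(5)
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
        for _ in range(500):
            grad = A.T @ (A @ x - b)
            x = np.clip(x - step * grad, 0.0, 1.0)  # Euclidean projection onto the box
        print(x)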

  12. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their flavors, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivatives we will use the Landweber-Kaczmarz iteration and, in order to improve the overall results, some additional sparsity constraints.
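
    To make the idea concrete, here is a minimal sketch of a Landweber iteration with a sparsity constraint enforced by soft thresholding, on a generic dense linear operator; the actual parallel-MRI forward operator, the Kaczmarz sweep over coil equations, and the paper's parameter choices are all omitted, and the names and values below are our assumptions.

        import numpy as np

        # Sketch: sparsity-constrained Landweber iteration
        #   x <- S_t( x + w * A^T (y - A x) )
        # with S_t soft thresholding; A stands in for the (in MRI: coil
        # sensitivities composed with Fourier sampling) forward operator.
        def soft_threshold(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def landweber_sparse(A, y, w, t, iters=300):
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                x = soft_threshold(x + w * A.T @ (y - A @ x), t)
            return x

        rng = np.random.default_rng(1)
        A = rng.normal(size=(30, 60))
        x_true = np.zeros(60)
        x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
        y = A @ x_true                       # noiseless toy data
        w = 1.0 / np.linalg.norm(A, 2) ** 2  # stable relaxation weight
        print(landweber_sparse(A, y, w, t=1e-4).round(2))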

  13. Development of a hard microcontroller

    International Nuclear Information System (INIS)

    Measel, P.R.; Sivo, L.L.; Quilitz, W.E.; Davidson, T.K.

    1976-01-01

    The applicability of commercially available microprocessors to certain systems requiring radiation survival was assessed. A microcontroller was designed and built to perform a monitor and control function of military operational ground equipment, and demonstrated to exceed the radiation hardness goal. The preparation of the microcontroller module required hardware and software design, selection of LSI and other piece part types, development of piece part and module electrical and radiation test techniques, and the performance of radiation tests on the LSI piece parts and the completed module. The microcontroller has a 16-bit central processor unit, a 4096 word read only memory, and a 256 word read-write memory. The module has circumvention circuitry, including a PIN diode radiation detector. The processor device used was the MMI 6701 T²L Schottky bipolar 4-bit slice. Electrical exerciser circuits were developed for in-situ electrical testing of microprocessors and memories during irradiation. A test program was developed for a Terradyne J283 microcircuit tester for more complete electrical characterization of the MMI 6701 microprocessor. A simple self-test algorithm was used in the microcontroller for performance testing during irradiation. For the operational demonstration of the microcontroller a TI 960A minicomputer was used to provide the required complex inputs to the module and verify the module outputs

  14. Constraint-based reachability

    Directory of Open Access Journals (Sweden)

    Arnaud Gotlieb

    2013-02-01

    Iterative imperative programs can be considered as infinite-state systems computing over possibly unbounded domains. Studying reachability in these systems is challenging as it requires dealing with an infinite number of states with standard backward or forward exploration strategies. An approach that we call Constraint-based reachability is proposed to address reachability problems by exploring program states using a constraint model of the whole program. The key point of the approach is to interpret imperative constructions such as conditionals, loops, array and memory manipulations with the fundamental notion of constraint over a computational domain. By combining constraint filtering and abstraction techniques, Constraint-based reachability is able to solve reachability problems which are usually outside the scope of backward or forward exploration strategies. This paper proposes an interpretation of classical filtering consistencies used in Constraint Programming as abstract domain computations, and shows how this approach can be used to produce a constraint solver that efficiently generates solutions for reachability problems that are unsolvable by other approaches.

  15. Total-variation regularization with bound constraints

    International Nuclear Information System (INIS)

    Chartrand, Rick; Wohlberg, Brendt

    2009-01-01

    We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
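
    A much-simplified sketch of the same idea follows: decouple the TV objective from the bound constraint by taking a descent step on a smoothed TV functional and then projecting onto the box. This is our toy projected-gradient variant, not the authors' splitting algorithm; all parameter values are assumptions.

        import numpy as np

        # Smoothed isotropic TV gradient with periodic boundaries. np.clip is
        # the Euclidean projection that enforces the box constraint after each
        # descent step, decoupling TV minimization from the constraints.
        def tv_grad(u, eps=1e-3):
            dx = np.roll(u, -1, axis=1) - u
            dy = np.roll(u, -1, axis=0) - u
            mag = np.sqrt(dx**2 + dy**2 + eps)
            px, py = dx / mag, dy / mag
            # adjoint of the forward-difference operators
            return (np.roll(px, 1, axis=1) - px) + (np.roll(py, 1, axis=0) - py)

        def tv_denoise_box(f, lam=0.2, lo=0.0, hi=1.0, step=0.2, iters=300):
            u = f.copy()
            for _ in range(iters):
                g = (u - f) + lam * tv_grad(u)     # grad of 0.5*||u-f||^2 + lam*TV
                u = np.clip(u - step * g, lo, hi)  # projection onto [lo, hi]
            return u

        rng = np.random.default_rng(2)
        noisy = np.clip(0.5 + 0.3 * rng.normal(size=(32, 32)), 0.0, 1.0)
        print(tv_denoise_box(noisy).std())  # variation reduced vs. the input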

  16. Latin hypercube sampling with inequality constraints

    International Nuclear Information System (INIS)

    Iooss, B.; Petelet, M.; Asserin, O.; Loredo, A.

    2010-01-01

    In some studies requiring predictive and CPU-time-consuming numerical models, the sampling design of the model input variables has to be chosen with caution. For this purpose, Latin hypercube sampling has a long history and has shown its robustness capabilities. In this paper we propose and discuss a new algorithm to build a Latin hypercube sample (LHS) taking into account inequality constraints between the sampled variables. This technique, called constrained Latin hypercube sampling (cLHS), consists in doing permutations on an initial LHS to honor the desired monotonic constraints. The relevance of this approach is shown on a real example concerning numerical welding simulation, where the inequality constraints are caused by the physical decrease of some material properties as a function of temperature. (authors)
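
    A toy version of the permutation-repair idea can be sketched as follows (our illustration; the published cLHS procedure is more structured): build an ordinary LHS, then greedily swap entries of one column until an inequality between two sampled variables holds for every sample.

        import numpy as np

        # Toy constrained LHS: x1 ~ U[0,1] (say, a temperature scale) and
        # x2 ~ U[0,0.5] (a dependent property), with the per-sample inequality
        # constraint x2 <= x1. Start from an ordinary LHS, then repair column 2
        # by greedy swaps that strictly reduce the number of violations.
        def lhs(n, lo, hi, rng):
            # one stratified column: one point per stratum, randomly permuted
            u = (rng.permutation(n) + rng.uniform(size=n)) / n
            return lo + (hi - lo) * u

        def constrained_lhs(n, rng, max_tries=10000):
            x1, x2 = lhs(n, 0.0, 1.0, rng), lhs(n, 0.0, 0.5, rng)
            for _ in range(max_tries):
                bad = np.flatnonzero(x2 > x1)
                if bad.size == 0:
                    return np.column_stack([x1, x2])
                i, j = rng.choice(bad), rng.integers(n)
                # swap two entries of column 2 if it satisfies more rows
                before = (x2[i] <= x1[i]) + (x2[j] <= x1[j])
                after = (x2[j] <= x1[i]) + (x2[i] <= x1[j])
                if after > before:
                    x2[i], x2[j] = x2[j], x2[i]
            raise RuntimeError("no feasible permutation found; resample")

        print(constrained_lhs(10, np.random.default_rng(3)))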

  17. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  18. Hardness and Approximation for Network Flow Interdiction

    OpenAIRE

    Chestnut, Stephen R.; Zenklusen, Rico

    2015-01-01

    In the Network Flow Interdiction problem an adversary attacks a network in order to minimize the maximum s-t-flow. Very little is known about the approximability of this problem despite decades of interest in it. We present the first approximation hardness, showing that Network Flow Interdiction and several of its variants cannot be much easier to approximate than Densest k-Subgraph. In particular, any $n^{o(1)}$-approximation algorithm for Network Flow Interdiction would imply an $n^{o(1)}...

  19. Resources, constraints and capabilities

    NARCIS (Netherlands)

    Dhondt, S.; Oeij, P.R.A.; Schröder, A.

    2018-01-01

    Human and financial resources as well as organisational capabilities are needed to overcome the manifold constraints social innovators are facing. To unlock the potential of social innovation for the whole society new (social) innovation friendly environments and new governance structures

  20. Design with Nonlinear Constraints

    KAUST Repository

    Tang, Chengcheng

    2015-01-01

    . The first application is the design of meshes under both geometric and static constraints, including self-supporting polyhedral meshes that are not height fields. Then, with a formulation bridging mesh based and spline based representations, the application

  1. Revisiting the definition of local hardness and hardness kernel.

    Science.gov (United States)

    Polanco-Ramírez, Carlos A; Franco-Pérez, Marco; Carmona-Espíndola, Javier; Gázquez, José L; Ayers, Paul W

    2017-05-17

    An analysis of the hardness kernel and local hardness is performed to propose new definitions for these quantities that follow a similar pattern to the one that characterizes the quantities associated with softness, that is, we have derived new definitions for which the integral of the hardness kernel over the whole space of one of the variables leads to local hardness, and the integral of local hardness over the whole space leads to global hardness. A basic aspect of the present approach is that global hardness keeps its identity as the second derivative of energy with respect to the number of electrons. Local hardness thus obtained depends on the first and second derivatives of energy and electron density with respect to the number of electrons. When these derivatives are approximated by a smooth quadratic interpolation of energy, the expression for local hardness reduces to the one intuitively proposed by Meneses, Tiznado, Contreras and Fuentealba. However, when one combines the first directional derivatives with smooth second derivatives one finds additional terms that allow one to differentiate local hardness for electrophilic attack from the one for nucleophilic attack. Numerical results related to electrophilic attacks on substituted pyridines, substituted benzenes and substituted ethenes are presented to show the overall performance of the new definition.
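
    In symbols, the hierarchy described above can be rendered schematically as follows (our notation, offered as a hedged reading of the abstract; the paper's precise definitions additionally involve derivatives of the energy and electron density with respect to N):

        % Global hardness keeps its identity as the second energy derivative:
        \eta = \left( \frac{\partial^{2} E}{\partial N^{2}} \right)_{v(\mathbf{r})}
        % Pattern mirroring the softness quantities: integrating the hardness
        % kernel over one variable gives local hardness, and integrating local
        % hardness over all space gives back the global hardness:
        \eta(\mathbf{r}) = \int \eta(\mathbf{r}, \mathbf{r}')\, \mathrm{d}\mathbf{r}',
        \qquad
        \eta = \int \eta(\mathbf{r})\, \mathrm{d}\mathbf{r}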

  2. Dynamics and causality constraints

    International Nuclear Information System (INIS)

    Sousa, Manoelito M. de

    2001-04-01

    The physical meaning and the geometrical interpretation of causality implementation in classical field theories are discussed. Causality in field theory amounts to kinematical constraints dynamically implemented via solutions of the field equation, but in the limit of zero distance from the field sources part of these constraints carries a dynamical content that explains away old problems of classical electrodynamics, with deep implications for the nature of physical interactions. (author)

  3. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  4. Momentum constraint relaxation

    International Nuclear Information System (INIS)

    Marronetti, Pedro

    2006-01-01

    Full relativistic simulations in three dimensions invariably develop runaway modes that grow exponentially and are accompanied by violations of the Hamiltonian and momentum constraints. Recently, we introduced a numerical method (Hamiltonian relaxation) that greatly reduces the Hamiltonian constraint violation and helps improve the quality of the numerical model. We present here a method that controls the violation of the momentum constraint. The method is based on the addition of a longitudinal component to the traceless extrinsic curvature Ã^ij, generated by a vector potential w^i, as outlined by York. The components of w^i are relaxed to solve approximately the momentum constraint equations, slowly pushing the evolution towards the space of solutions of the constraint equations. We test this method with simulations of binary neutron stars in circular orbits and show that it effectively controls the growth of the aforementioned violations. We also show that a full numerical enforcement of the constraints, as opposed to the gentle correction of the momentum relaxation scheme, results in the development of instabilities that stop the runs shortly

  5. Selection of new constraints

    International Nuclear Information System (INIS)

    Sugier, A.

    2003-01-01

    The selected new constraints should be consistent with the scale of concern, i.e. be expressed roughly as fractions or multiples of the average annual background. They should take into account risk considerations and include the values of the current limits, constraints and other action levels. The recommendation is to select four leading values for the new constraints: 500 mSv (single event or in a decade) as a maximum value, 0.01 mSv/year as a minimum value, and two intermediate values: 20 mSv/year and 0.3 mSv/year. This new set of dose constraints, representing basic minimum standards of protection for individuals and taking into account the specificity of the exposure situations, is thus coherent with the current values which can be found in ICRP Publications. A few warnings need however to be noted: There is no longer a multi-source limit set by ICRP. The coherence between the proposed value of the dose constraint (20 mSv/year) and the current occupational dose limit of 20 mSv/year is valid only if the workers are exposed to one single source. When there is more than one source, it will be necessary to apportion. The value of 1000 mSv lifetime used for relocation can be expressed as an annual dose, which gives approximately 10 mSv/year and is coherent with the proposed dose constraint. (N.C.)

  6. Approximation algorithms for guarding holey polygons ...

    African Journals Online (AJOL)

    Guarding edges of polygons is a version of the art gallery problem. The goal is to find the minimum number of guards needed to cover the edges of a polygon. This problem is NP-hard, and to our knowledge there are approximation algorithms just for simple polygons. In this paper we present two approximation algorithms for guarding ...

  7. Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem.

    Science.gov (United States)

    Rajeswari, M; Amudhavel, J; Pothula, Sujatha; Dhavachelvan, P

    2017-01-01

    The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization scheduling problem in which a set of nurses is assigned to shifts per day while considering both hard and soft constraints. A novel metaheuristic technique is required for solving it. This work proposes a metaheuristic technique called Directed Bee Colony Optimization using the Modified Nelder-Mead Method for solving the NRP. To solve the NRP, the authors use a multiobjective mathematical programming model and propose a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is used successfully for solving multiobjective scheduling optimization problems, and integrates deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases varying in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria.
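
    The hard/soft distinction at the heart of the NRP can be made concrete with a generic evaluation function (a sketch with hypothetical constraints, not the MODBCO model or the INRC2010 formulation): hard constraints define feasibility outright, while soft constraints contribute weighted penalties that the metaheuristic minimizes.

        # Generic rostering-style evaluation (hypothetical constraints): a
        # roster is feasible only if every hard constraint holds; its quality
        # is the weighted sum of soft-constraint violation counts.
        def evaluate(roster, hard_constraints, soft_constraints):
            if any(c(roster) > 0 for c in hard_constraints):
                return float("inf")                 # infeasible: reject outright
            return sum(w * c(roster) for w, c in soft_constraints)

        # Toy constraints over roster = {nurse: [shifts...]}:
        too_many_shifts = lambda r: sum(max(0, len(s) - 5) for s in r.values())   # hard
        weekend_nights = lambda r: sum(s.count("sat-night") for s in r.values())  # soft

        roster = {"alice": ["mon", "tue", "sat-night"], "bob": ["wed"]}
        print(evaluate(roster, [too_many_shifts], [(3.0, weekend_nights)]))  # 3.0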

  8. Distributed Unmixing of Hyperspectral Data with Sparsity Constraint

    Science.gov (United States)

    Khoshsokhan, S.; Rajabi, R.; Zayyani, H.

    2017-09-01

    Spectral unmixing (SU) is a data processing problem in hyperspectral remote sensing. The significant challenge in the SU problem is how to identify endmembers and their weights accurately. For estimating the signature and fractional abundance matrices in a blind problem, nonnegative matrix factorization (NMF) and its developments are widely used. One of the constraints added to NMF is the sparsity constraint, regularized by the L1/2 norm. In this paper, a new algorithm based on distributed optimization is used for spectral unmixing. In the proposed algorithm, a network of single-node clusters is employed, with each pixel in the hyperspectral image considered as a node in this network. The distributed unmixing with sparsity constraint is optimized with the diffusion LMS strategy, and the update equations for the fractional abundance and signature matrices are obtained. Simulation results based on defined performance metrics illustrate the advantage of the proposed algorithm in spectral unmixing of hyperspectral data compared with other methods. The results show that the AAD and SAD of the proposed approach are improved by about 6 and 27 percent, respectively, compared with distributed unmixing at SNR = 25 dB.
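
    As a point of reference, a centralized (single-node) NMF unmixing step with the L1/2 sparsity regularizer can be sketched with multiplicative updates, as below; this is a common scheme for this model, not the paper's diffusion-LMS distributed algorithm, and all names and values are our assumptions.

        import numpy as np

        # Sketch of L1/2-sparsity NMF unmixing: X (bands x pixels) ~ M @ S,
        # with M >= 0 the endmember signatures and S >= 0 the abundances,
        # penalized by lam * ||S||_{1/2}; multiplicative updates, eps floors
        # values to avoid division by zero and S**-0.5 blow-ups.
        def unmix(X, p, lam=0.1, iters=500, eps=1e-9):
            rng = np.random.default_rng(0)
            bands, pixels = X.shape
            M = rng.uniform(size=(bands, p))
            S = rng.uniform(size=(p, pixels))
            for _ in range(iters):
                M *= (X @ S.T) / (M @ S @ S.T + eps)
                S = np.maximum(
                    S * (M.T @ X) / (M.T @ M @ S + 0.5 * lam * S ** -0.5 + eps), eps)
            return M, S

        X = np.abs(np.random.default_rng(4).normal(size=(6, 50)))  # toy "image"
        M, S = unmix(X, p=3)
        print(S.sum(axis=0)[:5])  # abundances (sum-to-one not enforced here)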

  9. Lagrange constraint neural networks for massive pixel parallel image demixing

    Science.gov (United States)

    Szu, Harold H.; Hsu, Charles C.

    2002-03-01

    We have shown that remote sensing optical imaging to achieve detailed sub-pixel decomposition is a unique application of blind source separation (BSS) that is truly linear for far-away weak signals, instantaneous at the speed of light without delay, and along the line of sight without multiple paths. In earlier papers, we presented a direct application of a statistical mechanical de-mixing method called the Lagrange Constraint Neural Network (LCNN). While the BSAO algorithm (using a posteriori MaxEnt ANN and neighborhood pixel averages) is not acceptable for remote sensing, a mirror-symmetric LCNN approach is, assuming a priori MaxEnt for unknown sources to be averaged over the source statistics (not neighborhood pixel data) in a pixel-by-pixel independent fashion. LCNN reduces the computational complexity, saves a great number of memory devices, and cuts the cost of implementation. The Landsat system is designed to measure radiation to deduce surface conditions and materials. For any given material, the amount of emitted and reflected radiation varies with the wavelength. In practice, a single pixel of a Landsat image has seven channels receiving 0.1 to 12 microns of radiation from the ground within a 20x20 meter footprint containing a variety of radiating materials. The a priori LCNN algorithm provides the spatial-temporal variation of the mixture, which is hardly de-mixable by other a posteriori BSS or ICA methods. We have already compared Landsat remote sensing using both methods at WCCI 2002 in Hawaii. Unfortunately an absolute benchmark is not possible because of the lack of ground truth. We arbitrarily mix two incoherent sampled images as the ground truth. However, since the constant total probability of co-located sources within the pixel footprint is necessary for the remote sensing constraint (on a clear day the total reflected energy is constant across neighboring receiving pixel sensors), we have to normalize the two images pixel-by-pixel as well. Then, the

  10. Constraint Embedding Technique for Multibody System Dynamics

    Science.gov (United States)

    Woo, Simon S.; Cheng, Michael K.

    2011-01-01

    Multibody dynamics play a critical role in simulation testbeds for space missions. There has been considerable interest in the development of efficient computational algorithms for solving the dynamics of multibody systems. Mass matrix factorization and inversion techniques and the O(N) class of forward dynamics algorithms developed using a spatial operator algebra stand out as important breakthroughs on this front. Techniques such as these provide the efficient algorithms and methods for the application and implementation of such multibody dynamics models. However, these methods are limited only to tree-topology multibody systems. Closed-chain topology systems require different techniques that are not as efficient or as broad as those for tree-topology systems. The closed-chain forward dynamics approach consists of treating the closed-chain topology as a tree-topology system subject to additional closure constraints. The resulting forward dynamics solution consists of: (a) ignoring the closure constraints and using the O(N) algorithm to solve for the free unconstrained accelerations for the system; (b) using the tree-topology solution to compute a correction force to enforce the closure constraints; and (c) correcting the unconstrained accelerations with correction accelerations resulting from the correction forces. This constraint-embedding technique shows how to use direct embedding to eliminate local closure-loops in the system and effectively convert the system back to a tree-topology system. At this point, standard tree-topology techniques can be brought to bear on the problem. The approach uses a spatial operator algebra approach to formulating the equations of motion. The operators are block-partitioned around the local body subgroups to convert them into aggregate bodies. Mass matrix operator factorization and inversion techniques are applied to the reformulated tree-topology system. Thus in essence, the new technique allows conversion of a system with
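
    Steps (a)-(c) above have a compact dense-matrix analogue (a sketch under assumed names; the paper's spatial-operator O(N) machinery is precisely what avoids forming these dense matrices): solve for unconstrained accelerations, compute Lagrange multipliers for the closure constraints, and apply the resulting correction.

        import numpy as np

        # Dense sketch of steps (a)-(c): M is the tree-topology mass matrix,
        # f the applied forces, G the closure-constraint Jacobian with the
        # acceleration-level constraint G @ a = 0 (bias terms omitted).
        def constrained_accels(M, f, G):
            a_free = np.linalg.solve(M, f)         # (a) ignore closure constraints
            W = G @ np.linalg.solve(M, G.T)        # constraint-space inertia
            lam = np.linalg.solve(W, -G @ a_free)  # (b) correction forces G.T @ lam
            return a_free + np.linalg.solve(M, G.T @ lam)  # (c) corrected accelerations

        M = np.diag([2.0, 1.0, 1.5])
        f = np.array([1.0, -0.5, 0.3])
        G = np.array([[1.0, -1.0, 0.0]])           # couple coordinates 0 and 1
        a = constrained_accels(M, f, G)
        print(a, G @ a)                            # G @ a is ~0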

  11. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  12. Genetic algorithm parameters tuning for resource-constrained project scheduling problem

    Science.gov (United States)

    Tian, Xingke; Yuan, Shengrui

    2018-04-01

    The Resource-Constrained Project Scheduling Problem (RCPSP) is an important kind of scheduling problem. To achieve a certain optimization goal, such as the shortest duration, the smallest cost, or resource balance, the start and finish of all tasks must be arranged while satisfying the project's timing constraints and resource constraints. In theory, the problem is NP-hard, and the model is rich: many combinatorial optimization problems, such as job shop scheduling and flow shop scheduling, are special cases of RCPSP. At present, the genetic algorithm (GA) has been used to deal with the classical RCPSP problem and has achieved remarkable results, and many scholars have studied improved genetic algorithms for the RCPSP, enabling the problem to be solved more efficiently and accurately. However, these studies do not optimize the main parameters of the genetic algorithm; generally, they are chosen empirically, which cannot guarantee that the optimal parameters are found. In this paper, we address this blind selection of parameters in the process of solving the RCPSP. We performed a sampling analysis, established a surrogate model, and ultimately solved for the optimal parameters.

  13. Algorithms for Decision Tree Construction

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    The study of algorithms for decision tree construction was initiated in the 1960s. The first algorithms were based on the separation heuristic [13, 31], which at each step tries to divide the set of objects as evenly as possible. Later Garey and Graham [28] showed that such algorithms may construct decision trees whose average depth is arbitrarily far from the minimum. Hyafil and Rivest [35] proved the NP-hardness of the DT problem, that is, constructing a tree with the minimum average depth for a diagnostic problem over a 2-valued information system and uniform probability distribution. Cox et al. [22] showed that for a two-class problem over an information system, even finding the root node attribute for an optimal tree is an NP-hard problem. © Springer-Verlag Berlin Heidelberg 2011.

  14. Path trading : fast algorithms, smoothed analysis, and hardness results

    NARCIS (Netherlands)

    Berger, A.; Röglin, H.; Zwaan, van der G.R.J.; Pardalos, P.M.; Rebennack, S.

    2011-01-01

    The Border Gateway Protocol (BGP) serves as the main routing protocol of the Internet and ensures network reachability among autonomous systems (ASes). When traffic is forwarded between the many ASes on the Internet according to that protocol, each AS selfishly routes the traffic inside its own

  15. Microstructure and macroscopic properties of polydisperse systems of hard spheres

    NARCIS (Netherlands)

    Ogarko, V.

    2014-01-01

    This dissertation describes an investigation of systems of polydisperse smooth hard spheres. This includes the development of a fast contact detection algorithm for computer modelling, the development of macroscopic constitutive laws that are based on microscopic features such as the moments of the

  16. Hard and Soft Governance

    DEFF Research Database (Denmark)

    Moos, Lejf

    2009-01-01

    The governance and leadership at transnational, national and school level seem to be converging into a number of isomorphic forms, as we see a tendency towards substituting 'hard' forms of governance, that are legally binding, with 'soft' forms based on persuasion and advice. This article analyses and discusses governance forms at several levels. The first layer is the global: the methods of 'soft governance' that are being utilised by transnational agencies. The second layer is the national and local: the shift in national and local governance seen in many countries, but here demonstrated in the case of Denmark. The third layer is the leadership used in Danish schools. The use of 'soft governance' is shifting the focus of governance and leadership from decisions towards influence and power and thus shifting the focus of the processes from the decision-making itself towards more focus...

  17. Zirconium nitride hard coatings

    International Nuclear Information System (INIS)

    Roman, Daiane; Amorim, Cintia Lugnani Gomes de; Soares, Gabriel Vieira; Figueroa, Carlos Alejandro; Baumvol, Israel Jacob Rabin; Basso, Rodrigo Leonardo de Oliveira

    2010-01-01

    Zirconium nitride (ZrN) nanometric films were deposited onto different substrates in order to study the surface crystalline microstructure and also to investigate the electrochemical behavior, aiming at a composition that minimizes corrosion reactions. The coatings were produced by physical vapor deposition (PVD). The influence of the nitrogen partial pressure, deposition time and temperature on the surface properties was studied. Rutherford backscattering spectrometry (RBS), X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD), scanning electron microscopy (SEM) and corrosion experiments were performed to characterize the ZrN hard coatings. The ZrN film properties and microstructure change according to the deposition parameters. The corrosion resistance increases with the temperature used in the film deposition. Corrosion tests show that ZrN coatings deposited by PVD onto titanium substrates can improve the corrosion resistance. (author)

  18. Misconceptions and constraints

    International Nuclear Information System (INIS)

    Whitten, M.; Mahon, R.

    2005-01-01

    In theory, the sterile insect technique (SIT) is applicable to a wide variety of invertebrate pests. However, in practice, the approach has been successfully applied to only a few major pests. Chapters in this volume address possible reasons for this discrepancy, e.g. Klassen, Lance and McInnis, and Robinson and Hendrichs. The shortfall between theory and practice is partly due to the persistence of some common misconceptions, but it is mainly due to one constraint, or a combination of constraints, that are biological, financial, social or political in nature. This chapter's goal is to dispel some major misconceptions, and view the constraints as challenges to overcome, seeing them as opportunities to exploit. Some of the common misconceptions include: (1) released insects retain residual radiation, (2) females must be monogamous, (3) released males must be fully sterile, (4) eradication is the only goal, (5) the SIT is too sophisticated for developing countries, and (6) the SIT is not a component of an area-wide integrated pest management (AW-IPM) strategy. The more obvious constraints are the perceived high costs of the SIT, and the low competitiveness of released sterile males. The perceived high up-front costs of the SIT, their visibility, and the lack of private investment (compared with alternative suppression measures) emerge as serious constraints. Failure to appreciate the true nature of genetic approaches, such as the SIT, may pose a significant constraint to the wider adoption of the SIT and other genetically-based tactics, e.g. transgenic genetically modified organisms (GMOs). Lack of support for the necessary underpinning strategic research also appears to be an important constraint. Hence the case for extensive strategic research in ecology, population dynamics, genetics, and insect behaviour and nutrition is a compelling one. Raising the competitiveness of released sterile males remains the major research objective of the SIT. (author)

  19. Conditionally-uniform Feasible Grid Search Algorithm

    DEFF Research Database (Denmark)

    Dziubinski, Matt P.

    We present and evaluate a numerical optimization method (together with an algorithm for choosing the starting values) pertinent to the constrained optimization problem arising in the estimation of GARCH models with inequality constraints, in particular the Simplified Component GARCH Model (SCGARCH), together with algorithms for the objective function and analytical gradient computation for SCGARCH.

  20. SU-F-T-15: Evaluation of 192Ir, 60Co and 169Yb Sources for High Dose Rate Prostate Brachytherapy Inverse Planning Using An Interior Point Constraint Generation Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Mok Tsze Chung, E; Aleman, D [University of Toronto, Toronto, Ontario (Canada); Safigholi, H; Nicolae, A; Davidson, M; Ravi, A; Song, W [Odette Cancer Centre, Sunnybrook Health Sciences Centre, Toronto, Ontario (Canada)

    2016-06-15

    Purpose: The effectiveness of using a combination of three sources, {sup 60}Co, {sup 192}Ir and {sup 169}Yb, is analyzed. Different combinations are compared against a single {sup 192}Ir source on prostate cancer cases. A novel inverse planning interior point algorithm is developed in-house to generate the treatment plans. Methods: Thirteen prostate cancer patients are separated into two groups: Group A includes eight patients with the prostate as target volume, while group B consists of four patients with a boost nodule inside the prostate that is assigned 150% of the prescription dose. The mean target volume is 35.7±9.3cc and 30.6±8.5cc for groups A and B, respectively. All patients are treated with each source individually, then with paired sources, and finally with all three sources. To compare the results, boost volume V150 and D90, urethra Dmax and D10, and rectum Dmax and V80 are evaluated. For fair comparison, all plans are normalized to a uniform V100=100. Results: Overall, double- and triple-source plans were better than single-source plans. The triple-source plans resulted in an average decrease of 21.7% and 1.5% in urethra Dmax and D10, respectively, and 8.0% and 0.8% in rectum Dmax and V80, respectively, for group A. For group B, boost volume V150 and D90 increased by 4.7% and 3.0%, respectively, while keeping similar dose delivered to the urethra and rectum. {sup 60}Co and {sup 192}Ir produced better plans than their counterparts in the double-source category, whereas {sup 60}Co produced more favorable results than the remaining individual sources. Conclusion: This study demonstrates the potential advantage of using a combination of two or three sources, reflected in dose reduction to organs-at-risk and more conformal dose to the target. Our results show that {sup 60}Co, {sup 192}Ir and {sup 169}Yb produce the best plans when used simultaneously and

  1. Using Stochastic Spiking Neural Networks on SpiNNaker to Solve Constraint Satisfaction Problems

    Directory of Open Access Journals (Sweden)

    Gabriel A. Fonseca Guerra

    2017-12-01

    Constraint satisfaction problems (CSP) are at the core of numerous scientific and technological applications. However, CSPs belong to the NP-complete complexity class, for which the existence (or not) of efficient algorithms remains a major unsolved question in computational complexity theory. In the face of this fundamental difficulty heuristics and approximation methods are used to approach instances of NP (e.g., decision and hard optimization problems). The human brain efficiently handles CSPs both in perception and behavior using spiking neural networks (SNNs), and recent studies have demonstrated that the noise embedded within an SNN can be used as a computational resource to solve CSPs. Here, we provide a software framework for the implementation of such noisy neural solvers on the SpiNNaker massively parallel neuromorphic hardware, further demonstrating their potential to implement a stochastic search that solves instances of P and NP problems expressed as CSPs. This facilitates the exploration of new optimization strategies and the understanding of the computational abilities of SNNs. We demonstrate the basic principles of the framework by solving difficult instances of the Sudoku puzzle and of the map color problem, and explore its application to spin glasses. The solver works as a stochastic dynamical system, which is attracted by the configuration that solves the CSP. The noise allows an optimal exploration of the space of configurations, looking for the satisfiability of all the constraints; if applied discontinuously, it can also force the system to leap to a new random configuration effectively causing a restart.
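
    The stochastic dynamics described here have a simple conventional analogue, sketched below as noisy min-conflicts search for graph coloring (our illustration of noise-driven exploration with random leaps, not the SpiNNaker SNN solver).

        import random

        # Noisy min-conflicts for graph coloring as a CSP: with probability
        # `noise` take a random move (a "leap"), otherwise greedily move to the
        # least-conflicting color; the search is attracted by satisfying
        # configurations while noise keeps it from freezing in local minima.
        def color(n, edges, k, noise=0.1, steps=100000, seed=0):
            rnd = random.Random(seed)
            adj = {v: [] for v in range(n)}
            for u, v in edges:
                adj[u].append(v)
                adj[v].append(u)
            col = [rnd.randrange(k) for _ in range(n)]
            conflicts = lambda v, c: sum(col[u] == c for u in adj[v])
            for _ in range(steps):
                bad = [v for v in range(n) if conflicts(v, col[v]) > 0]
                if not bad:
                    return col                    # all constraints satisfied
                v = rnd.choice(bad)
                if rnd.random() < noise:
                    col[v] = rnd.randrange(k)     # random restart-like leap
                else:
                    col[v] = min(range(k), key=lambda c: conflicts(v, c))
            return None

        print(color(6, [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)], k=3))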


  3. Janka hardness using nonstandard specimens

    Science.gov (United States)

    David W. Green; Marshall Begel; William Nelson

    2006-01-01

    Janka hardness determined on 1.5- by 3.5-in. specimens (2×4s) was found to be equivalent to that determined using the 2- by 2-in. specimen specified in ASTM D 143. Data are presented on the relationship between Janka hardness and the strength of clear wood. Analysis of historical data determined using standard specimens indicated no difference between side hardness...

  4. Polynomial algorithms that prove an NP-hard hypothesis implies an NP-hard conclusion

    NARCIS (Netherlands)

    Bauer, D.; Broersma, Haitze J.; Morgana, A.; Schmeichel, E.

    2002-01-01

    An example of such a theorem is the well-known Chvátal–Erdős Theorem, which states that every graph G with α ≤ κ is hamiltonian. Here κ is the vertex connectivity of G and α is the cardinality of a largest set of independent vertices of G. In another paper Chvátal points out that the proof of this

  5. 2TB hard disk drive

    CERN Multimedia

    This particular object was used up until 2012 in the Data Centre. It slots into one of the Disk Server trays. Hard disks were invented in the 1950s. They started as large disks up to 20 inches in diameter holding just a few megabytes. They were originally called "fixed disks" or "Winchesters" (a code name used for a popular IBM product). They later became known as "hard disks" to distinguish them from "floppy disks." Hard disks have a hard platter that holds the magnetic medium, as opposed to the flexible plastic film found in tapes and floppies.

  6. Occupational dose constraint

    International Nuclear Information System (INIS)

    Heilbron Filho, Paulo Fernando Lavalle; Xavier, Ana Maria

    2005-01-01

    The revision process of the international radiological protection regulations has resulted in the adoption of new concepts, such as practice, intervention, avoidable dose and restriction of dose (dose constraint). The latter deserves special mention since it may involve an a priori reduction of the dose limits established both for the public and for occupationally exposed individuals, values that can be further reduced depending on the application of the principle of optimization. This article aims to present, starting from the criteria adopted to define dose constraint values for the public, a methodology for establishing dose constraint values for occupationally exposed individuals, as well as an example of the application of this methodology to the practice of industrial radiography

  7. Psychological constraints on egalitarianism

    DEFF Research Database (Denmark)

    Kasperbauer, Tyler Joshua

    2015-01-01

    Debates over egalitarianism for the most part are not concerned with constraints on achieving an egalitarian society, beyond discussions of the deficiencies of egalitarian theory itself. This paper looks beyond objections to egalitarianism as such and investigates the relevant psychological processes motivating people to resist various aspects of egalitarianism. I argue for two theses, one normative and one descriptive. The normative thesis holds that egalitarians must take psychological constraints into account when constructing egalitarian ideals. I draw from non-ideal theories in political philosophy, which aim to construct moral goals with current social and political constraints in mind, to argue that human psychology must be part of a non-ideal theory of egalitarianism. The descriptive thesis holds that the most fundamental psychological challenge to egalitarian ideals comes from what...

  8. Distributed fusion estimation for sensor networks with communication constraints

    CERN Document Server

    Zhang, Wen-An; Song, Haiyu; Yu, Li

    2016-01-01

    This book systematically presents energy-efficient robust fusion estimation methods to achieve thorough and comprehensive results in the context of network-based fusion estimation. It summarizes recent findings on fusion estimation with communication constraints; several novel energy-efficient and robust design methods for dealing with energy constraints and network-induced uncertainties are presented, such as delays, packet losses, and asynchronous information... All the results are presented as algorithms, which are convenient for practical applications.

  9. Reasoning about Strategies under Partial Observability and Fairness Constraints

    Directory of Open Access Journals (Sweden)

    Simon Busard

    2013-03-01

    A number of extensions exist for Alternating-time Temporal Logic; some of these mix strategies and partial observability but, to the best of our knowledge, no work provides a unified framework for strategies, partial observability and fairness constraints. In this paper we propose ATLK^F_po, a logic mixing strategies under partial observability and epistemic properties of agents in a system with fairness constraints on states, and we provide a model checking algorithm for it.

  10. Legal, ethical, and economic constraints

    International Nuclear Information System (INIS)

    Libassi, F.P.; Donaldson, L.F.

    1980-01-01

    This paper considers the legal, ethical, and economic constraints to developing a comprehensive knowledge of the biological effects of ionizing radiation. These constraints are not fixed and immutable; rather they are determined by the political process. Political issues cannot be evaded. The basic objective of developing a comprehensive knowledge about the biological effects of ionizing radiation exists as an objective not only because we wish to add to the store of human knowledge but also because we have important use for that knowledge. It will assist our decision-makers to make choices that affect us all. These choices require both hard factual information and application of political judgment. Research supplies some of the hard factual information and should be as free as possible from political influence in its execution. At the same time, the political choices that must be made influence the direction and nature of the research program as a whole. Similarly, the legal, ethical, and economic factors that constrain our ability to expand knowledge through research reflect a judgment by political agents that values other than expansion of knowledge should be recognized and given effect.

  11. Hard processes. Vol. 1

    International Nuclear Information System (INIS)

    Ioffe, B.L.; Khoze, V.A.; Lipatov, L.N.

    1984-01-01

    Deep inelastic (hard) processes are now at the epicenter of modern high-energy physics. These processes are governed by short-distance dynamics, which reveals the intrinsic structure of elementary particles. The theory of deep inelastic processes is now sufficiently well settled. The authors' aim was to give an effective tool to theoreticians and experimentalists who are engaged in high-energy physics. This book is intended primarily for physicists who are only beginning to study the field. To read the book, one should be acquainted with the Feynman diagram technique and with some particular topics from elementary particle theory (symmetries, dispersion relations, Regge pole theory, etc.). Theoretical consideration of deep inelastic processes is now based on quantum chromodynamics (QCD). At the same time, analysis of relevant physical phenomena demands a synthesis of QCD notions (quarks, gluons) with certain empirical characteristics. Therefore, the phenomenological approaches presented are a necessary stage in a study of this range of phenomena which should undoubtedly be followed by a detailed description based on QCD and electroweak theory. The authors were naturally unable to dwell on experimental data accumulated during the past decade of intensive investigations. Priority was given to results which allow a direct comparison with theoretical predictions. (Auth.)

  12. Hard coal; Steinkohle

    Energy Technology Data Exchange (ETDEWEB)

    Loo, Kai van de; Sitte, Andreas-Peter [Gesamtverband Steinkohle e.V. (GVSt), Herne (Germany)

    2015-07-01

    Internationally, the hard coal market in 2014 was, for the first time in a long while, in a period of stagnation. In Germany, hard coal consumption even decreased significantly, mainly due to the decline in power generation. Here the national energy transition has now had a noticeably negative effect on coal use. The political guidance suggests a further significant downward movement in the future. In the ongoing phase-out of the German hard coal industry, with three mines still active, there was no decommissioning in 2014. But the next one is due at the end of 2015, and planning for the time after mining has continued.

  13. Ecosystems emerging. 5: Constraints

    Czech Academy of Sciences Publication Activity Database

    Patten, B. C.; Straškraba, Milan; Jorgensen, S. E.

    2011-01-01

    Vol. 222, No. 16 (2011), pp. 2945-2972. ISSN 0304-3800. Institutional research plan: CEZ:AV0Z50070508. Keywords: constraint; epistemic; ontic. Subject RIV: EH - Ecology, Behaviour. Impact factor: 2.326, year: 2011. http://www.sciencedirect.com/science/article/pii/S0304380011002274

  14. Constraints and Ambiguity

    DEFF Research Database (Denmark)

    Dove, Graham; Biskjær, Michael Mose; Lundqvist, Caroline Emilie

    2017-01-01

    groups of students building three models each. We studied groups building with traditional plastic bricks and also using a digital environment. The building tasks students undertake, and our subsequent analysis, are informed by the role constraints and ambiguity play in creative processes. Based...

  15. Efficient algorithms for accurate hierarchical clustering of huge datasets: tackling the entire protein space.

    Science.gov (United States)

    Loewenstein, Yaniv; Portugaly, Elon; Fromer, Menachem; Linial, Michal

    2008-07-01

    UPGMA (average linking) is probably the most popular algorithm for hierarchical data clustering, especially in computational biology. However, UPGMA requires the entire dissimilarity matrix in memory. Due to this prohibitive requirement, UPGMA is not scalable to very large datasets. We present a novel class of memory-constrained UPGMA (MC-UPGMA) algorithms. Given any practical memory size constraint, this framework guarantees the correct clustering solution without explicitly requiring all dissimilarities in memory. The algorithms are general and are applicable to any dataset. We present a data-dependent characterization of hardness and clustering efficiency. The presented concepts are applicable to any agglomerative clustering formulation. We apply our algorithm to the entire collection of protein sequences, to automatically build a comprehensive evolutionary-driven hierarchy of proteins from sequence alone. The newly created tree captures protein families better than state-of-the-art large-scale methods such as CluSTr, ProtoNet4 or single-linkage clustering. We demonstrate that leveraging the entire mass embodied in all sequence similarities allows us to significantly improve on current protein family clusterings, which are unable to directly tackle the sheer mass of this data. Furthermore, we argue that non-metric constraints are an inherent complexity of the sequence space and should not be overlooked. The robustness of UPGMA allows significant improvement, especially for multidomain proteins, and for large or divergent families. A comprehensive tree built from all UniProt sequence similarities, together with navigation and classification tools, will be made available as part of the ProtoNet service. A C++ implementation of the algorithm is available on request.
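
    For readers unfamiliar with the merge rule, the following is a minimal in-memory UPGMA sketch in Python. It assumes the full dissimilarity matrix fits in memory, which is exactly the assumption MC-UPGMA removes; the function names and the dict-based matrix are illustrative, not the authors' implementation.

    ```python
    # Minimal in-memory UPGMA (average linkage) reference sketch.
    # MC-UPGMA obtains the same tree *without* holding the full matrix;
    # this version only makes the merge rule concrete.
    import itertools

    def upgma(dist, names):
        """dist: symmetric dict {(i, j): d} for i < j; names: leaf labels."""
        active = {i: (names[i], 1) for i in range(len(names))}  # id -> (subtree, size)
        d = dict(dist)
        next_id = len(names)
        while len(active) > 1:
            # Find the closest pair among active clusters.
            i, j = min(itertools.combinations(sorted(active), 2), key=lambda p: d[p])
            (ti, ni), (tj, nj) = active.pop(i), active.pop(j)
            # Average-linkage update: the distance to the merged cluster is the
            # size-weighted mean of the distances to its two parts.
            for k in active:
                dik = d.pop((min(i, k), max(i, k)))
                djk = d.pop((min(j, k), max(j, k)))
                d[(min(next_id, k), max(next_id, k))] = (ni * dik + nj * djk) / (ni + nj)
            del d[(i, j)]
            active[next_id] = ((ti, tj), ni + nj)
            next_id += 1
        return next(iter(active.values()))[0]

    print(upgma({(0, 1): 2.0, (0, 2): 8.0, (1, 2): 8.0}, ["A", "B", "C"]))
    # ('C', ('A', 'B'))
    ```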

  16. Melting of polydisperse hard disks

    NARCIS (Netherlands)

    Pronk, S.; Frenkel, D.

    2004-01-01

    The melting of a polydisperse hard-disk system is investigated by Monte Carlo simulations in the semigrand canonical ensemble. This is done in the context of possible continuous melting by a dislocation-unbinding mechanism, as an extension of the two-dimensional hard-disk melting problem. We find ...
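
    As an illustration of the elementary move underlying such simulations, here is a hedged Python sketch of a hard-disk Monte Carlo trial displacement with periodic boundaries. It is monodisperse and at fixed composition; the paper's semigrand-canonical, polydisperse setting adds diameter-resampling moves not shown here.

    ```python
    # Elementary hard-disk Monte Carlo move: propose a small random
    # displacement and reject it if any hard-core overlap is created.
    import random

    def trial_move(pos, sigma, box, delta, rng=random):
        i = rng.randrange(len(pos))
        x = (pos[i][0] + rng.uniform(-delta, delta)) % box
        y = (pos[i][1] + rng.uniform(-delta, delta)) % box
        for j, (xj, yj) in enumerate(pos):
            if j == i:
                continue
            # Minimum-image convention for the periodic box.
            dx = (x - xj + box / 2) % box - box / 2
            dy = (y - yj + box / 2) % box - box / 2
            if dx * dx + dy * dy < sigma * sigma:   # hard-core overlap
                return False                        # reject; configuration unchanged
        pos[i] = (x, y)                             # accept: hard disks have no energy scale
        return True

    # Dilute test: a few disks of diameter 1.0 in a 10x10 box.
    pos = [(1.0, 1.0), (5.0, 5.0), (8.0, 2.0)]
    acc = sum(trial_move(pos, sigma=1.0, box=10.0, delta=0.3) for _ in range(1000))
    print("acceptance ratio:", acc / 1000)
    ```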

  17. Distribution Agnostic Structured Sparsity Recovery: Algorithms and Applications

    KAUST Repository

    Masood, Mudassir

    2015-05-01

    Compressed sensing has been a very active area of research and several elegant algorithms have been developed for the recovery of sparse signals in the past few years. However, most of these algorithms are either computationally expensive or make some assumptions that are not suitable for all real world problems. Recently, focus has shifted to Bayesian-based approaches that are able to perform sparse signal recovery at much lower complexity while invoking constraints and/or a priori information about the data. While Bayesian approaches have their advantages, these methods must have access to a priori statistics. Usually, these statistics are unknown and are often difficult or even impossible to predict. An effective workaround is to assume a distribution which is typically considered to be Gaussian, as it makes many signal processing problems mathematically tractable. Seemingly attractive, this assumption necessitates the estimation of the associated parameters, which could be hard if not impossible. In the thesis, we focus on this aspect of Bayesian recovery and present a framework to address the challenges mentioned above. The proposed framework allows Bayesian recovery of sparse signals but at the same time is agnostic to the distribution of the unknown sparse signal components. The algorithms based on this framework are agnostic to signal statistics and utilize a priori statistics of additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data if not available. In the thesis, we propose several algorithms based on this framework which utilize the structure present in signals for improved recovery. In addition to the algorithm that considers just the sparsity structure of sparse signals, tools that target additional structure of the sparsity recovery problem are proposed. These include several algorithms for a) block-sparse signal estimation, b) joint reconstruction of several common support sparse signals, and c

  18. An efficient method for minimizing a convex separable logarithmic function subject to a convex inequality constraint or linear equality constraint

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available We consider the problem of minimizing a convex separable logarithmic function over a region defined by a convex inequality constraint or linear equality constraint, and two-sided bounds on the variables (box constraints). Such problems are interesting from both a theoretical and a practical point of view because they arise in some mathematical programming problems as well as in various practical problems such as problems of production planning and scheduling, allocation of resources, decision making, facility location problems, and so forth. Polynomial algorithms are proposed for solving problems of this form and their convergence is proved. Some examples and results of numerical experiments are also presented.
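
    The abstract does not spell out the algorithms, but for one natural instance of this problem class the standard KKT/multiplier treatment yields a simple method. The Python sketch below is an assumed illustration, not the authors' exact algorithm: it minimizes -sum_i c_i ln(x_i) subject to a linear equality constraint and box constraints by bisection on the Lagrange multiplier.

    ```python
    # Illustrative solver for one instance of this problem class:
    #   minimize  -sum_i c_i * ln(x_i)        (convex, separable, logarithmic)
    #   s.t.      sum_i a_i * x_i = b,  l_i <= x_i <= u_i,  a_i, c_i > 0.
    # KKT gives x_i(lam) = clip(c_i / (lam * a_i), l_i, u_i), and
    # sum_i a_i * x_i(lam) is non-increasing in lam, so bisection on lam works.
    # Assumes b is attainable: sum(a_i * l_i) <= b <= sum(a_i * u_i).

    def solve(c, a, l, u, b, tol=1e-10):
        def x_of(lam):
            return [min(max(ci / (lam * ai), li), ui)
                    for ci, ai, li, ui in zip(c, a, l, u)]
        lo, hi = 1e-12, 1.0
        while sum(ai * xi for ai, xi in zip(a, x_of(hi))) > b:
            hi *= 2.0                       # grow hi until the constraint sum drops to b
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if sum(ai * xi for ai, xi in zip(a, x_of(mid))) > b:
                lo = mid
            else:
                hi = mid
        return x_of(hi)

    x = solve(c=[1.0, 2.0], a=[1.0, 1.0], l=[0.1, 0.1], u=[10.0, 10.0], b=3.0)
    print(x)  # ~[1.0, 2.0]: allocation proportional to c_i within the box
    ```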

  19. Near-Optimal Fingerprinting with Constraints

    Directory of Open Access Journals (Sweden)

    Gulyás Gábor György

    2016-10-01

    Full Text Available Several recent studies have demonstrated that people show large behavioural uniqueness. This has serious privacy implications, as most individuals become increasingly re-identifiable in large datasets or can be tracked while they are browsing the web using only a couple of their attributes, called their fingerprints. Often, the success of these attacks depends on explicit constraints on the number of attributes learnable about individuals, i.e., the size of their fingerprints. These constraints can be budgetary as well as technical constraints imposed by the data holder. For instance, Apple restricts the number of applications that can be called by another application on iOS in order to mitigate the potential privacy threats of leaking the list of installed applications on a device. In this work, we address the problem of identifying the attributes (e.g., smartphone applications) that can serve as a fingerprint of users given constraints on the size of the fingerprint. We give the best fingerprinting algorithms in general, and evaluate their effectiveness on several real-world datasets. Our results show that current privacy guards limiting the number of attributes that can be queried about individuals are insufficient to mitigate their potential privacy risks in many practical cases.
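
    To make the constrained-selection idea concrete, here is a hedged Python sketch of a greedy fingerprint builder: under a budget of k attributes, it repeatedly adds the attribute that most increases the number of distinct user profiles. The toy data and function names are illustrative assumptions; the paper's algorithms are more refined.

    ```python
    # Greedy selection of a size-k fingerprint from boolean attributes.
    def partitions(users, attrs):
        """Number of distinct value-vectors restricted to `attrs`."""
        return len({tuple(u[a] for a in attrs) for u in users})

    def greedy_fingerprint(users, all_attrs, k):
        chosen = []
        for _ in range(k):
            best = max((a for a in all_attrs if a not in chosen),
                       key=lambda a: partitions(users, chosen + [a]))
            chosen.append(best)
        return chosen

    # Toy data: which of four apps is installed on each device.
    users = [
        {"appA": 1, "appB": 0, "appC": 1, "appD": 0},
        {"appA": 1, "appB": 1, "appC": 0, "appD": 0},
        {"appA": 0, "appB": 1, "appC": 1, "appD": 0},
        {"appA": 0, "appB": 0, "appC": 0, "appD": 0},
    ]
    print(greedy_fingerprint(users, ["appA", "appB", "appC", "appD"], k=2))
    # e.g. ['appA', 'appB'] -- two attributes already separate all four users
    ```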

  20. Towards Practical Whitebox Cryptography: Optimizing Efficiency and Space Hardness

    DEFF Research Database (Denmark)

    Bogdanov, Andrey; Isobe, Takanori; Tischhauser, Elmar Wolfgang

    2016-01-01

    Whitebox cryptography aims to provide security for cryptographic algorithms in an untrusted environment where the adversary has full access to their implementation. Typical security goals for whitebox cryptography include key extraction security and decomposition security: Indeed, it should...... the practical requirements to whitebox cryptography in real-world applications such as DRM or mobile payments. Moreover, we formalize resistance towards decomposition in form of weak and strong space hardness at various security levels. We obtain bounds on space hardness in all those adversarial models...... real-world applications with whitebox cryptography....

  1. An Improved Harmony Search Algorithm for Power Distribution Network Planning

    Directory of Open Access Journals (Sweden)

    Wei Sun

    2015-01-01

    Full Text Available Distribution network planning, because it involves many variables and constraints, is a multiobjective, discrete, nonlinear, and large-scale optimization problem. The harmony search (HS) algorithm is a metaheuristic algorithm inspired by the improvisation process of music players. The HS algorithm has several impressive advantages, such as easy implementation, few adjustable parameters, and quick convergence. But the HS algorithm still has some defects, such as premature convergence and slow convergence speed. According to the defects of the standard algorithm and the characteristics of distribution network planning, an improved harmony search (IHS) algorithm is proposed in this paper. We set up a mathematical model of distribution network structure planning, whose objective function is to minimize the annual cost, with overload and radial-network constraint conditions. The IHS algorithm is applied to solve this complex optimization model. The empirical results strongly indicate that the IHS algorithm can effectively provide better results for solving the distribution network planning problem compared to other optimization algorithms.
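
    For reference, a minimal continuous harmony search loop is sketched below in Python. It shows only the core improvisation step (memory consideration, pitch adjustment, random consideration) on an unconstrained test function; the paper's IHS adds parameter adaptation and the overload/radial constraints of distribution networks, which are not modelled here.

    ```python
    # Minimal harmony search on a continuous test function.
    import random

    def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
        dim = len(bounds)
        rand = lambda d: random.uniform(*bounds[d])
        memory = [[rand(d) for d in range(dim)] for _ in range(hms)]
        memory.sort(key=f)
        for _ in range(iters):
            new = []
            for d in range(dim):
                if random.random() < hmcr:                 # memory consideration
                    x = random.choice(memory)[d]
                    if random.random() < par:              # pitch adjustment
                        x += random.uniform(-bw, bw) * (bounds[d][1] - bounds[d][0])
                    x = min(max(x, bounds[d][0]), bounds[d][1])
                else:                                      # random consideration
                    x = rand(d)
                new.append(x)
            if f(new) < f(memory[-1]):                     # replace the worst harmony
                memory[-1] = new
                memory.sort(key=f)
        return memory[0]

    sphere = lambda x: sum(v * v for v in x)
    print(harmony_search(sphere, bounds=[(-5, 5)] * 3))    # near [0, 0, 0]
    ```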

  2. Hardness variability in commercial technologies

    International Nuclear Information System (INIS)

    Shaneyfelt, M.R.; Winokur, P.S.; Meisenheimer, T.L.; Sexton, F.W.; Roeske, S.B.; Knoll, M.G.

    1994-01-01

    The radiation hardness of commercial Floating Gate 256K E²PROMs from a single diffusion lot was observed to vary between 5 and 25 krad(Si) when irradiated at a low dose rate of 64 mrad(Si)/s. Additional variations in E²PROM hardness were found to depend on bias condition and failure mode (i.e., inability to read or write the memory), as well as the foundry at which the part was manufactured. This variability is related to system requirements, and it is shown that hardness level and variability affect the allowable mode of operation for E²PROMs in space applications. The radiation hardness of commercial 1-Mbit CMOS SRAMs from Micron, Hitachi, and Sony irradiated at 147 rad(Si)/s was approximately 12, 13, and 19 krad(Si), respectively. These failure levels appear to be related to increases in leakage current during irradiation. Hardness of SRAMs from each manufacturer varied by less than 20%, but differences between manufacturers are significant. The Qualified Manufacturer's List approach to radiation hardness assurance is suggested as a way to reduce variability and to improve the hardness level of commercial technologies.

  3. Graphical constraints: a graphical user interface for constraint problems

    OpenAIRE

    Vieira, Nelson Manuel Marques

    2015-01-01

    A constraint satisfaction problem is a classical artificial intelligence paradigm characterized by a set of variables (each variable with an associated domain of possible values), and a set of constraints that specify relations among subsets of these variables. Solutions are assignments of values to all variables that satisfy all the constraints. Many real world problems may be modelled by means of constraints. The range of problems that can use this representation is very diverse and embrace...

  4. Particle swarm genetic algorithm and its application

    International Nuclear Information System (INIS)

    Liu Chengxiang; Yan Changxiang; Wang Jianjun; Liu Zhenhai

    2012-01-01

    To solve the problems of slow convergence speed and the tendency to fall into local optima of standard particle swarm optimization when dealing with nonlinear constrained optimization problems, a particle swarm genetic algorithm is designed. The proposed algorithm adopts the feasibility principle to handle constraint conditions, avoiding the penalty-function method's difficulty of selecting a penalty factor; it generates an initial feasible population randomly, which accelerates particle swarm convergence; and it introduces genetic algorithm crossover and mutation strategies to keep the particle swarm from falling into local optima. Optimization calculations on typical test functions show that the particle swarm genetic algorithm has better optimization performance. The algorithm is applied to nuclear power plant optimization, and the optimization results are significant. (authors)
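
    The feasibility principle mentioned here is commonly implemented as Deb's comparison rules. The sketch below is an assumed minimal Python rendering of those rules, not the authors' code: a feasible candidate beats an infeasible one, two feasible candidates compare by objective, and two infeasible ones compare by total constraint violation, so no penalty factor needs tuning.

    ```python
    # Feasibility-rule comparison for constrained optimization.
    def violation(x, constraints):
        """Total violation of inequality constraints g(x) <= 0."""
        return sum(max(0.0, g(x)) for g in constraints)

    def better(x, y, f, constraints):
        vx, vy = violation(x, constraints), violation(y, constraints)
        if vx == 0.0 and vy == 0.0:
            return x if f(x) <= f(y) else y      # both feasible: lower objective wins
        if vx == 0.0 or vy == 0.0:
            return x if vx == 0.0 else y         # feasibility beats infeasibility
        return x if vx <= vy else y              # both infeasible: smaller violation wins

    f = lambda x: (x[0] - 2.0) ** 2
    g = [lambda x: x[0] - 1.0]                   # feasible region: x0 <= 1
    print(better([0.5], [3.0], f, g))            # [0.5] -- the feasible point wins
    ```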

  5. Advanced Hard Real-Time Operating System, the Maruti Project. Part 2.

    Science.gov (United States)

    1997-01-01

    Report DASG-60-92-C-0055, February 14, 1995. Cited reference: "The Maruti hard real-time operating system," ACM SIGOPS Operating Systems Review, 23:90-106, July 1989; C. L. Liu and J. Layland. Abstract: The Maruti Real-Time Operating System was developed for applications that must meet hard real-time constraints. In order...

  6. Distance Constraint Satisfaction Problems

    Science.gov (United States)

    Bodirsky, Manuel; Dalmau, Victor; Martin, Barnaby; Pinsker, Michael

    We study the complexity of constraint satisfaction problems for templates Γ that are first-order definable in (ℤ; succ), the integers with the successor relation. Assuming a widely believed conjecture from finite domain constraint satisfaction (we require the tractability conjecture by Bulatov, Jeavons and Krokhin in the special case of transitive finite templates), we provide a full classification for the case that Γ is locally finite (i.e., the Gaifman graph of Γ has finite degree). We show that one of the following is true: The structure Γ is homomorphically equivalent to a structure with a certain majority polymorphism (which we call modular median) and CSP(Γ) can be solved in polynomial time, or Γ is homomorphically equivalent to a finite transitive structure, or CSP(Γ) is NP-complete.

  7. On scale dependence of hardness

    International Nuclear Information System (INIS)

    Shorshorov, M.Kh.; Alekhin, V.P.; Bulychev, S.I.

    1977-01-01

    The concept of hardness as a structure-sensitive characteristic of a material is considered. It is shown that in conditions of a decreasing stress field under the indenter, the hardness function is determined by the average distance, L_a, between the stops (fixed and sessile dislocations, segregation particles, etc.). In the general case, L_a depends on the size of the impression, which explains the great diversity of hardness functions. The concept of average true deformation rate during indentation is introduced.

  8. Photon technology. Hard photon technology; Photon technology. Hard photon gijutsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-03-01

    Research results on hard photon technology have been summarized as part of a novel technology development highly utilizing the quantum nature of the photon. Hard photon technology refers to photon beam technologies which use photons in the 0.1 to 200 nm wavelength region. Hard photons have not been used in industry due to the lack of suitable photon sources and optical devices. However, hard photons in this wavelength region are expected to bring about innovations in such areas as ultrafine processing and material synthesis due to their atom-selective reaction, inner-shell excitation reaction, and spatially high resolution. Accordingly, technological themes and possibilities have been surveyed. Although there are principle proposals and verifications of individual technologies for hard photon generation, regulation and utilization, they are still far from practical application. For the photon source technology, the laser-diode-pumped driver laser technology, laser plasma photon source technology, synchrotron radiation photon source technology, and vacuum ultraviolet photon source technology are presented. For the optical device technology, the multi-layer film technology for beam mirrors and the aspherical lens processing technology are introduced. Also described are the reduction lithography technology, the hard photon excitation process, and methods of analysis and measurement. 430 refs., 165 figs., 23 tabs.

  9. Algorithmic power management - Energy minimisation under real-time constraints

    NARCIS (Netherlands)

    Gerards, Marco Egbertus Theodorus

    2014-01-01

    Energy consumption is a major concern for designers of embedded devices. Especially for battery operated systems (like many embedded systems), the energy consumption limits the time for which a device can be active, and the amount of processing that can take place. In this thesis we study how the

  11. An Implicit Enumeration Algorithm with Binary-Valued Constraints.

    Science.gov (United States)

    1986-03-01

    One class of test problems is the National Basketball Association (NBA) scheduling problem developed by Bean (1980), as discussed in detail in the Appendix (§A.1 gives the formulation). The last set of testing problems (§6.2.3) involves the NBA scheduling problem; a detailed description of the problem is given in the Appendix.

  12. A focussed dynamic path finding algorithm with constraints

    CSIR Research Space (South Africa)

    Leenen, L

    2013-11-01

    Full Text Available heuristic to focus the search for an optimal path. Existing approaches to solving path planning problems tend to combine path costs with various other criteria such as obstacle avoidance in the objective function which is being optimised. The authors...

  13. A genetic algorithm approach for open-pit mine production scheduling

    Directory of Open Access Journals (Sweden)

    Aref Alipour

    2017-06-01

    Full Text Available In an Open-Pit Production Scheduling (OPPS) problem, the goal is to determine the mining sequence of an orebody as a block model. In this article, a linear programming formulation is used to achieve this goal. The OPPS problem is known to be NP-hard, so an exact mathematical model cannot be applied to solve real-world instances. The Genetic Algorithm (GA) is a well-known member of the evolutionary algorithms that are widely utilized to solve NP-hard problems. Herein, GA is implemented on a hypothetical two-dimensional (2D) copper orebody model. The orebody is represented as a 2D array of blocks. Likewise, a counterpart 2D GA array was used to represent the OPPS problem's solution space. Thereupon, the fitness function is defined according to the OPPS problem's objective function to assess the solution domain. Also, a new normalization method was used for handling the block sequencing constraint. A numerical study is performed to compare the solutions of the exact and GA-based methods. It is shown that the gap between GA and the optimal solution from the exact method is less than 5%; hereupon, GA is found to be efficient in solving the OPPS problem.
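
    As a hedged illustration of the block sequencing constraint (not the paper's normalization method), the Python sketch below checks that a 2D extraction sequence mines each block only after the block directly above it and its upper diagonal neighbours have been mined; the slope rule and block numbering are illustrative assumptions.

    ```python
    # 2D precedence check for open-pit extraction sequences.
    def precedence_ok(sequence, cols):
        mined = set()
        for block in sequence:                     # block id = row * cols + col
            r, c = divmod(block, cols)
            if r > 0:
                above = [(r - 1, cc) for cc in (c - 1, c, c + 1) if 0 <= cc < cols]
                if any((rr * cols + cc) not in mined for rr, cc in above):
                    return False                   # an uncovered block was mined
            mined.add(block)
        return True

    # 2x3 pit: the top row must precede the bottom-row block beneath it.
    print(precedence_ok([0, 1, 2, 4], cols=3))     # True: block 4 needs 0, 1, 2
    print(precedence_ok([0, 4], cols=3))           # False: block 2 not yet mined
    ```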

  14. WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations

    Science.gov (United States)

    Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi

    We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm and a variation of the WST algorithm, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be commonly used and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate since some light paths between source and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to the delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm comprising the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in a delay constrained environment such as an IPTV application. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm compared with the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of the light-tree request blocking.
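
    A hedged Python sketch of the composite idea follows: attach each destination to the cheapest node of the current tree (Steiner-like, which encourages link sharing), but fall back to the plain shortest path from the source whenever the attachment would violate the delay bound. A single link weight doubles as cost and delay for brevity; this illustrates the WST/CWST trade-off, not the authors' exact algorithms.

    ```python
    # Composite tree construction: Steiner-like attachment with a
    # shortest-path fallback under a delay bound.
    import heapq

    def dijkstra(adj, sources):
        """Cheapest cost and predecessor from any node in `sources`."""
        dist = {s: 0.0 for s in sources}
        prev = {}
        pq = [(0.0, s) for s in sources]
        heapq.heapify(pq)
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj[u]:
                if d + w < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + w, u
                    heapq.heappush(pq, (d + w, v))
        return dist, prev

    def path_to(prev, sources, v):
        path = [v]
        while path[-1] not in sources:
            path.append(prev[path[-1]])
        return path[::-1]

    def build_tree(adj, src, dests, delay_bound):
        edges, delay = set(), {src: 0.0}
        for t in dests:
            dist, prev = dijkstra(adj, set(delay))       # attach via nearest tree node
            path = path_to(prev, set(delay), t)
            if delay[path[0]] + dist[t] > delay_bound:   # bound violated: fall back.
                # Under a single metric the fallback is also the minimum-delay
                # path, so if it violates the bound no tree can meet it.
                dist, prev = dijkstra(adj, {src})
                path = path_to(prev, {src}, t)
            for u, v in zip(path, path[1:]):             # record links and delays
                delay[v] = delay[u] + dict(adj[u])[v]
                edges.add((u, v))
        return edges

    adj = {
        "s": [("a", 1.0), ("d1", 4.0)],
        "a": [("s", 1.0), ("d1", 1.0), ("d2", 1.0)],
        "d1": [("s", 4.0), ("a", 1.0)],
        "d2": [("a", 1.0)],
    }
    print(sorted(build_tree(adj, "s", ["d1", "d2"], delay_bound=3.0)))
    # [('a', 'd1'), ('a', 'd2'), ('s', 'a')] -- both destinations share link s-a
    ```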

  15. Producing Satisfactory Solutions to Scheduling Problems: An Iterative Constraint Relaxation Approach

    Science.gov (United States)

    Chien, S.; Gratch, J.

    1994-01-01

    One drawback to using constraint propagation in planning and scheduling systems is that when a problem has an unsatisfiable set of constraints, such algorithms typically only show that no solution exists. While technically correct, in practical situations it is desirable in these cases to produce a satisficing solution that satisfies the most important constraints (typically defined in terms of maximizing a utility function). This paper describes an iterative constraint relaxation approach in which the scheduler uses heuristics to progressively relax problem constraints until the problem becomes satisfiable. We present empirical results of applying these techniques to the problem of scheduling spacecraft communications for JPL/NASA antenna resources.
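
    The relaxation loop itself is simple to state; below is a hedged Python sketch in which a black-box solver is retried while the lowest-utility constraint is dropped whenever the current set is unsatisfiable. The toy single-antenna "solver" and the utility function are illustrative assumptions, not the paper's heuristics.

    ```python
    # Iterative constraint relaxation: drop the least important constraint
    # until the remaining set becomes satisfiable.
    def relax_and_solve(constraints, utility, solve):
        active = sorted(constraints, key=utility)      # least important first
        dropped = []
        while True:
            solution = solve(active)
            if solution is not None:
                return solution, dropped
            if not active:
                return None, dropped                   # nothing left to relax
            dropped.append(active.pop(0))              # relax lowest-utility constraint

    # Toy demo: requests on one antenna; the "solver" succeeds only when no
    # two requested intervals overlap.
    def solve(reqs):
        reqs = sorted(reqs)
        ok = all(a_end <= b_start for (_, a_end), (b_start, _) in zip(reqs, reqs[1:]))
        return reqs if ok else None

    requests = [(0, 4), (3, 5), (6, 8)]                # (0, 4) and (3, 5) clash
    sol, dropped = relax_and_solve(requests, utility=lambda r: r[1] - r[0], solve=solve)
    print(sol, dropped)  # keeps the longer request (0, 4); drops (3, 5)
    ```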

  16. Mean-Reverting Portfolio With Budget Constraint

    Science.gov (United States)

    Zhao, Ziping; Palomar, Daniel P.

    2018-05-01

    This paper considers the mean-reverting portfolio design problem arising from statistical arbitrage in the financial markets. We first propose a general problem formulation aimed at finding a portfolio of underlying component assets by optimizing a mean-reversion criterion characterizing the mean-reversion strength, taking into consideration the variance of the portfolio and an investment budget constraint. Then several specific problems are considered based on the general formulation, and efficient algorithms are proposed. Numerical results on both synthetic and market data show that our proposed mean-reverting portfolio design methods can generate consistent profits and outperform the traditional design methods and the benchmark methods in the literature.

  17. Efficient Searching with Linear Constraints

    DEFF Research Database (Denmark)

    Agarwal, Pankaj K.; Arge, Lars Allan; Erickson, Jeff

    2000-01-01

    We show how to preprocess a set S of points in ℝ^d into an external memory data structure that efficiently supports linear-constraint queries. Each query is in the form of a linear constraint x_d ≤ a_0 + ∑_{i=1}^{d-1} a_i x_i; the data structure must report all the points of S that satisfy the constraint. This pr...

  18. Deepening Contractions and Collateral Constraints

    DEFF Research Database (Denmark)

    Jensen, Henrik; Ravn, Søren Hove; Santoro, Emiliano

    and occasionally non-binding credit constraints. Easier credit access increases the likelihood that constraints become slack in the face of expansionary shocks, while contractionary shocks are further amplified due to tighter constraints. As a result, busts gradually become deeper than booms. Based...

  19. Path following mobile robot in the presence of velocity constraints

    DEFF Research Database (Denmark)

    Bak, Martin; Poulsen, Niels Kjølstad; Ravn, Ole

    2001-01-01

    This paper focuses on path following algorithms for mobile robots with velocity constraints on the wheels. The path considered consists of straight lines intersected with given angles. We present a fast real-time receding horizon controller which anticipates the intersections and smoothly control...

  20. Constraint-based job shop scheduling with ILOG SCHEDULER

    NARCIS (Netherlands)

    Nuijten, W.P.M.; Le Pape, C.

    1998-01-01

    We introduce constraint-based scheduling and discuss its main principles. An approximation algorithm based on tree search is developed for the job shop scheduling problem using ILOG SCHEDULER. A new way of calculating lower bounds on the makespan of the job shop scheduling problem is presented and

  1. Teaching Database Design with Constraint-Based Tutors

    Science.gov (United States)

    Mitrovic, Antonija; Suraweera, Pramuditha

    2016-01-01

    Design tasks are difficult to teach, due to large, unstructured solution spaces, underspecified problems, non-existent problem solving algorithms and stopping criteria. In this paper, we comment on our approach to develop KERMIT, a constraint-based tutor that taught database design. In later work, we re-implemented KERMIT as EER-Tutor, and…

  2. A novel constraint for thermodynamically designing DNA sequences.

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    Full Text Available Biotechnological and biomolecular advances have introduced novel uses for DNA such as DNA computing, storage, and encryption. For these applications, DNA sequence design requires maximal desired (and minimal undesired) hybridizations, a hybridization being the formation of a single new DNA duplex from two single DNA strands. Here, we propose a novel constraint for designing DNA sequences based on thermodynamic properties. Existing constraints for DNA design are based on the Hamming distance, a constraint that does not address the thermodynamic properties of the DNA sequence. Using a unique, improved genetic algorithm, we designed DNA sequence sets which satisfy different distance constraints and employ a free-energy gap based on the minimum free energy (MFE) to gauge the thermodynamic properties of DNA sequence sets. When compared to the best constraints based on the Hamming distance, our method yielded better thermodynamic qualities. We then used our improved genetic algorithm to obtain lower-bound DNA sequence sets. Here, we discuss the effects of the novel constraint parameters on the free-energy gap.

  3. On a Nonstationary Route Problem with Constraints

    Directory of Open Access Journals (Sweden)

    A. G. Chentsov

    2012-01-01

    Full Text Available The extremal route problem of permutations under constraints in the form of precedence conditions is investigated. It is supposed that an executor leaves the initial point (the base), after which he visits a system of megalopolises (finite goal sets) and performs some work at each megalopolis. The cost functions for the executor's moves and interior works depend on the "visiting moment", which can correspond to real time or to the natural ordinal succession (the first visit, the second visit, and so on). An economical variant of the widely interpreted dynamic programming method (DPM) is constructed. On this basis an optimal computer-realized algorithm is constructed. A variant of a greedy algorithm is also proposed.
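
    For a small instance, the dynamic programming method can be sketched directly. The Python code below is a textbook bitmask DP under stated assumptions, not the paper's economical variant: it finds a minimum-cost visiting order under precedence constraints, with a cost that depends on the visiting moment (here, the ordinal move index).

    ```python
    # Bitmask DP for a precedence-constrained route with moment-dependent costs.
    from functools import lru_cache

    def best_route(n, cost, before):
        """before: list of (a, b) pairs meaning a must be visited before b."""
        FULL = (1 << n) - 1

        @lru_cache(maxsize=None)
        def dp(visited, last):
            if visited == FULL:
                return 0.0, ()
            k = bin(visited).count("1")            # visiting moment = points already visited
            best = (float("inf"), ())
            for j in range(n):
                if visited & (1 << j):
                    continue
                # precedence: every a with (a, j) in `before` must be visited already
                if any(not visited & (1 << a) for a, b in before if b == j):
                    continue
                tail_cost, tail = dp(visited | (1 << j), j)
                best = min(best, (cost(last, j, k) + tail_cost, (j,) + tail))
            return best

        return dp(1 << 0, 0)                       # start at point 0 (the base)

    cost = lambda i, j, k: abs(i - j) * (1.0 + 0.1 * k)   # later moves cost more
    print(best_route(4, cost, before=[(2, 3)]))           # (~3.6, (1, 2, 3))
    ```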

  4. A new PPP algorithm for deformation monitoring with single ...

    Indian Academy of Sciences (India)

    However, the existing SF PPP methods can hardly be implemented for deformation monitoring directly due to their limited ... solutions for various applications, such as survey- ... SEID model, traditional DF PPP and the new PPP algorithm with ...

  5. On Maximal Hard-Core Thinnings of Stationary Particle Processes

    Science.gov (United States)

    Hirsch, Christian; Last, Günter

    2018-02-01

    The present paper studies existence and distributional uniqueness of subclasses of stationary hard-core particle systems arising as thinnings of stationary particle processes. These subclasses are defined by natural maximality criteria. We investigate two specific criteria, one related to the intensity of the hard-core particle process, the other one being a local optimality criterion on the level of realizations. In fact, the criteria are equivalent under suitable moment conditions. We show that stationary hard-core thinnings satisfying such criteria exist and are frequently distributionally unique. More precisely, distributional uniqueness holds in subcritical and barely supercritical regimes of continuum percolation. Additionally, based on the analysis of a specific example, we argue that fluctuations in grain sizes can play an important role for establishing distributional uniqueness at high intensities. Finally, we provide a family of algorithmically constructible approximations whose volume fractions are arbitrarily close to the maximum.

  6. Parameterized Algorithms for Survivable Network Design with Uniform Demands

    DEFF Research Database (Denmark)

    Bang-Jensen, Jørgen; Klinkby Knudsen, Kristine Vitting; Saurabh, Saket

    2018-01-01

    problem in combinatorial optimization that captures numerous well-studied problems in graph theory and graph algorithms. Consequently, there is a long line of research into exact-polynomial time algorithms as well as approximation algorithms for various restrictions of this problem. An important...... that SNDP is W[1]-hard for both arc and vertex connectivity versions on digraphs. The core of our algorithms is composed of new combinatorial results on connectivity in digraphs and undirected graphs....

  7. Adaptive laser link reconfiguration using constraint propagation

    Science.gov (United States)

    Crone, M. S.; Julich, P. M.; Cook, L. M.

    1993-01-01

    This paper describes Harris AI research performed on the Adaptive Link Reconfiguration (ALR) study for Rome Lab, and focuses on the application of constraint propagation to the problem of link reconfiguration for the proposed space based Strategic Defense System (SDS) Brilliant Pebbles (BP) communications system. According to the concept of operations at the time of the study, laser communications will exist between BP's and to ground entry points. Long-term links typical of RF transmission will not exist. This study addressed an initial implementation of BP's based on the Global Protection Against Limited Strikes (GPALS) SDI mission. The number of satellites and rings studied was representative of this problem. An orbital dynamics program was used to generate line-of-site data for the modeled architecture. This was input into a discrete event simulation implemented in the Harris developed COnstraint Propagation Expert System (COPES) Shell, developed initially on the Rome Lab BM/C3 study. Using a model of the network and several heuristics, the COPES shell was used to develop the Heuristic Adaptive Link Ordering (HALO) Algorithm to rank and order potential laser links according to probability of communication. A reduced set of links based on this ranking would then be used by a routing algorithm to select the next hop. This paper includes an overview of Constraint Propagation as an Artificial Intelligence technique and its embodiment in the COPES shell. It describes the design and implementation of both the simulation of the GPALS BP network and the HALO algorithm in COPES. This is described using a Data Flow Diagram, State Transition Diagrams, and Structured English PDL. It describes a laser communications model and the heuristics involved in rank-ordering the potential communication links. The generation of simulation data is described along with its interface via COPES to the Harris developed View Net graphical tool for visual analysis of communications

  8. Low-cost design of next generation SONET/SDH networks with multiple constraints

    CSIR Research Space (South Africa)

    Karem, TR

    2007-07-01

    Full Text Available ... based on constraint satisfaction programming technology is proposed. The algorithm is tested in the OPNET simulation environment using different network models derived from a hypothetical case study of an optical network design for the Bellville area in Cape Town, South...

  9. Multi-area economic dispatch with tie-line constraints employing ...

    African Journals Online (AJOL)

    user

    The economic dispatch problem is frequently solved without considering ... programming algorithm was proposed for the MAED solution with tie-line constraints ..... are the difference between two randomly chosen parameter vectors, a concept.

  10. Searching for genomic constraints

    Energy Technology Data Exchange (ETDEWEB)

    Liò, P. [Cambridge Univ. (United Kingdom). Genetics Dept.]; Ruffo, S. [Florence Univ. (Italy). Fac. di Ingegneria, Dipt. di Energetica 'S. Stecco']

    1998-01-01

    The authors have analyzed general properties of very long DNA sequences belonging to simple and complex organisms, by using different correlation methods. They have distinguished those base compositional rules that concern the entire genome, which they call 'genomic constraints', from the rules that depend on the 'external natural selection' acting on single genes, i.e. protein-centered constraints. They show that G + C content, purine/pyrimidine distributions and the biological complexity of the organism are the most important factors which determine base compositional rules and genome complexity. Three main facts are reported here: bacteria with high G + C content have more restrictions on base composition than those with low G + C content; at constant G + C content, more complex organisms, ranging from prokaryotes to higher eukaryotes (e.g. human), display an increase of repeats 10-20 nucleotides long, which are also partly responsible for long-range correlations; word selection of length 3 to 10 is stronger in human and in bacteria, for two distinct reasons. With respect to previous studies, they have also compared the genomic sequence of the archaeon Methanococcus jannaschii with those of bacteria and eukaryotes: it sometimes shows an intermediate statistical behaviour.

  12. Economic dispatch using chaotic bat algorithm

    International Nuclear Information System (INIS)

    Adarsh, B.R.; Raghunathan, T.; Jayabarathi, T.; Yang, Xin-She

    2016-01-01

    This paper presents the application of a new metaheuristic optimization algorithm, the chaotic bat algorithm for solving the economic dispatch problem involving a number of equality and inequality constraints such as power balance, prohibited operating zones and ramp rate limits. Transmission losses and multiple fuel options are also considered for some problems. The chaotic bat algorithm, a variant of the basic bat algorithm, is obtained by incorporating chaotic sequences to enhance its performance. Five different example problems comprising 6, 13, 20, 40 and 160 generating units are solved to demonstrate the effectiveness of the algorithm. The algorithm requires little tuning by the user, and the results obtained show that it either outperforms or compares favorably with several existing techniques reported in literature. - Highlights: • The chaotic bat algorithm, a new metaheuristic optimization algorithm has been used. • The problem solved – the economic dispatch problem – is nonlinear, discontinuous. • It has number of equality and inequality constraints. • The algorithm has been demonstrated to be applicable on high dimensional problems.
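
    The chaotic ingredient is typically a deterministic map substituted for uniform random draws. Below is a hedged Python sketch using the logistic map to drive the bat algorithm's frequency update; the function names are illustrative, and the dispatch model itself (power balance, prohibited zones, ramp limits) is omitted.

    ```python
    # Logistic-map chaotic sequence as a drop-in for uniform random draws.
    def logistic_map(x0=0.7, r=4.0):
        """Deterministic chaotic sequence in (0, 1); avoid seeds whose orbit
        hits a fixed point (e.g. 0, 0.5, 0.75 for r = 4)."""
        x = x0
        while True:
            x = r * x * (1.0 - x)
            yield x

    chaos = logistic_map()

    def bat_frequency(f_min, f_max):
        # Basic bat algorithm: f = f_min + (f_max - f_min) * beta with
        # beta ~ U(0, 1); the chaotic variant draws beta from the map instead.
        beta = next(chaos)
        return f_min + (f_max - f_min) * beta

    print([round(bat_frequency(0.0, 2.0), 3) for _ in range(5)])
    # [1.68, 1.075, 1.989, 0.045, 0.176] -- deterministic but erratic
    ```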

  13. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^{4/3}.

  14. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus., 23 tables. New to this edition: Chapter 9

  15. Constraint-based solver for the Military unit path finding problem

    CSIR Research Space (South Africa)

    Leenen, L

    2010-04-01

    Full Text Available -based approach because it requires flexibility in modelling. The authors formulate the MUPFP as a constraint satisfaction problem and a constraint-based extension of the search algorithm. The concept demonstrator uses a provided map, for example taken from Google...

  16. Efficient Algorithms for a Family of Matroid Intersection Problems

    National Research Council Canada - National Science Library

    Gabow, Harold N; Tarjan, Robert E

    1982-01-01

    .... its efficiency is demonstrated by implementations on specific matroids. In all cases but one, the running time matches the best-known algorithm for the problem without the red element constraint...

  17. Capacity Constrained Routing Algorithms for Evacuation Route Planning

    National Research Council Canada - National Science Library

    Lu, Qingsong; George, Betsy; Shekhar, Shashi

    2006-01-01

    .... In this paper, we propose a new approach, namely a capacity constrained routing planner which models capacity as a time series and generalizes shortest path algorithms to incorporate capacity constraints...

  18. Unraveling Quantum Annealers using Classical Hardness

    Science.gov (United States)

    Martin-Mayor, Victor; Hen, Itay

    2015-01-01

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as ‘D-Wave’ chips, promise to solve practical optimization problems potentially faster than conventional ‘classical’ computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize ‘temperature chaos’ as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip. PMID:26483257

  19. Maximizing Entropy of Pickard Random Fields for 2x2 Binary Constraints

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren

    2014-01-01

    This paper considers the problem of maximizing the entropy of two-dimensional (2D) Pickard Random Fields (PRF) subject to constraints. We consider binary Pickard Random Fields, which provides a 2D causal finite context model and use it to define stationary probabilities for 2x2 squares, thus...... allowing us to calculate the entropy of the field. All possible binary 2x2 constraints are considered and all constraints are categorized into groups according to their properties. For constraints which can be modeled by a PRF approach and with positive entropy, we characterize and provide statistics...... of the maximum PRF entropy. As examples, we consider the well known hard square constraint along with a few other constraints....
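
    The hard square constraint mentioned as an example forbids horizontally or vertically adjacent 1s. A small Python checker makes the constraint concrete (an illustrative aid under that definition, unrelated to the authors' entropy computation):

    ```python
    # Checker for the binary hard square constraint: no two 1s may be
    # horizontally or vertically adjacent anywhere in the field.
    def satisfies_hard_square(field):
        rows, cols = len(field), len(field[0])
        for r in range(rows):
            for c in range(cols):
                if field[r][c] == 1:
                    if c + 1 < cols and field[r][c + 1] == 1:
                        return False          # horizontal neighbour
                    if r + 1 < rows and field[r + 1][c] == 1:
                        return False          # vertical neighbour
        return True

    print(satisfies_hard_square([[1, 0], [0, 1]]))  # True: only diagonal contact
    print(satisfies_hard_square([[1, 1], [0, 0]]))  # False
    ```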

  20. Heat exchanger networks design with constraints

    International Nuclear Information System (INIS)

    Amidpur, M.; Zoghi, A.; Nasiri, N.

    2000-01-01

    So far there have been two approaches to the problem of heat recovery system design where stream matching constraints exist. The first approach involves mathematical techniques for solving the combinatorial problem while taking due recognition of the constraints. These methodologies are now efficient, but they still suffer from taking a significant amount of control and direction away from the designer. The second approach is based upon so-called pinch technology and involves an adaptation of the standard problem table algorithm. Unfortunately, the proposed methodologies are not very easy to understand, and they therefore fail to provide the insight generally associated with these approaches. Here, a new pinch-based methodology is presented. In this method, we modified the traditional numerical targeting procedure (the problem table algorithm) into a stream cascade table. Unconstrained groups are established using artificial intelligence methods such that they have minimum utility consumption among the different alternatives. Each group is an individual network; therefore, the traditional optimization used in pinch technology should be employed. By transferring energy between groups, heat recovery can be maximized; each group is then designed individually, and finally the networks are combined. Among the advantages of this method are simple targeting and easy network design. Besides, the approach has the potential of using new network design methods such as the dual temperature approach, flexible pinch design, and pseudo pinch design. It is hoped that this methodology provides insight and easy network design.

  1. Rapid sampling of stochastic displacements in Brownian dynamics simulations with stresslet constraints

    Science.gov (United States)

    Fiore, Andrew M.; Swan, James W.

    2018-01-01

    Brownian Dynamics simulations are an important tool for modeling the dynamics of soft matter. However, accurate and rapid computations of the hydrodynamic interactions between suspended, microscopic components in a soft material are a significant computational challenge. Here, we present a new method for Brownian dynamics simulations of suspended colloidal scale particles such as colloids, polymers, surfactants, and proteins subject to a particular and important class of hydrodynamic constraints. The total computational cost of the algorithm is practically linear with the number of particles modeled and can be further optimized when the characteristic mass fractal dimension of the suspended particles is known. Specifically, we consider the so-called "stresslet" constraint for which suspended particles resist local deformation. This acts to produce a symmetric force dipole in the fluid and imparts rigidity to the particles. The presented method is an extension of the recently reported positively split formulation for Ewald summation of the Rotne-Prager-Yamakawa mobility tensor to higher order terms in the hydrodynamic scattering series accounting for force dipoles [A. M. Fiore et al., J. Chem. Phys. 146(12), 124116 (2017)]. The hydrodynamic mobility tensor, which is proportional to the covariance of particle Brownian displacements, is constructed as an Ewald sum in a novel way which guarantees that the real-space and wave-space contributions to the sum are independently symmetric and positive-definite for all possible particle configurations. This property of the Ewald sum is leveraged to rapidly sample the Brownian displacements from a superposition of statistically independent processes with the wave-space and real-space contributions as respective covariances. The cost of computing the Brownian displacements in this way is comparable to the cost of computing the deterministic displacements. The addition of a stresslet constraint to the over-damped particle

  2. Hard diffraction and rapidity gaps

    International Nuclear Information System (INIS)

    Brandt, A.

    1995-09-01

    The field of hard diffraction, which studies events with a rapidity gap and a hard scattering, has expanded dramatically recently. A review of new results from CDF, DØ, H1 and ZEUS will be given. These results include diffractive jet production, deep-inelastic scattering in large rapidity gap events, rapidity gaps between high transverse energy jets, and a search for diffractive W-boson production. The combination of these results gives new insight into the exchanged object, believed to be the pomeron. The results are consistent with factorization and with a hard pomeron that contains both quarks and gluons. There is also evidence for the exchange of a strongly interacting color singlet in high momentum transfer events.

  3. Initiative hard coal; Initiative Steinkohle

    Energy Technology Data Exchange (ETDEWEB)

    Leonhardt, J.

    2007-08-02

    In order to decrease the European Union's import dependence for hard coal, the author has submitted suggestions to the Director for Conventional Sources of Energy (Directorate-General for Energy and Transport) of the European Community, which met with a positive response. These suggestions are summarized in the paper 'Initiative Hard Coal'. After clarifying the starting situation and defining the target, the prerequisites for a better use of hard coal deposits as a raw material in the European Union are pointed out. On that basis, concrete measures are suggested. Apart from the conditions of the deposits, these concern new mining techniques and mining-economics developments, together with the associated tasks for the mining-machinery industry. (orig.)

  4. Service-Oriented Architecture (SOA) Instantiation within a Hard Real-Time, Deterministic Combat System Environment

    Science.gov (United States)

    Moreland, James D., Jr

    2013-01-01

    This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…

  5. Conformal symmetry and pion form factor: Soft and hard contributions

    International Nuclear Information System (INIS)

    Choi, Ho-Meoyng; Ji, Chueng-Ryong

    2006-01-01

    We discuss a constraint of conformal symmetry in the analysis of the pion form factor. The usual power-law behavior of the form factor obtained in the perturbative QCD analysis can also be attained by taking negligible quark masses in the nonperturbative quark model analysis, confirming the recent AdS/CFT correspondence. We analyze the transition from soft to hard contributions in the pion form factor considering a momentum-dependent dynamical quark mass from an appreciable constituent quark mass at low momentum region to a negligible current quark mass at high momentum region. We find a correlation between the shape of nonperturbative quark distribution amplitude and the amount of soft and hard contributions to the pion form factor

  6. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With an increase of forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
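
    The kinematic rule described above can be written down directly: each wheel's steering angle is chosen so the wheel rolls perpendicular to its radius from the desired center of rotation. The Python sketch below applies this geometry in the vehicle frame; the wheel positions and the target center are illustrative assumptions, and the paper's closed-loop control of the dynamic rotation center is not modelled.

    ```python
    # Steering angles that place the kinematic center of rotation at a
    # desired point (vehicle frame: x forward, y to the left).
    import math

    def steering_angles(wheels, center):
        """wheels: {name: (x, y)} positions; center: (xc, yc) desired rotation center."""
        xc, yc = center
        return {name: math.atan2(xw - xc, yc - yw)   # wheel axis tangent to circle about ICR
                for name, (xw, yw) in wheels.items()}

    # 4WS vehicle: axles 2.0 m ahead of and 1.0 m behind the reference point.
    wheels = {"front": (2.0, 0.0), "rear": (-1.0, 0.0)}
    angles = steering_angles(wheels, center=(0.0, 10.0))   # turn center 10 m to the left
    for name, a in angles.items():
        print(name, round(math.degrees(a), 2))
    # front steers left (~11.31 deg), rear steers right (~-5.71 deg), as in 4WS
    ```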

  7. Photon technology. Hard photon technology; Photon technology. Hard photon gijutsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    For the application of photons to industrial technologies, a hard photon technology in particular was surveyed which uses photon beams of 0.1-200 nm in wavelength. Its features, such as selective atomic reactions, dense inner-shell excitation and high spatial resolution by quantum energy, are expected to provide innovative techniques for various fields such as fine machining, material synthesis and advanced inspection technology. This wavelength region has hardly been utilized in industrial fields because of the poor development of suitable photon sources and optical devices. The developmental significance, time to practical use and issues of hard photon reduction lithography were surveyed as lithography in the ultra-fine region below 0.1 μm. On hard photon analysis/evaluation technology, the industrial use of analysis, measurement and evaluation technologies by micro-beam was reviewed, and optimum photon sources and optical systems were surveyed. Prediction of surface and surface-layer modification by inner-shell excitation, the future trend of this process, and the development of a vacuum ultraviolet light source were also surveyed. 383 refs., 153 figs., 17 tabs.

  8. Evaluation of hard fossil fuel

    International Nuclear Information System (INIS)

    Zivkovic, S.; Nuic, J.

    1999-01-01

    Because of its inexhaustible supplies, hard fossil fuel will represent a pillar of the power systems of the 21st century. Only high-calorie fossil fuels have market value and participate in world trade. Low-calorie fossil fuels (brown coal and lignite) are fuels consumed on the spot, and their value is expressed indirectly through the kWh generated. For the purpose of determining the real value of a tonne of low-calorie coal, the criteria that help in establishing the value of a tonne of hard coal have to be corrected so that it can be evaluated and assessed on the market. (author)

  9. Calorimeter triggers for hard collisions

    International Nuclear Information System (INIS)

    Landshoff, P.V.; Polkinghorne, J.C.

    1978-01-01

    We discuss the use of a forward calorimeter to trigger on hard hadron-hadron collisions. We give a derivation in the covariant parton model of the Ochs-Stodolsky scaling law for single-hard-scattering processes, and investigate the conditions under which a multiple-scattering mechanism might instead dominate. With a proton beam, this mechanism results in six transverse jets, with a total average multiplicity about twice that seen in ordinary events. We estimate that its cross section is likely to be experimentally accessible at values of the beam energy in the region of 100 GeV/c.

  10. Hardness of ion implanted ceramics

    International Nuclear Information System (INIS)

    Oliver, W.C.; McHargue, C.J.; Farlow, G.C.; White, C.W.

    1985-01-01

    It has been established that the wear behavior of ceramic materials can be modified through ion implantation. Studies have been done to characterize the effect of implantation on the structure and composition of ceramic surfaces. To understand how these changes affect the wear properties of the ceramic, other mechanical properties must be measured. To accomplish this, a commercially available ultra-low-load hardness tester has been used to characterize Al2O3 with different implanted species and doses. The hardness of the base material is compared with the highly damaged crystalline state as well as the amorphous material.

  11. Genetic algorithms and fuzzy multiobjective optimization

    CERN Document Server

    Sakawa, Masatoshi

    2002-01-01

    Since the introduction of genetic algorithms in the 1970s, an enormous number of articles together with several significant monographs and books have been published on this methodology. As a result, genetic algorithms have made a major contribution to optimization, adaptation, and learning in a wide variety of unexpected fields. Over the years, many excellent books in genetic algorithm optimization have been published; however, they focus mainly on single-objective discrete or other hard optimization problems under certainty. There appears to be no book that is designed to present genetic algorithms for solving not only single-objective but also fuzzy and multiobjective optimization problems in a unified way. Genetic Algorithms And Fuzzy Multiobjective Optimization introduces the latest advances in the field of genetic algorithm optimization for 0-1 programming, integer programming, nonconvex programming, and job-shop scheduling problems under multiobjectiveness and fuzziness. In addition, the book treats a w...

  12. Supergravity constraints on monojets

    International Nuclear Information System (INIS)

    Nandi, S.

    1986-01-01

    In the standard model, supplemented by N = 1 minimal supergravity, all the supersymmetric particle masses can be expressed in terms of a few unknown parameters. The resulting mass relations, and the laboratory and cosmological bounds on these superpartner masses, are used to put constraints on the supersymmetric origin of the CERN monojets. The latest MAC data at PEP exclude scalar quarks with masses up to 45 GeV as the origin of these monojets. The cosmological bounds, for a stable photino, exclude the mass range necessary for the light-gluino/heavy-squark production interpretation. These difficulties can be avoided by going beyond the minimal supergravity theory. Irrespective of the monojets, the importance of the stable photino as a source of the cosmological dark matter is emphasized.

  13. Temporal Concurrent Constraint Programming

    DEFF Research Database (Denmark)

    Valencia, Frank Dan

    Concurrent constraint programming (ccp) is a formalism for concurrency in which agents interact with one another by telling (adding) and asking (reading) information in a shared medium. Temporal ccp extends ccp by allowing agents to be constrained by time conditions. This dissertation studies temporal ccp by developing a process calculus called ntcc. The ntcc calculus generalizes the tcc model, the latter being a temporal ccp model for deterministic and synchronous timed reactive systems. The calculus is built upon a few basic ideas but captures several aspects of timed systems. As tcc, ntcc … structures, robotic devices, multi-agent systems and music applications. The calculus is provided with a denotational semantics that captures the reactive computations of processes in the presence of arbitrary environments. The denotation is proven to be fully abstract for a substantial fragment…

  14. Efficient RNA structure comparison algorithms.

    Science.gov (United States)

    Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason

    2017-12-01

    The recently proposed relative-addressing-based ([Formula: see text]) RNA secondary structure representation has important features by which an RNA structure database can be stored in a suffix array. A fast substructure search algorithm has been proposed based on binary search on this suffix array. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of given multiple RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduce a new problem for comparing multiple RNA structures, with a stricter similarity definition and objective, and we propose an algorithm that solves this problem efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in the compared RNAs. With the new resulting tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan). This website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another for automatically drawing the entire RNA structure from a given structure sequence.

  15. Chemical hardness and density functional theory

    Indian Academy of Sciences (India)

    Unknown

    RALPH G PEARSON. Chemistry Department, University of California, Santa Barbara, CA 93106, USA. Abstract. The concept of chemical hardness is reviewed from a personal point of view. Keywords. Hardness; softness; hard and soft acids and bases (HSAB); principle of maximum hardness (PMH); density functional theory (DFT) …

  16. A procedure for empirical initialization of adaptive testing algorithms

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1997-01-01

    In constrained adaptive testing, the numbers of constraints needed to control the content of the tests can easily run into the hundreds. Proper initialization of the algorithm becomes a requirement because the presence of large numbers of constraints slows down the convergence of the ability estimates.

  17. A universal algorithm to generate pseudo-random numbers based on uniform mapping as homeomorphism

    International Nuclear Information System (INIS)

    Fu-Lai, Wang

    2010-01-01

    A specific uniform map is constructed as a homeomorphism mapping chaotic time series into [0,1] to obtain sequences with a standard uniform distribution. Under the uniform map, a chaotic orbit and the resulting sequence orbit are topologically equivalent, so the map preserves most dynamic properties of chaotic systems, such as permutation entropy. Based on the uniform map, a universal algorithm to generate pseudo-random numbers is proposed, and the pseudo-random series is tested both theoretically and experimentally to follow the standard uniform distribution on [0,1]. The algorithm is not complex, imposes no high requirements on computer hardware, and thus computes quickly. The method not only extends the parameter space but also avoids the drawback of a small function space caused by the constraints on chaotic maps used to generate pseudo-random numbers. The algorithm can be applied to any chaotic system, produces pseudo-random sequences of high quality, and can thus serve as a good universal pseudo-random number generator. (general)

  18. A universal algorithm to generate pseudo-random numbers based on uniform mapping as homeomorphism

    Science.gov (United States)

    Wang, Fu-Lai

    2010-09-01

    A specific uniform map is constructed as a homeomorphism mapping chaotic time series into [0,1] to obtain sequences with a standard uniform distribution. Under the uniform map, a chaotic orbit and the resulting sequence orbit are topologically equivalent, so the map preserves most dynamic properties of chaotic systems, such as permutation entropy. Based on the uniform map, a universal algorithm to generate pseudo-random numbers is proposed, and the pseudo-random series is tested both theoretically and experimentally to follow the standard uniform distribution on [0,1]. The algorithm is not complex, imposes no high requirements on computer hardware, and thus computes quickly. The method not only extends the parameter space but also avoids the drawback of a small function space caused by the constraints on chaotic maps used to generate pseudo-random numbers. The algorithm can be applied to any chaotic system, produces pseudo-random sequences of high quality, and can thus serve as a good universal pseudo-random number generator.
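
    A concrete instance of this construction (a sketch, not the paper's specific uniform map) uses the logistic map x -> 4x(1-x): its invariant density 1/(π√(x(1-x))) is carried to the uniform density on [0,1] by the homeomorphism u = (2/π)·arcsin(√x), so the transformed orbit behaves as a uniform pseudo-random sequence. Floating-point orbits eventually degenerate, so a production generator would need reseeding safeguards.

        import math

        def logistic_uniform_prng(seed: float, n: int, burn_in: int = 100):
            """Generate n pseudo-random numbers in [0, 1] by iterating the
            chaotic logistic map x -> 4x(1-x) and applying the homeomorphism
            u = (2/pi) * asin(sqrt(x)), which maps the map's invariant density
            1/(pi*sqrt(x(1-x))) to the uniform density on [0, 1]."""
            x = seed
            for _ in range(burn_in):            # discard the transient
                x = 4.0 * x * (1.0 - x)
            out = []
            for _ in range(n):
                x = 4.0 * x * (1.0 - x)
                out.append((2.0 / math.pi) * math.asin(math.sqrt(x)))
            return out

        print(logistic_uniform_prng(seed=0.1234, n=5))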

  19. Minimal Flavor Constraints for Technicolor

    DEFF Research Database (Denmark)

    Sakuma, Hidenori; Sannino, Francesco

    2010-01-01

    We analyze the constraints on the vacuum polarization of the standard model gauge bosons from a minimal set of flavor observables valid for a general class of models of dynamical electroweak symmetry breaking. We show that the constraints have a strong impact on the self-coupling and masses…

  20. Social Constraints on Animate Vision

    National Research Council Canada - National Science Library

    Breazeal, Cynthia; Edsinger, Aaron; Fitzpatrick, Paul; Scassellati, Brian

    2000-01-01

    In humanoid robotic systems, or in any animate vision system that interacts with people, social dynamics provide additional levels of constraint and provide additional opportunities for processing economy...

  1. Seismic signals hard clipping overcoming

    Science.gov (United States)

    Olszowa, Paula; Sokolowski, Jakub

    2018-01-01

    In signal processing, clipping is understood as the phenomenon of limiting the signal beyond a certain threshold. It is often related to overloading of a sensor. Two particular types of clipping are recognized: soft and hard. Beyond the limiting value, soft clipping reduces the signal's real gain, while hard clipping stiffly fixes the signal values at the limit. In both cases a certain amount of signal information is lost. Obviously, if one possesses a model which describes the considered signal and the threshold value (which might be slightly more difficult to obtain in the soft clipping case), an attempt at restoring the signal can be made. It is commonly assumed that seismic signals take the form of an impulse response of some specific system. This may lead to the belief that a sine wave may be the most appropriate function to fit in the clipped period. However, this should be tested. In this paper, the possibility of overcoming hard clipping in seismic signals originating from a geoseismic station belonging to an underground mine is considered. A set of raw signals is hard-clipped manually, and then several different functions are fitted and compared in terms of least squares. The results are then analysed.
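
    The experiment can be sketched in Python with NumPy/SciPy: hard-clip a synthetic pulse, least-squares fit a sine to the samples that survived clipping, and fill the clipped span with the fitted model. The signal, threshold, and initial guess below are illustrative assumptions, not the paper's data.

        import numpy as np
        from scipy.optimize import curve_fit

        def sine(t, A, f, phi, c):
            return A * np.sin(2 * np.pi * f * t + phi) + c

        # Synthetic "seismic" pulse, hard-clipped at 60% of its peak amplitude.
        t = np.linspace(0, 1, 500)
        clean = np.exp(-3 * t) * np.sin(2 * np.pi * 5 * t)
        threshold = 0.6 * np.abs(clean).max()
        clipped = np.clip(clean, -threshold, threshold)

        # Fit only the samples that survived clipping...
        mask = np.abs(clipped) < threshold
        p0 = [threshold, 5.0, 0.0, 0.0]          # rough initial guess
        params, _ = curve_fit(sine, t[mask], clipped[mask], p0=p0)

        # ...and replace the clipped samples with the fitted model.
        restored = clipped.copy()
        restored[~mask] = sine(t[~mask], *params)
        rss = np.sum((restored - clean) ** 2)
        print(f"residual sum of squares vs. ground truth: {rss:.4f}")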

  2. Hard equality constrained integer knapsacks

    NARCIS (Netherlands)

    Aardal, K.I.; Lenstra, A.K.; Cook, W.J.; Schulz, A.S.

    2002-01-01

    We consider the following integer feasibility problem: "Given positive integers a_0, a_1, ..., a_n, with gcd(a_1, ..., a_n) = 1 and a = (a_1, ..., a_n), does there exist a nonnegative integer vector x satisfying ax = a_0?" Some instances of this type have been found to be extremely hard to solve.
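
    For moderate a_0, feasibility can be checked directly with a pseudo-polynomial dynamic program over reachable sums (a sketch; the instances studied in the paper are constructed to defeat LP-based branch-and-bound rather than this kind of enumeration):

        def feasible(a, a0):
            """Is there a nonnegative integer vector x with a . x = a0?
            Unbounded-knapsack feasibility via dynamic programming, O(n * a0)."""
            reachable = [False] * (a0 + 1)
            reachable[0] = True
            for s in range(1, a0 + 1):
                reachable[s] = any(s >= ai and reachable[s - ai] for ai in a)
            return reachable[a0]

        print(feasible([6, 10, 15], 17))   # False: 17 is not representable
        print(feasible([6, 10, 15], 21))   # True: 21 = 6 + 15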

  3. Stress in hard metal films

    NARCIS (Netherlands)

    Janssen, G.C.A.M.; Kamminga, J.D.

    2004-01-01

    In the absence of thermal stress, tensile stress in hard metal films is caused by grain boundary shrinkage and compressive stress is caused by ion peening. It is shown that the two contributions are additive. Moreover, tensile stress generated at the grain boundaries is not relaxed by ion peening.

  4. Greedy Algorithms for Nonnegativity-Constrained Simultaneous Sparse Recovery

    Science.gov (United States)

    Kim, Daeun; Haldar, Justin P.

    2016-01-01

    This work proposes a family of greedy algorithms to jointly reconstruct a set of vectors that are (i) nonnegative and (ii) simultaneously sparse with a shared support set. The proposed algorithms generalize previous approaches that were designed to impose these constraints individually. Similar to previous greedy algorithms for sparse recovery, the proposed algorithms iteratively identify promising support indices. In contrast to previous approaches, the support index selection procedure has been adapted to prioritize indices that are consistent with both the nonnegativity and shared support constraints. Empirical results demonstrate for the first time that the combined use of simultaneous sparsity and nonnegativity constraints can substantially improve recovery performance relative to existing greedy algorithms that impose less signal structure. PMID:26973368
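
    A generic sketch of this family of methods (not the authors' exact algorithms) is a simultaneous nonnegative orthogonal-matching-pursuit loop: rank candidate rows by their nonnegative correlation with all residual columns, grow a shared support, and refit each column by nonnegative least squares.

        import numpy as np
        from scipy.optimize import nnls

        def nn_simultaneous_omp(A, Y, k):
            """Greedy recovery of X >= 0 with at most k shared support rows
            so that A @ X ~ Y.  A: (m, n) dictionary, Y: (m, L) measurements."""
            n, L = A.shape[1], Y.shape[1]
            support, R = [], Y.copy()
            for _ in range(k):
                corr = A.T @ R                              # (n, L) correlations
                scores = np.maximum(corr, 0).sum(axis=1)    # nonnegative evidence only
                scores[support] = -np.inf                   # do not reselect
                support.append(int(np.argmax(scores)))
                As = A[:, support]
                X_s = np.column_stack([nnls(As, Y[:, j])[0] for j in range(L)])
                R = Y - As @ X_s                            # update the residual
            X = np.zeros((n, L))
            X[support, :] = X_s
            return X, sorted(support)

        rng = np.random.default_rng(1)
        A = rng.random((20, 40))
        X_true = np.zeros((40, 3))
        X_true[[5, 17], :] = rng.random((2, 3)) + 0.5
        X_hat, supp = nn_simultaneous_omp(A, A @ X_true, k=2)
        print(supp)   # ideally [5, 17]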

  5. Modifier constraint in alkali borophosphate glasses using topological constraint theory

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiang [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China); Zeng, Huidan, E-mail: hdzeng@ecust.edu.cn [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China); Jiang, Qi [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China); Zhao, Donghui [Unifrax Corporation, Niagara Falls, NY 14305 (United States); Chen, Guorong [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China); Wang, Zhaofeng; Sun, Luyi [Department of Chemical & Biomolecular Engineering and Polymer Program, Institute of Materials Science, University of Connecticut, Storrs, CT 06269 (United States); Chen, Jianding [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China)

    2016-12-01

    In recent years, composition-dependent properties of glasses have been successfully predicted using topological constraint theory. The constraints of the glass network derive from two main parts: network formers and network modifiers. The constraints of the network formers can be calculated on the basis of the topological structure of the glass. However, the latter cannot be accurately calculated in this way because of the existence of ionic bonds. In this paper, the constraints of the modifier ions in phosphate glasses were thoroughly investigated using topological constraint theory. The results show that the constraints of the modifier ions gradually increase with the addition of alkali oxides. Furthermore, an improved topological constraint theory for borophosphate glasses is proposed by taking the composition-dependent constraints of the network modifiers into consideration. The proposed theory is subsequently evaluated by analyzing the composition dependence of the glass transition temperature in alkali borophosphate glasses. The method is expected to extend to other similar glass systems containing alkali ions.

  6. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.

  7. Hard processes in hadronic interactions

    International Nuclear Information System (INIS)

    Satz, H.; Wang, X.N.

    1995-01-01

    Quantum chromodynamics is today accepted as the fundamental theory of strong interactions, even though most hadronic collisions lead to final states for which quantitative QCD predictions are still lacking. It therefore seems worthwhile to take stock of where we stand today and to what extent the presently available data on hard processes in hadronic collisions can be accounted for in terms of QCD. This is one reason for this work. The second reason - and in fact its original trigger - is the search for the quark-gluon plasma in high energy nuclear collisions. The hard processes to be considered here are the production of prompt photons, Drell-Yan dileptons, open charm, quarkonium states, and hard jets. For each of these, we discuss the present theoretical understanding, compare the resulting predictions to available data, and then show what behaviour they lead to at RHIC and LHC energies. All of these processes have the structure mentioned above: they contain a hard partonic interaction, calculable perturbatively, but also the non-perturbative parton distribution within a hadron. These parton distributions, however, can be studied theoretically in terms of counting rule arguments, and they can be checked independently by measurements of the parton structure functions in deep inelastic lepton-hadron scattering. The present volume is the work of the Hard Probe Collaboration, a group of theorists who are interested in the problem and were willing to dedicate a considerable amount of their time and work to it. The necessary preparation, planning and coordination of the project were carried out in two workshops of two weeks' duration each, in February 1994 at CERN in Geneva and in July 1994 at LBL in Berkeley.

  8. Seismological Constraints on Geodynamics

    Science.gov (United States)

    Lomnitz, C.

    2004-12-01

    Earth is an open thermodynamic system radiating heat energy into space. A transition from geostatic earth models such as PREM to geodynamical models is needed. We discuss possible thermodynamic constraints on the variables that govern the distribution of forces and flows in the deep Earth. In this paper we assume that the temperature distribution is time-invariant, so that all flows vanish at steady state except for the heat flow J_q per unit area (Kuiken, 1994). Superscript 0 will refer to the steady state while x denotes the excited state of the system. We may write σ^0 = (J_q^0 · X_q^0)/T, where X_q is the conjugate force corresponding to J_q and σ is the rate of entropy production per unit volume. Consider now what happens after the occurrence of an earthquake at time t = 0 and location (0,0,0). The earthquake introduces a stress drop ΔP(x,y,z) at all points of the system. Response flows are directed along the gradients toward the epicentral area, and the entropy production will increase with time as (Prigogine, 1947) σ^x(t) = σ^0 + α_1/(t+β) + α_2/(t+β)^2 + etc. A seismological constraint on the parameters may be obtained from Omori's empirical relation N(t) = p/(t+q), where N(t) is the number of aftershocks at time t following the main shock. It may be assumed that p/q ~ α_1/β times a constant. Another useful constraint is the Mexican-hat geometry of the seismic transient as obtained e.g. from InSAR radar interferometry. For strike-slip events such as Landers the distribution of ΔP is quadrantal, and an oval-shaped seismicity gap develops about the epicenter. A weak outer triggering maximum is found at a distance of about 17 fault lengths. Such patterns may be extracted from earthquake catalogs by statistical analysis (Lomnitz, 1996). Finally, the energy of the perturbation must be at least equal to the recovery energy. The total energy expended in an aftershock sequence can be found approximately by integrating the local contribution over

  9. An implicit adaptation algorithm for a linear model reference control system

    Science.gov (United States)

    Mabius, L.; Kaufman, H.

    1975-01-01

    This paper presents a stable implicit adaptation algorithm for model reference control. The constraints for stability are found using Lyapunov's second method and do not depend on perfect model following between the system and the reference model. Methods are proposed for satisfying these constraints without estimating the parameters on which the constraints depend.

  10. Formal Constraints on Memory Management for Composite Overloaded Operations

    Directory of Open Access Journals (Sweden)

    Damian W.I. Rouson

    2006-01-01

    The memory management rules for abstract data type calculus presented by Rouson, Morris & Xu [15] are recast as formal statements in the Object Constraint Language (OCL) and applied to the design of a thermal energy equation solver. One set of constraints eliminates memory leaks observed in composite overloaded expressions with three current Fortran 95/2003 compilers. A second set of constraints ensures economical memory recycling. The constraints are preconditions, postconditions and invariants on overloaded operators and the objects they receive and return. It is demonstrated that systematic run-time assertion checking inspired by the formal constraints facilitated the pinpointing of an exceptionally hard-to-reproduce compiler bug. It is further demonstrated that the interplay between OCL's modeling capabilities and Fortran's programming capabilities led to a conceptual breakthrough that greatly improved the readability of our code by facilitating operator overloading. The advantages and disadvantages of our memory management rules are discussed in light of other published solutions [11,19]. Finally, it is demonstrated that the run-time assertion checking has a negligible impact on performance.

  11. Observational constraints on interstellar chemistry

    International Nuclear Information System (INIS)

    Winnewisser, G.

    1984-01-01

    The author points out presently existing observational constraints in the detection of interstellar molecular species and the limits they may cast on our knowledge of interstellar chemistry. The constraints which arise from the molecular side are summarised and some technical difficulties encountered in detecting new species are discussed. Some implications for our understanding of molecular formation processes are considered. (Auth.)

  12. Market segmentation using perceived constraints

    Science.gov (United States)

    Jinhee Jun; Gerard Kyle; Andrew Mowen

    2008-01-01

    We examined the practical utility of segmenting potential visitors to Cleveland Metroparks using their constraint profiles. Our analysis identified three segments based on their scores on the dimensions of constraints: Other priorities--visitors who scored the highest on 'other priorities' dimension; Highly Constrained--visitors who scored relatively high on...

  13. Fixed Costs and Hours Constraints

    Science.gov (United States)

    Johnson, William R.

    2011-01-01

    Hours constraints are typically identified by worker responses to questions asking whether they would prefer a job with more hours and more pay or fewer hours and less pay. Because jobs with different hours but the same rate of pay may be infeasible when there are fixed costs of employment or mandatory overtime premia, the constraint in those…

  14. An Introduction to 'Creativity Constraints'

    DEFF Research Database (Denmark)

    Onarheim, Balder; Biskjær, Michael Mose

    2013-01-01

    Constraints play a vital role as both restrainers and enablers in innovation processes by governing what the creative agent/s can and cannot do, and what the output can and cannot be. Notions of constraints are common in creativity research, but current contributions are highly dispersed due to n...

  15. Constraint Programming for Context Comprehension

    DEFF Research Database (Denmark)

    Christiansen, Henning

    2014-01-01

    A close similarity is demonstrated between context comprehension, such as discourse analysis, and constraint programming. The constraint store takes the role of a growing knowledge base learned throughout the discourse, and a suitable constraint solver does the job of incorporating new pieces…

  16. Statistical quality analysis of schedulers under soft-real-time constraints

    NARCIS (Netherlands)

    Baarsma, H.E.; Hurink, Johann L.; Jansen, P.G.

    2007-01-01

    This paper describes an algorithm to determine the performance of real-time systems with tasks using stochastic processing times. Such an algorithm can be used for guaranteeing Quality of Service of periodic tasks with soft real-time constraints. We use a discrete distribution model of processing times.

  17. Dynamic I/O Power Management for Hard Real-Time Systems

    Science.gov (United States)

    2005-01-01

    Dynamic power management (DPM) has recently emerged as an attractive alternative to inflexible hardware solutions. DPM for hard real-time systems has received relatively little attention. In particular, energy-driven I/O device scheduling for real-time systems has not been considered before. We present the first online DPM algorithm, which we call Low Energy Device Scheduler (LEDES), for hard real-time systems. LEDES takes as inputs a predetermined task schedule and a device-usage

  18. Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem

    Science.gov (United States)

    Chen, Wei

    2015-07-01

    In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.

  19. Genetic algorithm optimization for dynamic construction site layout planning

    Directory of Open Access Journals (Sweden)

    Farmakis Panagiotis M.

    2018-02-01

    The dynamic construction site layout planning (DCSLP) problem refers to the efficient placement and relocation of temporary construction facilities within a dynamically changing construction site environment, considering the characteristics of facilities and work interrelationships, the shape and topography of the construction site, and the time-varying project needs. A multi-objective dynamic optimization model is developed for this problem that considers construction and relocation costs of facilities, transportation costs of resources moving from one facility to another or to workplaces, as well as safety and environmental considerations resulting from facilities' operations and interconnections. The latter considerations are taken into account in the form of preferences or constraints regarding the proximity or remoteness of particular facilities to other facilities or work areas. The analysis of multiple project phases and the dynamic facility relocation from phase to phase greatly increases the problem size, which, even in its static form, falls within the NP (nondeterministic polynomial time)-hard class of combinatorial optimization problems. For this reason, a genetic algorithm has been implemented for the solution due to its capability to robustly search within a large solution space. Several case studies and operational scenarios have been implemented through Palisade's Evolver software for model testing and evaluation. The results indicate satisfactory model response to time-varying input data in terms of solution quality and computation time. The model can provide decision support to site managers, allowing them to examine alternative scenarios and fine-tune optimal solutions according to their experience by introducing desirable preferences or constraints into the decision process.

  20. On Dirac's conjecture for Hamiltonian systems with first and second class constraints

    International Nuclear Information System (INIS)

    Cabo, A.; Louis Martinez, D.

    1989-07-01

    It is shown for a wide class of systems in the framework of the Total Hamiltonian Procedure that all the first class constraints generate canonical transformations connecting physically equivalent states. It occurs whenever the constraints arising in the Dirac algorithm are effective when considered in the functional form as they appear in the consistency conditions. The property of hereditary separation between first and second class constraints also follows from the above condition. General Poisson bracket relations among constraints in the representation used here are also obtained. The sources of anomalies in the hereditary property reported in the literature are identified. (author). 15 refs

  1. On Dirac's conjecture for Hamiltonian systems with first- and second-class constraints

    International Nuclear Information System (INIS)

    Cabo, A.; Louis-Martinez, D.

    1990-01-01

    It is shown for a wide class of systems in the framework of the total Hamiltonian procedure that all first-class constraints generate canonical transformations connecting physically equivalent states. It occurs whenever the constraints arising in the Dirac algorithm are effective when considered in the functional form as they appear in the consistency conditions. The property of hereditary separation between first- and second-class constraints also follows from the above condition. General Poisson-bracket relations among constraints in the representation used here are also obtained. The sources of anomalies in the hereditary property reported in the literature are identified

  2. Nanomechanics of hard films on compliant substrates.

    Energy Technology Data Exchange (ETDEWEB)

    Reedy, Earl David, Jr. (Sandia National Laboratories, Albuquerque, NM); Emerson, John Allen (Sandia National Laboratories, Albuquerque, NM); Bahr, David F. (Washington State University, Pullman, WA); Moody, Neville Reid; Zhou, Xiao Wang; Hales, Lucas (University of Minnesota, Minneapolis, MN); Adams, David Price (Sandia National Laboratories, Albuquerque, NM); Yeager, John (Washington State University, Pullman, WA); Nyugen, Thao D. (Johns Hopkins University, Baltimore, MD); Corona, Edmundo (Sandia National Laboratories, Albuquerque, NM); Kennedy, Marian S. (Clemson University, Clemson, SC); Cordill, Megan J. (Erich Schmid Institute, Leoben, Austria)

    2009-09-01

    Development of flexible thin film systems for biomedical, homeland security and environmental sensing applications has increased dramatically in recent years [1,2,3,4]. These systems typically combine traditional semiconductor technology with new flexible substrates, allowing for both the high electron mobility of semiconductors and the flexibility of polymers. The devices have the ability to be easily integrated into components and show promise for advanced design concepts, ranging from innovative microelectronics to MEMS and NEMS devices. These devices often contain layers of thin polymer, ceramic and metallic films where differing properties can lead to large residual stresses [5]. As long as the films remain substrate-bonded, they may deform far beyond their freestanding counterpart. Once debonded, substrate constraint disappears leading to film failure where compressive stresses can lead to wrinkling, delamination, and buckling [6,7,8] while tensile stresses can lead to film fracture and decohesion [9,10,11]. In all cases, performance depends on film adhesion. Experimentally it is difficult to measure adhesion. It is often studied using tape [12], pull off [13,14,15], and peel tests [16,17]. More recent techniques for measuring adhesion include scratch testing [18,19,20,21], four point bending [22,23,24], indentation [25,26,27], spontaneous blisters [28,29] and stressed overlayers [7,26,30,31,32,33]. Nevertheless, sample design and test techniques must be tailored for each system. There is a large body of elastic thin film fracture and elastic contact mechanics solutions for elastic films on rigid substrates in the published literature [5,7,34,35,36]. More recent work has extended these solutions to films on compliant substrates and show that increasing compliance markedly changes fracture energies compared with rigid elastic solution results [37,38]. However, the introduction of inelastic substrate response significantly complicates the problem [10,39,40]. As

  3. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  4. Algorithm 865

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n^2 variables). Included are subroutines for rearranging a matrix whose upper or lower triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel…

  5. Vocabulary Constraint on Texts

    Directory of Open Access Journals (Sweden)

    C. Sutarsyah

    2008-01-01

    This case study was carried out in the English Education Department of the State University of Malang. The aim of the study was to identify and describe the vocabulary in the reading texts and to determine whether the texts are useful for reading skill development. A descriptive qualitative design was applied to obtain the data. For this purpose, some available computer programs were used to obtain a description of the vocabulary in the texts. It was found that the 20 texts, containing 7,945 words, are dominated by low-frequency words, which account for 16.97% of the words in the texts. The high-frequency words occurring in the texts were dominated by function words. In the case of word levels, it was found that the texts have a very limited number of words from the GSL (General Service List of English Words; West, 1953). The proportion of the first 1,000 words of the GSL accounts for only 44.6%. The data also show that the texts contain too large a proportion of words which are not in the three levels (the first 2,000 and the UWL). These words account for 26.44% of the running words in the texts. It is believed that these constraints are due to the selection of the texts, which are made up of a series of short, unrelated texts. This kind of text is subject to the accumulation of low-frequency words, especially content words, and a limited number of words from the GSL. It could also defeat the development of students' reading skills and vocabulary enrichment.

  6. Hard-to-fill vacancies.

    Science.gov (United States)

    Williams, Ruth

    2010-09-29

    Skills for Health has launched a set of resources to help healthcare employers tackle hard-to-fill entry-level vacancies and provide sustainable employment for local unemployed people. The Sector Employability Toolkit aims to reduce recruitment and retention costs for entry-level posts, prepare people for employment through pre-job training programmes, and support employers to develop local partnerships to gain access to wider pools of candidates and funding streams.

  7. Pushing hard on the accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1987-09-15

    The quest for new techniques to drive future generations of particle accelerators has been pushed hard in recent years, efforts having been highlighted by workshops in Europe, organized by the European Committee for Future Accelerators, and in the US. The latest ECFA Workshop on New Developments in Particle Acceleration Techniques, held at Orsay from 29 June to 4 July, showed how the initial frantic search for innovation is now maturing.

  8. CMS results on hard diffraction

    CERN Document Server

    INSPIRE-00107098

    2013-01-01

    In these proceedings we present CMS results on hard diffraction. Diffractive dijet production in pp collisions at $\\sqrt{s}$=7 TeV is discussed. The cross section for dijet production is presented as a function of $\\tilde{\\xi}$, representing the fractional momentum loss of the scattered proton in single-diffractive events. The observation of W and Z boson production in events with a large pseudo-rapidity gap is also presented.

  9. Motion and deformation estimation from medical imagery by modeling sub-structure interaction and constraints

    KAUST Repository

    Sundaramoorthi, Ganesh

    2012-09-13

    This paper presents a novel medical image registration algorithm that explicitly models the physical constraints imposed by objects or sub-structures of objects that have differing material composition and border each other, which is the case in most medical registration applications. Typical medical image registration algorithms ignore these constraints and therefore are not physically viable, and to incorporate these constraints would require prior segmentation of the image into regions of differing material composition, which is a difficult problem in itself. We present a mathematical model and algorithm for incorporating these physical constraints into registration / motion and deformation estimation that does not require a segmentation of different material regions. Our algorithm is a joint estimation of different material regions and the motion/deformation within these regions. Therefore, the segmentation of different material regions is automatically provided in addition to the image registration satisfying the physical constraints. The algorithm identifies differing material regions (sub-structures or objects) as regions where the deformation has different characteristics. We demonstrate the effectiveness of our method on the analysis of cardiac MRI which includes the detection of the left ventricle boundary and its deformation. The experimental results indicate the potential of the algorithm as an assistant tool for the quantitative analysis of cardiac functions in the diagnosis of heart disease.

  10. Playing Moderately Hard to Get

    Directory of Open Access Journals (Sweden)

    Stephen Reysen

    2013-12-01

    In two studies, we examined the effect of different degrees of attraction reciprocation on ratings of attraction toward a potential romantic partner. Undergraduate college student participants imagined a potential romantic partner who reciprocated a low (reciprocating attraction one day a week), moderate (three days a week), high (five days a week), or unspecified degree of attraction (no mention of reciprocation). Participants then rated their degree of attraction toward the potential partner. The results of Study 1 provided only partial support for Brehm's emotion intensity theory. However, after revising the high-reciprocation condition vignette in Study 2, the results supported Brehm's emotion intensity theory: a potential partner's display of reciprocated attraction acted as a deterrent to the intensity of participants' experienced attraction to the potential partner. The results support the notion that playing moderately hard to get elicits more intense feelings of attraction from potential suitors than playing too easy or too hard to get. Previous research examining playing hard to get is also re-examined through an emotion intensity theory lens.

  11. CMOS optimization for radiation hardness

    International Nuclear Information System (INIS)

    Derbenwick, G.F.; Fossum, J.G.

    1975-01-01

    Several approaches to the attainment of radiation-hardened MOS circuits have been investigated in the last few years. These have included implanting the SiO2 gate insulator with aluminum, using chrome-aluminum layered gate metallization, using Al2O3 as the gate insulator, and optimizing the MOS fabrication process. Earlier process optimization studies were restricted primarily to p-channel devices operating with negative gate biases. Since knowledge of the hardness dependence upon processing and design parameters is essential in producing hardened integrated circuits, a comprehensive investigation of the effects of both process and design optimization on radiation-hardened CMOS integrated circuits was undertaken. The goals are to define and establish a radiation-hardened processing sequence for CMOS integrated circuits and to formulate quantitative relationships between process and design parameters and the radiation hardness. Using these equations, the basic CMOS design can then be optimized for radiation hardness and some understanding of the basic physics responsible for the radiation damage can be gained. Results are presented

  12. An algorithm for learning real-time automata

    NARCIS (Netherlands)

    Verwer, S.E.; De Weerdt, M.M.; Witteveen, C.

    2007-01-01

    We describe an algorithm for learning simple timed automata, known as real-time automata. The transitions of real-time automata can have a temporal constraint on the time of occurrence of the current symbol relative to the previous symbol. The learning algorithm is similar to the red-blue fringe

  13. Recursive algorithms for phylogenetic tree counting.

    Science.gov (United States)

    Gavryushkina, Alexandra; Welch, David; Drummond, Alexei J

    2013-10-28

    In Bayesian phylogenetic inference we are interested in distributions over a space of trees. The number of trees in a tree space is an important characteristic of the space and is useful for specifying prior distributions. When all samples come from the same time point and no prior information is available on divergence times, the tree counting problem is easy. However, when fossil evidence is used in the inference to constrain the tree, or when data are sampled serially, new tree spaces arise and counting the number of trees is more difficult. We describe an algorithm, polynomial in the number of sampled individuals, for counting the resolutions of a constraint tree, assuming that the number of constraints is fixed. We generalise this algorithm to counting resolutions of a fully ranked constraint tree. We describe a quadratic algorithm for counting the number of possible fully ranked trees on n sampled individuals. We introduce a new type of tree, called a fully ranked tree with sampled ancestors, and describe a cubic-time algorithm for counting the number of such trees on n sampled individuals. These algorithms should be employed for Bayesian Markov chain Monte Carlo inference when fossil data are included or data are serially sampled.
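
    For the unconstrained, contemporaneously sampled case, the count these algorithms generalize has a classical closed form: the number of fully ranked (labeled-history) trees on n tips is the product over k = 2..n of C(k, 2), i.e. n!(n-1)!/2^(n-1). A short sketch:

        from math import comb

        def num_fully_ranked_trees(n: int) -> int:
            """Number of ranked labeled tree topologies (labeled histories)
            on n tips sampled at the same time: prod_{k=2..n} C(k, 2)."""
            count = 1
            for k in range(2, n + 1):
                count *= comb(k, 2)   # ways to join two of the k extant lineages
            return count

        for n in (2, 3, 4, 10):
            print(n, num_fully_ranked_trees(n))   # 1, 3, 18, ...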

  14. Robust information encryption diffractive-imaging-based scheme with special phase retrieval algorithm for a customized data container

    Science.gov (United States)

    Qin, Yi; Wang, Zhipeng; Wang, Hongjuan; Gong, Qiong; Zhou, Nanrun

    2018-06-01

    The diffractive-imaging-based encryption (DIBE) scheme has aroused wide interest due to its compact architecture and low requirements on conditions. Nevertheless, the primary information can hardly be recovered exactly in real applications when considering the speckle noise and potential occlusion imposed on the ciphertext. To deal with this issue, the customized data container (CDC) is introduced into DIBE and a new phase retrieval algorithm (PRA) for plaintext retrieval is proposed. The PRA, designed according to the peculiarity of the CDC, combines two key techniques from previous approaches, i.e., an input-support constraint and median filtering. The proposed scheme can guarantee the complete reconstruction of the primary information despite heavy noise or occlusion, and its effectiveness and feasibility have been demonstrated with simulation results.
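
    A generic sketch of a phase retrieval loop combining the two named ingredients (an object-domain support constraint and periodic median filtering) is given below; it is a plain error-reduction iteration, not the authors' PRA, and the test object is illustrative.

        import numpy as np
        from scipy.ndimage import median_filter

        def phase_retrieval(diffraction_mag, support, iters=200, filter_every=20):
            """Recover a real, nonnegative object from its Fourier magnitude.
            Error reduction with an input-support constraint and periodic
            median filtering to suppress speckle-like noise."""
            x = np.random.default_rng(0).random(diffraction_mag.shape) * support
            for k in range(iters):
                X = np.fft.fft2(x)
                X = diffraction_mag * np.exp(1j * np.angle(X))  # impose magnitude
                x = np.real(np.fft.ifft2(X))
                x = np.clip(x, 0, None) * support               # support constraint
                if (k + 1) % filter_every == 0:
                    x = median_filter(x, size=3) * support      # denoise estimate
            return x

        # Toy test: retrieve a small blob from its noiseless Fourier magnitude.
        obj = np.zeros((64, 64)); obj[28:36, 30:38] = 1.0
        support = np.zeros_like(obj); support[20:44, 20:44] = 1.0
        rec = phase_retrieval(np.abs(np.fft.fft2(obj)), support)
        print(float(np.abs(rec - obj).mean()))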

  15. Tag SNP selection via a genetic algorithm.

    Science.gov (United States)

    Mahdevar, Ghasem; Zahiri, Javad; Sadeghi, Mehdi; Nowzari-Dalini, Abbas; Ahrabian, Hayedeh

    2010-10-01

    Single Nucleotide Polymorphisms (SNPs) provide valuable information on human evolutionary history and may lead us to identify genetic variants responsible for complex human diseases. Unfortunately, molecular haplotyping methods are costly, laborious, and time consuming; therefore, algorithms that construct full haplotype patterns from the small amount of available data through computational methods (the tag SNP selection problem) are convenient and attractive. This problem is proved to be NP-hard, so heuristic methods may be useful. In this paper we present a heuristic method based on a genetic algorithm to find reasonable solutions within acceptable time. The algorithm was tested on a variety of simulated and experimental data. In comparison with the exact algorithm, based on a brute-force approach, the results show that our method obtains optimal solutions in almost all cases and runs much faster than the exact algorithm when the number of SNP sites is large. Our software is available upon request to the corresponding author.
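
    A sketch of the genetic-algorithm idea for this subset-selection problem follows. The fitness function here (a column of a 0/1 haplotype matrix is "covered" if it equals, or is the complement of, some tag column) is a deliberately crude stand-in for the paper's prediction criterion, and all parameter names are illustrative.

        import random

        def fitness(tags, snps):
            """Fraction of SNP columns equal (or complementary) to some tag column.
            snps: list of 0/1 haplotype rows; a crude proxy for predictability."""
            n = len(snps[0])
            cols = [tuple(row[j] for row in snps) for j in range(n)]
            comp = [tuple(1 - x for x in c) for c in cols]
            covered = sum(
                1 for j in range(n)
                if j in tags or any(cols[t] in (cols[j], comp[j]) for t in tags)
            )
            return covered / n

        def ga_tag_snps(snps, k, pop_size=40, gens=100, mut=0.2):
            n = len(snps[0])
            pop = [set(random.sample(range(n), k)) for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=lambda s: fitness(s, snps), reverse=True)
                elite = pop[: pop_size // 2]            # truncation selection
                children = []
                while len(elite) + len(children) < pop_size:
                    p, q = random.sample(elite, 2)
                    child = set(random.sample(sorted(p | q), k))  # crossover
                    if random.random() < mut:           # mutation: swap one tag
                        child.remove(random.choice(sorted(child)))
                        while len(child) < k:
                            child.add(random.randrange(n))
                    children.append(child)
                pop = elite + children
            return max(pop, key=lambda s: fitness(s, snps))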

  16. Adiabatic quantum search algorithm for structured problems

    International Nuclear Information System (INIS)

    Roland, Jeremie; Cerf, Nicolas J.

    2003-01-01

    The study of quantum computation has been motivated by the hope of finding efficient quantum algorithms for solving classically hard problems. In this context, quantum algorithms by local adiabatic evolution have been shown to solve an unstructured search problem with a quadratic speedup over a classical search, just as Grover's algorithm. In this paper, we study how the structure of the search problem may be exploited to further improve the efficiency of these quantum adiabatic algorithms. We show that by nesting a partial search over a reduced set of variables into a global search, it is possible to devise quantum adiabatic algorithms with a complexity that, although still exponential, grows with a reduced order in the problem size
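
    For the unstructured case, the local adiabatic schedule of Roland and Cerf runs at ds/dt = ε·g(s)^2, where g(s)^2 = 1 - 4(1 - 1/N)s(1 - s) is the squared gap, giving total time T = (1/ε)·N/sqrt(N-1)·arctan(sqrt(N-1)), approximately (π/2ε)·sqrt(N). A numerical check of this standard setting (which the nested algorithms build on):

        import math

        def local_schedule_time(N: int, eps: float = 0.1, steps: int = 100000):
            """Total time T = (1/eps) * integral_0^1 ds / g(s)^2 for the local
            adiabatic schedule, with g(s)^2 = 1 - 4(1 - 1/N) s (1 - s).
            Midpoint rule; exact value is (1/eps)*N/sqrt(N-1)*atan(sqrt(N-1))."""
            total, h = 0.0, 1.0 / steps
            for i in range(steps):
                s = (i + 0.5) * h
                total += h / (1.0 - 4.0 * (1.0 - 1.0 / N) * s * (1.0 - s))
            return total / eps

        for N in (10, 100, 10000):
            print(N, round(local_schedule_time(N), 1), round(math.sqrt(N), 1))
            # T grows like sqrt(N): the quadratic speedup over a linear schedule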

  17. Grand canonical simulations of hard-disk systems by simulated tempering

    DEFF Research Database (Denmark)

    Döge, G.; Mecke, K.; Møller, Jesper

    2004-01-01

    The melting transition of hard disks in two dimensions is still an unsolved problem, and improved simulation algorithms may be helpful for its investigation. We suggest the application of simulated tempering for grand canonical hard-disk systems as an efficient alternative to the commonly used Monte Carlo algorithms for canonical systems. This approach allows the direct study of the packing fraction as a function of the chemical potential, even in the vicinity of the melting transition. Furthermore, estimates of several spatial characteristics, including the pair correlation function, are studied…

  18. Mathematical Modeling and a Hybrid NSGA-II Algorithm for Process Planning Problem Considering Machining Cost and Carbon Emission

    Directory of Open Access Journals (Sweden)

    Jin Huang

    2017-09-01

    Process planning is an important function in a manufacturing system; it specifies the manufacturing requirements and details for the shop floor to convert a part from raw material to the finished form. However, considering only an economic criterion with technological constraints is not enough in sustainable manufacturing practice; criteria concerning low-carbon-emission awareness have seldom been taken into account in process planning optimization. In this paper, a mathematical model that considers both machining cost reduction and carbon emission reduction is established for the process planning problem. However, due to various flexibilities together with complex precedence constraints between operations, the process planning problem is a non-deterministic polynomial-time (NP)-hard problem. Aiming at the distinctive features of multi-objective process planning optimization, we then developed a hybrid non-dominated sorting genetic algorithm (NSGA-II) to tackle this problem. A local search method that considers both the total cost criterion and the carbon emission criterion is introduced into the proposed algorithm to avoid being trapped in local optima. Moreover, the technique for order preference by similarity to an ideal solution (TOPSIS) method is adopted to determine the best solution from the Pareto front. Experiments have been conducted using Kim's benchmark. Computational results show that process plan schemes with low carbon emission can be captured and, more importantly, that the proposed hybrid NSGA-II algorithm obtains a more promising optimal Pareto front than the plain NSGA-II algorithm. Meanwhile, according to the computational results on Kim's benchmark, we find that both the total machining cost and the carbon emission are roughly proportional to the number of operations, and a process plan with fewer operations may be more satisfactory. This study will provide a reference for further research on green

  19. Improved Harmony Search Algorithm with Chaos for Absolute Value Equation

    Directory of Open Access Journals (Sweden)

    Shouheng Tuo

    2013-11-01

    In this paper, an improved harmony search with chaos (HSCH) is presented for solving the NP-hard absolute value equation (AVE) Ax - |x| = b, where A is an arbitrary square matrix whose singular values exceed one. The simulation results in solving some given AVE problems demonstrate that the HSCH algorithm is valid and outperforms the classical HS algorithm (CHS) and the HS algorithm with a differential mutation operator (HSDE).
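
    A minimal sketch in the spirit of HSCH: standard harmony search minimizing the residual ||Ax - |x| - b||, with all random draws taken from a chaotic logistic-map stream. The parameter names (HMS, HMCR, PAR, bandwidth) are the usual harmony search ones; the exact HSCH update rules may differ.

        import numpy as np

        def chaos_stream(seed=0.37):
            """Logistic map in its chaotic regime as a crude random-number source."""
            x = seed
            while True:
                x = 4.0 * x * (1.0 - x)
                yield x

        def hs_chaos_ave(A, b, hms=20, hmcr=0.9, par=0.3, bw=0.5, iters=5000,
                         lo=-10.0, hi=10.0):
            """Harmony search driven by chaotic draws, minimizing ||Ax - |x| - b||."""
            n = len(b)
            draw = chaos_stream()
            u = lambda: next(draw)
            f = lambda x: np.linalg.norm(A @ x - np.abs(x) - b)
            memory = [lo + (hi - lo) * np.array([u() for _ in range(n)])
                      for _ in range(hms)]
            costs = [f(x) for x in memory]
            for _ in range(iters):
                new = np.empty(n)
                for j in range(n):
                    if u() < hmcr:                  # draw from harmony memory
                        new[j] = memory[min(int(u() * hms), hms - 1)][j]
                        if u() < par:               # pitch adjustment
                            new[j] += bw * (2.0 * u() - 1.0)
                    else:                           # random improvisation
                        new[j] = lo + (hi - lo) * u()
                cost = f(new)
                worst = int(np.argmax(costs))
                if cost < costs[worst]:             # replace the worst harmony
                    memory[worst], costs[worst] = new, cost
            best = int(np.argmin(costs))
            return memory[best], costs[best]

        # Toy AVE with a known solution: pick x*, set b = A x* - |x*|.
        A = np.diag([3.0, 4.0, 5.0])                # singular values exceed one
        x_star = np.array([1.0, -2.0, 0.5])
        x, res = hs_chaos_ave(A, A @ x_star - np.abs(x_star))
        print(x, res)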

  20. Banking Competition and Soft Budget Constraints: How Market Power can Threaten Discipline in Lending

    NARCIS (Netherlands)

    Arping, S.

    2012-01-01

    In imperfectly competitive credit markets, banks can face a tradeoff between exploiting their market power and enforcing hard budget constraints. As market power rises, banks eventually find it too costly to discipline underperforming borrowers by stopping their projects. Lending relationships become

  1. Impact of aging on radiation hardness

    International Nuclear Information System (INIS)

    Shaneyfelt, M.R.; Winokur, P.S.; Fleetwood, D.M.

    1997-01-01

    Burn-in effects are used to demonstrate the potential impact of thermally activated aging effects on functional and parametric radiation hardness. These results have implications for hardness assurance testing. Techniques for characterizing aging effects are proposed

  2. Why Are Drugs So Hard to Quit?

    Medline Plus

    Full Text Available ... Quitting drugs is hard because addiction is a brain disease. Your brain is like a control tower that sends out ... and choices. Addiction changes the signals in your brain and makes it hard to feel OK without ...

  3. MM Algorithms for Geometric and Signomial Programming.

    Science.gov (United States)

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
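
    The separation trick can be made concrete on a small posynomial (a sketch of the generic principle, not one of the paper's benchmark problems). For f(x1, x2) = x1*x2 + 1/x1 + 1/x2, the geometric-arithmetic mean inequality majorizes the product term at the current iterate y: x1*x2 <= (y2/(2*y1))*x1^2 + (y1/(2*y2))*x2^2, with equality at x = y. The surrogate then separates, and each one-dimensional piece c*x^2 + 1/x has the closed-form minimizer x = (1/(2c))^(1/3):

        def mm_posynomial(y=(2.0, 1.0), iters=50):
            """MM iterations for f(x1, x2) = x1*x2 + 1/x1 + 1/x2.
            AM-GM majorization at y separates the variables; each subproblem
            min_x c*x^2 + 1/x is solved exactly by x = (1/(2c))**(1/3)."""
            y1, y2 = y
            for _ in range(iters):
                c1, c2 = y2 / (2 * y1), y1 / (2 * y2)
                y1, y2 = (1 / (2 * c1)) ** (1 / 3), (1 / (2 * c2)) ** (1 / 3)
            return y1, y2

        x1, x2 = mm_posynomial()
        print(x1, x2, x1 * x2 + 1 / x1 + 1 / x2)   # -> 1, 1, 3 (the true minimum)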

  4. Bit Loading Algorithms for Cooperative OFDM Systems

    Directory of Open Access Journals (Sweden)

    Gui Bo

    2008-01-01

    We investigate the resource allocation problem for an OFDM cooperative network with a single source-destination pair and multiple relays. Assuming knowledge of the instantaneous channel gains for all links in the entire network, we propose several bit and power allocation schemes aiming at minimizing the total transmission power under a target rate constraint. First, an optimal and efficient bit loading algorithm is proposed when the relay node uses the same subchannel to relay the information transmitted by the source node. To further improve the performance gain, subchannel permutation, in which the subchannels are reallocated at relay nodes, is considered. An optimal subchannel permutation algorithm is first proposed and then an efficient suboptimal algorithm is considered to achieve a better complexity-performance tradeoff. A distributed bit loading algorithm is also proposed for ad hoc networks. Simulation results show that significant performance gains can be achieved by the proposed bit loading algorithms, especially when subchannel permutation is employed.

  5. Bit Loading Algorithms for Cooperative OFDM Systems

    Directory of Open Access Journals (Sweden)

    Bo Gui

    2007-12-01

    Full Text Available We investigate the resource allocation problem for an OFDM cooperative network with a single source-destination pair and multiple relays. Assuming knowledge of the instantaneous channel gains for all links in the entire network, we propose several bit and power allocation schemes aiming at minimizing the total transmission power under a target rate constraint. First, an optimal and efficient bit loading algorithm is proposed when the relay node uses the same subchannel to relay the information transmitted by the source node. To further improve the performance gain, subchannel permutation, in which the subchannels are reallocated at relay nodes, is considered. An optimal subchannel permutation algorithm is first proposed and then an efficient suboptimal algorithm is considered to achieve a better complexity-performance tradeoff. A distributed bit loading algorithm is also proposed for ad hoc networks. Simulation results show that significant performance gains can be achieved by the proposed bit loading algorithms, especially when subchannel permutation is employed.

  6. Relational Constraint Driven Test Case Synthesis for Web Applications

    Directory of Open Access Journals (Sweden)

    Xiang Fu

    2010-09-01

    Full Text Available This paper proposes a relational constraint driven technique that synthesizes test cases automatically for web applications. Using a static analysis, servlets can be modeled as relational transducers, which manipulate backend databases. We present a synthesis algorithm that generates a sequence of HTTP requests for simulating a user session. The algorithm relies on backward symbolic image computation for reaching a certain database state, given a code coverage objective. With a slight adaptation, the technique can be used for discovering workflow attacks on web applications.

  7. Automatic Constraint Detection for 2D Layout Regularization.

    Science.gov (United States)

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
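
    A one-dimensional toy version of the detect-then-enforce loop (our sketch; the paper's quadratic program handles alignment, size, and distance constraints jointly) groups nearly equal edge coordinates and snaps each group to its mean, which is exactly the minimizer of the squared-displacement QP under within-group equality constraints.

        # Toy detect-then-enforce alignment regularization (our illustration,
        # not the authors' code). Elements whose coordinates lie within `tol`
        # of each other are grouped with union-find, and the QP
        #   minimize sum_i (x_i - x0_i)^2  s.t.  x_i = x_j within a group
        # has the closed-form solution: every group moves to its mean.

        def regularize_1d(coords, tol=2.0):
            n = len(coords)
            parent = list(range(n))

            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]   # path halving
                    i = parent[i]
                return i

            # Detect pairwise near-alignments and merge groups.
            for i in range(n):
                for j in range(i + 1, n):
                    if abs(coords[i] - coords[j]) <= tol:
                        parent[find(i)] = find(j)

            # Enforce: snap each group to its mean (the QP optimum).
            groups = {}
            for i in range(n):
                groups.setdefault(find(i), []).append(i)
            out = list(coords)
            for members in groups.values():
                mean = sum(coords[k] for k in members) / len(members)
                for k in members:
                    out[k] = mean
            return out

        print(regularize_1d([10.0, 11.2, 30.0, 10.6, 29.1]))
        # -> the three edges near 10-11 coincide, as do the two near 29-30

    Note that grouping by a pairwise tolerance is transitive, so long chains of almost-aligned elements can over-merge; this is one reason constraint detection is treated as an optimization problem in the paper rather than a simple threshold test.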

  8. Automatic Constraint Detection for 2D Layout Regularization

    KAUST Repository

    Jiang, Haiyong

    2015-09-18

    In this paper, we address the problem of constraint detection for layout regularization. As a layout, we consider a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important for digitizing plans or images, such as floor plans and facade images, and for the improvement of user-created content, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm to automatically detect constraints. We evaluate the proposed framework on a variety of input layouts from different applications; the results demonstrate that our method has superior performance to the state of the art.

  9. Radio resource allocation over fading channels under statistical delay constraints

    CERN Document Server

    Le-Ngoc, Tho

    2017-01-01

    This SpringerBrief presents radio resource allocation schemes for buffer-aided communications systems over fading channels under statistical delay constraints, in terms of upper-bounded average delay or delay-outage probability. The Brief starts by considering a source-destination communications link with data arriving at the source transmission buffer. In the first scenario, the joint optimal data admission control and power allocation problem for throughput maximization is considered, where the source is assumed to have maximum power and average delay constraints. In the second scenario, optimal power allocation problems for energy harvesting (EH) communications systems under average delay or delay-outage constraints are explored, where the EH source harvests random amounts of energy from renewable energy sources, and stores the harvested energy in a battery during data transmission. Online resource allocation algorithms are developed when the statistical knowledge of the random channel fading, data arrivals...

  10. NP-hardness of decoding quantum error-correction codes

    Science.gov (United States)

    Hsieh, Min-Hsiu; Le Gall, François

    2011-05-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects that degeneracy would simplify the decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem, and it suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  11. NP-hardness of decoding quantum error-correction codes

    International Nuclear Information System (INIS)

    Hsieh, Min-Hsiu; Le Gall, Francois

    2011-01-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects that degeneracy would simplify the decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem, and it suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  12. Complementary Set Matrices Satisfying a Column Correlation Constraint

    OpenAIRE

    Wu, Di; Spasojevic, Predrag

    2006-01-01

    Motivated by the problem of reducing the peak to average power ratio (PAPR) of transmitted signals, we consider a design of complementary set matrices whose column sequences satisfy a correlation constraint. The design algorithm recursively builds a collection of $2^{t+1}$ mutually orthogonal (MO) complementary set matrices starting from a companion pair of sequences. We relate correlation properties of column sequences to that of the companion pair and illustrate how to select an appropriate...

  13. Analysis of entropy models with equality and inequality constraints

    Energy Technology Data Exchange (ETDEWEB)

    Jefferson, T R; Scott, C H

    1979-06-01

    Entropy models are emerging as valuable tools in the study of various social problems of spatial interaction. With the development of the modeling has come diversity. Increased flexibility in the model can be obtained by allowing certain constraints to be relaxed from equality to inequality. To provide a better understanding of these entropy models, they are analyzed by geometric programming. Dual mathematical programs and algorithms are obtained. 7 references.

  14. A Globally Convergent Parallel SSLE Algorithm for Inequality Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Zhijun Luo

    2014-01-01

    Full Text Available A new parallel variable distribution algorithm based on the interior point SSLE algorithm is proposed for solving inequality constrained optimization problems in which the constraints are block-separable, using the technique of sequential systems of linear equations. Each iteration of this algorithm only needs to solve three systems of linear equations with the same coefficient matrix to obtain the descent direction. Furthermore, under certain conditions, global convergence is achieved.

  15. A quantum algorithm for Viterbi decoding of classical convolutional codes

    OpenAIRE

    Grice, Jon R.; Meyer, David A.

    2014-01-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions, for instance, large constraint length $Q$ and short decode frames $N$. In this paper the proposed algorithm is applied to decoding classical convolutional codes. Other applications of the classical Viterbi algorithm where $Q$ is large (e.g. speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butter...

  16. A HYBRID ALGORITHM FOR THE ROBUST GRAPH COLORING PROBLEM

    Directory of Open Access Journals (Sweden)

    Román Anselmo Mora Gutiérrez

    2016-08-01

    Full Text Available A hybrid algorithm that combines mathematical programming techniques (Kruskal's algorithm and the strategy of maintaining arc consistency for solving the constraint satisfaction problem, CSP) and heuristic methods (the musical composition method and DSATUR) to solve the robust graph coloring problem (RGCP) is proposed in this paper. Experimental results show that this algorithm outperforms the other algorithms presented in the literature.
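
    One of the heuristics combined above, DSATUR, is a standard greedy coloring and is easy to state. The sketch below is a textbook implementation (ours, for illustration): at each step it colors the uncolored vertex with the highest saturation degree, i.e., the most distinct colors among its neighbors, breaking ties by degree.

        # Standard DSATUR greedy coloring (our illustrative implementation of
        # one of the heuristics the hybrid combines).

        def dsatur(adj):
            n = len(adj)
            color = [None] * n
            for _ in range(n):
                # Pick the uncolored vertex with max (saturation, degree).
                best, best_key = None, (-1, -1)
                for v in range(n):
                    if color[v] is None:
                        sat = len({color[u] for u in adj[v] if color[u] is not None})
                        key = (sat, len(adj[v]))
                        if key > best_key:
                            best, best_key = v, key
                # Assign the smallest color not used by a neighbor.
                used = {color[u] for u in adj[best]}
                c = 0
                while c in used:
                    c += 1
                color[best] = c
            return color

        # A 5-cycle needs 3 colors:
        print(dsatur([[1, 4], [0, 2], [1, 3], [2, 4], [3, 0]]))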

  17. Computing group cardinality constraint solutions for logistic regression problems.

    Science.gov (United States)

    Zhang, Yong; Kwon, Dongjin; Pohl, Kilian M

    2017-01-01

    We derive an algorithm to directly solve logistic regression based on cardinality constraint, group sparsity and use it to classify intra-subject MRI sequences (e.g. cine MRIs) of healthy from diseased subjects. Group cardinality constraint models are often applied to medical images in order to avoid overfitting of the classifier to the training data. Solutions within these models are generally determined by relaxing the cardinality constraint to a weighted feature selection scheme. However, these solutions relate to the original sparse problem only under specific assumptions, which generally do not hold for medical image applications. In addition, inferring clinical meaning from features weighted by a classifier is an ongoing topic of discussion. Avoiding weighing features, we propose to directly solve the group cardinality constraint logistic regression problem by generalizing the Penalty Decomposition method. To do so, we assume that an intra-subject series of images represents repeated samples of the same disease patterns. We model this assumption by combining series of measurements created by a feature across time into a single group. Our algorithm then derives a solution within that model by decoupling the minimization of the logistic regression function from enforcing the group sparsity constraint. The minimum to the smooth and convex logistic regression problem is determined via gradient descent while we derive a closed form solution for finding a sparse approximation of that minimum. We apply our method to cine MRI of 38 healthy controls and 44 adult patients that received reconstructive surgery of Tetralogy of Fallot (TOF) during infancy. Our method correctly identifies regions impacted by TOF and generally obtains statistically significant higher classification accuracy than alternative solutions to this model, i.e., ones relaxing group cardinality constraints. Copyright © 2016 Elsevier B.V. All rights reserved.
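
    A heavily simplified sketch of the alternation described above (ours, not the authors' Penalty Decomposition code): a gradient step on the smooth logistic loss, followed by a closed-form sparse approximation that keeps only the k groups of largest Euclidean norm and zeroes the rest. The data, group layout, and step size below are all hypothetical.

        import numpy as np

        # Minimal projected-gradient sketch of group-cardinality-constrained
        # logistic regression (our simplification, not the authors' method).

        def logistic_grad(w, X, y):
            p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
            return X.T @ (p - y) / len(y)

        def project_group_k(w, groups, k):
            # Closed-form sparse approximation: keep the k groups with the
            # largest norm, zero out all other coefficients.
            norms = [np.linalg.norm(w[g]) for g in groups]
            keep = np.argsort(norms)[-k:]
            out = np.zeros_like(w)
            for gi in keep:
                out[groups[gi]] = w[groups[gi]]
            return out

        def group_sparse_logreg(X, y, groups, k, lr=0.5, iters=500):
            w = np.zeros(X.shape[1])
            for _ in range(iters):
                w = w - lr * logistic_grad(w, X, y)   # descend the smooth loss
                w = project_group_k(w, groups, k)      # enforce group sparsity
            return w

        # Tiny demo: 6 features in 3 groups, only group 0 is informative.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 6))
        y = (X[:, 0] + X[:, 1] > 0).astype(float)
        groups = [[0, 1], [2, 3], [4, 5]]
        print(group_sparse_logreg(X, y, groups, k=1).round(2))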

  18. Numerical evaluation of a robust self-triggered MPC algorithm

    NARCIS (Netherlands)

    Brunner, F.D.; Heemels, W.P.M.H.; Allgöwer, F.

    2016-01-01

    We present numerical examples demonstrating the efficacy of a recently proposed self-triggered model predictive control scheme for disturbed linear discrete-time systems with hard constraints on the input and state. In order to reduce the amount of communication between the controller and the

  19. Javascript Library for Developing Interactive Micro-Level Animations for Teaching and Learning Algorithms on One-Dimensional Arrays

    Science.gov (United States)

    Végh, Ladislav

    2016-01-01

    The first data structure that first-year undergraduate students learn during the programming and algorithms courses is the one-dimensional array. For novice programmers, it might be hard to understand different algorithms on arrays (e.g. searching, mirroring, sorting algorithms), because the algorithms dynamically change the values of elements. In…

  20. Fluid convection, constraint and causation

    Science.gov (United States)

    Bishop, Robert C.

    2012-01-01

    Complexity—nonlinear dynamics for my purposes in this essay—is rich with metaphysical and epistemological implications but is receiving sustained philosophical analysis only recently. I will explore some of the subtleties of causation and constraint in Rayleigh–Bénard convection as an example of a complex phenomenon, and extract some lessons for further philosophical reflection on top-down constraint and causation particularly with respect to causal foundationalism. PMID:23386955

  1. The hard problem of cooperation.

    Directory of Open Access Journals (Sweden)

    Kimmo Eriksson

    Full Text Available Based on individual variation in cooperative inclinations, we define the "hard problem of cooperation" as that of achieving high levels of cooperation in a group of non-cooperative types. Can the hard problem be solved by institutions with monitoring and sanctions? In a laboratory experiment we find that the answer is affirmative if the institution is imposed on the group but negative if development of the institution is left to the group to vote on. In the experiment, participants were divided into groups of either cooperative types or non-cooperative types depending on their behavior in a public goods game. In these homogeneous groups they repeatedly played a public goods game regulated by an institution that incorporated several of the key properties identified by Ostrom: operational rules, monitoring, rewards, punishments, and (in one condition) change of rules. When change of rules was not possible and punishments were set to be high, groups of both types generally abided by operational rules demanding high contributions to the common good, and thereby achieved high levels of payoffs. Under less severe rules, both types of groups did worse but non-cooperative types did worst. Thus, non-cooperative groups profited the most from being governed by an institution demanding high contributions and employing high punishments. Nevertheless, in a condition where change of rules through voting was made possible, development of the institution in this direction was more often voted down in groups of non-cooperative types. We discuss the relevance of the hard problem and fit our results into a bigger picture of institutional and individual determinants of cooperative behavior.

  2. The hard problem of cooperation.

    Science.gov (United States)

    Eriksson, Kimmo; Strimling, Pontus

    2012-01-01

    Based on individual variation in cooperative inclinations, we define the "hard problem of cooperation" as that of achieving high levels of cooperation in a group of non-cooperative types. Can the hard problem be solved by institutions with monitoring and sanctions? In a laboratory experiment we find that the answer is affirmative if the institution is imposed on the group but negative if development of the institution is left to the group to vote on. In the experiment, participants were divided into groups of either cooperative types or non-cooperative types depending on their behavior in a public goods game. In these homogeneous groups they repeatedly played a public goods game regulated by an institution that incorporated several of the key properties identified by Ostrom: operational rules, monitoring, rewards, punishments, and (in one condition) change of rules. When change of rules was not possible and punishments were set to be high, groups of both types generally abided by operational rules demanding high contributions to the common good, and thereby achieved high levels of payoffs. Under less severe rules, both types of groups did worse but non-cooperative types did worst. Thus, non-cooperative groups profited the most from being governed by an institution demanding high contributions and employing high punishments. Nevertheless, in a condition where change of rules through voting was made possible, development of the institution in this direction was more often voted down in groups of non-cooperative types. We discuss the relevance of the hard problem and fit our results into a bigger picture of institutional and individual determinants of cooperative behavior.

  3. Unique sodium phosphosilicate glasses designed through extended topological constraint theory.

    Science.gov (United States)

    Zeng, Huidan; Jiang, Qi; Liu, Zhao; Li, Xiang; Ren, Jing; Chen, Guorong; Liu, Fude; Peng, Shou

    2014-05-15

    Sodium phosphosilicate glasses exhibit unique properties with mixed network formers, and have various potential applications. However, a proper understanding of the network structures and a property-oriented methodology based on compositional changes are lacking. In this study, we have developed an extended topological constraint theory and applied it successfully to analyze the composition dependence of the glass transition temperature (Tg) and hardness of sodium phosphosilicate glasses. It was found that the hardness and Tg of the glasses do not always increase with the content of SiO2; instead, both reach maxima at a certain SiO2 content. In particular, a unique glass (20Na2O-17SiO2-63P2O5) exhibits a low glass transition temperature (589 K) but still has relatively high hardness (4.42 GPa), mainly due to the high fraction of the highly coordinated network former Si^(6). Because of their convenient forming and manufacturing, such phosphosilicate glasses have many valuable applications in optical fibers, optical amplifiers, biomaterials, and fuel cells. The methodology can also be applied to other types of phosphosilicate glasses with similar structures.

  4. Hard electroproduction of hybrid mesons

    International Nuclear Information System (INIS)

    Anikin, I.V.; LPT Universite Paris-Sud, Orsay; Szymanowski, L.; Teryaev, O.V.; ); Wallon, S.

    2005-01-01

    We estimate the sizeable cross section for deep exclusive electroproduction of an exotic J^{PC} = 1^{-+} hybrid meson in the Bjorken regime. The production amplitude scales like the one for usual meson electroproduction, i.e. as 1/Q^2. This is due to the non-vanishing leading twist distribution amplitude for the hybrid meson, which may be normalized thanks to its relation to the energy-momentum tensor and to the QCD sum rules technique. The hard amplitude is considered up to next-to-leading order in α_s and we explore the consequences of fixing the renormalization scale ambiguity through the BLM procedure. (author)

  5. Hard Identity and Soft Identity

    Directory of Open Access Journals (Sweden)

    Hassan Rachik

    2006-04-01

    Full Text Available Often collective identities are classified depending on their contents and rarely depending on their forms. The differentiation between soft identity and hard identity is applied to diverse collective identities: religious, political, national, tribal ones, etc. This classification is made following the principal dimensions of collective identities: type of classification (univocal and exclusive, or relative and contextual), the absence or presence of conflicts of loyalty, selective or totalitarian, objective or subjective conception, among others. The different characteristics analysed contribute to outlining an increasingly frequent type of identity: the authoritarian identity.

  6. Leaf sequencing algorithms for segmented multileaf collimation

    International Nuclear Information System (INIS)

    Kamath, Srijit; Sahni, Sartaj; Li, Jonathan; Palta, Jatinder; Ranka, Sanjay

    2003-01-01

    The delivery of intensity-modulated radiation therapy (IMRT) with a multileaf collimator (MLC) requires the conversion of a radiation fluence map into a leaf sequence file that controls the movement of the MLC during radiation delivery. It is imperative that the fluence map delivered using the leaf sequence file is as close as possible to the fluence map generated by the dose optimization algorithm, while satisfying hardware constraints of the delivery system. Optimization of the leaf sequencing algorithm has been the subject of several recent investigations. In this work, we present a systematic study of the optimization of leaf sequencing algorithms for segmental multileaf collimator beam delivery and provide rigorous mathematical proofs of optimized leaf sequence settings in terms of monitor unit (MU) efficiency under most common leaf movement constraints that include minimum leaf separation constraint and leaf interdigitation constraint. Our analytical analysis shows that leaf sequencing based on unidirectional movement of the MLC leaves is as MU efficient as bidirectional movement of the MLC leaves
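
    For a single leaf pair, the unidirectional sweep that the proofs concern can be written in a few lines. In the sketch below (our illustration, not the paper's code), position i is uncovered when the leading leaf passes it at time lead[i] and covered again when the trailing leaf passes at trail[i], so the delivered fluence is trail[i] - lead[i]; the total beam-on time equals the sum of the profile's positive increments, which is the optimal MU count for unidirectional delivery.

        # Sweep (unidirectional) leaf-sequencing sketch for one MLC leaf pair.
        # Input: integer fluence profile. Output: leading/trailing leaf
        # passing times per position and the total monitor units.

        def sweep_sequence(profile):
            lead, trail = [], []
            t = 0                       # accumulated "downhill" steps so far
            prev = 0
            for I in profile:
                t += max(0, prev - I)   # trailing leaf waits over decreasing steps
                lead.append(t)
                trail.append(t + I)     # exposure = trail - lead = I
                prev = I
            total_mu = max(trail) if trail else 0
            return lead, trail, total_mu

        lead, trail, mu = sweep_sequence([1, 3, 2, 4, 1])
        print(lead)    # [0, 0, 1, 1, 4]  (nondecreasing: unidirectional)
        print(trail)   # [1, 3, 3, 5, 5]
        print(mu)      # 5 = 1 + (3-1) + (4-2): sum of positive increments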

  7. Leaf sequencing algorithms for segmented multileaf collimation

    Energy Technology Data Exchange (ETDEWEB)

    Kamath, Srijit [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Sahni, Sartaj [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Li, Jonathan [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States); Palta, Jatinder [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States); Ranka, Sanjay [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States)

    2003-02-07

    The delivery of intensity-modulated radiation therapy (IMRT) with a multileaf collimator (MLC) requires the conversion of a radiation fluence map into a leaf sequence file that controls the movement of the MLC during radiation delivery. It is imperative that the fluence map delivered using the leaf sequence file is as close as possible to the fluence map generated by the dose optimization algorithm, while satisfying hardware constraints of the delivery system. Optimization of the leaf sequencing algorithm has been the subject of several recent investigations. In this work, we present a systematic study of the optimization of leaf sequencing algorithms for segmental multileaf collimator beam delivery and provide rigorous mathematical proofs of optimized leaf sequence settings in terms of monitor unit (MU) efficiency under most common leaf movement constraints that include minimum leaf separation constraint and leaf interdigitation constraint. Our analytical analysis shows that leaf sequencing based on unidirectional movement of the MLC leaves is as MU efficient as bidirectional movement of the MLC leaves.

  8. Bin-packing problems with load balancing and stability constraints

    DEFF Research Database (Denmark)

    Trivella, Alessio; Pisinger, David

    ...appear in a wide range of disciplines, including transportation and logistics, computer science, engineering, economics and manufacturing. The problem is well known to be NP-hard and difficult to solve in practice, especially when dealing with the multi-dimensional cases. Closely connected to the BPP ... realistic constraints related to e.g. load balancing, cargo stability and weight limits, in the multi-dimensional BPP. The BPP poses additional challenges compared to the CLP due to the supplementary objective of minimizing the number of bins. In particular, in section 2 we discuss how to integrate bin-packing and load balancing of items. The problem has only been considered in the literature in simplified versions, e.g. balancing a single bin or introducing a feasible region for the barycenter. In section 3 we generalize the problem to handle cargo stability and weight constraints.
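
    As a minimal illustration of packing with a side constraint (ours; the balancing and stability models discussed in the chapter are far richer), the sketch below runs first-fit-decreasing with both a volume capacity and a per-bin weight limit.

        # First-fit-decreasing with an extra per-bin weight limit (our toy
        # illustration of combining bin packing with a side constraint).

        def ffd_with_weight(items, cap_volume, cap_weight):
            # items: list of (volume, weight) pairs
            bins = []   # each bin: [used_volume, used_weight, [items]]
            for v, w in sorted(items, reverse=True):   # largest volume first
                for b in bins:
                    if b[0] + v <= cap_volume and b[1] + w <= cap_weight:
                        b[0] += v
                        b[1] += w
                        b[2].append((v, w))
                        break
                else:
                    bins.append([v, w, [(v, w)]])      # open a new bin
            return bins

        bins = ffd_with_weight([(0.6, 5), (0.5, 9), (0.4, 2), (0.3, 8), (0.2, 1)],
                               cap_volume=1.0, cap_weight=10)
        print(len(bins), [b[2] for b in bins])   # -> 3 bins

    Note how the weight limit forces a third bin even though the volumes alone would fit into two; realistic balancing and stability constraints tighten the packing further in the same way.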

  9. Physical constraints on models of gamma-ray bursters

    International Nuclear Information System (INIS)

    Epstein, R.I.

    1985-01-01

    This report deals with the constraints that can be placed on models of gamma-ray burst sources based only on the well-established observational facts and physical principles. The premise is developed that the very hard x-ray and gamma-ray continuum spectra are well-established aspects of gamma-ray bursts. Recent theoretical work on gamma-ray bursts is summarized with emphasis on the geometrical properties of the models. Constraints on the source models which are implied by the x-ray and gamma-ray spectra are described. The allowed ranges for the luminosity and characteristic dimension for gamma-ray burst sources are shown. Some of the deductions and inferences about the nature of the gamma-ray burst sources are summarized. 67 refs., 3 figs

  10. Robust stability in predictive control with soft constraints

    DEFF Research Database (Denmark)

    Thomsen, Sven Creutz; Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2010-01-01

    In this paper we take advantage of the primary and dual Youla parameterizations for setting up a soft constrained model predictive control (MPC) scheme for which stability is guaranteed in the face of norm-bounded uncertainties. Under special conditions guarantees are also given for hard input constraints. In more detail, we parameterize the MPC predictions in terms of the primary Youla parameter and use this parameter as the online optimization variable. The uncertainty is parameterized in terms of the dual Youla parameter. Stability can then be guaranteed through small gain arguments on the loop...

  11. Aespoe hard rock laboratory Sweden

    International Nuclear Information System (INIS)

    1992-01-01

    The aim of the new Aespoe hard rock laboratory is to demonstrate the state of the art of technology and evaluation methods before the start of actual construction work on the planned deep repository for spent nuclear fuel. The nine-country OECD/NEA project in the Stripa mine in Sweden has been an excellent example of high-quality international research co-operation. In Sweden the new Aespoe hard rock laboratory will gradually take over and finalize this work. SKB very much appreciates the continued international participation in Aespoe, which is of great value for the quality, efficiency, and confidence in this kind of work. We have invited a number of leading experts to this first international seminar to summarize the current state of a number of key questions. The contributions show the great progress that has taken place over the years. The results show that there is a solid scientific basis for using this knowledge in site-specific preparation and work on actual repositories. (au)

  12. Algorithmic chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting in the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  13. A Novel Spatial-Temporal Voronoi Diagram-Based Heuristic Approach for Large-Scale Vehicle Routing Optimization with Time Constraints

    Directory of Open Access Journals (Sweden)

    Wei Tu

    2015-10-01

    Full Text Available Vehicle routing optimization (VRO) designs the best routes to reduce travel cost, energy consumption, and carbon emission. Due to non-deterministic polynomial-time hard (NP-hard) complexity, many VROs involved in real-world applications require too much computing effort. Shortening computing time for VRO is a great challenge for state-of-the-art spatial optimization algorithms. From a spatial-temporal perspective, this paper presents a spatial-temporal Voronoi diagram-based heuristic approach for large-scale vehicle routing problems with time windows (VRPTW). Considering time constraints, a spatial-temporal Voronoi distance is derived from the spatial-temporal Voronoi diagram to find near neighbors in the space-time searching context. A Voronoi distance decay strategy that integrates a time warp operation is proposed to accelerate local search procedures. A spatial-temporal feature-guided search is developed to improve unpromising micro route structures. Experiments on VRPTW benchmarks and real-world instances are conducted to verify performance. The results demonstrate that the proposed approach is competitive with state-of-the-art heuristics and achieves high-quality solutions for large-scale instances of VRPTWs in a short time. This novel approach will contribute to the spatial decision support community by developing an effective vehicle routing optimization method for large transportation applications in both public and private sectors.

  14. Finding the optimal Bayesian network given a constraint graph

    Directory of Open Access Journals (Sweden)

    Jacob M. Schreiber

    2017-07-01

    Full Text Available Despite recent algorithmic improvements, learning the optimal structure of a Bayesian network from data is typically infeasible past a few dozen variables. Fortunately, domain knowledge can frequently be exploited to achieve dramatic computational savings, and in many cases domain knowledge can even make structure learning tractable. Several methods have previously been described for representing this type of structural prior knowledge, including global orderings, super-structures, and constraint rules. While super-structures and constraint rules are flexible in terms of what prior knowledge they can encode, they achieve savings in memory and computational time simply by avoiding considering invalid graphs. We introduce the concept of a “constraint graph” as an intuitive method for incorporating rich prior knowledge into the structure learning task. We describe how this graph can be used to reduce the memory cost and computational time required to find the optimal graph subject to the encoded constraints, beyond merely eliminating invalid graphs. In particular, we show that a constraint graph can break the structure learning task into independent subproblems even in the presence of cyclic prior knowledge. These subproblems are well suited to being solved in parallel on a single machine or distributed across many machines without excessive communication cost.

  15. Developmental constraints on behavioural flexibility.

    Science.gov (United States)

    Holekamp, Kay E; Swanson, Eli M; Van Meter, Page E

    2013-05-19

    We suggest that variation in mammalian behavioural flexibility not accounted for by current socioecological models may be explained in part by developmental constraints. From our own work, we provide examples of constraints affecting variation in behavioural flexibility, not only among individuals, but also among species and higher taxonomic units. We first implicate organizational maternal effects of androgens in shaping individual differences in aggressive behaviour emitted by female spotted hyaenas throughout the lifespan. We then compare carnivores and primates with respect to their locomotor and craniofacial adaptations. We inquire whether antagonistic selection pressures on the skull might impose differential functional constraints on evolvability of skulls and brains in these two orders, thus ultimately affecting behavioural flexibility in each group. We suggest that, even when carnivores and primates would theoretically benefit from the same adaptations with respect to behavioural flexibility, carnivores may nevertheless exhibit less behavioural flexibility than primates because of constraints imposed by past adaptations in the morphology of the limbs and skull. Phylogenetic analysis consistent with this idea suggests greater evolutionary lability in relative brain size within families of primates than carnivores. Thus, consideration of developmental constraints may help elucidate variation in mammalian behavioural flexibility.

  16. Development of radiation hard scintillators

    International Nuclear Information System (INIS)

    Markley, F.; Woods, D.; Pla-Dalmau, A.; Foster, G.; Blackburn, R.

    1992-05-01

    Substantial improvements have been made in the radiation hardness of plastic scintillators. Cylinders of scintillating materials 2.2 cm in diameter and 1 cm thick have been exposed to 10 Mrads of gamma rays at a dose rate of 1 Mrad/h in a nitrogen atmosphere. One of the formulations tested showed an immediate decrease in pulse height of only 4% and has remained stable for 12 days while annealing in air. By comparison a commercial PVT scintillator showed an immediate decrease of 58% and after 43 days of annealing in air it improved to a 14% loss. The formulated sample consisted of 70 parts by weight of Dow polystyrene, 30 pbw of pentaphenyltrimethyltrisiloxane (Dow Corning DC 705 oil), 2 pbw of p-terphenyl, 0.2 pbw of tetraphenylbutadiene, and 0.5 pbw of UVASIL299LM from Ferro

  17. Time Extensions of Petri Nets for Modelling and Verification of Hard Real-Time Systems

    Directory of Open Access Journals (Sweden)

    Tomasz Szmuc

    2002-01-01

    Full Text Available The main aim of the paper is a presentation of time extensions of Petri nets appropriate for modelling and analysis of hard real-time systems. It is assumed that the extensions must provide a model of time flow, an ability to force a transition to fire within a stated timing constraint (the so-called strong firing rule), and timing constraints represented by intervals. The presented survey includes extensions of classical Place/Transition Petri nets, as well as ones applied to high-level Petri nets. The expressiveness of each time extension is illustrated using a simple hard real-time system. The paper also includes a brief description of analysis and verification methods related to the extensions, and a survey of software tools supporting modelling and analysis of the considered Petri nets.

  18. Adaptive Selection of Primal Constraints for Isogeometric BDDC Deluxe Preconditioners

    KAUST Repository

    Beirã o Da Veiga, L.; Pavarino, L. F.; Scacchi, S.; Widlund, O. B.; Zampini, Stefano

    2017-01-01

    Isogeometric analysis has been introduced as an alternative to finite element methods in order to simplify the integration of computer-aided design (CAD) software and the discretization of variational problems of continuum mechanics. In contrast with the finite element case, the basis functions of isogeometric analysis are often not nodal. As a consequence, there are fat interfaces which can easily lead to an increase in the number of interface variables after a decomposition of the parameter space into subdomains. Building on earlier work on the deluxe version of the BDDC (balancing domain decomposition by constraints) family of domain decomposition algorithms, several adaptive algorithms are developed in this paper for scalar elliptic problems in an effort to decrease the dimension of the global, coarse component of these preconditioners. Numerical experiments provide evidence that this work can be successful, yielding scalable and quasi-optimal adaptive BDDC algorithms for isogeometric discretizations.

  19. Adaptive Selection of Primal Constraints for Isogeometric BDDC Deluxe Preconditioners

    KAUST Repository

    Beirão Da Veiga, L.

    2017-02-23

    Isogeometric analysis has been introduced as an alternative to finite element methods in order to simplify the integration of computer-aided design (CAD) software and the discretization of variational problems of continuum mechanics. In contrast with the finite element case, the basis functions of isogeometric analysis are often not nodal. As a consequence, there are fat interfaces which can easily lead to an increase in the number of interface variables after a decomposition of the parameter space into subdomains. Building on earlier work on the deluxe version of the BDDC (balancing domain decomposition by constraints) family of domain decomposition algorithms, several adaptive algorithms are developed in this paper for scalar elliptic problems in an effort to decrease the dimension of the global, coarse component of these preconditioners. Numerical experiments provide evidence that this work can be successful, yielding scalable and quasi-optimal adaptive BDDC algorithms for isogeometric discretizations.

  20. The artificial-free technique along the objective direction for the simplex algorithm

    International Nuclear Information System (INIS)

    Boonperm, Aua-aree; Sinapiromsaran, Krung

    2014-01-01

    The simplex algorithm is a popular algorithm for solving linear programming problems. If the origin satisfies all constraints, then the simplex algorithm can be started directly; otherwise, artificial variables must be introduced to start it. If we can start the simplex algorithm without using artificial variables, then the iterations require less time. In this paper, we present an artificial-free technique for the simplex algorithm that maps the problem into the objective plane and splits the constraints into three groups. In the objective plane, one of the variables with a nonzero objective coefficient is fixed in terms of another variable. The constraints can then be split into three groups: the positive coefficient group, the negative coefficient group, and the zero coefficient group. Along the objective direction, some constraints from the positive coefficient group will form the optimal solution. If the positive coefficient group is nonempty, the algorithm starts by relaxing the constraints from the negative and zero coefficient groups; the feasible region obtained from the positive coefficient group is guaranteed to be nonempty. The transformed problem is solved using the simplex algorithm. The constraints from the negative and zero coefficient groups are then added to the solved problem, and the dual simplex method is used to determine the new optimal solution. An example shows the effectiveness of our algorithm.

  1. The artificial-free technique along the objective direction for the simplex algorithm

    Science.gov (United States)

    Boonperm, Aua-aree; Sinapiromsaran, Krung

    2014-03-01

    The simplex algorithm is a popular algorithm for solving linear programming problems. If the origin satisfies all constraints, then the simplex algorithm can be started directly; otherwise, artificial variables must be introduced to start it. If we can start the simplex algorithm without using artificial variables, then the iterations require less time. In this paper, we present an artificial-free technique for the simplex algorithm that maps the problem into the objective plane and splits the constraints into three groups. In the objective plane, one of the variables with a nonzero objective coefficient is fixed in terms of another variable. The constraints can then be split into three groups: the positive coefficient group, the negative coefficient group, and the zero coefficient group. Along the objective direction, some constraints from the positive coefficient group will form the optimal solution. If the positive coefficient group is nonempty, the algorithm starts by relaxing the constraints from the negative and zero coefficient groups; the feasible region obtained from the positive coefficient group is guaranteed to be nonempty. The transformed problem is solved using the simplex algorithm. The constraints from the negative and zero coefficient groups are then added to the solved problem, and the dual simplex method is used to determine the new optimal solution. An example shows the effectiveness of our algorithm.

  2. Genetic algorithms and supernovae type Ia analysis

    International Nuclear Information System (INIS)

    Bogdanos, Charalampos; Nesseris, Savvas

    2009-01-01

    We introduce genetic algorithms as a means to analyze supernovae type Ia data and extract model-independent constraints on the evolution of the Dark Energy equation of state w(z) ≡ P_DE/ρ_DE. Specifically, we will give a brief introduction to the genetic algorithms along with some simple examples to illustrate their advantages and finally we will apply them to the supernovae type Ia data. We find that genetic algorithms can lead to results in line with already established parametric and non-parametric reconstruction methods and could be used as a complementary way of treating SNIa data. As a non-parametric method, genetic algorithms provide a model-independent way to analyze data and can minimize bias due to premature choice of a dark energy model
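
    A toy genetic algorithm shows the machinery involved (our illustration; the paper evolves free functional forms rather than fixed coefficients): a population of candidate (w0, w1, w2) triples for w(z) = w0 + w1*z + w2*z^2 is evolved by truncation selection, one-point crossover, and Gaussian mutation to minimize a chi-square against mock measurements. The data and hyperparameters below are invented for the demo.

        import random

        # Toy GA for curve fitting (our illustration, far simpler than the
        # authors' grammar-based GA over dark-energy models).

        z_data = [0.1 * i for i in range(1, 11)]
        w_data = [-1.0 + 0.05 * z for z in z_data]   # mock "observations"
        sigma = 0.05

        def chi2(ind):
            w0, w1, w2 = ind
            return sum(((w0 + w1 * z + w2 * z * z - w) / sigma) ** 2
                       for z, w in zip(z_data, w_data))

        def evolve(pop_size=50, generations=200):
            pop = [[random.uniform(-2, 0), random.uniform(-1, 1), random.uniform(-1, 1)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=chi2)
                survivors = pop[:pop_size // 2]        # truncation selection
                children = []
                while len(children) < pop_size - len(survivors):
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(1, 3)       # one-point crossover
                    child = a[:cut] + b[cut:]
                    i = random.randrange(3)            # Gaussian point mutation
                    child[i] += random.gauss(0, 0.05)
                    children.append(child)
                pop = survivors + children
            return min(pop, key=chi2)

        best = evolve()
        print(best, chi2(best))   # -> roughly [-1.0, 0.05, 0.0], chi2 near 0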

  3. Towards harnessing theories through tool support for hard real-time Java programming

    DEFF Research Database (Denmark)

    Bøgholm, Thomas; Frost, Christian; Hansen, Rene Rydhof

    2013-01-01

    We present a rationale for a selection of tools that assist developers of hard real-time applications to verify that programs conform to a Java real-time profile and that platform-specific resource constraints are satisfied. These tools are specialised instances of more generic static analysis and model checking frameworks. The concepts are illustrated by two case studies, and the strengths and the limitations of the tools are discussed.

  4. Towards harnessing theories through tool support for hard real-time Java programming

    DEFF Research Database (Denmark)

    Søndergaard, Hans; Bøgholm, Thomas; Frost, Christian

    2012-01-01

    We present a rationale for a selection of tools that assist developers of hard real-time applications to verify that programs conform to a Java real-time profile and that platform-specific resource constraints are satisfied. These tools are specialised instances of more generic static analysis and model checking frameworks. The concepts are illustrated by two case studies, and the strengths and the limitations of the tools are discussed.

  5. A Bee Colony Optimization Approach for Mixed Blocking Constraints Flow Shop Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Mostafa Khorramizadeh

    2015-01-01

    Full Text Available The flow shop scheduling problem with mixed blocking constraints and makespan minimization is investigated. Taguchi orthogonal arrays and path relinking, along with some efficient local search methods, are used to develop a metaheuristic algorithm based on bee colony optimization. In order to compare performance, two well-known test problems are considered. Computational results show that the presented algorithm has comparable performance to well-known algorithms from the literature, especially for large-sized problems.

  6. Soft And Hard Skills of Social Worker

    OpenAIRE

    HANTOVÁ, Libuše

    2011-01-01

    The work deals with the soft and hard skills relevant to the profession of social worker. The theoretical part first evaluates and analyzes important soft and hard skills necessary for people working in the field of social work. These skills are then compared. The practical part illustrates the use of soft and hard skills in practice by means of model scenes and deals with the preferences in three groups of people: students of social work, social workers, and people outside the sphere, namely ...

  7. Theory of hard diffraction and rapidity gaps

    International Nuclear Information System (INIS)

    Del Duca, V.

    1995-06-01

    In this talk we review the models describing the hard diffractive production of jets or more generally high-mass states in presence of rapidity gaps in hadron-hadron and lepton-hadron collisions. By rapidity gaps we mean regions on the lego plot in (pseudo)-rapidity and azimuthal angle where no hadrons are produced, between the jet(s) and an elastically scattered hadron (single hard diffraction) or between two jets (double hard diffraction). (orig.)

  8. Theory of hard diffraction and rapidity gaps

    International Nuclear Information System (INIS)

    Del Duca, V.

    1996-01-01

    In this talk we review the models describing the hard diffractive production of jets or more generally high-mass states in presence of rapidity gaps in hadron-hadron and lepton-hadron collisions. By rapidity gaps we mean regions on the lego plot in (pseudo)-rapidity and azimuthal angle where no hadrons are produced, between the jet(s) and an elastically scattered hadron (single hard diffraction) or between two jets (double hard diffraction). copyright 1996 American Institute of Physics

  9. Advances in hard nucleus cataract surgery

    Directory of Open Access Journals (Sweden)

    Wei Cui

    2013-11-01

    Full Text Available Safety, perfect vision, and fewer complications are our goals in cataract surgery, and hard-nucleus cataract surgery has always been a difficult case. Many new studies indicate that micro-incision phacoemulsification is clearly effective in treating hard-nucleus cataract. This article reviews the evolution of hard-nucleus cataract surgery and the new progress in research on artificial intraocular lenses for micro-incision, and analyses the advantages and disadvantages of the various surgical methods.

  10. Challenges in the Verification of Reinforcement Learning Algorithms

    Science.gov (United States)

    Van Wesel, Perry; Goodloe, Alwyn E.

    2017-01-01

    Machine learning (ML) is increasingly being applied to a wide array of domains from search engines to autonomous vehicles. These algorithms, however, are notoriously complex and hard to verify. This work looks at the assumptions underlying machine learning algorithms as well as some of the challenges in trying to verify ML algorithms. Furthermore, we focus on the specific challenges of verifying reinforcement learning algorithms. These are highlighted using a specific example. Ultimately, we do not offer a solution to the complex problem of ML verification, but point out possible approaches for verification and interesting research opportunities.

  11. Intelligent optimization models based on hard-ridge penalty and RBF for forecasting global solar radiation

    International Nuclear Information System (INIS)

    Jiang, He; Dong, Yao; Wang, Jianzhou; Li, Yuqin

    2015-01-01

    Highlights: • CS-hard-ridge-RBF and DE-hard-ridge-RBF are proposed to forecast solar radiation. • Pearson and Apriori algorithm are used to analyze correlations between the data. • Hard-ridge penalty is added to reduce the number of nodes in the hidden layer. • CS algorithm and DE algorithm are used to determine the optimal parameters. • Proposed two models have higher forecasting accuracy than RBF and hard-ridge-RBF. - Abstract: Due to the scarcity of equipment and the high costs of maintenance, far fewer observations of solar radiation are made than observations of temperature, precipitation and other weather factors. Therefore, it is increasingly important to study several relevant meteorological factors to accurately forecast solar radiation. For this research, monthly average global solar radiation and 12 meteorological parameters from 1998 to 2010 at four sites in the United States were collected. Pearson correlation coefficients and Apriori association rules were successfully used to analyze correlations between the data, which provided a basis for these relative parameters as input variables. Two effective and innovative methods were developed to forecast monthly average global solar radiation by converting a RBF neural network into a multiple linear regression problem, adding a hard-ridge penalty to reduce the number of nodes in the hidden layer, and applying intelligent optimization algorithms, such as the cuckoo search algorithm (CS) and differential evolution (DE), to determine the optimal center and scale parameters. The experimental results show that the proposed models produce much more accurate forecasts than other models

  12. Constraint elimination in dynamical systems

    Science.gov (United States)

    Singh, R. P.; Likins, P. W.

    1989-01-01

    Large space structures (LSSs) and other dynamical systems of current interest are often extremely complex assemblies of rigid and flexible bodies subjected to kinematical constraints. A formulation is presented for the governing equations of constrained multibody systems via the application of singular value decomposition (SVD). The resulting equations of motion are shown to be of minimum dimension.

  13. Constraint Programming versus Mathematical Programming

    DEFF Research Database (Denmark)

    Hansen, Jesper

    2003-01-01

    Constraint Logic Programming (CLP) is a relatively new technique from the 80's with origins in Computer Science and Artificial Intelligence. Lately, much research has focused on ways of using CLP within the paradigm of Operations Research (OR) and vice versa. The purpose of this paper

  14. Sterile neutrino constraints from cosmology

    DEFF Research Database (Denmark)

    Hamann, Jan; Hannestad, Steen; Raffelt, Georg G.

    2012-01-01

    The presence of light particles beyond the standard model's three neutrino species can profoundly impact the physics of decoupling and primordial nucleosynthesis. I review the observational signatures of extra light species, present constraints from recent data, and discuss the implications of possible sterile neutrinos with O(eV) masses for cosmology.

  15. Intertemporal consumption and credit constraints

    DEFF Research Database (Denmark)

    Leth-Petersen, Søren

    2010-01-01

    There is continuing controversy over the importance of credit constraints. This paper investigates whether total household expenditure and debt are affected by an exogenous increase in access to credit provided by a credit market reform that enabled Danish house owners to use housing equity

  16. Financial Constraints: Explaining Your Position.

    Science.gov (United States)

    Cargill, Jennifer

    1988-01-01

    Discusses the importance of educating library patrons about the library's finances and the impact of budget constraints and the escalating cost of serials on materials acquisition. Steps that can be taken in educating patrons by interpreting and publicizing financial information are suggested. (MES)

  17. Overlap Algorithms in Flexible Job-shop Scheduling

    Directory of Open Access Journals (Sweden)

    Celia Gutierrez

    2014-06-01

    Full Text Available The flexible Job-shop Scheduling Problem (fJSP) considers the execution of jobs by a set of candidate resources while satisfying time and technological constraints. This work, which follows the hierarchical architecture, is based on an algorithm where each objective (resource allocation, start-time assignment) is solved by a genetic algorithm (GA) that optimizes a particular fitness function, and enhances the results by executing a set of heuristics that evaluate and repair each scheduling constraint on each operation. The aim of this work is to analyze the impact of some algorithmic features of the overlap constraint heuristics, in order to achieve the objectives to the highest degree. To demonstrate the efficiency of this approach, experimentation has been performed and compared with similar cases, tuning the GA parameters correctly.

  18. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  19. Maximum Feedrate Interpolator for Multi-axis CNC Machining with Jerk Constraints

    OpenAIRE

    Beudaert , Xavier; Lavernhe , Sylvain; Tournier , Christophe

    2012-01-01

    A key role of the CNC is to perform the feedrate interpolation, which means generating the setpoints for each machine tool axis. The aim of the VPOp algorithm is to make maximum use of the machine tool while respecting both tangential and axis jerk limits on rotary and linear axes. The developed algorithm uses an iterative constraint intersection approach. At each sampling period, all the constraints given by each axis are expressed, and by intersecting all of them the allowable interval for the next poin...

  20. Convolutional Encoder and Viterbi Decoder Using SOPC For Variable Constraint Length

    DEFF Research Database (Denmark)

    Kulkarni, Anuradha; Dnyaneshwar, Mantri; Prasad, Neeli R.

    2013-01-01

    The convolutional encoder and Viterbi decoder are basic and important blocks in any Code Division Multiple Access (CDMA) system. They are widely used in communication systems due to their error-correcting capability, but performance degrades with variable constraint length. In this context, to provide a detailed analysis, this paper deals with the implementation of a convolutional encoder and Viterbi decoder using a system on a programmable chip (SOPC). It uses variable constraint lengths of 7, 8 and 9 bits for 1/2 and 1/3 code rates. By analyzing the Viterbi algorithm it is seen that our algorithm has a better...
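
    A software reference model is useful when validating such an SOPC design. The sketch below is ours, not the paper's hardware: a rate-1/2 convolutional encoder and a hard-decision Viterbi decoder with the constraint length K as a parameter, so testing K = 7, 8, 9 amounts to passing different K values and generator pairs (the classic K = 3 pair (0b111, 0b101) is used here).

        # Minimal rate-1/2 convolutional encoder and Viterbi decoder with a
        # parameterised constraint length K (illustrative sketch). The newest
        # input bit occupies the MSB of the K-bit window; each generator's
        # output is the parity of the masked window.

        def encode(bits, K=3, gens=(0b111, 0b101)):
            state = 0
            out = []
            for b in bits:
                window = (b << (K - 1)) | state
                out.extend([bin(window & g).count("1") & 1 for g in gens])
                state = window >> 1          # drop the oldest bit
            return out

        def viterbi_decode(symbols, n_bits, K=3, gens=(0b111, 0b101)):
            n_states = 1 << (K - 1)
            INF = float("inf")
            metric = [0.0] + [INF] * (n_states - 1)   # encoder starts in state 0
            paths = [[] for _ in range(n_states)]
            for t in range(n_bits):
                rx = symbols[2 * t: 2 * t + 2]
                new_metric = [INF] * n_states
                new_paths = [None] * n_states
                for s in range(n_states):
                    if metric[s] == INF:
                        continue
                    for b in (0, 1):
                        window = (b << (K - 1)) | s
                        exp = [bin(window & g).count("1") & 1 for g in gens]
                        dist = sum(e != r for e, r in zip(exp, rx))   # Hamming
                        ns = window >> 1
                        m = metric[s] + dist
                        if m < new_metric[ns]:
                            new_metric[ns] = m
                            new_paths[ns] = paths[s] + [b]
                metric, paths = new_metric, new_paths
            return paths[metric.index(min(metric))]   # best surviving path

        msg = [1, 0, 1, 1, 0, 0, 1]
        code = encode(msg)
        code[3] ^= 1                                   # flip one channel bit
        print(viterbi_decode(code, len(msg)) == msg)   # -> True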

  1. Genetic programming over context-free languages with linear constraints for the knapsack problem: first results.

    Science.gov (United States)

    Bruhn, Peter; Geyer-Schulz, Andreas

    2002-01-01

    In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.

  2. Improved hybrid optimization algorithm for 3D protein structure prediction.

    Science.gov (United States)

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), a genetic algorithm (GA), and tabu search (TS). In addition, several improvement strategies are adopted: a stochastic disturbance factor is introduced into the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced with a random linear method; and the tabu search is improved by appending a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be treated as a global optimization problem with many extrema and many parameters; this is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of any single algorithm and gives full play to the advantages of each. The method is validated on the standard benchmark sequences in current use, both Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms the single algorithms in the accuracy of the computed protein sequence energy value, proving it to be an effective way to predict the structure of proteins.
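
    The stochastic-disturbance idea can be illustrated with a standard PSO velocity update plus a small random perturbation term (a generic sketch; the paper's exact operators and off-lattice energy function are not reproduced here):

        import random

        def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, sigma=0.05):
            # One velocity/position update per dimension, with an extra
            # Gaussian disturbance to help escape local minima.
            new_x, new_v = [], []
            for xi, vi, pi, gi in zip(x, v, pbest, gbest):
                r1, r2 = random.random(), random.random()
                vi = (w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
                      + random.gauss(0.0, sigma))    # stochastic disturbance
                new_v.append(vi)
                new_x.append(xi + vi)
            return new_x, new_v

        x, v = [0.2, -0.4], [0.0, 0.0]
        x, v = pso_step(x, v, pbest=[0.1, 0.0], gbest=[-0.3, 0.5])
        print(x, v)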

  3. Robustness of the ATLAS pixel clustering neural network algorithm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407780; The ATLAS collaboration

    2016-01-01

    Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. In the ATLAS track reconstruction algorithm, an artificial neural network is utilised to identify and split clusters of neighbouring read-out elements created by multiple charged particles in the ATLAS pixel detector. The robustness of the neural network algorithm is presented, probing its sensitivity to uncertainties in the detector conditions. The robustness is studied by evaluating the stability of the algorithm's performance under a range of variations in the inputs to the neural networks. Within reasonable variation magnitudes, the neural networks prove to be robust to most variation types.

  4. Tie Points Extraction for SAR Images Based on Differential Constraints

    Science.gov (United States)

    Xiong, X.; Jin, G.; Xu, Q.; Zhang, H.

    2018-04-01

    Automatically extracting tie points (TPs) on large-size synthetic aperture radar (SAR) images is still challenging, because the efficiency and correct ratio of the image matching need to be improved. This paper proposes an automatic TP extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in the azimuth direction and large in the range direction. Image pyramids are built first, and then the corresponding layers of the pyramids are matched from top to bottom. In this process, similarity is measured by the normalized cross correlation (NCC) algorithm, computed over a rectangular window whose long side is parallel to the azimuth direction. False matches are removed by the differential constrained random sample consensus (DC-RANSAC) algorithm, which applies strong constraints in the azimuth direction and weak constraints in the range direction. Matching points in the lower pyramid images are predicted with a local bilinear transformation model in the range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correct ratio and accuracy of the proposed method.
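
    The NCC similarity used here is the standard zero-mean normalized cross correlation over a rectangular window; a compact NumPy sketch (window shape chosen arbitrarily, with the long side along azimuth as described above):

        import numpy as np

        def ncc(patch, template):
            # Normalized cross correlation of two equal-size windows.
            p = patch - patch.mean()
            t = template - template.mean()
            denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
            return float((p * t).sum() / denom) if denom > 0 else 0.0

        rng = np.random.default_rng(0)
        # Long side along azimuth (rows), as in the method described above.
        template = rng.standard_normal((64, 16))
        noisy = template + 0.1 * rng.standard_normal((64, 16))
        print(ncc(noisy, template))             # close to 1.0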

  5. TIE POINTS EXTRACTION FOR SAR IMAGES BASED ON DIFFERENTIAL CONSTRAINTS

    Directory of Open Access Journals (Sweden)

    X. Xiong

    2018-04-01

    Full Text Available Automatically extracting tie points (TPs) on large-size synthetic aperture radar (SAR) images is still challenging, because the efficiency and correct ratio of the image matching need to be improved. This paper proposes an automatic TP extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in the azimuth direction and large in the range direction. Image pyramids are built first, and then the corresponding layers of the pyramids are matched from top to bottom. In this process, similarity is measured by the normalized cross correlation (NCC) algorithm, computed over a rectangular window whose long side is parallel to the azimuth direction. False matches are removed by the differential constrained random sample consensus (DC-RANSAC) algorithm, which applies strong constraints in the azimuth direction and weak constraints in the range direction. Matching points in the lower pyramid images are predicted with a local bilinear transformation model in the range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correct ratio and accuracy of the proposed method.

  6. FPGA Dynamic Power Minimization through Placement and Routing Constraints

    Directory of Open Access Journals (Sweden)

    Deepak Agarwal

    2006-08-01

    Full Text Available Field-programmable gate arrays (FPGAs are pervasive in embedded systems requiring low-power utilization. A novel power optimization methodology for reducing the dynamic power consumed by the routing of FPGA circuits by modifying the constraints applied to existing commercial tool sets is presented. The power optimization techniques influence commercial FPGA Place and Route (PAR tools by translating power goals into standard throughput and placement-based constraints. The Low-Power Intelligent Tool Environment (LITE is presented, which was developed to support the experimentation of power models and power optimization algorithms. The generated constraints seek to implement one of four power optimization approaches: slack minimization, clock tree paring, N-terminal net colocation, and area minimization. In an experimental study, we optimize dynamic power of circuits mapped into 0.12 μm Xilinx Virtex-II FPGAs. Results show that several optimization algorithms can be combined on a single design, and power is reduced by up to 19.4%, with an average power savings of 10.2%.

  7. 30 CFR 75.1720-1 - Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced...

    Science.gov (United States)

    2010-07-01

    Mineral Resources, Vol. 1 (2010-07-01), ...STANDARDS-UNDERGROUND COAL MINES, Miscellaneous, § 75.1720-1 Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced miners: Hard hats or hard caps distinctively different in color...

  8. 30 CFR 77.1710-1 - Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced...

    Science.gov (United States)

    2010-07-01

    Mineral Resources, Vol. 1 (2010-07-01), § 77.1710-1 Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced miners: Hard hats or hard caps distinctively different in color from those worn by experienced miners shall be worn at...

  9. Hard disks with SCSI interface

    CERN Document Server

    Denisov, O Yu

    1999-01-01

    Twenty models of SCSI hard disks were tested: the Fujitsu MAE3091LP; the IBM DDRS-39130, DGHS-318220, DNES-318350, DRHS-36V and DRVS-18V; the Quantum Atlas VI 18.2 and Viking II 9.1; the Seagate ST118202LW, ST118273LW, ST118273W, ST318203LW, ST318275LW, ST34520W, ST39140LW and ST39173W; and the Western Digital WDE9100-0007, WDE9100-AV0016, WDE9100-AV0030 and WDE9180-0048. All tests ran under the Windows NT 4.0 Workstation operating system with Service Pack 4, in a video mode with 1024x768 pixel resolution, 32-bit colour depth and a vertical refresh rate of 85 Hz. A detailed description and the characteristics of the SCSI drives are presented. Test results (ZD Winstone 99 and ZD WinBench 99) are given in both table and diagram (disk transfer rate) form. (0 refs).

  10. GraDit: graph-based data repair algorithm for multiple data edits rule violations

    Science.gov (United States)

    Ode Zuhayeni Madjida, Wa; Gusti Bagus Baskara Nugraha, I.

    2018-03-01

    Constraint-based data cleaning captures data violations with respect to a set of rules called data quality rules. The rules consist of integrity constraints and data edits. Structurally they are similar: each rule contains a left-hand side and a right-hand side. Previous research proposed a data repair algorithm for integrity constraint violations that uses an undirected hypergraph to represent rule violations. Nevertheless, this algorithm cannot be applied to data edits because of their different rule characteristics. This study proposes GraDit, a repair algorithm for data edits rules. First, we use a bipartite directed hypergraph as the model representation of all defined rules; this representation captures the interaction between violated rules and clean rules. In addition, we propose an undirected graph as the violation representation. Our experimental study showed that the algorithm with the undirected graph as the violation representation model gave better data quality than the algorithm with the undirected hypergraph as the representation model.

  11. Some Very Hard Problems in Nature (Biology-biochemistry) Solved Using Physical Algorithms that Reduce the Hardness

    Science.gov (United States)

    2008-09-18

    Cooperativity at the monomolecular level: binding of B or C to the common... allosteric effectors such as 2,3-diphosphoglycerate. The theoretical relation between the two coefficients in the presence of 2,3-diphosphoglycerate is derived. Experimental data on the variation of both coefficients with diphosphoglycerate concentration are presented and shown to be

  12. Clustering Using Boosted Constrained k-Means Algorithm

    Directory of Open Access Journals (Sweden)

    Masayuki Okabe

    2018-03-01

    Full Text Available This article proposes a constrained clustering algorithm with competitive performance and less computation time than the state-of-the-art methods; it consists of a constrained k-means algorithm enhanced by the boosting principle. Constrained k-means clustering, which uses constraints as background knowledge, is easy to implement and quick, but has insufficient performance compared with metric learning-based methods. Since it simply adds a function to the data assignment process of the k-means algorithm to check for constraint violations, it often exploits only a small number of constraints. Metric learning-based methods, which exploit constraints to create a new metric for data similarity, have shown promising results, although the methods proposed so far are often slow, depending on the amount of data or the number of feature dimensions. We present a method that exploits the advantages of both the constrained k-means and metric learning approaches. It incorporates a mechanism for accepting constraint priorities and a metric learning framework based on the boosting principle into a constrained k-means algorithm. In this framework, a metric is learned in the form of a kernel matrix that integrates weak cluster hypotheses produced by the constrained k-means algorithm, which works as a weak learner under the boosting principle. Experimental results for 12 data sets from 3 data sources demonstrate that our method has performance competitive with state-of-the-art constrained clustering methods on most data sets while taking much less computation time. The experimental evaluation also demonstrates the effectiveness of controlling the constraint priorities via the boosting principle, and that our constrained k-means algorithm functions correctly as a weak learner for boosting.
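
    The violation check mentioned above inserts a constraint test into the assignment step of k-means; schematically (a minimal COP-k-means-style sketch with one-directional constraint pairs, not the authors' boosted version):

        import numpy as np

        def assign(points, centers, must_link, cannot_link):
            # Assign each point to the nearest center that does not violate
            # any constraint with already-assigned points.
            labels = {}
            for i, p in enumerate(points):
                order = np.argsort([np.linalg.norm(p - c) for c in centers])
                for k in order:
                    ml_ok = all(labels.get(j) in (None, k)
                                for a, j in must_link if a == i)
                    cl_ok = all(labels.get(j) != k
                                for a, j in cannot_link if a == i)
                    if ml_ok and cl_ok:
                        labels[i] = int(k)
                        break
                else:
                    labels[i] = int(order[0])   # fall back if nothing is feasible
            return labels

        points = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])
        centers = np.array([[0.0, 0.0], [1.0, 1.0]])
        print(assign(points, centers, must_link=[(1, 0)], cannot_link=[(2, 0)]))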

  13. [Orthogonal Vector Projection Algorithm for Spectral Unmixing].

    Science.gov (United States)

    Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li

    2015-12-01

    Spectral unmixing is an important part of hyperspectral technology and is essential for quantitative material analysis in hyperspectral imagery. Most linear unmixing algorithms require matrix multiplication together with matrix inversion or determinant computation. These are difficult to program, and especially hard to realize in hardware. At the same time, the computational cost of these algorithms increases significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes a final orthogonal vector via the Gram-Schmidt process for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance is obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared with the Orthogonal Subspace Projection and Least Squares Error algorithms, this method does not need matrix inversion, which is computationally costly and hard to implement in hardware. It completes the orthogonalization process with repeated vector operations, making it easy to apply in both parallel computation and hardware. The soundness of the algorithm is shown by its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity, compared with that of the other two algorithms, is the lowest of the three. Finally, experimental results on synthetic and real images are provided, giving further evidence of the effectiveness of the method.
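
    The projection step described above reduces to Gram-Schmidt orthogonalization plus dot products; a NumPy sketch (exact for two endmembers; for more, the other endmembers should first be orthonormalized):

        import numpy as np

        def ovp_abundances(pixel, endmembers):
            # For each endmember, orthogonalize it against the others, then
            # take the ratio of the pixel's projection onto that residual
            # vector to the residual's own squared length.
            m = len(endmembers)
            abundances = []
            for k in range(m):
                u = endmembers[k].astype(float)
                for j in range(m):
                    if j != k:
                        q = endmembers[j] / np.linalg.norm(endmembers[j])
                        u = u - (u @ q) * q
                abundances.append((pixel @ u) / (u @ u))
            return np.array(abundances)

        E = np.array([[1.0, 0.2, 0.0], [0.1, 1.0, 0.3]])   # two endmembers
        x = 0.6 * E[0] + 0.4 * E[1]                        # synthetic mixed pixel
        print(ovp_abundances(x, E))                        # approx [0.6, 0.4]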

  14. A New Segment Building Algorithm for the Cathode Strip Chambers in the CMS Experiment

    Directory of Open Access Journals (Sweden)

    Golutvin I.

    2016-01-01

    Full Text Available A new segment building algorithm for the Cathode Strip Chambers in the CMS experiment is presented. A detailed description of the new algorithm is given along with a comparison with the algorithm used in the CMS software. The new segment builder was tested with different Monte-Carlo data samples. The new algorithm is meant to be robust and effective for hard muons and the higher luminosity that is expected in the future at the LHC.

  15. Agnostic Active Learning Without Constraints

    OpenAIRE

    Beygelzimer, Alina; Hsu, Daniel; Langford, John; Zhang, Tong

    2010-01-01

    We present and analyze an agnostic active learning algorithm that works without keeping a version space. This is unlike all previous approaches where a restricted set of candidate hypotheses is maintained throughout learning, and only hypotheses from this set are ever returned. By avoiding this version space approach, our algorithm sheds the computational burden and brittleness associated with maintaining version spaces, yet still allows for substantial improvements over supervised learning f...

  16. System engineering approach to GPM retrieval algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rose, C. R. (Chris R.); Chandrasekar, V.

    2004-01-01

    System engineering principles and methods are very useful in large-scale complex systems for developing engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system, and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems, and they also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength SRT-based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be used successfully without the SRT; it uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both N0 and D0 at each range bin. More recently, Liao (2004) proposed a solution to the D0 ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting-layer model based on stratified spheres. With the N0 and D0

  17. Engineering application of in-core fuel management optimization code with CSA algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zhihong; Hu, Yongming [INET, Tsinghua university, Beijing 100084 (China)

    2009-06-15

    PWR in-core loading (reloading) pattern optimization is a complex combinatorial problem. An excellent fuel management optimization code can greatly improve the efficiency of core reloading design and bring economic and safety benefits. Today many optimization codes based on experience or search algorithms (such as SA, GA, ANN, ACO) have been developed, but improving their search efficiency and engineering usability still requires further research. CSA (Characteristic Statistic Algorithm) is a highly efficient global optimization algorithm developed by our team; its performance has been demonstrated on many problems (such as Traveling Salesman Problems). The idea of CSA is to guide the search direction using the statistical distribution of characteristic values. This algorithm is well suited to fuel management optimization. An optimization code using CSA has been developed and applied to many core models. The research in this paper improves the engineering usability of the CSA code according to actual engineering requirements. Many new improvements have been completed in this code, such as: 1. Considering the asymmetry of burn-up within an assembly, the rotation of each assembly is treated as a new optimization variable. 2. The worth of control rods must satisfy the given constraint, so corresponding modifications are added to the optimization code. 3. To deal with the combination of alternate cycles, multi-cycle optimization is considered. 4. To confirm the accuracy of the optimization results, extensive verification of the physics calculation module in this code has been performed, and the parameters of the optimization schemes are checked with the SCIENCE code. The improved optimization code with CSA has been used at the Qinshan nuclear plant in China. The reloadings of cycles 7, 8 and 9 (12 months, no burnable poisons) and the 18-month equilibrium cycle reloading (with burnable poisons) were optimized. Finally, many optimized schemes are found by the CSA code

  18. Complex technique for materials hardness measurement

    Energy Technology Data Exchange (ETDEWEB)

    Krashchenko, V P; Oksametnaya, O B

    1984-01-01

    Existing methods for measuring material hardness in national and foreign practice are reviewed. The need is noted to improve hardness measurement techniques over a wide temperature range and to ensure controlled load variation during indentation, continuity of imprint application, smooth temperature variation along the sample length, and deformation-rate control.

  19. Hard scattering and a diffractive trigger

    International Nuclear Information System (INIS)

    Berger, E.L.; Collins, J.C.; Soper, D.E.; Sterman, G.

    1986-02-01

    Conclusions concerning the properties of hard scattering in diffractively produced systems are summarized. One motivation for studying diffractive hard scattering is to investigate the interface between Regge theory and perturbative QCD. Another is to see whether diffractive triggering can result in an improvement in the signal-to-background ratio of measurements of production of very heavy quarks. 5 refs

  20. ERRATUM: Work smart, wear your hard hat

    CERN Multimedia

    2003-01-01

    An error appeared in the article «Work smart, wear your hard hat» published in Weekly Bulletin 27/2003, page 5. The impact which pierced a hole in the hard hat worn by Gerd Fetchenhauer was the equivalent of a box weighing 5 kg and not 50 kg.

  1. 7 CFR 201.57 - Hard seeds.

    Science.gov (United States)

    2010-01-01

    Regulations, Germination Tests in the Administration of the Act, § 201.57 Hard seeds (2010-01-01): Seeds which remain hard at the end of the prescribed test, because they have not absorbed water due to an impermeable seed coat... percentage of germination. For flatpea, continue the swollen seeds in test for 14 days when germinating at 15...

  2. Exact and Heuristic Algorithms for Runway Scheduling

    Science.gov (United States)

    Malik, Waqar A.; Jung, Yoon C.

    2016-01-01

    This paper explores the Single Runway Scheduling (SRS) problem with arrivals, departures, and crossing aircraft on the airport surface. Constraints for wake vortex separations, departure area navigation separations and departure time window restrictions are explicitly considered. The main objective of this research is to develop exact and heuristic based algorithms that can be used in real-time decision support tools for Air Traffic Control Tower (ATCT) controllers. The paper provides a multi-objective dynamic programming (DP) based algorithm that finds the exact solution to the SRS problem, but may prove unusable for application in a real-time environment due to large computation times for moderate sized problems. We next propose a second algorithm that uses heuristics to restrict the search space for the DP based algorithm. A third algorithm based on a combination of insertion and local search (ILS) heuristics is then presented. Simulations conducted for the east side of Dallas/Fort Worth International Airport allow comparison of the three proposed algorithms and indicate that the ILS algorithm performs favorably in its ability to find efficient solutions and in its computation times.
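
    The flavour of the ILS heuristic can be illustrated with a toy single-runway sequencer over pairwise wake-separation times (the separation values, wake classes and objective below are invented for illustration; the paper's model also handles crossings and time windows):

        import itertools

        SEP = {("H", "H"): 90, ("H", "S"): 120, ("S", "H"): 60, ("S", "S"): 60}

        def makespan(seq):
            # Total time from first to last operation, given pairwise separations.
            return sum(SEP[(a, b)] for a, b in zip(seq, seq[1:]))

        def insertion(aircraft):
            # Greedily insert each aircraft where it hurts the makespan least.
            seq = []
            for ac in aircraft:
                seq = min((seq[:i] + [ac] + seq[i:] for i in range(len(seq) + 1)),
                          key=makespan)
            return seq

        def local_search(seq):
            # Pairwise-swap improvement until no swap helps.
            improved = True
            while improved:
                improved = False
                for i, j in itertools.combinations(range(len(seq)), 2):
                    cand = seq[:]
                    cand[i], cand[j] = cand[j], cand[i]
                    if makespan(cand) < makespan(seq):
                        seq, improved = cand, True
            return seq

        fleet = ["H", "S", "H", "S", "S"]   # heavy / small wake classes
        seq = local_search(insertion(fleet))
        print(seq, makespan(seq))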

  3. Variable depth recursion algorithm for leaf sequencing

    International Nuclear Information System (INIS)

    Siochi, R. Alfredo C.

    2007-01-01

    The processes of extraction and sweep are basic segmentation steps that are used in leaf sequencing algorithms. A modified version of a commercial leaf sequencer changed the way that the extracts are selected and expanded the search space, but the modification maintained the basic search paradigm of evaluating multiple solutions, each one consisting of up to 12 extracts and a sweep sequence. While it generated the best solutions compared to other published algorithms, it used more computation time. A new, faster algorithm selects one extract at a time but calls itself as an evaluation function a user-specified number of times, after which it uses the bidirectional sweeping window algorithm as the final evaluation function. To achieve a performance comparable to that of the modified commercial leaf sequencer, 2-3 calls were needed, and in all test cases there were only slight improvements beyond two calls. For the 13 clinical test maps, computation speeds improved by a factor between 12 and 43, depending on the constraints, namely the ability to interdigitate and the avoidance of the tongue-and-groove underdose. The new algorithm was compared to the original and modified versions of the commercial leaf sequencer. It was also compared to other published algorithms on 1400 random 15x15 test maps with 3-16 intensity levels. In every single case the new algorithm provided the best solution

  4. Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation

    Science.gov (United States)

    Simon, Dan; Simon, Donald L.

    2005-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, the incorporation of inequality constraints poses some risk to estimation accuracy, since the unconstrained Kalman filter is theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
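
    The confidence-weighted idea can be sketched as a convex combination of the unconstrained estimate and its projection onto the constraint set, with the weight driven by how well the innovations match their expected covariance (a schematic sketch; the paper's confidence measure and engine model are more detailed than this):

        import numpy as np

        def blend_constrained(x_unc, lower, upper, residual, S):
            # x_unc: unconstrained Kalman estimate; [lower, upper]: known
            # physical bounds; residual, S: innovation and its covariance.
            # Nominal residuals -> trust the unconstrained filter.
            x_proj = np.clip(x_unc, lower, upper)        # project onto the box
            nis = float(residual @ np.linalg.solve(S, residual))
            dof = len(residual)
            confidence = np.exp(-max(0.0, nis - dof) / dof)
            return confidence * x_unc + (1.0 - confidence) * x_proj

        x = np.array([1.05, 0.92])          # e.g. health parameters near 1.0
        res, S = np.array([0.3, -0.2]), np.diag([0.05, 0.05])
        print(blend_constrained(x, 0.8, 1.0, res, S))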

  5. A maximum feasible subset algorithm with application to radiation therapy

    DEFF Research Database (Denmark)

    Sadegh, Payman

    1999-01-01

    inequalities. Special classes of this problem are of interest in a variety of areas such as pattern recognition, machine learning, operations research, and medical treatment planning. This problem is generally solvable in exponential time. A heuristic polynomial-time algorithm is presented in this paper. ... The algorithm relies on an iterative constraint removal procedure where constraints are eliminated from a set proposed by solutions to minmax linear programs. The method is illustrated by a simulated example of a linear system with double-sided bounds and a case from the area of radiation therapy.
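
    A simplified rendering of the iterative constraint-removal idea, using scipy.optimize.linprog for the minmax linear programs (the stopping rule and removal choice below are illustrative simplifications):

        import numpy as np
        from scipy.optimize import linprog

        def max_feasible_subset(A, b, tol=1e-9):
            # Greedily drop constraints until every remaining row of A x <= b holds.
            active = list(range(len(b)))
            n = A.shape[1]
            while active:
                Aa, ba = A[active], b[active]
                # Minmax LP in (x, t): minimize t subject to Aa x - t <= ba, t >= 0.
                res = linprog(c=[0.0] * n + [1.0],
                              A_ub=np.hstack([Aa, -np.ones((len(active), 1))]),
                              b_ub=ba,
                              bounds=[(None, None)] * n + [(0.0, None)])
                x, t = res.x[:n], res.x[n]
                if t <= tol:
                    return active, x        # remaining constraints all satisfied
                active.pop(int(np.argmax(Aa @ x - ba)))   # drop most violated row
            return [], None

        A = np.array([[1.0], [-1.0]])   # x <= 1 and -x <= -3 cannot both hold
        b = np.array([1.0, -3.0])
        print(max_feasible_subset(A, b))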

  6. Fixed Parameter Evolutionary Algorithms and Maximum Leaf Spanning Trees: A Matter of Mutations

    DEFF Research Database (Denmark)

    Kratsch, Stefan; Lehre, Per Kristian; Neumann, Frank

    2011-01-01

    Evolutionary algorithms have been shown to be very successful for a wide range of NP-hard combinatorial optimization problems. We investigate the NP-hard problem of computing a spanning tree that has a maximal number of leaves by evolutionary algorithms in the context of fixed parameter tractability. ... For two common mutation operators, we show that an operator related to spanning tree problems leads to an FPT running time, in contrast to a general mutation operator that does not have this property.
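
    A problem-specific mutation operator of the kind contrasted here with generic mutation can be sketched as an edge exchange on the current spanning tree (a generic sketch using networkx; not the operator analyzed in the paper):

        import random
        import networkx as nx

        def edge_exchange(G, T):
            # One mutation: add a random non-tree edge of G to the tree T,
            # then remove a random other edge from the cycle this closes.
            non_tree = [e for e in G.edges if not T.has_edge(*e)]
            if not non_tree:
                return T
            u, v = random.choice(non_tree)
            T.add_edge(u, v)
            cycle = nx.find_cycle(T)
            T.remove_edge(*random.choice([e for e in cycle if set(e) != {u, v}]))
            return T

        G = nx.petersen_graph()
        T = nx.minimum_spanning_tree(G)       # some initial spanning tree
        T = edge_exchange(G, T)
        leaves = [n for n in T if T.degree(n) == 1]
        print(len(leaves), "leaves")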

  7. Investigating the Effect of Voltage-Switching on Low-Energy Task Scheduling in Hard Real-Time Systems

    Science.gov (United States)

    2005-01-01

    We investigate the effect of voltage switching on task execution times and energy consumption for dual-speed hard real-time systems, and present a ... scheduling algorithm, applying it to two real-life task sets. Our results show that energy can be conserved in embedded real-time systems using energy-aware task scheduling. We also show that switching times have a significant effect on the energy consumed in hard real-time systems.

  8. Creativity from Constraints in Engineering Design

    DEFF Research Database (Denmark)

    Onarheim, Balder

    2012-01-01

    This paper investigates the role of constraints in limiting and enhancing creativity in engineering design. Based on a review of literature relating constraints to creativity, the paper presents a longitudinal participatory study from Coloplast A/S, a major international producer of disposable...... and ownership of formal constraints played a crucial role in defining their influence on creativity – along with the tacit constraints held by the designers. The designers were found to be highly constraint focused, and four main creative strategies for constraint manipulation were observed: blackboxing...

  9. Thermal spray coatings replace hard chrome

    International Nuclear Information System (INIS)

    Schroeder, M.; Unger, R.

    1997-01-01

    Hard chrome plating provides good wear and erosion resistance, as well as good corrosion protection and fine surface finishes. Until a few years ago, it could also be applied at a reasonable cost. However, because of the many environmental and financial sanctions that have been imposed on the process over the past several years, cost has been on a consistent upward trend, and is projected to continue to escalate. Therefore, it is very important to find a coating or a process that offers the same characteristics as hard chrome plating, but without the consequent risks. This article lists the benefits and limitations of hard chrome plating, and describes the performance of two thermal spray coatings (tungsten carbide and chromium carbide) that compared favorably with hard chrome plating in a series of tests. It also lists three criteria to determine whether plasma spray or hard chrome plating should be selected

  10. Correlating particle hardness with powder compaction performance.

    Science.gov (United States)

    Cao, Xiaoping; Morganti, Mikayla; Hancock, Bruno C; Masterson, Victoria M

    2010-10-01

    Assessing the particle mechanical properties of pharmaceutical materials quickly and with little material can be very important in the early stages of pharmaceutical research. In this study, a wide range of pharmaceutical materials were studied using atomic force microscopy (AFM) nanoindentation, providing a significant amount of particle hardness and elastic modulus data. Moreover, the powder compact mechanical properties of these materials were investigated in order to build a correlation between particle hardness and powder compaction performance. It was found that materials with very low or high particle hardness most likely exhibit poor compaction performance, while materials with medium particle hardness usually have good compaction behavior. Additionally, the results from this study enriched Hiestand's special case concept on particle hardness and powder compaction performance. This study suggests that AFM nanoindentation can help to screen the mechanical properties of pharmaceutical materials at early development stages of pharmaceutical research.

  11. Optimal Design of Composite Structures Under Manufacturing Constraints

    DEFF Research Database (Denmark)

    Marmaras, Konstantinos

    This thesis considers discrete multi material and thickness optimization of laminated composite structures including local failure criteria and manufacturing constraints. Our models closely follow an immediate extension of the Discrete Material Optimization scheme, which allows simultaneous... mixed integer 0–1 programming problems. The manufacturing constraints have been treated by developing explicit models with favorable properties. In this thesis we have developed and implemented special purpose global optimization methods and heuristic techniques for solving this class of problems... algorithms to perform the global optimization. The efficiency of the proposed models is examined on a set of well-defined discrete multi material and thickness optimization problems originating from the literature. The inclusion of manufacturing limitations along with structural considerations in the early...

  12. Monocular Visual Odometry Based on Trifocal Tensor Constraint

    Science.gov (United States)

    Chen, Y. J.; Yang, G. L.; Jiang, Y. X.; Liu, X. Y.

    2018-02-01

    For the problem of real-time precise localization on urban streets, a monocular visual odometry method based on extended Kalman fusion of optical-flow tracking and a trifocal tensor constraint is proposed. To diminish the influence of moving objects, such as pedestrians, we estimate the motion of the camera by extracting features on the ground, which improves the robustness of the system. The observation equation based on the trifocal tensor constraint is derived; together with the state transition equation it forms the Kalman filter. An extended Kalman filter is employed to cope with the nonlinear system. Experimental results demonstrate that, compared with Yu's 2-step EKF method, the algorithm is more accurate and meets the needs of real-time accurate localization in cities.

  13. Quantitative ptychographic reconstruction by applying a probe constraint

    Science.gov (United States)

    Reinhardt, J.; Schroer, C. G.

    2018-04-01

    The coherent scanning technique X-ray ptychography has become a routine tool for high-resolution imaging and nanoanalysis in various fields of research such as chemistry, biology and materials science. Often the ptychographic reconstruction results are analysed in order to yield absolute quantitative values for the object transmission and the illuminating probe function. In this work, we address a common ambiguity encountered in scaling the object transmission and probe intensity via the application of an additional constraint in the reconstruction algorithm. A ptychographic measurement of a model sample containing nanoparticles is used as a test data set against which to benchmark the reconstruction results depending on the type of constraint used. Achieving quantitatively absolute values for the reconstructed object transmission is essential for advanced investigation of samples that change over time, e.g. during in-situ experiments, or in general when different data sets are compared.

  14. A compendium of chameleon constraints

    International Nuclear Information System (INIS)

    Burrage, Clare; Sakstein, Jeremy

    2016-01-01

    The chameleon model is a scalar field theory with a screening mechanism that explains how a cosmologically relevant light scalar can avoid the constraints of intra-solar-system searches for fifth-forces. The chameleon is a popular dark energy candidate and also arises in f ( R ) theories of gravity. Whilst the chameleon is designed to avoid historical searches for fifth-forces, it is not unobservable, and much effort has gone into identifying the best observables and experiments to detect it. These results are not always presented for the same models or in the same language (a particular problem when comparing astrophysical and laboratory searches), making it difficult to understand what regions of parameter space remain. Here we present combined constraints on the chameleon model from astrophysical and laboratory searches for the first time and identify the remaining windows of parameter space. We discuss the implications for cosmological chameleon searches and future small-scale probes.

  15. A compendium of chameleon constraints

    Energy Technology Data Exchange (ETDEWEB)

    Burrage, Clare [School of Physics and Astronomy, University of Nottingham, Nottingham, NG7 2RD (United Kingdom); Sakstein, Jeremy, E-mail: clare.burrage@nottingham.ac.uk, E-mail: jeremy.sakstein@port.ac.uk [Center for Particle Cosmology, Department of Physics and Astronomy, University of Pennsylvania, 209 S. 33rd St., Philadelphia, PA 19104 (United States)

    2016-11-01

    The chameleon model is a scalar field theory with a screening mechanism that explains how a cosmologically relevant light scalar can avoid the constraints of intra-solar-system searches for fifth-forces. The chameleon is a popular dark energy candidate and also arises in f ( R ) theories of gravity. Whilst the chameleon is designed to avoid historical searches for fifth-forces, it is not unobservable, and much effort has gone into identifying the best observables and experiments to detect it. These results are not always presented for the same models or in the same language (a particular problem when comparing astrophysical and laboratory searches), making it difficult to understand what regions of parameter space remain. Here we present combined constraints on the chameleon model from astrophysical and laboratory searches for the first time and identify the remaining windows of parameter space. We discuss the implications for cosmological chameleon searches and future small-scale probes.

  16. Self-Imposed Creativity Constraints

    DEFF Research Database (Denmark)

    Biskjaer, Michael Mose

    2013-01-01

    This dissertation epitomizes three years of research guided by the research question: how can we conceptualize creative self-binding as a resource in art and design processes? Concretely, the dissertation seeks to offer insight into the puzzling observation that highly skilled creative practitioners sometimes freely and intentionally impose rigid rules, peculiar principles, and other kinds of creative obstructions on themselves as a means to spur momentum in the process and reach a distinctly original outcome. To investigate this the dissertation is composed of four papers (Part II) framed... of analysis. Informed by the insight that constraints both enable and restrain creative agency, the dissertation’s main contention is that creative self-binding may profitably be conceptualized as the exercise of self-imposed creativity constraints. Thus, the dissertation marks an analytical move from vague...

  17. Unitarity constraints on trimaximal mixing

    International Nuclear Information System (INIS)

    Kumar, Sanjeev

    2010-01-01

    When the neutrino mass eigenstate ν2 is trimaximally mixed, the mixing matrix is called trimaximal. The middle column of the trimaximal mixing matrix is identical to that of tribimaximal mixing, and the other two columns are subject to unitarity constraints. This corresponds to a mixing matrix with four independent parameters in the most general case. Apart from the two Majorana phases, the mixing matrix has only one free parameter in the CP-conserving limit. Trimaximality results in an interesting interplay between mixing angles and CP violation. A notion of maximal CP violation naturally emerges here: CP violation is maximal for maximal 2-3 mixing. Similarly, there is a natural constraint on the deviation from maximal 2-3 mixing, which takes its maximal value in the CP-conserving limit.
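
    For reference, the trimaximal (TM2) condition described above can be written out; up to phase conventions the middle column is fixed to the tribimaximal value (a standard statement, added here for illustration):

        \[
        |U_{e2}|^{2} = |U_{\mu 2}|^{2} = |U_{\tau 2}|^{2} = \frac{1}{3},
        \qquad
        U_{\mathrm{TM}} =
        \begin{pmatrix}
        U_{e1}    & \frac{1}{\sqrt{3}} & U_{e3} \\
        U_{\mu 1} & \frac{1}{\sqrt{3}} & U_{\mu 3} \\
        U_{\tau 1}& \frac{1}{\sqrt{3}} & U_{\tau 3}
        \end{pmatrix},
        \]

    with unitarity, \( \sum_{\alpha} U_{\alpha i} U_{\alpha j}^{*} = \delta_{ij} \), constraining the remaining two columns and leaving four independent parameters in the most general case.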

  18. Phase Transitions in Planning Problems: Design and Analysis of Parameterized Families of Hard Planning Problems

    Science.gov (United States)

    Hen, Itay; Rieffel, Eleanor G.; Do, Minh; Venturelli, Davide

    2014-01-01

    There are two common ways to evaluate algorithms: performance on benchmark problems derived from real applications and analysis of performance on parametrized families of problems. The two approaches complement each other, each having its advantages and disadvantages. The planning community has concentrated on the first approach, with few ways of generating parametrized families of hard problems known prior to this work. Our group's main interest is in comparing approaches to solving planning problems using a novel type of computational device - a quantum annealer - to existing state-of-the-art planning algorithms. Because only small-scale quantum annealers are available, we must compare on small problem sizes. Small problems are primarily useful for comparison only if they are instances of parametrized families of problems for which scaling analysis can be done. In this technical report, we discuss our approach to the generation of hard planning problems from classes of well-studied NP-complete problems that map naturally to planning problems or to aspects of planning problems that many practical planning problems share. These problem classes exhibit a phase transition between easy-to-solve and easy-to-show-unsolvable planning problems. The parametrized families of hard planning problems lie at the phase transition. The exponential scaling of hardness with problem size is apparent in these families even at very small problem sizes, thus enabling us to characterize even very small problems as hard. The families we developed will prove generally useful to the planning community in analyzing the performance of planning algorithms, providing a complementary approach to existing evaluation methods. We illustrate the hardness of these problems and their scaling with results on four state-of-the-art planners, observing significant differences between these planners on these problem families. Finally, we describe two general, and quite different, mappings of planning

  19. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  20. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms