WorldWideScience

Sample records for computationally hard problems

  1. The Role of the Goal in Solving Hard Computational Problems: Do People Really Optimize?

    Science.gov (United States)

    Carruthers, Sarah; Stege, Ulrike; Masson, Michael E. J.

    2018-01-01

    The role that the mental, or internal, representation plays when people are solving hard computational problems has largely been overlooked to date, despite the reality that this internal representation drives problem solving. In this work we investigate how performance on versions of two hard computational problems differs based on what internal…

  2. An overview on polynomial approximation of NP-hard problems

    Directory of Open Access Journals (Sweden)

    Paschos Vangelis Th.

    2009-01-01

    Full Text Available The fact that a polynomial-time algorithm is very unlikely to be devised for optimally solving NP-hard problems strongly motivates both researchers and practitioners to try to solve such problems heuristically, by making a trade-off between computational time and solution quality. In other words, heuristic computation consists of trying to find not the best solution but a solution which is 'close to' the optimal one in reasonable time. Among the classes of heuristic methods for NP-hard problems, polynomial approximation algorithms aim at solving a given NP-hard problem in polynomial time by computing feasible solutions that are, under some predefined criterion, as near to the optimal ones as possible. The polynomial approximation theory deals with the study of such algorithms. This survey first presents and analyzes polynomial approximation algorithms for some classical examples of NP-hard problems. Secondly, it shows how classical notions and tools of complexity theory, such as polynomial reductions, can be matched with polynomial approximation in order to devise structural results for NP-hard optimization problems. Finally, it presents a quick description of what is commonly called inapproximability results. Such results provide limits on the approximability of the problems tackled.
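
    As a concrete illustration of the kind of algorithm this survey covers (a standard textbook example, not code from the paper), the maximal-matching heuristic for Minimum Vertex Cover runs in polynomial time and always returns a cover at most twice the optimal size; a minimal sketch:

    ```python
    def vertex_cover_2approx(edges):
        """Maximal-matching heuristic for Minimum Vertex Cover.

        Runs in O(|E|) time; the returned cover is at most twice the
        optimum, a classical polynomial approximation guarantee.
        """
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                # Edge (u, v) is still uncovered: take both endpoints.
                cover.update((u, v))
        return cover

    # A 5-cycle: the optimum cover has 3 vertices, the heuristic returns 4.
    print(vertex_cover_2approx([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))
    ```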

  3. Phase Transitions in Planning Problems: Design and Analysis of Parameterized Families of Hard Planning Problems

    Science.gov (United States)

    Hen, Itay; Rieffel, Eleanor G.; Do, Minh; Venturelli, Davide

    2014-01-01

    There are two common ways to evaluate algorithms: performance on benchmark problems derived from real applications and analysis of performance on parametrized families of problems. The two approaches complement each other, each having its advantages and disadvantages. The planning community has concentrated on the first approach, with few ways of generating parametrized families of hard problems known prior to this work. Our group's main interest is in comparing approaches to solving planning problems using a novel type of computational device - a quantum annealer - to existing state-of-the-art planning algorithms. Because only small-scale quantum annealers are available, we must compare on small problem sizes. Small problems are primarily useful for comparison only if they are instances of parametrized families of problems for which scaling analysis can be done. In this technical report, we discuss our approach to the generation of hard planning problems from classes of well-studied NP-complete problems that map naturally to planning problems or to aspects of planning problems that many practical planning problems share. These problem classes exhibit a phase transition between easy-to-solve and easy-to-show-unsolvable planning problems. The parametrized families of hard planning problems lie at the phase transition. The exponential scaling of hardness with problem size is apparent in these families even at very small problem sizes, thus enabling us to characterize even very small problems as hard. The families we developed will prove generally useful to the planning community in analyzing the performance of planning algorithms, providing a complementary approach to existing evaluation methods. We illustrate the hardness of these problems and their scaling with results on four state-of-the-art planners, observing significant differences between these planners on these problem families. Finally, we describe two general, and quite different, mappings of planning
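
    The same phase-transition phenomenon is most familiar from random 3-SAT, where instances generated near a clause-to-variable ratio of roughly 4.27 are empirically the hardest. As illustrative background (not the report's planning-problem generators), a minimal sketch of such a generator:

    ```python
    import random

    def random_3sat(n_vars, ratio=4.27, seed=None):
        """Sample a random 3-SAT instance at a given clause/variable ratio.

        Near ratio ~4.27 instances sit at the sat/unsat phase transition,
        where solver running times empirically peak. A clause is a tuple
        of three nonzero ints; a negative int denotes a negated variable.
        """
        rng = random.Random(seed)
        clauses = []
        for _ in range(round(ratio * n_vars)):
            chosen = rng.sample(range(1, n_vars + 1), 3)  # distinct variables
            clauses.append(tuple(v if rng.random() < 0.5 else -v
                                 for v in chosen))
        return clauses

    hard_instance = random_3sat(100, seed=42)  # ~427 clauses, near-critical
    ```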

  4. Statistical physics of hard optimization problems

    International Nuclear Information System (INIS)

    Zdeborova, L.

    2009-01-01

    Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the non-deterministic polynomial (NP)-complete class are particularly difficult: it is believed that the number of operations required to minimize the cost function is, in the most difficult cases, exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this article is: How to recognize if an NP-complete constraint satisfaction problem is typically hard, and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named "locked" constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability.

  5. Statistical physics of hard optimization problems

    International Nuclear Information System (INIS)

    Zdeborova, L.

    2009-01-01

    Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the non-deterministic polynomial-complete class are particularly difficult: it is believed that the number of operations required to minimize the cost function is, in the most difficult cases, exponential in the system size. However, even in a non-deterministic polynomial-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this article is: How to recognize if a non-deterministic polynomial-complete constraint satisfaction problem is typically hard, and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named 'locked' constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability (Authors)

  6. Statistical physics of hard optimization problems

    Science.gov (United States)

    Zdeborová, Lenka

    2009-06-01

    Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the non-deterministic polynomial (NP)-complete class are particularly difficult: it is believed that the number of operations required to minimize the cost function is, in the most difficult cases, exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this article is: How to recognize if an NP-complete constraint satisfaction problem is typically hard, and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named "locked" constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability.
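
    As standard background (not code from the paper), the random ensembles studied here are simple to instantiate; for random graph coloring the instances are Erdős–Rényi graphs with a chosen average degree, a minimal sketch:

    ```python
    import itertools
    import random

    def random_coloring_instance(n, avg_degree, seed=None):
        """Erdős–Rényi graph G(n, p) with p = c/(n-1): the standard random
        ensemble for graph q-coloring studied with the cavity method.
        Returns the edge list; each edge is a 'not-equal' constraint."""
        rng = random.Random(seed)
        p = avg_degree / (n - 1)
        return [e for e in itertools.combinations(range(n), 2)
                if rng.random() < p]

    edges = random_coloring_instance(1000, avg_degree=4.6, seed=7)
    ```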

  7. Solving the 0/1 Knapsack Problem by a Biomolecular DNA Computer

    Directory of Open Access Journals (Sweden)

    Hassan Taghipour

    2013-01-01

    Full Text Available Solving some mathematical problems, such as NP-complete problems, with conventional silicon-based computers is problematic and can take a very long time. DNA computing is an alternative method of computing which uses DNA molecules for computing purposes. DNA computers have massive degrees of parallel processing capability. The massive parallel processing characteristic of DNA computers is of particular interest in solving NP-complete and hard combinatorial problems. NP-complete problems such as the knapsack problem and other hard combinatorial problems can easily be solved by DNA computers in a very short period of time compared to conventional silicon-based computers. Sticker-based DNA computing is one of the methods of DNA computing. In this paper, sticker-based DNA computing was used for solving the 0/1 knapsack problem. At first, a biomolecular solution space was constructed by using appropriate DNA memory complexes. Then, by the application of a sticker-based parallel algorithm using biological operations, the knapsack problem was resolved in polynomial time.
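
    For contrast with the biomolecular approach, the underlying problem can be stated and, for small capacities, solved exactly on a conventional computer with the textbook dynamic program below; this is standard background, not the paper's sticker algorithm:

    ```python
    def knapsack_01(values, weights, capacity):
        """Textbook dynamic program for the 0/1 knapsack problem.

        Pseudo-polynomial O(n * capacity) time: efficient for small
        capacities, but exponential in the bit-length of the input,
        which is why the general problem remains NP-complete.
        """
        best = [0] * (capacity + 1)
        for v, w in zip(values, weights):
            for c in range(capacity, w - 1, -1):  # descending: item used once
                best[c] = max(best[c], best[c - w] + v)
        return best[capacity]

    assert knapsack_01([60, 100, 120], [10, 20, 30], 50) == 220
    ```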

  8. Heuristics for NP-hard optimization problems - simpler is better!?

    Directory of Open Access Journals (Sweden)

    Žerovnik Janez

    2015-11-01

    Full Text Available We provide several examples showing that local search, the most basic of metaheuristics, may be a very competitive choice for solving computationally hard optimization problems. In addition, generating starting solutions with greedy heuristics should at least be considered as one very natural possibility. In this critical survey, the selected examples discussed include the traveling salesman problem, resource-constrained project scheduling, channel assignment, and the computation of bounds for the Shannon capacity.
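
    In the spirit of the survey's 'simpler is better' thesis, a complete local-search solver for the traveling salesman problem needs only a greedy starting tour plus 2-opt moves. A minimal sketch (standard textbook material, not code from the paper):

    ```python
    def nearest_neighbor(dist, start=0):
        """Greedy construction: repeatedly visit the nearest unvisited city."""
        n = len(dist)
        tour, unvisited = [start], set(range(n)) - {start}
        while unvisited:
            tour.append(min(unvisited, key=lambda c: dist[tour[-1]][c]))
            unvisited.remove(tour[-1])
        return tour

    def tour_length(tour, dist):
        n = len(tour)
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    def two_opt(tour, dist):
        """Local search: reverse segments as long as that shortens the tour."""
        n, improved = len(tour), True
        while improved:
            improved = False
            for i in range(n - 1):
                for j in range(i + 2, n):
                    if i == 0 and j == n - 1:
                        continue  # would reverse the whole tour: no change
                    a, b = tour[i], tour[i + 1]
                    c, d = tour[j], tour[(j + 1) % n]
                    # Replace edges (a,b),(c,d) by (a,c),(b,d) if shorter.
                    if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                        tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                        improved = True
        return tour

    dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
    print(tour_length(two_opt(nearest_neighbor(dist), dist), dist))  # 18
    ```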

  9. NP-hardness of the cluster minimization problem revisited

    Science.gov (United States)

    Adib, Artur B.

    2005-10-01

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.

  10. NP-hardness of the cluster minimization problem revisited

    International Nuclear Information System (INIS)

    Adib, Artur B

    2005-01-01

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.

  11. NP-hardness of the cluster minimization problem revisited

    Energy Technology Data Exchange (ETDEWEB)

    Adib, Artur B [Physics Department, Brown University, Providence, RI 02912 (United States)

    2005-10-07

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.

  12. A Comparison of Approaches for Solving Hard Graph-Theoretic Problems

    Science.gov (United States)

    2015-04-29

    In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard and the computational complexity makes this research approach difficult using a standard brute force approach on a…

  13. Constraint satisfaction problems with isolated solutions are hard

    International Nuclear Information System (INIS)

    Zdeborová, Lenka; Mézard, Marc

    2008-01-01

    We study the phase diagram and the algorithmic hardness of the random 'locked' constraint satisfaction problems, and compare them to the commonly studied 'non-locked' problems like satisfiability of Boolean formulae or graph coloring. The special property of the locked problems is that clusters of solutions are isolated points. This significantly simplifies the determination of the phase diagram, which makes the locked problems particularly appealing from the mathematical point of view. On the other hand, we show empirically that the clustered phase of these problems is extremely hard from the algorithmic point of view: the best known algorithms all fail to find solutions. Our results suggest that the easy/hard transition (for currently known algorithms) in the locked problems coincides with the clustering transition. These should thus be regarded as new benchmarks of really hard constraint satisfaction problems.

  14. Complex network problems in physics, computer science and biology

    Science.gov (United States)

    Cojocaru, Radu Ionut

    There is a close relation between physics and mathematics, and the exchange of ideas between these two sciences is well established. However, until a few years ago there was no such close relation between physics and computer science. Moreover, only recently have biologists started to use methods and tools from statistical physics to study the behavior of complex systems. In this thesis we concentrate on applying and analyzing several methods borrowed from computer science in biology, and we also use methods from statistical physics to solve hard problems from computer science. In recent years physicists have been interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although the system is complex, it is still possible to get predictions for its behavior, but these predictions are of a probabilistic nature. Spin glasses, lattice gases or the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that there are some problems which are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set, and Number Partitioning) and then discuss their mapping to problems from physics. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses. We extend this model to the study of a specific case of spin glass on the Bethe

  15. A further problem of the hard problem of consciousness | Gbenga ...

    African Journals Online (AJOL)

    Justifying this assertion is identified as the further problem of the hard problem of consciousness. This shows that assertions about phenomenal properties of mental experiences are wholly epistemological. Hence, the problem of explaining phenomenal properties of a mental state is not a metaphysical problem, and what is ...

  16. The hard problem of cooperation.

    Directory of Open Access Journals (Sweden)

    Kimmo Eriksson

    Full Text Available Based on individual variation in cooperative inclinations, we define the "hard problem of cooperation" as that of achieving high levels of cooperation in a group of non-cooperative types. Can the hard problem be solved by institutions with monitoring and sanctions? In a laboratory experiment we find that the answer is affirmative if the institution is imposed on the group but negative if development of the institution is left to the group to vote on. In the experiment, participants were divided into groups of either cooperative types or non-cooperative types depending on their behavior in a public goods game. In these homogeneous groups they repeatedly played a public goods game regulated by an institution that incorporated several of the key properties identified by Ostrom: operational rules, monitoring, rewards, punishments, and (in one condition) change of rules. When change of rules was not possible and punishments were set to be high, groups of both types generally abided by operational rules demanding high contributions to the common good, and thereby achieved high levels of payoffs. Under less severe rules, both types of groups did worse but non-cooperative types did worst. Thus, non-cooperative groups profited the most from being governed by an institution demanding high contributions and employing high punishments. Nevertheless, in a condition where change of rules through voting was made possible, development of the institution in this direction was more often voted down in groups of non-cooperative types. We discuss the relevance of the hard problem and fit our results into a bigger picture of institutional and individual determinants of cooperative behavior.

  17. The hard problem of cooperation.

    Science.gov (United States)

    Eriksson, Kimmo; Strimling, Pontus

    2012-01-01

    Based on individual variation in cooperative inclinations, we define the "hard problem of cooperation" as that of achieving high levels of cooperation in a group of non-cooperative types. Can the hard problem be solved by institutions with monitoring and sanctions? In a laboratory experiment we find that the answer is affirmative if the institution is imposed on the group but negative if development of the institution is left to the group to vote on. In the experiment, participants were divided into groups of either cooperative types or non-cooperative types depending on their behavior in a public goods game. In these homogeneous groups they repeatedly played a public goods game regulated by an institution that incorporated several of the key properties identified by Ostrom: operational rules, monitoring, rewards, punishments, and (in one condition) change of rules. When change of rules was not possible and punishments were set to be high, groups of both types generally abided by operational rules demanding high contributions to the common good, and thereby achieved high levels of payoffs. Under less severe rules, both types of groups did worse but non-cooperative types did worst. Thus, non-cooperative groups profited the most from being governed by an institution demanding high contributions and employing high punishments. Nevertheless, in a condition where change of rules through voting was made possible, development of the institution in this direction was more often voted down in groups of non-cooperative types. We discuss the relevance of the hard problem and fit our results into a bigger picture of institutional and individual determinants of cooperative behavior.

  18. Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures.

    Science.gov (United States)

    Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M

    2016-01-01

    The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on biological function can help us discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed by different researchers in order to foresee the three-dimensional arrangement of atoms of proteins from their sequences. However, the computational complexity of this problem makes the search for new models, novel algorithmic strategies and hardware platforms that provide solutions in a reasonable time frame mandatory. In this review we present past and recent trends in protein folding simulations from both perspectives: hardware and software. Of particular interest to us are both the use of inexact solutions to this computationally hard problem and the hardware platforms that have been used for running this kind of Soft Computing technique.

  19. Micro-computer cards for hard industrial environment

    Energy Technology Data Exchange (ETDEWEB)

    Breton, J M

    1984-03-15

    Approximately 60% of present or future distributed systems have, or will have, operational units installed in hard environments. These applications, which include pipeline and industrial motor control, robotics and process control, require systems that can easily be deployed in environments not designed for electronics. The development of card systems for this hard industrial environment, found for instance in the petrochemical industry and in mines, is described. The CMOS technology of the National Semiconductor CIM card system allows real-time microcomputer applications to be efficient and functional in hard industrial environments.

  20. An Introduction to Parallel Cluster Computing Using PVM for Computer Modeling and Simulation of Engineering Problems

    International Nuclear Information System (INIS)

    Spencer, VN

    2001-01-01

    An investigation has been conducted regarding the ability of clustered personal computers to improve the performance of executing software simulations for solving engineering problems. The power and utility of personal computers continue to grow exponentially through advances in computing capabilities such as newer microprocessors, advances in microchip technologies, electronic packaging, and cost-effective gigabyte-size hard drive capacity. Many engineering problems require significant computing power. Therefore, the computation has to be done by high-performance computer systems that cost millions of dollars and need gigabytes of memory to complete the task. Alternatively, it is feasible to provide adequate computing in the form of clustered personal computers. This method cuts the cost and size by linking (clustering) personal computers together across a network. Clusters also have the advantage that they can be used as stand-alone computers when they are not operating as a parallel computer. Parallel computing software to exploit clusters is available for computer operating systems like Unix, Windows NT, or Linux. This project concentrates on the use of Windows NT, and the Parallel Virtual Machine (PVM) system to solve an engineering dynamics problem in Fortran.

  1. Greedy and metaheuristics for the offline scheduling problem in grid computing

    DEFF Research Database (Denmark)

    Gamst, Mette

    In grid computing, a number of geographically distributed resources connected through a wide area network are utilized as one computational unit. The NP-hard offline scheduling problem in grid computing consists of assigning jobs to resources in advance. In this paper, five greedy heuristics and two.... All heuristics solve instances with up to 2000 jobs and 1000 resources, thus the results are useful both with respect to running times and to solution values....

  2. Deaf and hard of hearing students' problem-solving strategies with signed arithmetic story problems.

    Science.gov (United States)

    Pagliaro, Claudia M; Ansell, Ellen

    2012-01-01

    The use of problem-solving strategies by 59 deaf and hard of hearing children, grades K-3, was investigated. The children were asked to solve 9 arithmetic story problems presented to them in American Sign Language. The researchers found that while the children used the same general types of strategies that are used by hearing children (i.e., modeling, counting, and fact-based strategies), they showed an overwhelming use of counting strategies for all types of problems and at all ages. This difference may have its roots in language or instruction (or in both), and calls attention to the need for conceptual rather than procedural mathematics instruction for deaf and hard of hearing students.

  3. Rad-hard embedded computers for nuclear robotics

    International Nuclear Information System (INIS)

    Giraud, A.; Joffre, F.; Marceau, M.; Robiolle, M.; Brunet, J.P.; Mijuin, D.

    1993-01-01

    For the requirements of nuclear industries, it is necessary to use robots with embedded rad-hard electronics and a high level of safety. The computer developed for the French research program SYROCO is presented in this paper. (authors). 8 refs., 5 figs

  4. Computational search for rare-earth free hard-magnetic materials

    Science.gov (United States)

    Flores Livas, José A.; Sharma, Sangeeta; Dewhurst, John Kay; Gross, Eberhard; MagMat Team

    2015-03-01

    It is difficult to overstate the importance of hard magnets for human life in modern times; they enter every walk of our life, from medical equipment (NMR) to transport (trains, planes, cars, etc.) to electronic appliances (from household use to computers). All the known hard magnets in use today contain rare-earth elements, the extraction of which is expensive and environmentally harmful. Rare-earths are also instrumental in tipping the balance of the world economy, as most of them are mined in limited specific parts of the world. Hence it would be ideal to have a material with the characteristics of a hard magnet but without, or at least with a reduced amount of, rare-earths. This is the main goal of our work: the search for rare-earth-free magnets. To do so we employ a combination of density functional theory and crystal prediction methods. The quantities which define a hard magnet are the magnetic anisotropy energy (MAE) and the saturation magnetization (Ms); these are the quantities we maximize in the search for an ideal magnet. In my talk I will present details of the computational search algorithm together with some potential newly discovered rare-earth-free hard magnets. J.A.F.L. acknowledges financial support from the EU's 7th Framework Marie-Curie scholarship program within the "ExMaMa" Project (329386).

  5. Problems in education, employment and social integration of hard of hearing artists

    Directory of Open Access Journals (Sweden)

    Radić-Šestić Marina

    2013-01-01

    Full Text Available The aim of this research was to determine the problems in education (primary, secondary and undergraduate academic studies), employment and social integration of hard of hearing artists, based on a multiple case study. The sample consisted of 4 examinees of both genders, aged between 29 and 54, from the field of visual arts (a painter, a sculptor, a graphic designer, and an interior designer). The structured interview consisted of 30 questions testing three areas: the first area involved family, primary and secondary education; the second area was about the length of studying and socio-emotional problems of the examinees; the third area dealt with problems in employment and job satisfaction of our examinees. Research results indicate the existence of several problems which more or less reflect the success in education, employment and social integration of hard of hearing artists. One of the problems which can influence the development of language abilities, socio-emotional maturity, and better educational achievement of hard of hearing artists in general is the delay in diagnosing hearing impairments, amplification and auditory rehabilitation. Furthermore, parents of hard of hearing artists have difficulties in adjusting to their children's hearing impairments and ignore the language and culture of the Deaf, i.e. they tend to identify their children with the typically developing population. Another problem is the negative attitudes of teachers/professors/employers and typically developing peers/colleagues towards the inclusion of hard of hearing people into the regular education/employment system. Apart from that, unmodified instruction, course books, information, school and working areas further complicate the acquisition of knowledge and information, and the progress of hard of hearing people in education and profession.

  6. Computational Study on a PTAS for Planar Dominating Set Problem

    Directory of Open Access Journals (Sweden)

    Qian-Ping Gu

    2013-01-01

    Full Text Available The dominating set problem is a core NP-hard problem in combinatorial optimization and graph theory, and has many important applications. Baker [JACM 41, 1994] introduces a k-outerplanar graph decomposition-based framework for designing polynomial time approximation schemes (PTAS) for a class of NP-hard problems in planar graphs. It is mentioned that the framework can be applied to obtain an O(2^(ck) n)-time, where c is a constant, (1+1/k)-approximation algorithm for the planar dominating set problem. We show that the approximation ratio achieved by the mentioned application of the framework is not bounded by any constant for the planar dominating set problem. We modify the application of the framework to give a PTAS for the planar dominating set problem. With k-outerplanar graph decompositions, the modified PTAS has an approximation ratio (1 + 2/k). Using 2k-outerplanar graph decompositions, the modified PTAS achieves the approximation ratio (1 + 1/k) in O(2^(2ck) n) time. We report a computational study on the modified PTAS. Our results show that the modified PTAS is practical.
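
    For orientation, the usual baseline against which such PTAS ratios are judged is the simple greedy heuristic, which only guarantees a logarithmic approximation factor; a minimal sketch (standard background, not the modified PTAS itself):

    ```python
    def greedy_dominating_set(adj):
        """Greedy heuristic for Minimum Dominating Set.

        adj maps each vertex to its neighbor list. Repeatedly pick the
        vertex whose closed neighborhood covers the most still-undominated
        vertices: an O(log n)-approximation, the common baseline against
        which (1 + 1/k)-style PTAS ratios are compared.
        """
        undominated = set(adj)
        dom = set()
        while undominated:
            v = max(adj, key=lambda u: len(({u} | set(adj[u])) & undominated))
            dom.add(v)
            undominated -= {v} | set(adj[v])
        return dom

    # Star plus an isolated edge: {0} dominates 1..3; one endpoint covers 4-5.
    adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0], 4: [5], 5: [4]}
    print(greedy_dominating_set(adj))  # e.g. {0, 4}
    ```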

  7. Reply to "Comment on 'Calculations for the one-dimensional soft Coulomb problem and the hard Coulomb limit' ".

    Science.gov (United States)

    Gebremedhin, Daniel H; Weatherford, Charles A

    2015-02-01

    This is a response to the comment we received on our recent paper "Calculations for the one-dimensional soft Coulomb problem and the hard Coulomb limit." In that paper, we introduced a computational algorithm that is appropriate for solving stiff initial value problems, and which we applied to the one-dimensional time-independent Schrödinger equation with a soft Coulomb potential. We solved for the eigenpairs using a shooting method and hence turned it into an initial value problem. In particular, we examined the behavior of the eigenpairs as the softening parameter approached zero (hard Coulomb limit). The commenters question the existence of the ground state of the hard Coulomb potential, which we inferred by extrapolation of the softening parameter to zero. A key distinction between the commenters' approach and ours is that they consider only the half-line while we considered the entire x axis. Based on mathematical considerations, the commenters consider only a vanishing solution function at the origin, and they question our conclusion that the ground state of the hard Coulomb potential exists. The ground state we inferred resembles a δ(x), and hence it cannot even be addressed based on their argument. For the excited states, there is agreement with the fact that the particle is always excluded from the origin. Our discussion with regard to the symmetry of the excited states is an extrapolation of the soft Coulomb case and is further explained herein.

  8. The Downward Causality and the Hard Problem of Consciousness or Why Computer Programs Do not Work in the Dark

    Directory of Open Access Journals (Sweden)

    Boldachev Alexander

    2015-01-01

    Full Text Available Any low-level process, such as the sequence of chemical interactions in a living cell, muscle cell activity, processor commands or neuron interaction, is possible only if there is downward causality, only due to the uniting and controlling power of the higher level. Therefore, there is no special "hard problem of consciousness", i.e. the problem of the relation between ostensibly purely biological materiality and non-causal mentality; we have only the single philosophical problem of the relation between the upward and downward causalities, the problem of the interrelation between hierarchic levels of existence. It is necessary to conclude that the problem of the determinacy of chemical processes by biological ones and the problem of neuron interactions caused by consciousness are of one nature and must have one solution.

  9. Heterogeneous quantum computing for satellite constellation optimization: solving the weighted k-clique problem

    Science.gov (United States)

    Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III

    2018-04-01

    NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. These results come from experiments on a real-world problem expressed as the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.
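
    The paper's exact embedding is not reproduced here, but a generic QUBO penalty formulation of weighted k-clique in the style of Lucas's Ising formulations can be assembled as below; the penalty weights A and B and the dict-of-pairs output format are illustrative assumptions:

    ```python
    from collections import defaultdict

    def k_clique_qubo(n, edges, weights, k, A=10.0, B=10.0):
        """Assemble an illustrative QUBO for weighted k-clique.

        Binary x_v = 1 iff vertex v is selected. Energy terms:
          A * (k - sum_v x_v)^2              enforce exactly k vertices
          B * sum over non-edges of x_u*x_v  forbid non-adjacent pairs
          - sum_v w_v * x_v                  reward total vertex weight
        The constant A*k^2 is dropped. Returns {(i, j): coeff} with i <= j,
        a format accepted by common annealer SDK samplers.
        """
        E = {frozenset(e) for e in edges}
        Q = defaultdict(float)
        for v in range(n):
            # Linear terms from expanding the squared constraint, plus reward.
            Q[(v, v)] += A * (1 - 2 * k) - weights[v]
            for u in range(v + 1, n):
                Q[(v, u)] += 2 * A  # cross terms of the squared constraint
                if frozenset((v, u)) not in E:
                    Q[(v, u)] += B  # penalize picking a non-adjacent pair
        return dict(Q)

    # Triangle plus one extra vertex, k = 3: the minimum-energy assignment
    # selects the triangle {0, 1, 2}.
    Q = k_clique_qubo(4, [(0, 1), (1, 2), (0, 2), (2, 3)], [1, 1, 1, 5], k=3)
    ```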

  10. Exponential-Time Algorithms and Complexity of NP-Hard Graph Problems

    DEFF Research Database (Denmark)

    Taslaman, Nina Sofia

    NP-hard problems are deemed highly unlikely to be solvable in polynomial time. Still, one can often find algorithms that are substantially faster than brute force solutions. This thesis concerns such algorithms for problems from graph theory; techniques for constructing and improving this type of algorithms, as well as investigations into how far such improvements can get under reasonable assumptions. The first part is concerned with detection of cycles in graphs, especially parameterized generalizations of Hamiltonian cycles. A remarkably simple Monte Carlo algorithm is presented, and with high probability any found solution is shortest possible. Moreover, the algorithm can be used to find a cycle of given parity through the specified elements. The second part concerns the hardness of problems encoded as evaluations of the Tutte polynomial at some fixed point in the rational plane…

  11. Computation of disordered system from the first principles of classical mechanics and ℕℙ hard problem

    Energy Technology Data Exchange (ETDEWEB)

    Gevorkyan, A. S., E-mail: g-ashot@sci.am; Sahakyan, V. V. [National Academy of Sciences of the Republic of Armenia, Institute for Informatics and Automation Problems (Armenia)

    2017-03-15

    We study the classical 1D Heisenberg spin glasses in the framework of the nearest-neighbor model. Based on the Hamilton equations we obtain a system of recurrence equations which allows node-by-node calculation of a spin chain. It is shown that calculation from the first principles of classical mechanics leads to an ℕℙ-hard problem which, however, in the limit of statistical equilibrium can be solved by a ℙ algorithm. For the partition function of the ensemble a new representation is offered in the form of a one-dimensional integral over the spin-chains' energy distribution.

  12. Deaf and Hard of Hearing Students' Problem-Solving Strategies with Signed Arithmetic Story Problems

    Science.gov (United States)

    Pagliaro, Claudia M.; Ansell, Ellen

    2011-01-01

    The use of problem-solving strategies by 59 deaf and hard of hearing children, grades K-3, was investigated. The children were asked to solve 9 arithmetic story problems presented to them in American Sign Language. The researchers found that while the children used the same general types of strategies that are used by hearing children (i.e.,…

  13. Nuclear many-body problem with repulsive hard core interactions

    Energy Technology Data Exchange (ETDEWEB)

    Haddad, L M

    1965-07-01

    The nuclear many-body problem is considered using the perturbation-theoretic approach of Brueckner and collaborators. This approach is outlined with particular attention paid to the graphical representation of the terms in the perturbation expansion. The problem is transformed to centre-of-mass coordinates in configuration space and difficulties involved in ordinary methods of solution of the resulting equation are discussed. A new technique, the 'reference spectrum method', devised by Bethe, Brandow and Petschek in an attempt to simplify the numerical work, is presented. The basic equations are derived in this approximation and, considering the repulsive hard core part of the interaction only, the effective mass is calculated at high momentum (using the same energy spectrum for both 'particle' and 'hole' states). The result of 0.87m is in agreement with that of Bethe et al. A more complete treatment using the reference spectrum method is introduced and a self-consistent set of equations is established for the reference spectrum parameters, again for the case of hard core repulsions. (author)

  14. Structural qualia: a solution to the hard problem of consciousness.

    Science.gov (United States)

    Loorits, Kristjan

    2014-01-01

    The hard problem of consciousness has often been claimed to be unsolvable by the methods of traditional empirical sciences. It has been argued that all the objects of empirical sciences can be fully analyzed in structural terms but that consciousness is (or has) something over and above its structure. However, modern neuroscience has introduced a theoretical framework in which also the apparently non-structural aspects of consciousness, namely the so-called qualia or qualitative properties, can be analyzed in structural terms. That framework allows us to see qualia as something compositional, with internal structures that fully determine their qualitative nature. Moreover, those internal structures can be identified with certain neural patterns. Thus consciousness as a whole can be seen as a complex neural pattern that misperceives some of its own highly complex structural properties as monadic and qualitative. Such a neural pattern is analyzable in fully structural terms, and thereby the hard problem is solved.

  15. Structural qualia: a solution to the hard problem of consciousness

    Directory of Open Access Journals (Sweden)

    Kristjan eLoorits

    2014-03-01

    Full Text Available The hard problem of consciousness has often been claimed to be unsolvable by the methods of traditional empirical sciences. It has been argued that all the objects of empirical sciences can be fully analyzed in structural terms but that consciousness is (or has) something over and above its structure. However, modern neuroscience has introduced a theoretical framework in which also the apparently non-structural aspects of consciousness, namely the so-called qualia or qualitative properties, can be analyzed in structural terms. That framework allows us to see qualia as something compositional, with internal structures that fully determine their qualitative nature. Moreover, those internal structures can be identified with certain neural patterns. Thus consciousness as a whole can be seen as a complex neural pattern that misperceives some of its own highly complex structural properties as monadic and qualitative. Such a neural pattern is analyzable in fully structural terms, and thereby the hard problem is solved.

  16. Sampling from a polytope and hard-disk Monte Carlo

    International Nuclear Information System (INIS)

    Kapfer, Sebastian C; Krauth, Werner

    2013-01-01

    The hard-disk problem, the statics and the dynamics of equal two-dimensional hard spheres in a periodic box, has had a profound influence on statistical and computational physics. Markov-chain Monte Carlo and molecular dynamics were first discussed for this model. Here we reformulate hard-disk Monte Carlo algorithms in terms of another classic problem, namely the sampling from a polytope. Local Markov-chain Monte Carlo, as proposed by Metropolis et al. in 1953, appears as a sequence of random walks in high-dimensional polytopes, while the moves of the more powerful event-chain algorithm correspond to molecular dynamics evolution. We determine the convergence properties of Monte Carlo methods in a special invariant polytope associated with hard-disk configurations, and the implications for convergence of hard-disk sampling. Finally, we discuss parallelization strategies for event-chain Monte Carlo and present results for a multicore implementation
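
    The local Markov-chain step that the paper reformulates is easy to state: propose a small random displacement of one disk and accept it only if no overlap results, since for hard disks the Boltzmann factor is 0 or 1. A minimal sketch of this standard algorithm (not the event-chain variant):

    ```python
    import random

    def metropolis_sweep(disks, sigma, L, delta, rng=random):
        """One sweep of local hard-disk Metropolis moves (Metropolis et al., 1953).

        disks: list of (x, y) centers; sigma: disk diameter; L: box side;
        delta: maximum displacement per move. A move is accepted iff it
        creates no overlap in the periodic box.
        """
        def overlaps(p, others):
            for q in others:
                # Minimum-image distance in the periodic box.
                dx = (p[0] - q[0] + L / 2) % L - L / 2
                dy = (p[1] - q[1] + L / 2) % L - L / 2
                if dx * dx + dy * dy < sigma * sigma:
                    return True
            return False

        for i in range(len(disks)):
            x, y = disks[i]
            trial = ((x + rng.uniform(-delta, delta)) % L,
                     (y + rng.uniform(-delta, delta)) % L)
            if not overlaps(trial, disks[:i] + disks[i + 1:]):
                disks[i] = trial

    # Example: 16 unit-diameter disks on a grid in a box of side 8.
    disks = [(1.0 + 2 * i, 1.0 + 2 * j) for i in range(4) for j in range(4)]
    for _ in range(100):
        metropolis_sweep(disks, sigma=1.0, L=8.0, delta=0.3)
    ```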

  17. Computational problems in engineering

    CERN Document Server

    Mladenov, Valeri

    2014-01-01

    This book provides readers with modern computational techniques for solving a variety of problems from electrical, mechanical, civil and chemical engineering. Mathematical methods are presented in a unified manner, so they can be applied consistently to problems in applied electromagnetics, strength of materials, fluid mechanics, heat and mass transfer, environmental engineering, biomedical engineering, signal processing, automatic control and more.   • Features contributions from distinguished researchers on significant aspects of current numerical methods and computational mathematics; • Presents actual results and innovative methods that provide numerical solutions, while minimizing computing times; • Includes new and advanced methods and modern variations of known techniques that can solve difficult scientific problems efficiently.

  18. New Unconditional Hardness Results for Dynamic and Online Problems

    DEFF Research Database (Denmark)

    Clifford, Raphaël; Jørgensen, Allan Grønlund; Larsen, Kasper Green

    2015-01-01

    Data summarization is an effective approach to dealing with the 'big data' problem. While data summarization problems have traditionally been studied in the streaming model, the focus is starting to shift to distributed models, as distributed/parallel computation seems to be the only viable way... to handle today's massive data sets. In this paper, we study ε-approximations, a classical data summary that, intuitively speaking, preserves approximately the density of the underlying data set over a certain range space. We consider the problem of computing ε-approximations for a data set which is held...

  19. Statistical physics of hard combinatorial optimization: Vertex cover problem

    Science.gov (United States)

    Zhao, Jin-Hua; Zhou, Hai-Jun

    2014-07-01

    Typical-case computation complexity is a research topic at the boundary of computer science, applied mathematics, and statistical physics. In the last twenty years, the replica-symmetry-breaking mean field theory of spin glasses and the associated message-passing algorithms have greatly deepened our understanding of typical-case computation complexity. In this paper, we use the vertex cover problem, a basic nondeterministic-polynomial (NP)-complete combinatorial optimization problem of wide application, as an example to introduce the statistical physical methods and algorithms. We do not go into the technical details but emphasize mainly the intuitive physical meanings of the message-passing equations. A reader unfamiliar with the field should nevertheless be able to understand, to a large extent, the physics behind the mean field approaches and to adapt the mean field methods to solving other optimization problems.

  20. Computer Game Play as an Imaginary Stage for Reading: Implicit Spatial Effects of Computer Games Embedded in Hard Copy Books

    Science.gov (United States)

    Smith, Glenn Gordon

    2012-01-01

    This study compared books with embedded computer games (via pentop computers with microdot paper and audio feedback) with regular books with maps, in terms of fifth graders' comprehension and retention of spatial details from stories. One group read a story in hard copy with embedded computer games, the other group read it in regular book format…

  1. The limits of quantum computers

    International Nuclear Information System (INIS)

    Aaronson, S.

    2008-01-01

    Future computers, which work with quantum bits, would indeed solve some special problems extremely fast, but for most problems they would hardly be superior to contemporary computers. This insight could manifest a new fundamental physical principle.

  2. Continuous-Variable Instantaneous Quantum Computing is Hard to Sample.

    Science.gov (United States)

    Douce, T; Markham, D; Kashefi, E; Diamanti, E; Coudreau, T; Milman, P; van Loock, P; Ferrini, G

    2017-02-17

    Instantaneous quantum computing is a subuniversal quantum complexity class, whose circuits have proven to be hard to simulate classically in the discrete-variable realm. We extend this proof to the continuous-variable (CV) domain by using squeezed states and homodyne detection, and by exploring the properties of postselected circuits. In order to treat postselection in CVs, we consider finitely resolved homodyne detectors, corresponding to a realistic scheme based on discrete probability distributions of the measurement outcomes. The unavoidable errors stemming from the use of finitely squeezed states are suppressed through a qubit-into-oscillator Gottesman-Kitaev-Preskill encoding of quantum information, which was previously shown to enable fault-tolerant CV quantum computation. Finally, we show that, in order to render postselected computational classes in CVs meaningful, a logarithmic scaling of the squeezing parameter with the circuit size is necessary, translating into a polynomial scaling of the input energy.

  3. Shutdown and degradation: Space computers for nuclear application, verification of radiation hardness. Final report

    International Nuclear Information System (INIS)

    Eichhorn, E.; Gerber, V.; Schreyer, P.

    1995-01-01

    (1) Employment of those radiation hard electronics which are already known in military and space applications. (2) The experience in space-flight shall be used to investigate nuclear technology areas, for example, by using space electronics to prove the range of applications in nuclear radiating environments. (3) Reproduction of a computer developed for telecommunication satellites; proof of radiation hardness by radiation tests. (4) At 328 Krad (Si) first failure of radiation tolerant devices with 100 Krad (Si) hardness guaranteed. (5) Using radiation hard devices of the same type you can expect applications at doses of greater than 1 Mrad (Si). Electronic systems applicable for radiation categories D, C and lower part of B for manipulators, vehicles, underwater robotics. (orig.) [de]

  4. On a hierarchy of Boolean functions hard to compute in constant depth

    Directory of Open Access Journals (Sweden)

    Anna Bernasconi

    2001-12-01

    Full Text Available Any attempt to find connections between mathematical properties and complexity has a strong relevance to the field of Complexity Theory. This is due to the lack of mathematical techniques to prove lower bounds for general models of computation. This work represents a step in this direction: we define a combinatorial property that makes Boolean functions "hard" to compute in constant depth and show how the harmonic analysis on the hypercube can be applied to derive new lower bounds on the size complexity of previously unclassified Boolean functions.

  5. Disposal of waste computer hard disk drive: data destruction and resources recycling.

    Science.gov (United States)

    Yan, Guoqing; Xue, Mianqiang; Xu, Zhenming

    2013-06-01

    An increasing quantity of discarded computers is accompanied by a sharp increase in the number of hard disk drives to be eliminated. A waste hard disk drive is a special form of waste electrical and electronic equipment because it holds large amounts of information that is closely connected with its user. Therefore, the treatment of waste hard disk drives is an urgent issue in terms of data security, environmental protection and sustainable development. In the present study the degaussing method was adopted to destroy the residual data on the waste hard disk drives, and the housing of the disks was used as an example to explore the coating removal process, which is the most important pretreatment for aluminium alloy recycling. The key operating points determined for degaussing were: (1) keep the platter parallel to the magnetic field direction; and (2) increasing the magnetic field intensity B and the action time t leads to a significant improvement in the degaussing effect. The coating removal experiment indicated that heating the waste hard disk drive housing at a temperature of 400 °C for 24 min was the optimum condition. A novel integrated technique for the treatment of waste hard disk drives is proposed herein. This technique offers the possibility of destroying residual data, recycling the recovered resources and disposing of the disks in an environmentally friendly manner.

  6. Evolutionary Hybrid Particle Swarm Optimization Algorithm for Solving NP-Hard No-Wait Flow Shop Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Laxmi A. Bewoor

    2017-10-01

    Full Text Available The no-wait flow shop is a flow shop in which the scheduling of jobs is continuous and simultaneous through all machines, without waiting between any consecutive machines. Scheduling a no-wait flow shop requires finding an appropriate sequence of jobs, which in turn reduces total processing time. The classical brute force method for exploring the possible schedules to improve the utilization of resources may become trapped in local optima, and this problem can hence be seen as a typical NP-hard combinatorial optimization problem that requires finding a near optimal solution with heuristic and metaheuristic techniques. This paper proposes an effective hybrid Particle Swarm Optimization (PSO) metaheuristic algorithm for solving no-wait flow shop scheduling problems with the objective of minimizing the total flow time of jobs. The Proposed Hybrid Particle Swarm Optimization (PHPSO) algorithm obtains a solution via the random key representation rule, converting the continuous position values of particles to a discrete job permutation. The proposed algorithm initializes the population efficiently with the Nawaz-Enscore-Ham (NEH) heuristic technique and uses an evolutionary search guided by the mechanism of PSO, as well as simulated annealing based on a local neighborhood search, to avoid getting stuck in local optima and to provide the appropriate balance of global exploration and local exploitation. Extensive computational experiments are carried out based on Taillard's benchmark suite. Computational results and comparisons with existing metaheuristics show that the PHPSO algorithm outperforms the existing methods in terms of search quality and robustness for the problem considered. The improvement in solution quality is confirmed by statistical tests of significance.
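
    The random-key rule mentioned in the abstract has a standard reading: sort the particle's continuous position vector and read the job permutation off the sorted order. A sketch under that interpretation (the paper's implementation details may differ):

    ```python
    def decode_random_keys(position):
        """Map a particle's continuous position vector to a job permutation:
        jobs are ordered by their key values, smallest key scheduled first.
        This is the standard random-key decoding rule."""
        return sorted(range(len(position)), key=lambda j: position[j])

    # Position [0.7, 0.1, 0.5] decodes to the permutation [1, 2, 0].
    assert decode_random_keys([0.7, 0.1, 0.5]) == [1, 2, 0]
    ```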

  7. Computer simulation of solid-liquid coexistence in binary hard sphere mixtures

    NARCIS (Netherlands)

    Kranendonk, W.G.T.; Frenkel, D.

    1991-01-01

    We present the results of a computer simulation study of the solid-liquid coexistence of a binary hard sphere mixture for diameter ratios in the range 0.85 ⩽ α ⩽ 1.00. For the solid phase we only consider substitutionally disordered FCC and HCP crystals. For 0.9425 < α < 1.00 we find a

  8. A new fast algorithm for solving the minimum spanning tree problem based on DNA molecules computation.

    Science.gov (United States)

    Wang, Zhaocai; Huang, Dongmei; Meng, Huajun; Tang, Chengpei

    2013-10-01

    The minimum spanning tree (MST) problem is to find a minimum-weight connected edge subset containing all the vertices of a given undirected graph. It is a vitally important problem in graph theory and applied mathematics, having numerous real-life applications. Moreover, in previous studies DNA molecular operations were usually used to solve NP-complete head-to-tail path search problems, and rarely for problems with multi-lateral path solutions, such as the minimum spanning tree problem. In this paper, we present a new fast DNA algorithm for solving the MST problem using DNA molecular operations. For an undirected graph with n vertices and m edges, we design flexible-length DNA strands representing the vertices and edges, take appropriate steps and obtain the solutions of the MST problem in a proper length range and O(3m+n) time complexity. We extend the application of DNA molecular operations and simultaneously simplify the complexity of the computation. Results of computer simulation experiments show that the proposed method updates some of the best known values in very short time and provides better solution accuracy than existing algorithms. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
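
    For comparison with the DNA-based method, the classical deterministic baseline is worth recalling: MST is solvable in polynomial time, for example by Kruskal's algorithm with union-find in O(m log m). A minimal sketch:

    ```python
    def kruskal_mst(n, edges):
        """Kruskal's algorithm with union-find: O(m log m) for n vertices
        and m weighted edges given as (weight, u, v) tuples."""
        parent = list(range(n))

        def find(x):  # path-compressing find
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        tree = []
        for w, u, v in sorted(edges):
            ru, rv = find(u), find(v)
            if ru != rv:  # joining two components never creates a cycle
                parent[ru] = rv
                tree.append((w, u, v))
        return tree

    # Triangle plus a pendant vertex: the spanning tree has n - 1 = 3 edges.
    assert len(kruskal_mst(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)])) == 3
    ```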

  9. Problems in accounting for the soft and hard components in transverse energy triggers

    International Nuclear Information System (INIS)

    Anjos, J.C.; Santoro, A.F.S.; Souza, M.H.G.; Escobar, C.O.

    1983-01-01

    It is argued that for a transverse energy trigger, the cancellation theorem of DeTar, Ellis and Landshoff is not valid. As a consequence, the problem of accounting for soft and hard components in this kind of trigger becomes complicated and no simple separation between them is expected. (Author) [pt

  10. Parallel metaheuristics in computational biology: an asynchronous cooperative enhanced scatter search method

    OpenAIRE

    Penas, David R.; González, Patricia; Egea, José A.; Banga, Julio R.; Doallo, Ramón

    2015-01-01

    Metaheuristics are gaining increased attention as efficient solvers for hard global optimization problems arising in bioinformatics and computational systems biology. Scatter Search (SS) is one of the recent outstanding algorithms in that class. However, its application to very hard problems, like those considering parameter estimation in dynamic models of systems biology, still results in excessive computation times. In order to reduce the computational cost of the SS and improve its success...

  11. Advances in bio-inspired computing for combinatorial optimization problems

    CERN Document Server

    Pintea, Camelia-Mihaela

    2013-01-01

    'Advances in Bio-inspired Combinatorial Optimization Problems' illustrates several recent bio-inspired efficient algorithms for solving NP-hard problems. Theoretical bio-inspired concepts and models, in particular for agents, ants and virtual robots, are described. Large-scale optimization problems, for example the Generalized Traveling Salesman Problem and the Railway Traveling Salesman Problem, are solved and their results are discussed. Some of the main concepts and models described in this book are: inner rule to guide ant search - a recent model in ant optimization, heterogeneous sensitive a

  12. Rad-hard embedded computers for nuclear robotics; Calculateurs durcis embarques pour la robotique nucleaire

    Energy Technology Data Exchange (ETDEWEB)

    Giraud, A; Joffre, F; Marceau, M; Robiolle, M; Brunet, J P; Mijuin, D

    1994-12-31

    To meet the requirements of the nuclear industry, it is necessary to use robots with embedded rad-hard electronics and a high level of safety. The computer developed for the French research program SYROCO is presented in this paper. (authors). 8 refs., 5 figs.

  13. Molecular computing towards a novel computing architecture for complex problem solving

    CERN Document Server

    Chang, Weng-Long

    2014-01-01

    This textbook introduces a concise approach to the design of molecular algorithms for students or researchers who are interested in dealing with complex problems. Through numerous examples and exercises, you will understand the main differences between molecular circuits and traditional digital circuits when manipulating the same problem, and you will also learn how to design a molecular algorithm for solving a problem from start to finish. The book starts with an introduction to the computational aspects of digital computers and molecular computing, data representation, molecular operations and number representation in molecular computing, and provides many molecular algorithms: to construct the parity generator and the parity checker of error-detection codes in digital communication, to encode integers of different formats, single precision and double precision floating-point numbers, to implement addition and subtraction of unsigned integers, to construct logic operations...

  14. Supervised learning from human performance at the computationally hard problem of optimal traffic signal control on a network of junctions.

    Science.gov (United States)

    Box, Simon

    2014-12-01

    Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signalized junctions are simulated. A computer game interface enables a human 'player' to control the traffic light settings at the junctions within the simulation. A supervised learning approach based on simple neural network classifiers is used to capture the human players' strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC with two well-established traffic-responsive control systems that are widely deployed in the developed world, and with a temporal-difference-learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and variance in delay. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable.
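
    The supervised-learning step can be illustrated with a generic classifier trained on logged human decisions. The sketch below uses scikit-learn and invented features (queue lengths per approach) purely as stand-ins; the paper's actual inputs, network model and game interface are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training data logged from a human player: each row holds
# queue lengths on a junction's four approaches; the label is the signal
# stage the player selected in that state.
rng = np.random.default_rng(0)
queues = rng.integers(0, 20, size=(500, 4))
stage = (queues[:, 0] + queues[:, 2] >           # toy 'human' rule:
         queues[:, 1] + queues[:, 3]).astype(int)  # serve the busier axis

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
clf.fit(queues, stage)

# The trained classifier now imitates the demonstrated strategy.
print(clf.predict([[12, 1, 9, 2]]))  # long north-south queues -> stage 1
```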

  15. Excavation Technology for Hard Rock - Problems and Prospects

    International Nuclear Information System (INIS)

    Gillani, S.T.A.; Butt, N.

    2009-01-01

    Civil engineering projects have greatly benefited from the technology of mechanical excavation of hard rock. The mining industry, on the other hand, is still searching for major breakthroughs to mechanize and then automate the winning of ore and the drivage of access tunnels in its metalliferous sector. The aim of this study is to extend the scope of drag bits for road headers in hard rock cutting. Various factors that can impose limitations on the potential applications of drag bits in hard rock mining are investigated and discussed, along with alternative technology options. (author)

  16. Monte Carlo computer simulation of sedimentation of charged hard spherocylinders

    International Nuclear Information System (INIS)

    Viveros-Méndez, P. X.; Aranda-Espinoza, S.; Gil-Villegas, Alejandro

    2014-01-01

    In this article we present an NVT Monte Carlo computer simulation study of sedimentation of an electroneutral mixture of oppositely charged hard spherocylinders (CHSC) with aspect ratio L/σ = 5, where L and σ are the length and diameter of the cylinder and hemispherical caps, respectively, for each particle. This system is an extension of the restricted primitive model for spherical particles, where L/σ = 0, and it is assumed that the ions are immersed in a structureless solvent, i.e., a continuum with dielectric constant D. The system consisted of N = 2000 particles, and the Wolf method was implemented to handle the coulombic interactions of the inhomogeneous system. Results are presented for different values of the strength ratio between the gravitational and electrostatic interactions, Γ = (mgσ)/(e²/Dσ), where m is the mass per particle, e is the electron's charge and g is the gravitational acceleration. A semi-infinite simulation cell was used with dimensions Lx ≈ Ly and Lz = 5Lx, where Lx, Ly, and Lz are the box dimensions in Cartesian coordinates, and the gravitational force acts along the z-direction. Sedimentation effects were studied by looking at every layer formed by the CHSC along the gravitational field. By increasing Γ, particles tend to get more packed at each layer and to arrange in local domains with orientational ordering along two perpendicular axes, a feature not observed in the uncharged system with the same hard-body geometry. This type of arrangement, known as a tetratic phase, has been observed in two-dimensional systems of hard rectangles and rounded hard squares. In this way, the coupling of gravitational and electric interactions in the CHSC system induces the arrangement of particles in layers, with the formation of quasi-two-dimensional tetratic phases near the surface

  17. Computational Complexity of Some Problems on Generalized Cellular Automata

    Directory of Open Access Journals (Sweden)

    P. G. Klyucharev

    2012-03-01

    Full Text Available We prove that the preimage problem for a generalized cellular automaton is NP-hard. The results of this work are important for supporting the security of ciphers based on cellular automata.
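
    To see why the preimage problem is demanding, note that the naive approach enumerates all 2^n predecessor configurations. A brute-force sketch for an elementary rule-90 automaton, used here as a simple stand-in for the generalized automata of the paper:

```python
from itertools import product

def step(cells):
    """One step of an elementary CA (rule 90: each cell becomes the XOR
    of its two neighbours, periodic boundary)."""
    n = len(cells)
    return tuple(cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n))

def preimages(target):
    """Brute-force preimage search: 2**n candidates for n cells."""
    n = len(target)
    return [x for x in product((0, 1), repeat=n) if step(x) == target]

print(preimages((0, 0, 0, 0)))  # all configurations mapping to the zero state
```

    For the 4-cell zero state this finds four preimages; the candidate space doubles with each added cell, the kind of blow-up the NP-hardness result says cannot be avoided in general.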

  18. MUSIC algorithm for imaging of a sound-hard arc in the limited-view inverse scattering problem

    Science.gov (United States)

    Park, Won-Kwang

    2017-07-01

    The MUltiple SIgnal Classification (MUSIC) algorithm for non-iterative imaging of a sound-hard arc in the limited-view inverse scattering problem is considered. In order to reveal the mathematical structure of MUSIC, we derive a relationship between MUSIC and an infinite series of Bessel functions of integer order. This structure enables us to examine some properties of MUSIC in the limited-view problem. Numerical simulations are performed to support the identified structure of MUSIC.

  19. Computing the Fréchet distance between folded polygons

    NARCIS (Netherlands)

    Cook IV, A.F.; Driemel, A.; Sherette, J.; Wenk, C.

    2015-01-01

    Computing the Fréchet distance for surfaces is a surprisingly hard problem and the only known polynomial-time algorithm is limited to computing it between flat surfaces. We study the problem of computing the Fréchet distance for a class of non-flat surfaces called folded polygons. We present a

  20. The Evolution of Computer Based Learning Software Design: Computer Assisted Teaching Unit Experience.

    Science.gov (United States)

    Blandford, A. E.; Smith, P. R.

    1986-01-01

    Describes the style of design of computer simulations developed by the Computer Assisted Teaching Unit at Queen Mary College, with reference to user interface, input and initialization, input data vetting, effective display screen use, graphical results presentation, and the need for hard copy. Procedures and problems relating to academic involvement are…

  1. Visual problems in young adults due to computer use.

    Science.gov (United States)

    Moschos, M M; Chatziralli, I P; Siasou, G; Papazisis, L

    2012-04-01

    Computer use can cause visual problems. The purpose of our study was to evaluate visual problems due to computer use in young adults. Participants in our study were 87 adults, 48 male and 39 female, with a mean age of 31.3 years (SD 7.6). All participants completed a questionnaire regarding visual problems detected after computer use. The mean daily use of computers was 3.2 hours (SD 2.7). 65.5 % of the participants complained of dry eye, mainly after more than 2.5 hours of computer use. 32 persons (36.8 %) had a foreign-body sensation in their eyes, while 15 participants (17.2 %) complained of blurred vision, which caused difficulties in driving, after 3.25 hours of continuous computer use. 10.3 % of the participants sought medical advice for their problem. There was a statistically significant correlation between the frequency of visual problems and the duration of computer use (p = 0.021). 79.3 % of the participants use artificial tears during or after long use of computers, so as not to feel any ocular discomfort. The main symptom after computer use in young adults was dry eye. All visual problems were associated with the duration of computer use. Artificial tears play an important role in the treatment of ocular discomfort after computer use. © Georg Thieme Verlag KG Stuttgart · New York.

  2. Polyomino Problems to Confuse Computers

    Science.gov (United States)

    Coffin, Stewart

    2009-01-01

    Computers are very good at solving certain types of combinatorial problems, such as fitting sets of polyomino pieces into square or rectangular trays of a given size. However, most puzzle-solving programs now in use assume orthogonal arrangements. When one departs from the usual square grid layout, complications arise. The author--using a computer,…

  3. The Consensus String Problem and the Complexity of Comparing Hidden Markov Models

    DEFF Research Database (Denmark)

    Lyngsø, Rune Bang; Pedersen, Christian Nørgaard Storm

    2002-01-01

    The basic theory of hidden Markov models was developed and applied to problems in speech recognition in the late 1960s, and has since then been applied to numerous problems, e.g. biological sequence analysis. Most applications of hidden Markov models are based on efficient algorithms for computing the probability of generating a given string, or computing the most likely path generating a given string. In this paper we consider the problem of computing the most likely string, or consensus string, generated by a given model, and its implications for the complexity of comparing hidden Markov models. We show that computing the consensus string, and approximating its probability within any constant factor, is NP-hard, and that the same holds for the closely related labeling problem for class hidden Markov models. Furthermore, we establish the NP-hardness of comparing two hidden Markov models under the L∞- and L1...
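
    The easy direction mentioned above, computing the probability that a model generates a given string, is the classical forward algorithm, sketched here for a two-state HMM with invented parameters. The consensus-string problem, by contrast, would require searching over all strings, which is what the paper proves NP-hard.

```python
import numpy as np

def forward_probability(obs, pi, A, B):
    """P(obs | HMM) by the forward algorithm.

    pi[i]   : initial probability of state i
    A[i, j] : transition probability from state i to state j
    B[i, k] : probability that state i emits symbol k
    """
    alpha = pi * B[:, obs[0]]
    for symbol in obs[1:]:
        alpha = (alpha @ A) * B[:, symbol]
    return alpha.sum()

# Illustrative two-state, two-symbol model
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.5], [0.1, 0.9]])
print(forward_probability([0, 1, 1], pi, A, B))
```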

  4. Computer jargon explained

    CERN Document Server

    Enticknap, Nicholas

    2014-01-01

    Computer Jargon Explained is a feature in Computer Weekly publications that discusses 68 of the most commonly used technical computing terms. The book explains what the terms mean and why they are important to computer professionals. The text also discusses how the terms relate to the trends and developments that are driving the information technology industry. Computer jargon irritates non-computer people and in turn causes problems for computer people. The technology and the industry are changing so rapidly that it is very hard even for professionals to keep updated. Computer people do not

  5. Classroom demonstration: Foucault's currents explored with a computer hard disc (HD)

    Directory of Open Access Journals (Sweden)

    Jorge Roberto Pimentel

    2008-09-01

    Full Text Available This paper makes an experimental exploration of Foucault's currents (eddy currents) through a rotor magnetically coupled to a computer hard disc (HD) that is no longer in use. The set-up makes it possible to illustrate electromagnetism classes in high schools in a stimulating way, by means of qualitative observations of the currents created as a consequence of the movement of an electric conductor in a region where a magnetic field exists.

  6. NP-hardness of decoding quantum error-correction codes

    Science.gov (United States)

    Hsieh, Min-Hsiu; Le Gall, François

    2011-05-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as for their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify decoding, since two different errors might not, and need not, be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for general quantum decoding problems, and it suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.
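
    For intuition about why decoding is hard, note that even in the classical setting minimum-weight decoding amounts to searching over error patterns of growing weight. A brute-force syndrome decoder for a toy classical linear code follows; it is purely illustrative, and the quantum degeneracy discussed in the paper is not modeled here.

```python
from itertools import combinations
import numpy as np

H = np.array([[1, 0, 1, 1, 0],   # parity-check matrix of a toy
              [0, 1, 1, 0, 1]])  # classical linear code (illustrative)

def decode(syndrome, n=5):
    """Return a minimum-weight error pattern e with H e^T = syndrome.

    Tries all C(n, w) patterns of weight w = 0, 1, 2, ...; exponential
    in the worst case, which is the essence of the hardness results.
    """
    for w in range(n + 1):
        for support in combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(support)] = 1
            if np.array_equal(H @ e % 2, syndrome):
                return e
    return None

print(decode(np.array([1, 1])))  # -> weight-1 error on bit 2
```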

  7. NP-hardness of decoding quantum error-correction codes

    International Nuclear Information System (INIS)

    Hsieh, Min-Hsiu; Le Gall, Francois

    2011-01-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as for their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify decoding, since two different errors might not, and need not, be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for general quantum decoding problems, and it suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  8. Artificial immune system and sheep flock algorithms for two-stage fixed-charge transportation problem

    DEFF Research Database (Denmark)

    Kannan, Devika; Govindan, Kannan; Soleimani, Hamed

    2014-01-01

    In this paper, we address a two-stage supply chain distribution planning problem with fixed charges. The focus of the paper is on developing efficient solution methodologies for the selected NP-hard problem. Owing to computational limitations, common exact and approximate solution approaches are unable to solve real-world instances of such NP-hard problems in a reasonable time, as they involve cumbersome computational steps in real-size cases. In order to solve the mixed integer linear programming model, we develop an artificial immune system and a sheep flock algorithm...

  9. Monte Carlo computer simulation of sedimentation of charged hard spherocylinders

    Energy Technology Data Exchange (ETDEWEB)

    Viveros-Méndez, P. X., E-mail: xviveros@fisica.uaz.edu.mx; Aranda-Espinoza, S. [Unidad Académica de Física, Universidad Autónoma de Zacatecas, Calzada Solidaridad esq. Paseo, La Bufa s/n, 98060 Zacatecas, Zacatecas, México (Mexico); Gil-Villegas, Alejandro [Departamento de Ingeniería Física, División de Ciencias e Ingenierías, Campus León, Universidad de Guanajuato, Loma del Bosque 103, Lomas del Campestre, 37150 León, Guanajuato, México (Mexico)

    2014-07-28

    In this article we present an NVT Monte Carlo computer simulation study of sedimentation of an electroneutral mixture of oppositely charged hard spherocylinders (CHSC) with aspect ratio L/σ = 5, where L and σ are the length and diameter of the cylinder and hemispherical caps, respectively, for each particle. This system is an extension of the restricted primitive model for spherical particles, where L/σ = 0, and it is assumed that the ions are immersed in a structureless solvent, i.e., a continuum with dielectric constant D. The system consisted of N = 2000 particles, and the Wolf method was implemented to handle the coulombic interactions of the inhomogeneous system. Results are presented for different values of the strength ratio between the gravitational and electrostatic interactions, Γ = (mgσ)/(e²/Dσ), where m is the mass per particle, e is the electron's charge and g is the gravitational acceleration. A semi-infinite simulation cell was used with dimensions Lx ≈ Ly and Lz = 5Lx, where Lx, Ly, and Lz are the box dimensions in Cartesian coordinates, and the gravitational force acts along the z-direction. Sedimentation effects were studied by looking at every layer formed by the CHSC along the gravitational field. By increasing Γ, particles tend to get more packed at each layer and to arrange in local domains with orientational ordering along two perpendicular axes, a feature not observed in the uncharged system with the same hard-body geometry. This type of arrangement, known as a tetratic phase, has been observed in two-dimensional systems of hard rectangles and rounded hard squares. In this way, the coupling of gravitational and electric interactions in the CHSC system induces the arrangement of particles in layers, with the formation of quasi-two-dimensional tetratic phases near the surface.

  10. Theoretical Computer Science

    DEFF Research Database (Denmark)

    2002-01-01

    The proceedings contain 8 papers from the Conference on Theoretical Computer Science. Topics discussed include: query by committee, linear separation and random walks; hardness results for neural network approximation problems; a geometric approach to leveraging weak learners; mind change...

  11. Singular problems in shell theory. Computing and asymptotics

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Palencia, Evariste [Institut Jean Le Rond d' Alembert, Paris (France); Millet, Olivier [La Rochelle Univ. (France). LEPTIAB; Bechet, Fabien [Metz Univ. (France). LPMM

    2010-07-01

    It is known that deformations of thin shells exhibit peculiarities such as propagation of singularities, edge and internal layers, piecewise quasi-inextensional deformations, sensitive problems and others, leading in most cases to numerical locking phenomena under several forms, and to very poor quality of computations for small relative thickness. Most of these phenomena have a local and often anisotropic character (elongated in some directions), so efficient numerical schemes should take them into consideration. This book deals with various topics in this context: general geometric formalism, analysis of singularities, numerical computation of thin shell problems, estimates for finite element approximation (including non-uniform and anisotropic meshes), mathematical considerations on boundary value problems in connection with sensitive problems encountered for very thin shells, and others. Most of the numerical computations presented here use an adaptive anisotropic mesh procedure, which allows a good computation of the physical peculiarities on the one hand, and the possibility of performing automatic computations (without a previous mathematical description of the singularities) on the other. The book is recommended for PhD students, postgraduates and researchers who want to improve their knowledge in shell theory, in particular in the areas addressed (analysis of singularities, numerical computation of thin and very thin shell problems, sensitive problems). The book need not be read continuously; the reader may refer directly to the chapters concerned. (orig.)

  12. Effectively Tackling Reinsurance Problems by Using Evolutionary and Swarm Intelligence Algorithms

    Directory of Open Access Journals (Sweden)

    Sancho Salcedo-Sanz

    2014-04-01

    Full Text Available This paper is focused on solving different hard optimization problems that arise in the field of insurance and, more specifically, in reinsurance problems. In this area, the complexity of the models and assumptions considered in the definition of the reinsurance rules and conditions produces hard black-box optimization problems (problems in which the objective function does not have an algebraic expression, but is the output of a system, usually a computer program), which must be solved in order to obtain the optimal output of the reinsurance. The application of traditional optimization approaches is not possible in this kind of mathematical problem, so new computational paradigms must be applied. In this paper, we show the performance of two evolutionary and swarm intelligence techniques (evolutionary programming and particle swarm optimization). We provide an analysis of three black-box optimization problems in reinsurance, where the proposed approaches exhibit excellent behavior, finding the optimal solution within a fraction of the computational cost used by inspection or enumeration methods.

  13. From animals to robots and back reflections on hard problems in the study of cognition a collection in honour of Aaron Sloman

    CERN Document Server

    Petters, Dean; Hogg, David

    2014-01-01

    Cognitive Science is a discipline that brings together research in natural and artificial systems, and this is clearly reflected in the diverse contributions to From Animals to Robots and Back: Reflections on Hard Problems in the Study of Cognition. In tribute to Aaron Sloman and his pioneering work in Cognitive Science and Artificial Intelligence, the editors have assembled a unique collection of cross-disciplinary papers that include work on: · intelligent robotics; · philosophy of cognitive science; · emotion research; · computational vision; · comparative psychology; and · human-computer interaction. Key themes, such as the importance of taking an architectural view in approaching cognition, run through the text. Drawing on the expertise of leading international researchers, contemporary debates in the study of natural and artificial cognition are addressed from complementary and contrasting perspectives, with key issues being outlined at various levels of abstraction. From Animals to Robots and Back:...

  14. A comparative approach for the investigation of biological information processing: An examination of the structure and function of computer hard drives and DNA

    Science.gov (United States)

    2010-01-01

    Background The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Methods Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Results Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS). Biological

  15. A comparative approach for the investigation of biological information processing: an examination of the structure and function of computer hard drives and DNA.

    Science.gov (United States)

    D'Onofrio, David J; An, Gary

    2010-01-21

    The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS). Biological systems do not have an

  16. A comparative approach for the investigation of biological information processing: An examination of the structure and function of computer hard drives and DNA

    Directory of Open Access Journals (Sweden)

    D'Onofrio David J

    2010-01-01

    Full Text Available Abstract Background The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Methods Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Results Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating

  17. Computational sieving applied to some classical number-theoretic problems

    NARCIS (Netherlands)

    H.J.J. te Riele (Herman)

    1998-01-01

    Many problems in computational number theory require the application of some sieve. Efficient implementation of these sieves on modern computers has extended our knowledge of these problems considerably. This is illustrated by three classical problems: the Goldbach conjecture, factoring

  18. Quantum elastic net and the traveling salesman problem

    International Nuclear Information System (INIS)

    Kostenko, B.F.; Pribis, J.; Yur'ev, M.Z.

    2009-01-01

    The theory of computer calculations strongly depends on the nature of the elements the computer is made of. Quantum interference allows one to formulate the Shor factorization algorithm, which turned out to be more effective than any algorithm written for classical computers. Similarly, quantum wave packet reduction allows one to devise the Grover search algorithm, which outperforms any classical one. In the present paper we argue that quantum incoherent tunneling can be used for the elaboration of new algorithms able to solve some NP-hard problems, such as the Traveling Salesman Problem, considered to be intractable in the classical theory of computer computations
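
    The classical counterpart of the tunneling idea is thermal hopping, i.e. simulated annealing. A minimal 2-opt annealer for a toy TSP instance follows; this is an illustrative sketch of the classical baseline, not the quantum algorithm proposed in the paper.

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

def anneal_tsp(dist, steps=20000, t0=2.0, cooling=0.9995):
    """Simulated annealing with 2-opt moves: thermal fluctuations let the
    search escape local minima, the effect tunneling aims to improve on."""
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)
    best, best_len, t = tour[:], tour_length(tour, dist), t0
    for _ in range(steps):
        i, j = sorted(random.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
        delta = tour_length(cand, dist) - tour_length(tour, dist)
        if delta < 0 or random.random() < math.exp(-delta / t):
            tour = cand
        t *= cooling
        if tour_length(tour, dist) < best_len:
            best, best_len = tour[:], tour_length(tour, dist)
    return best, best_len

# Four cities on a square: the optimal tour has length 4
dist = [[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]]
print(anneal_tsp(dist))
```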

  19. Security Problems in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Rola Motawie

    2016-12-01

    Full Text Available Cloud is a pool of computing resources which are distributed among cloud users. Cloud computing has many benefits, such as scalability, flexibility, cost savings, reliability, maintenance and mobile accessibility. Since cloud-computing technology is growing day by day, it comes with many security problems. Securing the data in the cloud environment is one of the most critical challenges and acts as a barrier when implementing the cloud. Cloud computing introduces many new concepts, such as resource sharing, multi-tenancy, and outsourcing, which create new challenges for the security community. In this work, we provide a comparative study of cloud computing privacy and security concerns. We identify and classify known security threats, cloud vulnerabilities, and attacks.

  20. [Problem list in computer-based patient records].

    Science.gov (United States)

    Ludwig, C A

    1997-01-14

    Computer-based clinical information systems are capable of effectively processing even large amounts of patient-related data. However, physicians depend on rapid access to summarized, clearly laid out data on the computer screen to inform themselves about a patient's current clinical situation. In introducing a clinical workplace system, we therefore transformed the problem list, which has been used successfully in clinical information management for decades, into an electronic equivalent and integrated it into the medical record. The table contains a concise overview of diagnoses and problems as well as related findings. Graphical information can also be integrated into the table, and additional space is provided for a summary of planned examinations or interventions. The digital form of the problem list makes it possible to use the entire list or selected text elements for generating medical documents. Diagnostic terms for medical reports are transferred automatically to the corresponding documents. Computer technology has an immense potential for the further development of problem list concepts. With multimedia applications, sound and images will be included in the problem list. Through hyperlinks, the problem list could become a central information board and table of contents of the medical record, thus serving as the starting point for database searches and supporting the user in navigating through the medical record.

  1. A Cognitive Model for Problem Solving in Computer Science

    Science.gov (United States)

    Parham, Jennifer R.

    2009-01-01

    According to industry representatives, computer science education needs to emphasize the processes involved in solving computing problems rather than their solutions. Most of the current assessment tools used by universities and computer science departments analyze student answers to problems rather than investigating the processes involved in…

  2. Unraveling Quantum Annealers using Classical Hardness

    Science.gov (United States)

    Martin-Mayor, Victor; Hen, Itay

    2015-01-01

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as ‘D-Wave’ chips, promise to solve practical optimization problems potentially faster than conventional ‘classical’ computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism, and have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize ‘temperature chaos’ as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: a superior performance of quantum annealers over classical algorithms on these problems may point to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip. PMID:26483257

  3. A neural algorithm for a fundamental computing problem.

    Science.gov (United States)

    Dasgupta, Sanjoy; Stevens, Charles F; Navlakha, Saket

    2017-11-10

    Similarity search, for example identifying similar images in a database or similar documents on the web, is a fundamental computing problem faced by large-scale information retrieval systems. We discovered that the fruit fly olfactory circuit solves this problem with a variant of a computer science algorithm (called locality-sensitive hashing). The fly circuit assigns similar neural activity patterns to similar odors, so that behaviors learned from one odor can be applied when a similar odor is experienced. The fly algorithm, however, uses three computational strategies that depart from traditional approaches. These strategies can be translated to improve the performance of computational similarity searches. This perspective helps illuminate the logic supporting an important sensory function and provides a conceptually new algorithm for solving a fundamental computational problem. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
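
    The fly's variant of locality-sensitive hashing can be sketched in a few lines: expand the input with a sparse random projection and keep only the most active outputs (a winner-take-all step). Dimensions and parameters below are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def fly_hash(x, proj, k=16):
    """Fly-inspired LSH: expand with a sparse 0/1 random projection,
    then keep only the k most active outputs (winner-take-all)."""
    activity = proj @ x
    tag = np.zeros_like(activity)
    tag[np.argsort(activity)[-k:]] = 1  # binary tag of the top-k units
    return tag

d, m = 50, 2000                                    # input dim -> expanded dim
proj = (rng.random((m, d)) < 0.1).astype(float)    # sparse random projection

x = rng.random(d)
y = x + 0.01 * rng.random(d)                       # a similar 'odor'
hx, hy = fly_hash(x, proj), fly_hash(y, proj)
print(int((hx * hy).sum()), "of 16 active units shared")
```

    Similar inputs activate largely overlapping top-k sets, so the binary tags can be compared cheaply in a similarity search.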

  4. Fundamental measure theory for hard-sphere mixtures: a review

    International Nuclear Information System (INIS)

    Roth, Roland

    2010-01-01

    Hard-sphere systems are one of the fundamental model systems of statistical physics and represent an important reference system for molecular or colloidal systems with soft repulsive or attractive interactions in addition to hard-core repulsion at short distances. Density functional theory for classical systems, as one of the core theoretical approaches of statistical physics of fluids and solids, has to be able to treat such an important system successfully and accurately. Fundamental measure theory is up to date the most successful and most accurate density functional theory for hard-sphere mixtures. Since its introduction fundamental measure theory has been applied to many problems, tested against computer simulations, and further developed in many respects. The literature on fundamental measure theory is already large and is growing fast. This review aims to provide a starting point for readers new to fundamental measure theory and an overview of important developments. (topical review)

  5. Solving satisfiability problems by the ground-state quantum computer

    International Nuclear Information System (INIS)

    Mao Wenjin

    2005-01-01

    A quantum algorithm is proposed to solve satisfiability (SAT) problems with the ground-state quantum computer. The scaling of the energy gap of the ground-state quantum computer is analyzed for the 3-bit exact cover problem. The time cost of this algorithm on general SAT problems is discussed

  6. Recycling potential of neodymium: the case of computer hard disk drives.

    Science.gov (United States)

    Sprecher, Benjamin; Kleijn, Rene; Kramer, Gert Jan

    2014-08-19

    Neodymium, one of the more critically scarce rare earth metals, is often used in sustainable technologies. In this study, we investigate the potential contribution of neodymium recycling to reducing scarcity in supply, with a case study on computer hard disk drives (HDDs). We first review the literature on neodymium production and recycling potential. From this review, we find that recycling of computer HDDs is currently the most feasible pathway toward large-scale recycling of neodymium, even though HDDs do not represent the largest application of neodymium. We then use a combination of dynamic modeling and empirical experiments to conclude that within the application of NdFeB magnets for HDDs, the potential for loop-closing is significant: up to 57% in 2017. However, compared to the total NdFeB production capacity, the recovery potential from HDDs is relatively small (in the 1-3% range). The distributed nature of neodymium poses a significant challenge for recycling of neodymium.

  7. Adapting Experiential Learning to Develop Problem-Solving Skills in Deaf and Hard-of-Hearing Engineering Students

    Science.gov (United States)

    Marshall, Matthew M.; Carrano, Andres L.; Dannels, Wendy A.

    2016-01-01

    Individuals who are deaf and hard-of-hearing (DHH) are underrepresented in science, technology, engineering, and mathematics (STEM) professions, and this may be due in part to their level of preparation in the development and retention of mathematical and problem-solving skills. An approach was developed that incorporates experiential learning and…

  8. Solving computationally expensive engineering problems

    CERN Document Server

    Leifsson, Leifur; Yang, Xin-She

    2014-01-01

    Computational complexity is a serious bottleneck for the design process in virtually any engineering area. While migration from prototyping and experimental-based design validation to verification using computer simulation models is inevitable and has a number of advantages, high computational costs of accurate, high-fidelity simulations can be a major issue that slows down the development of computer-aided design methodologies, particularly those exploiting automated design improvement procedures, e.g., numerical optimization. The continuous increase of available computational resources does not always translate into shortening of the design cycle because of the growing demand for higher accuracy and necessity to simulate larger and more complex systems. Accurate simulation of a single design of a given system may be as long as several hours, days or even weeks, which often makes design automation using conventional methods impractical or even prohibitive. Additional problems include numerical noise often pr...

  9. Computational Modeling Develops Ultra-Hard Steel

    Science.gov (United States)

    2007-01-01

    Glenn Research Center's Mechanical Components Branch developed a spiral bevel or face gear test rig for testing thermal behavior, surface fatigue, strain, vibration, and noise; a full-scale, 500-horsepower helicopter main-rotor transmission testing stand; a gear rig that allows fundamental studies of the dynamic behavior of gear systems and gear noise; and a high-speed helical gear test for analyzing thermal behavior for rotorcraft. The test rig provides accelerated fatigue life testing for standard spur gears at speeds of up to 10,000 rotations per minute. The test rig enables engineers to investigate the effects of materials, heat treat, shot peen, lubricants, and other factors on the gear's performance. QuesTek Innovations LLC, based in Evanston, Illinois, recently developed a carburized, martensitic gear steel with an ultra-hard case using its computational design methodology, but needed to verify surface fatigue, lifecycle performance, and overall reliability. The Battelle Memorial Institute introduced the company to researchers at Glenn's Mechanical Components Branch and facilitated a partnership allowing researchers at the NASA Center to conduct spur gear fatigue testing for the company. Testing revealed that QuesTek's gear steel outperforms the current state-of-the-art alloys used for aviation gears in contact fatigue by almost 300 percent. With the confidence and credibility provided by the NASA testing, QuesTek is commercializing two new steel alloys. Uses for this new class of steel are limitless in areas that demand exceptional strength for high throughput applications.

  10. Standard hardness conversion tables for metals relationship among brinell hardness, vickers hardness, rockwell hardness, superficial hardness, knoop hardness, and scleroscope hardness

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2007-01-01

    1.1 Conversion Table 1 presents data in the Rockwell C hardness range on the relationship among Brinell hardness, Vickers hardness, Rockwell hardness, Rockwell superficial hardness, Knoop hardness, and Scleroscope hardness of non-austenitic steels including carbon, alloy, and tool steels in the as-forged, annealed, normalized, and quenched and tempered conditions provided that they are homogeneous. 1.2 Conversion Table 2 presents data in the Rockwell B hardness range on the relationship among Brinell hardness, Vickers hardness, Rockwell hardness, Rockwell superficial hardness, Knoop hardness, and Scleroscope hardness of non-austenitic steels including carbon, alloy, and tool steels in the as-forged, annealed, normalized, and quenched and tempered conditions provided that they are homogeneous. 1.3 Conversion Table 3 presents data on the relationship among Brinell hardness, Vickers hardness, Rockwell hardness, Rockwell superficial hardness, and Knoop hardness of nickel and high-nickel alloys (nickel content o...

  11. Computational problems in science and engineering

    CERN Document Server

    Bulucea, Aida; Tsekouras, George

    2015-01-01

    This book provides readers with modern computational techniques for solving variety of problems from electrical, mechanical, civil and chemical engineering. Mathematical methods are presented in a unified manner, so they can be applied consistently to problems in applied electromagnetics, strength of materials, fluid mechanics, heat and mass transfer, environmental engineering, biomedical engineering, signal processing, automatic control and more.

  12. Perceived problems with computer gaming and internet use among adolescents

    DEFF Research Database (Denmark)

    Holstein, Bjørn E; Pedersen, Trine Pagh; Bendtsen, Pernille

    2014-01-01

    BACKGROUND: Existing instruments for measuring problematic computer and console gaming and internet use are often lengthy and based on a pathological perspective. The objective was to develop and present a new and short non-clinical measurement tool for perceived problems related to computer... on weekdays on computer- and console-gaming and internet use for communication and surfing. The outcome measures were three indexes on perceived problems related to computer and console gaming and internet use. RESULTS: The three new indexes showed high face validity and acceptable internal consistency. Most schoolchildren with high screen time did not experience problems related to computer use. Still, there was a strong and graded association between time use and perceived problems related to computer gaming, console gaming (only boys) and internet use, odds ratios ranging from 6.90 to 10.23. CONCLUSION: The three...

  13. Assessment of computer-related health problems among post-graduate nursing students.

    Science.gov (United States)

    Khan, Shaheen Akhtar; Sharma, Veena

    2013-01-01

    The study was conducted to assess computer-related health problems among post-graduate nursing students and to develop a Self Instructional Module for the prevention of computer-related health problems in a selected university situated in Delhi. A descriptive survey with a correlational design was adopted. A total of 97 samples were selected from different faculties of Jamia Hamdard by multi-stage sampling with a systematic random sampling technique. Among post-graduate students, the majority of sample subjects had average compliance with computer-related ergonomics principles. As regards computer-related health problems, the majority of post-graduate students had moderate computer-related health problems. The Self Instructional Module developed for the prevention of computer-related health problems was found to be acceptable by the post-graduate students.

  14. Visual inspection technology in the hard disc drive industry

    CERN Document Server

    Muneesawang, Paisarn

    2015-01-01

    A presentation of the use of computer vision systems to control manufacturing processes and product quality in the hard disk drive industry. Visual Inspection Technology in the Hard Disk Drive Industry is an application-oriented book borne out of collaborative research with the world's leading hard disk drive companies. It covers the latest developments and important topics in computer vision technology in hard disk drive manufacturing, as well as offering a glimpse of future technologies.

  15. Problems experienced by people with arthritis when using a computer.

    Science.gov (United States)

    Baker, Nancy A; Rogers, Joan C; Rubinstein, Elaine N; Allaire, Saralynn H; Wasko, Mary Chester

    2009-05-15

    To describe the prevalence of computer use problems experienced by a sample of people with arthritis, and to determine differences in the magnitude of these problems among people with rheumatoid arthritis (RA), osteoarthritis (OA), and fibromyalgia (FM). Subjects were recruited from the Arthritis Network Disease Registry and asked to complete a survey, the Computer Problems Survey, which was developed for this study. Descriptive statistics were calculated for the total sample and the 3 diagnostic subgroups. Ordinal regressions were used to determine differences between the diagnostic subgroups with respect to each equipment item while controlling for confounding demographic variables. A total of 359 respondents completed a survey. Of the 315 respondents who reported using a computer, 84% reported a problem with computer use attributed to their underlying disorder, and approximately 77% reported some discomfort related to computer use. Equipment items most likely to account for problems and discomfort were the chair, keyboard, mouse, and monitor. Of the 3 subgroups, significantly more respondents with FM reported more severe discomfort, more problems, and greater limitations related to computer use than those with RA or OA for all 4 equipment items. Computer use is significantly affected by arthritis. This could limit the ability of a person with arthritis to participate in work and home activities. Further study is warranted to delineate disease-related limitations and develop interventions to reduce them.

  16. Adiabatic quantum computing

    OpenAIRE

    Lobe, Elisabeth; Stollenwerk, Tobias; Tröltzsch, Anke

    2015-01-01

    In recent years, the field of adiabatic quantum computing has gained importance due to advances in the realisation of such machines, especially by the company D-Wave Systems. These machines are suited to solving discrete optimisation problems which are typically very hard to solve on a classical computer. Due to the quantum nature of the device, it is assumed that there is a substantial speedup compared to classical HPC facilities. We explain the basic principles of adiabatic ...
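
    The discrete problems such machines accept are Ising energy minimizations. As a minimal illustration (brute-forced here on a classical computer, with no annealer API assumed), max-cut on a 4-cycle encodes directly into couplings on the edges:

```python
from itertools import product

# Max-cut on a 4-cycle encoded as an Ising problem: coupling J_ij = 1 for
# each edge; a cut edge contributes -1 to the energy, so the ground state
# of E(s) = sum_{(i,j)} s_i * s_j is the maximum cut.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

def energy(spins):
    return sum(spins[i] * spins[j] for i, j in edges)

best = min(product((-1, 1), repeat=4), key=energy)
print(best, energy(best))  # e.g. (-1, 1, -1, 1) with energy -4: all edges cut
```

    An adiabatic annealer would reach the same ground state by slowly deforming a simple initial Hamiltonian into this problem Hamiltonian, instead of enumerating all 2^n spin configurations.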

  17. Computer methods in physics 250 problems with guided solutions

    CERN Document Server

    Landau, Rubin H

    2018-01-01

    Our future scientists and professionals must be conversant in computational techniques. In order to facilitate integration of computer methods into existing physics courses, this textbook offers a large number of worked examples and problems with fully guided solutions in Python as well as other languages (Mathematica, Java, C, Fortran, and Maple). It’s also intended as a self-study guide for learning how to use computer methods in physics. The authors include an introductory chapter on numerical tools and indication of computational and physics difficulty level for each problem.

  18. AI tools in computer based problem solving

    Science.gov (United States)

    Beane, Arthur J.

    1988-01-01

    The use of computers to solve value-oriented, deterministic, algorithmic problems has given rise to a structured life cycle model of the software process. The symbolic processing techniques used, primarily in research, for solving nondeterministic problems, and those for which an algorithmic solution is unknown, have given rise to a different, much less structured model. Traditionally, the two approaches have been used completely independently. With the advent of low-cost, high-performance 32-bit workstations executing the same software as large minicomputers and mainframes, it became possible to begin to merge both models into a single extended model of computer problem solving. The implementation of such an extended model on a VAX family of micro/mini/mainframe systems is described. Examples of both development and deployment of applications involving a blending of AI and traditional techniques are given.

  19. [Computer-assisted phacoemulsification for hard cataracts].

    Science.gov (United States)

    Zemba, M; Papadatu, Adriana-Camelia; Sîrbu, Laura-Nicoleta; Avram, Corina

    2012-01-01

    to evaluate the efficiency of new torsional phacoemulsification software (the Ozil IP system) in hard nucleus cataract extraction. 45 eyes with hard senile cataract (grade III and IV) underwent phacoemulsification performed by the same surgeon, using the same technique (stop and chop). The Infiniti (Alcon) platform was used, with Ozil IP software and a Kelman mini-flared 45-degree phaco tip. The nucleus was split into two, and after that the first half was phacoemulsified with IP on (group 1) and the second half with IP off (group 2). For every group we measured: cumulative dissipated energy (CDE), the number of tip occlusions that required manual clearing, and the amount of BSS used. The mean CDE was the same in group 1 and in group 2 (between 6.2 and 14.9). The incidence of occlusions that required manual clearing was lower in group 1 (5 times) than in group 2 (13 times). Group 2 used more BSS than group 1. The new torsional software (IP system) significantly decreased occlusion time and balanced salt solution use compared with the standard torsional software, particularly with denser cataracts.

  20. Anatomy of safety-critical computing problems

    International Nuclear Information System (INIS)

    Swu Yih; Fan Chinfeng; Shirazi, Behrooz

    1995-01-01

    This paper analyzes the obstacles faced by current safety-critical computing applications. The major problem lies in the difficulty of providing complete and convincing safety evidence to prove that the software is safe. We explain this problem from a fundamental perspective by analyzing the essence of safety analysis against that of software developed by current practice. Our basic belief is that in order to perform a successful safety analysis, the state space structure of the analyzed system must have certain properties as prerequisites. We propose the concept of safety analyzability and derive its necessary and sufficient conditions, namely definability, finiteness, commensurability, and tractability. We then examine software state space structures against these conditions and conclude that the safety analyzability of safety-critical software developed by current practice is severely restricted by its state space structure and by the problem of exponentially growing cost. Thus, except for small and simple systems, the safety evidence may not be complete and convincing. Our concepts and arguments explain the current problematic situation faced by the safety-critical computing domain. The implications are also discussed

  1. Applying natural evolution for solving computational problems - Lecture 1

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Darwin’s natural evolution theory has inspired computer scientists for solving computational problems. In a similar way to how humans and animals have evolved along millions of years, computational problems can be solved by evolving a population of solutions through generations until a good solution is found. In the first lecture, the fundaments of evolutionary computing (EC) will be described, covering the different phases that the evolutionary process implies. ECJ, a framework for researching in such field, will be also explained. In the second lecture, genetic programming (GP) will be covered. GP is a sub-field of EC where solutions are actual computational programs represented by trees. Bloat control and distributed evaluation will be introduced.

  2. Applying natural evolution for solving computational problems - Lecture 2

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Darwin’s natural evolution theory has inspired computer scientists for solving computational problems. In a similar way to how humans and animals have evolved along millions of years, computational problems can be solved by evolving a population of solutions through generations until a good solution is found. In the first lecture, the fundaments of evolutionary computing (EC) will be described, covering the different phases that the evolutionary process implies. ECJ, a framework for researching in such field, will be also explained. In the second lecture, genetic programming (GP) will be covered. GP is a sub-field of EC where solutions are actual computational programs represented by trees. Bloat control and distributed evaluation will be introduced.

  3. Exact sampling hardness of Ising spin models

    Science.gov (United States)

    Fefferman, B.; Foss-Feig, M.; Gorshkov, A. V.

    2017-09-01

    We study the complexity of classically sampling from the output distribution of an Ising spin model, which can be implemented naturally in a variety of atomic, molecular, and optical systems. In particular, we construct a specific example of an Ising Hamiltonian that, after time evolution starting from a trivial initial state, produces a particular output configuration with probability very nearly proportional to the square of the permanent of a matrix with arbitrary integer entries. In a similar spirit to boson sampling, the ability to sample classically from the probability distribution induced by time evolution under this Hamiltonian would imply unlikely complexity-theoretic consequences, suggesting that the dynamics of such a spin model cannot be efficiently simulated with a classical computer. Physical Ising spin systems reaching problem sizes (i.e., qubit numbers) large enough that classically sampling the output distribution is difficult in practice may be achievable in the near future. Unlike boson sampling, our current results only imply hardness of exact classical sampling, leaving open the important question of whether a much stronger approximate-sampling hardness result holds in this context. The latter is most likely necessary to enable a convincing experimental demonstration of quantum supremacy. As referenced in a recent paper [A. Bouland, L. Mancinska, and X. Zhang, in Proceedings of the 31st Conference on Computational Complexity (CCC 2016), Leibniz International Proceedings in Informatics (Schloss Dagstuhl-Leibniz-Zentrum für Informatik, Dagstuhl, 2016)], our result completes the sampling-hardness classification of two-qubit commuting Hamiltonians.
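
    The hardness claim rests on the matrix permanent, a canonical #P-hard quantity. For context, a direct evaluation via Ryser's formula (a standard algorithm, O(2^n n^2) as written here, not code from the paper) looks like this:

        from itertools import combinations

        def permanent(A):
            # Ryser's formula: perm(A) = (-1)^n * sum over nonempty column
            # subsets S of (-1)^|S| * prod_i sum_{j in S} A[i][j].
            n = len(A)
            total = 0
            for k in range(1, n + 1):
                for cols in combinations(range(n), k):
                    prod = 1
                    for row in A:
                        prod *= sum(row[j] for j in cols)
                    total += (-1) ** k * prod
            return (-1) ** n * total

        print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10

    The exponential cost of this computation is exactly what makes efficient classical sampling from such a spin model implausible.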

  4. Analysis of Hard Thin Film Coating

    Science.gov (United States)

    Shen, Dashen

    1998-01-01

    MSFC is interested in developing hard thin film coatings for bearings. Bearing wear is an important problem for space flight engines. Hard thin film coatings can drastically improve the surface of the bearing and improve its wear endurance. However, many fundamental problems in surface physics, plasma deposition, etc., need further research. The approach is to use electron cyclotron resonance chemical vapor deposition (ECRCVD) to deposit hard thin films on stainless steel bearings. The thin films under consideration include SiC, SiN and other materials. An ECRCVD deposition system is being assembled at MSFC.

  5. Using Computer Simulations in Chemistry Problem Solving

    Science.gov (United States)

    Avramiotis, Spyridon; Tsaparlis, Georgios

    2013-01-01

    This study is concerned with the effects of computer simulations of two novel chemistry problems on the problem-solving ability of students. A control-experimental group research design, equalized by paired groups (n_Exp = n_Ctrl = 78), was used. The students had no previous experience of chemical practical work. Student…

  6. Computer simulation of the influence of the alloying elements on secondary hardness of the high-speed steels

    International Nuclear Information System (INIS)

    Dobrzanski, L.A.; Sitek, W.; Zaclona, J.

    2004-01-01

    The paper presents a method of modelling high-speed steels' (HSS) properties, based on chemical composition and heat treatment parameters, employing neural networks. As an example of its possible application, a computer simulation of the influence of the particular alloying elements on hardness was made, and the obtained results are presented. (author)
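
    The paper's network architecture is not reproduced here, but the general approach of regressing hardness on composition and heat-treatment parameters can be sketched with scikit-learn (our choice of tool; all numbers below are invented placeholders):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Placeholder rows: wt.% W, Mo, V, C and austenitizing temperature (deg C);
        # targets are HRC hardness values. A real application needs many more
        # samples and feature scaling.
        X = np.array([[6.0, 5.0, 2.0, 0.9, 1200],
                      [6.5, 4.8, 1.9, 1.0, 1180],
                      [5.8, 5.2, 3.0, 1.2, 1220],
                      [6.2, 5.0, 2.5, 1.1, 1210]])
        y = np.array([64.0, 63.5, 65.0, 64.5])

        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
        model.fit(X, y)
        print(model.predict([[6.0, 5.0, 2.2, 1.0, 1200]]))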

  7. Solving a Hamiltonian Path Problem with a bacterial computer

    Directory of Open Access Journals (Sweden)

    Treece Jessica

    2009-07-01

    Full Text Available Abstract Background The Hamiltonian Path Problem asks whether there is a route in a directed graph from a beginning node to an ending node, visiting each node exactly once. The Hamiltonian Path Problem is NP complete, achieving surprising computational complexity with modest increases in size. This challenge has inspired researchers to broaden the definition of a computer. DNA computers have been developed that solve NP complete problems. Bacterial computers can be programmed by constructing genetic circuits to execute an algorithm that is responsive to the environment and whose result can be observed. Each bacterium can examine a solution to a mathematical problem and billions of them can explore billions of possible solutions. Bacterial computers can be automated, made responsive to selection, and reproduce themselves so that more processing capacity is applied to problems over time. Results We programmed bacteria with a genetic circuit that enables them to evaluate all possible paths in a directed graph in order to find a Hamiltonian path. We encoded a three node directed graph as DNA segments that were autonomously shuffled randomly inside bacteria by a Hin/hixC recombination system we previously adapted from Salmonella typhimurium for use in Escherichia coli. We represented nodes in the graph as linked halves of two different genes encoding red or green fluorescent proteins. Bacterial populations displayed phenotypes that reflected random ordering of edges in the graph. Individual bacterial clones that found a Hamiltonian path reported their success by fluorescing both red and green, resulting in yellow colonies. We used DNA sequencing to verify that the yellow phenotype resulted from genotypes that represented Hamiltonian path solutions, demonstrating that our bacterial computer functioned as expected. Conclusion We successfully designed, constructed, and tested a bacterial computer capable of finding a Hamiltonian path in a three node
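
    An in silico analogue of what the bacterial populations do: examine every ordering of the nodes and keep those whose consecutive pairs are all edges. The three-node edge set below is an illustrative assumption, not the plasmid layout from the paper.

        from itertools import permutations

        def hamiltonian_paths(nodes, edges):
            # Each permutation plays the role of one randomly shuffled plasmid.
            return [p for p in permutations(nodes)
                    if all((a, b) in edges for a, b in zip(p, p[1:]))]

        edges = {(1, 2), (2, 3), (1, 3)}
        print(hamiltonian_paths([1, 2, 3], edges))  # [(1, 2, 3)]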

  8. Solving a Hamiltonian Path Problem with a bacterial computer

    Science.gov (United States)

    Baumgardner, Jordan; Acker, Karen; Adefuye, Oyinade; Crowley, Samuel Thomas; DeLoache, Will; Dickson, James O; Heard, Lane; Martens, Andrew T; Morton, Nickolaus; Ritter, Michelle; Shoecraft, Amber; Treece, Jessica; Unzicker, Matthew; Valencia, Amanda; Waters, Mike; Campbell, A Malcolm; Heyer, Laurie J; Poet, Jeffrey L; Eckdahl, Todd T

    2009-01-01

    Background The Hamiltonian Path Problem asks whether there is a route in a directed graph from a beginning node to an ending node, visiting each node exactly once. The Hamiltonian Path Problem is NP complete, achieving surprising computational complexity with modest increases in size. This challenge has inspired researchers to broaden the definition of a computer. DNA computers have been developed that solve NP complete problems. Bacterial computers can be programmed by constructing genetic circuits to execute an algorithm that is responsive to the environment and whose result can be observed. Each bacterium can examine a solution to a mathematical problem and billions of them can explore billions of possible solutions. Bacterial computers can be automated, made responsive to selection, and reproduce themselves so that more processing capacity is applied to problems over time. Results We programmed bacteria with a genetic circuit that enables them to evaluate all possible paths in a directed graph in order to find a Hamiltonian path. We encoded a three node directed graph as DNA segments that were autonomously shuffled randomly inside bacteria by a Hin/hixC recombination system we previously adapted from Salmonella typhimurium for use in Escherichia coli. We represented nodes in the graph as linked halves of two different genes encoding red or green fluorescent proteins. Bacterial populations displayed phenotypes that reflected random ordering of edges in the graph. Individual bacterial clones that found a Hamiltonian path reported their success by fluorescing both red and green, resulting in yellow colonies. We used DNA sequencing to verify that the yellow phenotype resulted from genotypes that represented Hamiltonian path solutions, demonstrating that our bacterial computer functioned as expected. Conclusion We successfully designed, constructed, and tested a bacterial computer capable of finding a Hamiltonian path in a three node directed graph. This proof

  9. Random matrix model of adiabatic quantum computing

    International Nuclear Information System (INIS)

    Mitchell, David R.; Adami, Christoph; Lue, Waynn; Williams, Colin P.

    2005-01-01

    We present an analysis of the quantum adiabatic algorithm for solving hard instances of 3-SAT (an NP-complete problem) in terms of random matrix theory (RMT). We determine the global regularity of the spectral fluctuations of the instantaneous Hamiltonians encountered during the interpolation between the starting Hamiltonians and the ones whose ground states encode the solutions to the computational problems of interest. At each interpolation point, we quantify the degree of regularity of the average spectral distribution via its Brody parameter, a measure that distinguishes regular (i.e., Poissonian) from chaotic (i.e., Wigner-type) distributions of normalized nearest-neighbor spacings. We find that for hard problem instances - i.e., those having a critical ratio of clauses to variables - the spectral fluctuations typically become irregular across a contiguous region of the interpolation parameter, while the spectrum is regular for easy instances. Within the hard region, RMT may be applied to obtain a mathematical model of the probability of avoided level crossings and concomitant failure rate of the adiabatic algorithm due to nonadiabatic Landau-Zener-type transitions. Our model predicts that if the interpolation is performed at a uniform rate, the average failure rate of the quantum adiabatic algorithm, when averaged over hard problem instances, scales exponentially with increasing problem size
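
    A numerical caricature of the spectral-statistics analysis (random dense symmetric matrices stand in for the actual 3-SAT Hamiltonians, and no spectral unfolding is performed, so this is qualitative only):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 400
        H0 = rng.normal(size=(n, n)); H0 = (H0 + H0.T) / 2  # initial Hamiltonian
        H1 = rng.normal(size=(n, n)); H1 = (H1 + H1.T) / 2  # problem Hamiltonian

        for s in (0.0, 0.5, 1.0):
            H = (1 - s) * H0 + s * H1           # linear interpolation H(s)
            evals = np.linalg.eigvalsh(H)
            spacings = np.diff(np.sort(evals))
            spacings /= spacings.mean()          # normalized nearest-neighbor spacings
            # Variance near 1 indicates Poissonian (regular) statistics;
            # near 4/pi - 1 ~ 0.27 indicates Wigner-type (chaotic) statistics.
            print(f"s = {s}: spacing variance = {spacings.var():.3f}")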

  10. Computerized micrographics in processing hard-copy records for an epidemiologic study

    International Nuclear Information System (INIS)

    Robie, D.M.; Fry, S.A.

    1983-01-01

    The availability of computers with increasing capabilities has made feasible epidemiologic studies involving large populations, such as those used to evaluate the health effects of occupational exposure to radiation. However, the storage and retrieval of data from the large numbers of hard-copy personnel, health physics, employment, medical, historical or anecdotal documents that are the bases of such studies pose major logistical problems for investigators. The potential value of such records to epidemiologic studies depends not only on their accuracy and completeness but also on their ease of accessibility. To address the latter problem, we are using a stand-alone, user-oriented electronic filing system that records, stores, and secures hard-copy documents micrographically. This system is controlled by a computer that provides retrieval of a document image and a printed copy (if desired) in less than 30 seconds, from a maximum of eight index fields. One thousand documents are filmed in random order and indexed on computer storage diskettes in two hours. Manual sorting and filing of the same number of documents takes over a day. At present, two thousand documents can be recorded on each microfilm roll and 85,000 documents indexed on each diskette. Simultaneous searching for documents can be done using up to ten terminals while indexing is being done at the main terminal. The micrographics system provides the space-saving and security advantages of microfilm with the speed of computerized data retrieval.

  11. Melting of polydisperse hard disks

    NARCIS (Netherlands)

    Pronk, S.; Frenkel, D.

    2004-01-01

    The melting of a polydisperse hard-disk system is investigated by Monte Carlo simulations in the semigrand canonical ensemble. This is done in the context of possible continuous melting by a dislocation-unbinding mechanism, as an extension of the two-dimensional hard-disk melting problem. We find

  12. Computational Biomechanics: Theoretical Background and Biological/Biomedical Problems

    CERN Document Server

    Tanaka, Masao; Nakamura, Masanori

    2012-01-01

    Rapid developments have taken place in biological/biomedical measurement and imaging technologies as well as in computer analysis and information technologies. The increase in data obtained with such technologies invites the reader into a virtual world that represents realistic biological tissue or organ structures in digital form and allows for simulation and what is called “in silico medicine.” This volume is the third in a textbook series and covers both the basics of continuum mechanics of biosolids and biofluids and the theoretical core of computational methods for continuum mechanics analyses. Several biomechanics problems are provided for better understanding of computational modeling and analysis. Topics include the mechanics of solid and fluid bodies, fundamental characteristics of biosolids and biofluids, computational methods in biomechanics analysis/simulation, practical problems in orthopedic biomechanics, dental biomechanics, ophthalmic biomechanics, cardiovascular biomechanics, hemodynamics...

  13. Ontology Design for Solving Computationally-Intensive Problems on Heterogeneous Architectures

    Directory of Open Access Journals (Sweden)

    Hossam M. Faheem

    2018-02-01

    Full Text Available Viewing a computationally-intensive problem as a self-contained challenge with its own hardware, software and scheduling strategies is an approach that should be investigated. We might suggest assigning heterogeneous hardware architectures to solve a problem, while parallel computing paradigms may play an important role in writing efficient code to solve the problem; moreover, the scheduling strategies may be examined as a possible solution. Depending on the problem complexity, finding the best possible solution using an integrated infrastructure of hardware, software and scheduling strategy can be a complex job. Developing and using ontologies and reasoning techniques play a significant role in reducing the complexity of identifying the components of such integrated infrastructures. Undertaking reasoning and inferencing regarding the domain concepts can help to find the best possible solution through a combination of hardware, software and scheduling strategies. In this paper, we present an ontology and show how we can use it to solve computationally-intensive problems from various domains. As a potential use for the idea, we present examples from the bioinformatics domain. Validation by using problems from the Elastic Optical Network domain has demonstrated the flexibility of the suggested ontology and its suitability for use with any other computationally-intensive problem domain.

  14. Haplotyping Problem, A Clustering Approach

    International Nuclear Information System (INIS)

    Eslahchi, Changiz; Sadeghi, Mehdi; Pezeshk, Hamid; Kargar, Mehdi; Poormohammadi, Hadi

    2007-01-01

    Construction of two haplotypes from a set of Single Nucleotide Polymorphism (SNP) fragments is called the haplotype reconstruction problem. One of the most popular computational models for this problem is Minimum Error Correction (MEC). Since MEC is an NP-hard problem, we propose a novel heuristic algorithm for the haplotype reconstruction problem based on clustering analysis from data mining. Based on the Hamming distance and the similarity between two fragments, our iterative algorithm produces two clusters of fragments; in each iteration, the algorithm assigns a fragment to one of the clusters. Our results suggest that the algorithm has a lower reconstruction error rate in comparison with other algorithms.
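
    A minimal sketch of the two-cluster idea (the seeding, consensus and gap conventions below are our simplifications, not the authors' exact algorithm):

        def hamming(a, b):
            # Disagreements at SNP sites covered by both fragments ('-' = gap).
            return sum(x != y for x, y in zip(a, b) if x != '-' and y != '-')

        def consensus(group):
            out = []
            for col in zip(*group):
                bases = [c for c in col if c != '-']
                out.append(max(set(bases), key=bases.count) if bases else '-')
            return ''.join(out)

        def reconstruct(frags, iters=5):
            # Seed one cluster with the first fragment, the other with the
            # fragment farthest from it; then alternate assignment and consensus.
            h1 = frags[0]
            h2 = max(frags, key=lambda f: hamming(h1, f))
            for _ in range(iters):
                g1 = [f for f in frags if hamming(f, h1) <= hamming(f, h2)]
                g2 = [f for f in frags if hamming(f, h1) > hamming(f, h2)]
                h1, h2 = consensus(g1) or h1, consensus(g2) or h2
            return h1, h2

        frags = ['0011-', '-0110', '0-11-', '1100-', '-1001', '110-1']
        print(reconstruct(frags))  # ('00110', '11001')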

  15. One-dimensional computational modeling on nuclear reactor problems

    International Nuclear Information System (INIS)

    Alves Filho, Hermes; Baptista, Josue Costa; Trindade, Luiz Fernando Santos; Heringer, Juan Diego dos Santos

    2013-01-01

    In this article, we present a computational model which gives a dynamic view of some applications of nuclear engineering, specifically power distribution and effective multiplication factor (keff) calculations. We work with one-dimensional problems of deterministic neutron transport theory, using the time-independent linearized Boltzmann equation in the discrete ordinates (SN) formulation with isotropic scattering, and we built software (a simulator) for modeling the computational problems used in typical calculations. The program used in the implementation of the simulator was Matlab, version 7.0. (author)
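
    The simulator itself is written in Matlab and not reproduced here; the keff calculation such codes perform is typically a power iteration on the transport operator, which in a crude lumped-matrix caricature looks like this (all numbers illustrative):

        import numpy as np

        # Toy lumped operators: M collects leakage/absorption, F fission production.
        M = np.array([[ 0.12, -0.02],
                      [-0.02,  0.10]])
        F = np.array([[0.11, 0.0],
                      [0.0,  0.09]])

        A = np.linalg.solve(M, F)   # A = M^{-1} F; k_eff is its dominant eigenvalue
        phi = np.ones(2)            # initial flux guess
        for _ in range(100):
            psi = A @ phi
            k = psi.sum() / phi.sum()   # eigenvalue (k_eff) estimate
            phi = psi / psi.sum()       # renormalized fission source shape
        print("k_eff ~", k)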

  16. Computer integration in the curriculum: promises and problems

    NARCIS (Netherlands)

    Plomp, T.; van den Akker, Jan

    1988-01-01

    This discussion of the integration of computers into the curriculum begins by reviewing the results of several surveys conducted in the Netherlands and the United States which provide insight into the problems encountered by schools and teachers when introducing computers in education. Case studies

  17. Emotion Oriented Programming: Computational Abstractions for AI Problem Solving

    OpenAIRE

    Darty , Kevin; Sabouret , Nicolas

    2012-01-01

    In this paper, we present a programming paradigm for AI problem solving based on computational concepts drawn from affective computing. Emotions are believed to participate in human adaptability and reactivity, in behaviour selection, and in coping with complex and dynamic environments. We propose to define a mechanism inspired by this observation for general AI problem solving. To this purpose, we synthesize emotions as programming abstractions that represent the perception ...

  18. Quantum Computing's Classical Problem, Classical Computing's Quantum Problem

    OpenAIRE

    Van Meter, Rodney

    2013-01-01

    Tasked with the challenge to build better and better computers, quantum computing and classical computing face the same conundrum: the success of classical computing systems. Small quantum computing systems have been demonstrated, and intermediate-scale systems are on the horizon, capable of calculating numeric results or simulating physical systems far beyond what humans can do by hand. However, to be commercially viable, they must surpass what our wildly successful, highly advanced classica...

  19. Computational Recognition of RNA Splice Sites by Exact Algorithms for the Quadratic Traveling Salesman Problem

    Directory of Open Access Journals (Sweden)

    Anja Fischer

    2015-06-01

    Full Text Available One fundamental problem of bioinformatics is the computational recognition of DNA and RNA binding sites. Given a set of short DNA or RNA sequences of equal length such as transcription factor binding sites or RNA splice sites, the task is to learn a pattern from this set that allows the recognition of similar sites in another set of DNA or RNA sequences. Permuted Markov (PM models and permuted variable length Markov (PVLM models are two powerful models for this task, but the problem of finding an optimal PM model or PVLM model is NP-hard. While the problem of finding an optimal PM model or PVLM model of order one is equivalent to the traveling salesman problem (TSP, the problem of finding an optimal PM model or PVLM model of order two is equivalent to the quadratic TSP (QTSP. Several exact algorithms exist for solving the QTSP, but it is unclear if these algorithms are capable of solving QTSP instances resulting from RNA splice sites of at least 150 base pairs in a reasonable time frame. Here, we investigate the performance of three exact algorithms for solving the QTSP for ten datasets of splice acceptor sites and splice donor sites of five different species and find that one of these algorithms is capable of solving QTSP instances of up to 200 base pairs with a running time of less than two days.

  20. Computing camera heading: A study

    Science.gov (United States)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even in internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard. This is because rotations and translations can have similar effects on the images, and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computation trouble spots beforehand, and to design reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
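
    The rotation-invariant quantity the method is built on is easy to state in code; below, image points are assumed to be given in normalized camera coordinates (x, y, 1):

        import numpy as np

        def visual_angle(p, q):
            # Angle between the projection rays of two image points; a camera
            # rotation moves both rays rigidly, leaving this angle unchanged.
            u = np.append(p, 1.0)
            v = np.append(q, 1.0)
            cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
            return np.arccos(np.clip(cosang, -1.0, 1.0))

        print(visual_angle(np.array([0.1, 0.0]), np.array([-0.2, 0.1])))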

  1. Date Sensitive Computing Problems: Understanding the Threat

    Science.gov (United States)

    1998-08-29

    equipment on Earth. It can also interfere with electromagnetic signals from such devices as cell phones, radio, television, and radar. By itself, the ... spacecraft. Debris from impacted satellites will add to the existing orbital debris problem, and could eventually cause damage to other satellites... Date Sensitive Computing Problems: Understanding the Threat. Aug. 17, 1998; revised Aug. 29, 1998. Prepared by: The National Crisis Response

  2. Methane in German hard coal mining

    International Nuclear Information System (INIS)

    Martens, P.N.; Den Drijver, J.

    1995-01-01

    Worldwide, hard coal mining is being carried out at ever increasing depth and therefore has to cope with correspondingly increasing methane emissions caused by coal mining. Besides carbon dioxide, chloro-fluoro-carbons (CFCs) and nitrogen oxides, methane is one of the most significant 'greenhouse' gases. It is mainly through the release of such trace gases that the greenhouse effect is brought about. Reducing methane emissions is therefore an important problem to be solved by the coal mining industry. This paper begins by highlighting some of the fundamental principles of methane in hard coal mining. The methane problem in German hard coal mining and the industry's efforts to reduce methane emissions are presented. The future development of German hard coal mining is illustrated by an example which shows how large methane volumes can be managed while still maintaining high outputs at increasing depth. (author). 7 tabs., 10 figs., 20 refs

  3. Double hard scattering without double counting

    Energy Technology Data Exchange (ETDEWEB)

    Diehl, Markus [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Gaunt, Jonathan R. [VU Univ. Amsterdam (Netherlands). NIKHEF Theory Group; Schoenwald, Kay [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2017-02-15

    Double parton scattering in proton-proton collisions includes kinematic regions in which two partons inside a proton originate from the perturbative splitting of a single parton. This leads to a double counting problem between single and double hard scattering. We present a solution to this problem, which allows for the definition of double parton distributions as operator matrix elements in a proton, and which can be used at higher orders in perturbation theory. We show how the evaluation of double hard scattering in this framework can provide a rough estimate for the size of the higher-order contributions to single hard scattering that are affected by double counting. In a numeric study, we identify situations in which these higher-order contributions must be explicitly calculated and included if one wants to attain an accuracy at which double hard scattering becomes relevant, and other situations where such contributions may be neglected.

  4. Double hard scattering without double counting

    International Nuclear Information System (INIS)

    Diehl, Markus; Gaunt, Jonathan R.

    2017-02-01

    Double parton scattering in proton-proton collisions includes kinematic regions in which two partons inside a proton originate from the perturbative splitting of a single parton. This leads to a double counting problem between single and double hard scattering. We present a solution to this problem, which allows for the definition of double parton distributions as operator matrix elements in a proton, and which can be used at higher orders in perturbation theory. We show how the evaluation of double hard scattering in this framework can provide a rough estimate for the size of the higher-order contributions to single hard scattering that are affected by double counting. In a numeric study, we identify situations in which these higher-order contributions must be explicitly calculated and included if one wants to attain an accuracy at which double hard scattering becomes relevant, and other situations where such contributions may be neglected.

  5. Students' Mathematics Word Problem-Solving Achievement in a Computer-Based Story

    Science.gov (United States)

    Gunbas, N.

    2015-01-01

    The purpose of this study was to investigate the effect of a computer-based story, designed within the anchored instruction framework, on sixth-grade students' mathematics word problem-solving achievement. Problems were embedded in a story presented on a computer as a computer story, and then compared with the paper-based version of the same story…

  6. A GPU-Based Genetic Algorithm for the P-Median Problem

    OpenAIRE

    AlBdaiwi, Bader F.; AboElFotoh, Hosam M. F.

    2016-01-01

    The p-median problem is a well-known NP-hard problem. Many heuristics have been proposed in the literature for this problem. In this paper, we exploit a GPGPU parallel computing platform to present a new genetic algorithm implemented in Cuda and based on a Pseudo Boolean formulation of the p-median problem. We have tested the effectiveness of our algorithm using a Tesla K40 (2880 Cuda cores) on 290 different benchmark instances obtained from OR-Library, discrete location problems benchmark li...
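
    Not the paper's CUDA implementation, but a serial Python sketch of a genetic algorithm for the same objective (parameters and the tiny distance matrix are illustrative):

        import random

        def cost(medians, d):
            # p-median objective: each client uses its nearest open facility.
            return sum(min(row[m] for m in medians) for row in d)

        def ga_p_median(d, p, pop_size=30, gens=200, p_mut=0.2):
            n = len(d)
            pop = [random.sample(range(n), p) for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=lambda s: cost(s, d))
                pop = pop[:pop_size // 2]          # elitist truncation selection
                while len(pop) < pop_size:
                    a, b = random.sample(pop[:10], 2)
                    child = random.sample(list(set(a) | set(b)), p)  # union crossover
                    if random.random() < p_mut:    # mutation: swap out one median
                        spare = [v for v in range(n) if v not in child]
                        child[random.randrange(p)] = random.choice(spare)
                    pop.append(child)
            return min(pop, key=lambda s: cost(s, d))

        d = [[0, 2, 7, 9], [2, 0, 4, 8], [7, 4, 0, 3], [9, 8, 3, 0]]
        print(ga_p_median(d, p=2))

    On a GPU, the expensive step is the population-wide evaluation of cost(), which parallelizes naturally over individuals and clients.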

  7. Second benchmark problem for WIPP structural computations

    International Nuclear Information System (INIS)

    Krieg, R.D.; Morgan, H.S.; Hunter, T.O.

    1980-12-01

    This report describes the second benchmark problem for comparison of the structural codes used in the WIPP project. The first benchmark problem consisted of heated and unheated drifts at a depth of 790 m, whereas this problem considers a shallower level (650 m) more typical of the repository horizon. But more important, the first problem considered a homogeneous salt configuration, whereas this problem considers a configuration with 27 distinct geologic layers, including 10 clay layers - 4 of which are to be modeled as possible slip planes. The inclusion of layering introduces complications in structural and thermal calculations that were not present in the first benchmark problem. These additional complications will be handled differently by the various codes used to compute drift closure rates. This second benchmark problem will assess these codes by evaluating the treatment of these complications

  8. Computing several eigenpairs of Hermitian problems by conjugate gradient iterations

    International Nuclear Information System (INIS)

    Ovtchinnikov, E.E.

    2008-01-01

    The paper is concerned with algorithms for computing several extreme eigenpairs of Hermitian problems based on the conjugate gradient method. We analyse computational strategies employed by various algorithms of this kind reported in the literature and identify their limitations. Our criticism is illustrated by numerical tests on a set of problems from electronic structure calculations and acoustics
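
    For readers who want to experiment, SciPy's LOBPCG routine is a readily available solver from this conjugate-gradient family (our choice of example, unrelated to the paper's test set):

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import lobpcg

        n = 500
        A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')  # 1-D Laplacian

        rng = np.random.default_rng(0)
        X = rng.normal(size=(n, 4))        # block of 4 starting vectors
        eigvals, eigvecs = lobpcg(A, X, largest=False, tol=1e-8, maxiter=500)
        print(eigvals)                      # the 4 smallest eigenvalues of A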

  9. Practical applications of soft computing in engineering

    CERN Document Server

    2001-01-01

    Soft computing has been presented not only with the theoretical developments but also with a large variety of realistic applications to consumer products and industrial systems. Application of soft computing has provided the opportunity to integrate human-like vagueness and real-life uncertainty into an otherwise hard computer program. This book highlights some of the recent developments in practical applications of soft computing in engineering problems. All the chapters have been sophisticatedly designed and revised by international experts to achieve wide but in-depth coverage. Contents: Au

  10. New computational methodology for large 3D neutron transport problems

    International Nuclear Information System (INIS)

    Dahmani, M.; Roy, R.; Koclas, J.

    2004-01-01

    We present a new computational methodology, based on the 3D characteristics method, dedicated to solving very large 3D problems without spatial homogenization. In order to eliminate the input/output problems that occur when solving these large problems, we set up a new computing scheme that requires more CPU resources than the usual one, which is based on sweeps over large tracking files. The huge storage capacity needed for some problems and the related I/O queries needed by the characteristics solver are replaced by on-the-fly recalculation of tracks at each iteration step. Using this technique, large 3D problems are no longer I/O-bound, and distributed CPU resources can be used efficiently. (authors)

  11. On the computation of Clebsch-Gordan coefficients and the dilation effect

    NARCIS (Netherlands)

    De Loera, J.A.; McAllister, T.B.

    2006-01-01

    We investigate the problem of computing tensor product multiplicities for complex semisimple Lie algebras. Even though computing these numbers is #P-hard in general, we show that when the rank of the Lie algebra is assumed fixed, then there is a polynomial-time algorithm, based on counting lattice

  12. Rad-hard embedded computers for nuclear robotics

    International Nuclear Information System (INIS)

    Giraud, A.; Joffre, F.; Marceau, M.; Robiolle, M.; Brunet, J.P.; Mijuin, D.

    1994-01-01

    Nuclear industries require robots with embedded rad-hard electronics and high reliability. The SYROCO research program made it possible to produce efficient industrial prototypes, built according to the MICADO architecture, and to design the CADMOS architecture. The MICADO architecture exploits the self-healing property that CMOS circuits exhibit when switched off during irradiation. (D.L.). 8 refs., 5 figs.

  13. Car sequencing is NP-hard: a short proof

    OpenAIRE

    B Estellon; F Gardi

    2013-01-01

    In this note, a new proof is given that the car sequencing (CS) problem is NP-hard. Established from the Hamiltonian Path problem, the reduction is direct while closing some gaps remaining in the previous NP-hardness results. Since CS is studied in many operational research courses, this result and its proof are particularly interesting for teaching purposes.

  14. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness

    Science.gov (United States)

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy, and can be inferred from underwater video footage at only a limited number of locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and
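
    A schematic of the modelling step with scikit-learn (the predictors and classes below are random placeholders for the multibeam covariates and hard90/hard70 labels used in the study):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 6))        # stand-ins for backscatter, bathymetry, ...
        y = rng.integers(0, 4, size=300)     # four seabed hardness classes

        rf = RandomForestClassifier(n_estimators=500, random_state=0)
        rf.fit(X, y)
        # Variable importance (VI), the quantity the feature-selection methods build on.
        print(rf.feature_importances_)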

  15. New results on classical problems in computational geometry in the plane

    DEFF Research Database (Denmark)

    Abrahamsen, Mikkel

    In this thesis, we revisit three classical problems in computational geometry in the plane. An obstacle that often occurs as a subproblem in more complicated problems is to compute the common tangents of two disjoint, simple polygons. For instance, the common tangents turn up in problems related...... to visibility, collision avoidance, shortest paths, etc. We provide a remarkably simple algorithm to compute all (at most four) common tangents of two disjoint simple polygons. Given each polygon as a read-only array of its corners in cyclic order, the algorithm runs in linear time and constant workspace...... and is the first to achieve the two complexity bounds simultaneously. The set of common tangents provides basic information about the convex hulls of the polygons—whether they are nested, overlapping, or disjoint—and our algorithm thus also decides this relationship. One of the best-known problems in computational...

  16. Computer Security: better code, fewer problems

    CERN Multimedia

    Stefan Lueders, Computer Security Team

    2016-01-01

    The origin of many security incidents is negligence or unintentional mistakes made by web developers or programmers. In the rush to complete the work, due to skewed priorities, or just through ignorance, basic security principles can be omitted or forgotten.   The resulting vulnerabilities lie dormant until the evil side spots them and decides to hit hard. Computer security incidents in the past have put CERN’s reputation at risk due to websites being defaced with negative messages about the Organization, hash files of passwords being extracted, restricted data exposed… And it all started with a little bit of negligence! If you check out the Top 10 web development blunders, you will see that the most prevalent mistakes are: Not filtering input, e.g. accepting “<” or “>” in input fields even if only a number is expected.  Not validating that input: you expect a birth date? So why accept letters? &...
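
    A minimal illustration (ours, not code from the CERN checklist) of the "filter and validate input" principle for the birth-date example above:

        import re
        from datetime import date

        def parse_birth_date(raw: str) -> date:
            # Accept strictly YYYY-MM-DD: no letters, no '<' or '>', no surprises.
            if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", raw):
                raise ValueError("birth date must look like 1989-11-09")
            return date.fromisoformat(raw)

        print(parse_birth_date("1989-11-09"))
        # parse_birth_date("<script>alert(1)</script>") raises ValueError
        # instead of letting the string reach the database or the page.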

  17. The Consensus String Problem and the Complexity of Comparing Hidden Markov Models

    DEFF Research Database (Denmark)

    Lyngsø, Rune Bang; Pedersen, Christian Nørgaard Storm

    2002-01-01

    The basic theory of hidden Markov models was developed and applied to problems in speech recognition in the late 1960s, and has since then been applied to numerous problems, e.g. biological sequence analysis. Most applications of hidden Markov models are based on efficient algorithms for computing ... -norms. We discuss the applicability of the technique used for proving the hardness of comparing two hidden Markov models under the L1-norm to other measures of distance between probability distributions. In particular, we show that it cannot be used for proving NP-hardness of determining the Kullback
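
    For context, the kind of efficient algorithm the abstract alludes to is the forward algorithm, which computes the probability that a model generates a given string (a textbook sketch with made-up numbers, not the paper's construction):

        def forward(obs, init, trans, emit):
            # P(obs | HMM) by dynamic programming over states at each position.
            alpha = {s: init[s] * emit[s][obs[0]] for s in init}
            for o in obs[1:]:
                alpha = {s: emit[s][o] * sum(alpha[r] * trans[r][s] for r in alpha)
                         for s in init}
            return sum(alpha.values())

        # Tiny two-state example with binary emissions (numbers illustrative).
        init  = {'A': 0.6, 'B': 0.4}
        trans = {'A': {'A': 0.7, 'B': 0.3}, 'B': {'A': 0.4, 'B': 0.6}}
        emit  = {'A': {'0': 0.9, '1': 0.1}, 'B': {'0': 0.2, '1': 0.8}}
        print(forward('0110', init, trans, emit))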

  18. Hardware for soft computing and soft computing for hardware

    CERN Document Server

    Nedjah, Nadia

    2014-01-01

    Single and Multi-Objective Evolutionary Computation (MOEA), Genetic Algorithms (GAs), Artificial Neural Networks (ANNs), Fuzzy Controllers (FCs), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are becoming omnipresent in almost every intelligent system design. Unfortunately, the application of the majority of these techniques is complex and so requires a huge computational effort to yield useful and practical results. Therefore, dedicated hardware for evolutionary, neural and fuzzy computation is a key issue for designers. With the spread of reconfigurable hardware such as FPGAs, digital as well as analog hardware implementations of such computation become cost-effective. The idea behind this book is to offer a variety of hardware designs for soft computing techniques that can be embedded in any final product. Also, it introduces the successful application of soft computing techniques to solve many hard problems encountered during the design of embedded hardware. Reconfigurable em...

  19. Cost-effective computations with boundary interface operators in elliptic problems

    International Nuclear Information System (INIS)

    Khoromskij, B.N.; Mazurkevich, G.E.; Nikonov, E.G.

    1993-01-01

    The numerical algorithm for fast computations with interface operators associated with elliptic boundary value problems (BVPs) defined on step-type domains is presented. The algorithm is based on the asymptotically almost optimal technique developed for the treatment of the discrete Poincare-Steklov (PS) operators associated with the finite-difference Laplacian on rectangles when using a uniform grid with a 'displacement by h/2'. The approach can be regarded as an extension of the method proposed for the partial solution of the finite-difference Laplace equation to the case of displaced grids and mixed boundary conditions. It is shown that the action of the PS operator for the Dirichlet problem and mixed BVPs can be computed at a cost of order O(N log^2 N) in both arithmetical operations and computer memory, where N is the number of unknowns on the rectangle boundary. The single-domain algorithm is applied to solving multidomain elliptic interface problems with piecewise constant coefficients. The numerical experiments presented confirm almost linear growth of the computational costs and memory needs with respect to the dimension of the discrete interface problem. 14 refs., 3 figs., 4 tabs

  20. Digital dice computational solutions to practical probability problems

    CERN Document Server

    Nahin, Paul J

    2013-01-01

    Some probability problems are so difficult that they stump the smartest mathematicians. But even the hardest of these problems can often be solved with a computer and a Monte Carlo simulation, in which a random-number generator simulates a physical process, such as a million rolls of a pair of dice. This is what Digital Dice is all about: how to get numerical answers to difficult probability problems without having to solve complicated mathematical equations. Popular-math writer Paul Nahin challenges readers to solve twenty-one difficult but fun problems, from determining the
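
    The book's central technique fits in a few lines; the event chosen here (a pair of dice summing to 7, true probability 1/6) is our illustrative example:

        import random

        def estimate(event, trials=1_000_000):
            # Monte Carlo: approximate a probability by the fraction of random
            # trials in which the event occurs.
            return sum(event() for _ in range(trials)) / trials

        roll = lambda: random.randint(1, 6)
        print(estimate(lambda: roll() + roll() == 7))   # ~0.1667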

  1. Ocular problems of computer vision syndrome: Review

    Directory of Open Access Journals (Sweden)

    Ayakutty Muni Raja

    2015-01-01

    Full Text Available Nowadays, ophthalmologists are facing a new group of patients with eye problems related to prolonged and excessive computer use. When the demand for near work exceeds the eye's normal ability to perform the job comfortably, discomfort develops, and prolonged exposure leads to a cascade of reactions that can be grouped together as computer vision syndrome (CVS). In India, the computer-using population is more than 40 million, and 80% have discomfort due to CVS. Eye strain, headache, blurring of vision and dryness are the most common symptoms. Workstation modification, voluntary blinking, adjustment of screen brightness and regular breaks can reduce CVS.

  2. Locating phase transitions in computationally hard problems

    Indian Academy of Sciences (India)


  3. A Study of the Correlation between Computer Games and Adolescent Behavioral Problems.

    Science.gov (United States)

    Shokouhi-Moqhaddam, Solmaz; Khezri-Moghadam, Noshiravan; Javanmard, Zeinab; Sarmadi-Ansar, Hassan; Aminaee, Mehran; Shokouhi-Moqhaddam, Majid; Zivari-Rahman, Mahmoud

    2013-01-01

    Today, due to developing communication technologies, computer games and other audio-visual media, as social phenomena, are very attractive and have a great effect on children and adolescents. The increasing popularity of these games among children and adolescents results in public uncertainty about the plausible harmful effects of these games. This study aimed to investigate the correlation between computer games and behavioral problems among male guidance school students. This was a descriptive-correlative study on 384 randomly chosen male guidance school students. They were asked to answer the researcher's questionnaire about computer games and Achenbach's Youth Self-Report (YSR). The results of this study indicated a direct, significant correlation of about 95% between the amount of game playing among adolescents and anxiety/depression, withdrawn/depression, rule-breaking behaviors, aggression, and social problems. However, there was no statistically significant correlation between the amount of computer game usage and physical complaints, thinking problems, and attention problems. In addition, there was a significant correlation between the students' place of living and their parents' job, and the use of computer games. Computer games lead to anxiety, depression, withdrawal, rule-breaking behavior, aggression, and social problems in adolescents.

  4. A Study of the Correlation between Computer Games and Adolescent Behavioral Problems

    Science.gov (United States)

    Shokouhi-Moqhaddam, Solmaz; Khezri-Moghadam, Noshiravan; Javanmard, Zeinab; Sarmadi-Ansar, Hassan; Aminaee, Mehran; Shokouhi-Moqhaddam, Majid; Zivari-Rahman, Mahmoud

    2013-01-01

    Background Today, due to developing communication technologies, computer games and other audio-visual media, as social phenomena, are very attractive and have a great effect on children and adolescents. The increasing popularity of these games among children and adolescents results in public uncertainty about the plausible harmful effects of these games. This study aimed to investigate the correlation between computer games and behavioral problems among male guidance school students. Methods This was a descriptive-correlative study on 384 randomly chosen male guidance school students. They were asked to answer the researcher's questionnaire about computer games and Achenbach’s Youth Self-Report (YSR). Findings The results of this study indicated a direct, significant correlation of about 95% between the amount of game playing among adolescents and anxiety/depression, withdrawn/depression, rule-breaking behaviors, aggression, and social problems. However, there was no statistically significant correlation between the amount of computer game usage and physical complaints, thinking problems, and attention problems. In addition, there was a significant correlation between the students’ place of living and their parents’ job, and the use of computer games. Conclusion Computer games lead to anxiety, depression, withdrawal, rule-breaking behavior, aggression, and social problems in adolescents. PMID:24494157

  5. Engineering Courses on Computational Thinking Through Solving Problems in Artificial Intelligence

    Directory of Open Access Journals (Sweden)

    Piyanuch Silapachote

    2017-09-01

    Full Text Available Computational thinking sits at the core of every engineering and computing related discipline. It has increasingly emerged as a subject in its own right at all levels of education. It is a powerful cornerstone for cognitive development, creative problem solving, algorithmic thinking and design, and programming. How to teach computational thinking skills effectively poses real challenges and creates opportunities. Targeting entering computer science and engineering undergraduates, we resourcefully integrate elements from artificial intelligence (AI) into introductory computing courses. In addition to conveying the essence of computational thinking, practical exercises in AI inspire collaborative problem solving beyond abstraction, logical reasoning, and critical and analytical thinking. Problems in machine intelligence systems intrinsically connect students to algorithm-oriented computing and essential mathematical foundations. Beyond knowledge representation, AI fosters a gentle introduction to data structures and algorithms. Focused on engaging mental tools, a computer is never a necessity. Neither coding nor programming is ever required. Instead, students enjoy constructivist classrooms designed to always be active, flexible, and highly dynamic. Learning to learn and reflecting on their cognitive experiences, they rigorously construct knowledge from collectively solving exciting puzzles, competing in strategic games, and participating in intellectual discussions.

  6. Computational benchmark problems: a review of recent work within the American Nuclear Society Mathematics and Computation Division

    International Nuclear Information System (INIS)

    Dodds, H.L. Jr.

    1977-01-01

    An overview of the recent accomplishments of the Computational Benchmark Problems Committee of the American Nuclear Society Mathematics and Computation Division is presented. Solutions of computational benchmark problems in the following eight areas are presented and discussed: (a) high-temperature gas-cooled reactor neutronics, (b) pressurized water reactor (PWR) thermal hydraulics, (c) PWR neutronics, (d) neutron transport in a cylindrical ''black'' rod, (e) neutron transport in a boiling water reactor (BWR) rod bundle, (f) BWR transient neutronics with thermal feedback, (g) neutron depletion in a heavy water reactor, and (h) heavy water reactor transient neutronics. It is concluded that these problems and solutions are of considerable value to the nuclear industry because they have been and will continue to be useful in the development, evaluation, and verification of computer codes and numerical-solution methods

  7. Computational Nuclear Quantum Many-Body Problem: The UNEDF Project

    OpenAIRE

    Bogner, Scott; Bulgac, Aurel; Carlson, Joseph A.; Engel, Jonathan; Fann, George; Furnstahl, Richard J.; Gandolfi, Stefano; Hagen, Gaute; Horoi, Mihai; Johnson, Calvin W.; Kortelainen, Markus; Lusk, Ewing; Maris, Pieter; Nam, Hai Ah; Navratil, Petr

    2013-01-01

    The UNEDF project was a large-scale collaborative effort that applied high-performance computing to the nuclear quantum many-body problem. UNEDF demonstrated that close associations among nuclear physicists, mathematicians, and computer scientists can lead to novel physics outcomes built on algorithmic innovations and computational developments. This review showcases a wide range of UNEDF science results to illustrate this interplay.

  8. Fluid history computation methods for reactor safeguards problems using MNODE computer program

    International Nuclear Information System (INIS)

    Huang, Y.S.; Savery, C.W.

    1976-10-01

    A method for predicting the pressure-temperature histories of air, water liquid, and vapor flowing in a zoned containment as a result of a high-energy pipe rupture is described. The computer code, MNODE, has been developed for 12 connected control volumes and 24 inertia flow paths. Predictions by the code are compared with the results of an analytical gas dynamics problem, semiscale blowdown experiments, full-scale MARVIKEN test results, and Battelle-Frankfurt model PWR containment test data. The MNODE solutions to NRC/AEC subcompartment benchmark problems are also compared with the results predicted by other computer codes such as RELAP-3, FLASH-2, and CONTEMPT-PS. The analytical considerations are consistent with Section 6.2.1.2 of the Standard Format (Rev. 2) issued by the U.S. Nuclear Regulatory Commission in September 1975.

  9. Adapting Experiential Learning to Develop Problem-Solving Skills in Deaf and Hard-of-Hearing Engineering Students.

    Science.gov (United States)

    Marshall, Matthew M; Carrano, Andres L; Dannels, Wendy A

    2016-10-01

    Individuals who are deaf and hard-of-hearing (DHH) are underrepresented in science, technology, engineering, and mathematics (STEM) professions, and this may be due in part to their level of preparation in the development and retention of mathematical and problem-solving skills. An approach was developed that incorporates experiential learning and best practices of STEM instruction to give first-year DHH students enrolled in a postsecondary STEM program the opportunity to develop problem-solving skills in real-world scenarios. Using an industrial engineering laboratory that provides manufacturing and warehousing environments, students were immersed in real-world scenarios in which they worked on teams to address prescribed problems encountered during the activities. The highly structured, Plan-Do-Check-Act approach commonly used in industry was adapted for the DHH student participants to document and communicate the problem-solving steps. Students who experienced the intervention realized a 14.6% improvement in problem-solving proficiency compared with a control group, and this gain was retained at 6 and 12 months, post-intervention.

  10. CO2 laser milling of hard tissue

    Science.gov (United States)

    Werner, Martin; Ivanenko, Mikhail; Harbecke, Daniela; Klasing, Manfred; Steigerwald, Hendrik; Hering, Peter

    2007-02-01

    Drilling of bone and tooth tissue is a recurring medical procedure (screw and pin bores, bores for implant insertion, trepanation, etc.). Small round bores can generally be produced quickly with mechanical drills. Problems arise, however, with angled drilling, with the necessity to perform the drilling without damaging sensitive soft tissue beneath the bone, or with the attempt to precisely mill small noncircular cavities. We present investigations on laser hard tissue "milling", which can be advantageous for solving these problems. The "milling" is done with a CO2 laser (10.6 μm) with a pulse duration of 50 - 100 μs, combined with a PC-controlled galvanic beam scanner and with a fine water spray, which helps to avoid thermal side effects. Damage to underlying soft tissue can be prevented through control of the optical or acoustical ablation signal. The ablation of hard tissue is accompanied by a strong glowing, which is absent during the laser beam's action on soft tissue. The acoustic signals from the various tissue types exhibit distinct differences in spectral composition. Computer image analysis could also be a useful tool to control the operation. Laser "milling" of noncircular cavities with 1 - 4 mm width and about 10 mm depth is particularly interesting for dental implantology. In ex-vivo investigations we found conditions for fast laser "milling" of the cavities without thermal damage and with minimal tapering. This included exploration of different filling patterns (concentric rings, crosshatch, parallel lines and their combinations), definition of the maximal pulse duration, repetition rate and laser power, and the optimal position of the spray. The optimized results give evidence for the applicability of the CO2 laser for biologically tolerable "milling" of deep cavities in hard tissue.

  11. Isotropic-nematic transition in a mixture of hard spheres and hard spherocylinders: scaled particle theory description

    Directory of Open Access Journals (Sweden)

    M.F. Holovko

    2017-12-01

    Full Text Available The scaled particle theory is developed for the description of the thermodynamic properties of a mixture of hard spheres and hard spherocylinders. Analytical expressions for the free energy, pressure and chemical potentials are derived. From the minimization of the free energy, a nonlinear integral equation for the orientational singlet distribution function is formulated. The isotropic-nematic phase transition in this mixture is investigated via a bifurcation analysis of this equation. It is shown that with an increase in the concentration of hard spheres, the total packing fraction of the mixture at the phase boundaries slightly increases. The obtained results are compared with computer simulation data.

  12. Musculoskeletal Problems among University Student Computer Users: A Cross-Sectional Study

    Directory of Open Access Journals (Sweden)

    Rakhadani PB

    2017-07-01

    Full Text Available While several studies have examined the prevalence and correlates of musculoskeletal problems among university students, scanty information exists in the South African context. The objective of this study was to determine the prevalence, causes and consequences of musculoskeletal problems among University of Venda student computer users. This cross-sectional study involved 694 university students at the University of Venda. A self-designed questionnaire was used to collect information on sociodemographic characteristics, problems associated with computer use, and causes of musculoskeletal problems associated with computer use. The majority (84.6%) of the participants use the computer for internet, word processing (20.3%), and games (18.7%). The students reported neck pain when using the computer (52.3%); shoulder (47.0%), finger (45.0%), lower back (43.1%), general body pain (42.9%), elbow (36.2%), wrist (33.7%), hip and foot (29.1%) and knee (26.2%) pain were also reported. Reported causes of musculoskeletal pain associated with computer usage were: sitting position, low chairs, a lot of time spent on the computer, uncomfortable laboratory chairs, and stress. Eye problems (51.9%), muscle cramp (344.0%), headache (45.3%), blurred vision (38.0%), feeling of illness (39.9%) and missed lectures (29.1%) were consequences of musculoskeletal problems linked to computer use. The majority of students reported having mild pain (43.7%), moderate (24.2%), and severe (8.4%) pain. Years of computer use were significantly associated with neck, shoulder and wrist pain. Using the computer for internet was significantly associated with neck pain (OR=0.60; 95% CI 0.40-0.93); games with neck (OR=0.60; 95% CI 0.40-0.85) and hip/foot pain (OR=0.60; 95% CI 0.40-0.92); programming with elbow (OR=1.78; 95% CI 1.10-2.94) and wrist pain (OR=2.25; 95% CI 1.36-3.73); while word processing was significantly associated with lower back pain (OR=1.45; 95% CI 1.03-2.04). Undergraduate study had a significant association with elbow pain (OR=2

  13. Agent assisted interactive algorithm for computationally demanding multiobjective optimization problems

    OpenAIRE

    Ojalehto, Vesa; Podkopaev, Dmitry; Miettinen, Kaisa

    2015-01-01

    We generalize the applicability of interactive methods for solving computationally demanding, that is, time-consuming, multiobjective optimization problems. For this purpose we propose a new agent assisted interactive algorithm. It employs a computationally inexpensive surrogate problem and four different agents that intelligently update the surrogate based on the preferences specified by a decision maker. In this way, we decrease the waiting times imposed on the decision maker du...

  14. Computational complexity: a quantitative perspective

    CERN Document Server

    Zimand, Marius

    2004-01-01

There has been a common perception that computational complexity is a theory of "bad news" because its most typical results assert that various real-world and innocent-looking tasks are infeasible. In fact, "bad news" is a relative term, and, indeed, in some situations (e.g., in cryptography), we want an adversary to not be able to perform a certain task. However, a "bad news" result does not automatically become useful in such a scenario. For this to happen, its hardness features have to be quantitatively evaluated and shown to manifest extensively. The book undertakes a quantitative analysis of some of the major results in complexity that regard either classes of problems or individual concrete problems. The sizes of some important classes are studied using resource-bounded topological and measure-theoretical tools. In the case of individual problems, the book studies relevant quantitative attributes such as approximation properties or the number of hard inputs at each length. One chapter is dedicated to abs...

  15. Problems and accommodation strategies reported by computer users with rheumatoid arthritis or fibromyalgia.

    Science.gov (United States)

    Baker, Nancy A; Rubinstein, Elaine N; Rogers, Joan C

    2012-09-01

    Little is known about the problems experienced by and the accommodation strategies used by computer users with rheumatoid arthritis (RA) or fibromyalgia (FM). This study (1) describes specific problems and accommodation strategies used by people with RA and FM during computer use; and (2) examines if there were significant differences in the problems and accommodation strategies between the different equipment items for each diagnosis. Subjects were recruited from the Arthritis Network Disease Registry. Respondents completed a self-report survey, the Computer Problems Survey. Data were analyzed descriptively (percentages; 95% confidence intervals). Differences in the number of problems and accommodation strategies were calculated using nonparametric tests (Friedman's test and Wilcoxon Signed Rank Test). Eighty-four percent of respondents reported at least one problem with at least one equipment item (RA = 81.5%; FM = 88.9%), with most respondents reporting problems with their chair. Respondents most commonly used timing accommodation strategies to cope with mouse and keyboard problems, personal accommodation strategies to cope with chair problems and environmental accommodation strategies to cope with monitor problems. The number of problems during computer use was substantial in our sample, and our respondents with RA and FM may not implement the most effective strategies to deal with their chair, keyboard, or mouse problems. This study suggests that workers with RA and FM might potentially benefit from education and interventions to assist with the development of accommodation strategies to reduce problems related to computer use.

  16. Computer Hacking as a Social Problem

    OpenAIRE

    Alleyne, Brian

    2018-01-01

This chapter introduces the ideas and practices of digital technology enthusiasts who fall under the umbrella of "hackers." We will discuss how their defining activity has been constructed as a social problem and how that construction has been challenged in different ways. The chapter concludes with several policy suggestions aimed at addressing the more problematic aspects of computer hacking.

  17. Solving the Stokes problem on a massively parallel computer

    DEFF Research Database (Denmark)

    Axelsson, Owe; Barker, Vincent A.; Neytcheva, Maya

    2001-01-01

    boundary value problem for each velocity component, are solved by the conjugate gradient method with a preconditioning based on the algebraic multi‐level iteration (AMLI) technique. The velocity is found from the computed pressure. The method is optimal in the sense that the computational work...... is proportional to the number of unknowns. Further, it is designed to exploit a massively parallel computer with distributed memory architecture. Numerical experiments on a Cray T3E computer illustrate the parallel performance of the method....

  18. Hard-real-time resource management for autonomous spacecraft

    Science.gov (United States)

    Gat, E.

    2000-01-01

This paper describes tickets, a computational mechanism for hard-real-time autonomous resource management. Autonomous spacecraft control can be considered abstractly as a computational process whose outputs are spacecraft commands.

  19. Molecular computation: RNA solutions to chess problems.

    Science.gov (United States)

    Faulhammer, D; Cukras, A R; Lipton, R J; Landweber, L F

    2000-02-15

We have expanded the field of "DNA computers" to RNA and present a general approach for the solution of satisfiability problems. As an example, we consider a variant of the "Knight problem," which asks what configurations of knights can be placed on an n x n chess board such that no knight is attacking any other knight on the board. Using specific ribonuclease digestion to manipulate strands of a 10-bit binary RNA library, we developed a molecular algorithm and applied it to a 3 x 3 chessboard as a 9-bit instance of this problem. Here, the nine spaces on the board correspond to nine "bits" or placeholders in a combinatorial RNA library. We recovered a set of "winning" molecules that describe solutions to this problem.
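
The 9-bit search space of this instance is small enough to enumerate exhaustively on a conventional computer, which makes the molecular result easy to cross-check. Below is a minimal brute-force sketch of the same 3 x 3 instance (not from the paper; the square numbering and helper names are illustrative assumptions):

```python
from itertools import product

# Knight attacks on a 3x3 board, squares numbered 0..8 row-major,
# matching the nine "bits" of the RNA library.
ATTACKS = {
    0: (5, 7), 1: (6, 8), 2: (3, 7),
    3: (2, 8), 4: (),     5: (0, 6),
    6: (1, 5), 7: (0, 2), 8: (1, 3),
}

def is_valid(config):
    """config is a 9-tuple of 0/1; valid if no knight attacks another."""
    return all(not (config[sq] and any(config[t] for t in ATTACKS[sq]))
               for sq in range(9))

# Enumerate all 2^9 = 512 configurations, keeping the "winning" ones.
solutions = [c for c in product((0, 1), repeat=9) if is_valid(c)]
print(len(solutions), "valid configurations")
```

Note that the center square attacks nothing on a 3 x 3 board, so it can always hold a knight independently of the other eight squares.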

  20. The BQP-hardness of approximating the Jones polynomial

    International Nuclear Information System (INIS)

    Aharonov, Dorit; Arad, Itai

    2011-01-01

A celebrated result due to Freedman et al (2002 Commun. Math. Phys. 227 605-22) states that providing additive approximations of the Jones polynomial at the kth root of unity, for constant k=5 and k≥7, is BQP-hard. Together with the algorithmic results of Aharonov et al (2005) and Freedman et al (2002 Commun. Math. Phys. 227 587-603), this gives perhaps the most natural BQP-complete problem known today and motivates further study of the topic. In this paper, we focus on the universality proof; we extend the result of Freedman et al (2002) to values of k that grow polynomially with the number of strands and crossings in the link, thus extending the BQP-hardness of Jones polynomial approximations to all values to which the AJL algorithm applies (Aharonov et al 2005), proving that for all those values the problems are BQP-complete. As a side benefit, we derive a fairly elementary proof of the Freedman et al density result, without referring to advanced results from Lie algebra representation theory, making this important result accessible to a wider audience in the computer science research community. We make use of two general lemmas we prove, the bridge lemma and the decoupling lemma, which provide tools for establishing the density of subgroups in SU(n). These tools seem to be of independent interest in more general contexts of proving quantum universality. Our result also implies a completely classical statement: that multiplicative approximations of the Jones polynomial, at exactly the same values, are #P-hard, via a recent result due to Kuperberg (2009 arXiv:0908.0512). Since the first publication of those results in their preliminary form (Aharonov and Arad 2006 arXiv:quant-ph/0605181), the methods we present here have been used in several other contexts (Aharonov and Arad 2007 arXiv:quant-ph/0702008; Peter and Stephen 2008 Quantum Inf. Comput. 8 681). The present paper is an improved and extended version of the results presented by Aharonov and Arad

  2. Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource. Intelligent Executive Computer Communication

    Science.gov (United States)

    1991-06-01

Interim report: Distributed Problem Solving: Adaptive Networks With a Computer Intermediary Resource: Intelligent Executive Computer Communication, by John Lyman and Carla J. Conaway, University of California at Los Angeles. Related publication in Proceedings of The National Conference on Artificial Intelligence, pages 181-184, The American Association for Artificial Intelligence, Pittsburgh.

  3. SOLVING FLOWSHOP SCHEDULING PROBLEMS USING A DISCRETE AFRICAN WILD DOG ALGORITHM

    Directory of Open Access Journals (Sweden)

    M. K. Marichelvam

    2013-04-01

Full Text Available The problem of m-machine permutation flowshop scheduling is considered in this paper. The objective is to minimize the makespan. The flowshop scheduling problem is a typical combinatorial optimization problem and has been proved to be strongly NP-hard. Hence, several heuristics and meta-heuristics have been proposed by researchers. In this paper, a discrete African wild dog algorithm is applied to solve the flowshop scheduling problem. Computational results using benchmark problems show that the proposed algorithm outperforms many other algorithms addressed in the literature.
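
For reference, the makespan objective minimized here can be evaluated for any candidate permutation with a simple recurrence over machines. The sketch below is only an illustration of that objective, not of the proposed algorithm; the helper names and toy instance are assumptions:

```python
def makespan(perm, p):
    """Makespan of a job permutation in an m-machine permutation flowshop.
    p[j][k] is the processing time of job j on machine k."""
    m = len(p[0])
    completion = [0.0] * m  # completion time of the previous job on each machine
    for j in perm:
        for k in range(m):
            # A job starts on machine k when both the machine is free and the
            # job has finished on machine k-1.
            start = max(completion[k], completion[k - 1] if k > 0 else 0.0)
            completion[k] = start + p[j][k]
    return completion[-1]

# Toy instance: 3 jobs on 2 machines.
p = [[3, 2], [1, 4], [2, 2]]
print(makespan([0, 1, 2], p))  # evaluate one candidate permutation
```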

  4. The traveling salesman problem: a computational study

    CERN Document Server

    Applegate, David L; Chvatal, Vasek; Cook, William J

    2006-01-01

    This book presents the latest findings on one of the most intensely investigated subjects in computational mathematics--the traveling salesman problem. It sounds simple enough: given a set of cities and the cost of travel between each pair of them, the problem challenges you to find the cheapest route by which to visit all the cities and return home to where you began. Though seemingly modest, this exercise has inspired studies by mathematicians, chemists, and physicists. Teachers use it in the classroom. It has practical applications in genetics, telecommunications, and neuroscience.
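
As an illustration of the problem the book studies (not of the book's cutting-plane machinery), the classical Held-Karp dynamic program solves small instances exactly in O(n^2 * 2^n) time; the instance below is a made-up example:

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP via the Held-Karp dynamic program.
    dist[i][j] is the travel cost between cities i and j; the tour starts
    and ends at city 0."""
    n = len(dist)
    # best[(S, j)] = cheapest cost to start at 0, visit the set S, and end at j.
    best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            Sf = frozenset(S)
            for j in S:
                best[(Sf, j)] = min(best[(Sf - {j}, k)] + dist[k][j]
                                    for k in S if k != j)
    full = frozenset(range(1, n))
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
print(held_karp(dist))  # optimal tour cost for the toy instance
```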

  5. Quantum Computation: Entangling with the Future

    Science.gov (United States)

    Jiang, Zhang

    2017-01-01

Commercial applications of quantum computation have become viable due to the rapid progress of the field in recent years. Efficient quantum algorithms have been discovered to cope with the most challenging real-world problems that are too hard for classical computers. Manufactured quantum hardware has reached unprecedented precision and controllability, enabling fault-tolerant quantum computation. Here, I give a brief introduction to the principles of quantum mechanics that promise its unparalleled computational power. I will discuss several important quantum algorithms that achieve exponential or polynomial speedup over any classical algorithm. Building a quantum computer is a daunting task, and I will talk about the criteria for and various implementations of quantum computers. I conclude the talk with near-future commercial applications of a quantum computer.

  6. Second International Conference on Soft Computing for Problem Solving

    CERN Document Server

    Nagar, Atulya; Deep, Kusum; Pant, Millie; Bansal, Jagdish; Ray, Kanad; Gupta, Umesh

    2014-01-01

    The present book is based on the research papers presented in the International Conference on Soft Computing for Problem Solving (SocProS 2012), held at JK Lakshmipat University, Jaipur, India. This book provides the latest developments in the area of soft computing and covers a variety of topics, including mathematical modeling, image processing, optimization, swarm intelligence, evolutionary algorithms, fuzzy logic, neural networks, forecasting, data mining, etc. The objective of the book is to familiarize the reader with the latest scientific developments that are taking place in various fields and the latest sophisticated problem solving tools that are being developed to deal with the complex and intricate problems that are otherwise difficult to solve by the usual and traditional methods. The book is directed to the researchers and scientists engaged in various fields of Science and Technology.

  7. Work Hard / Play Hard

    OpenAIRE

    Burrows, J.; Johnson, V.; Henckel, D.

    2016-01-01

    Work Hard / Play Hard was a participatory performance/workshop or CPD experience hosted by interdisciplinary arts atelier WeAreCodeX, in association with AntiUniversity.org. As a socially/economically engaged arts practice, Work Hard / Play Hard challenged employees/players to get playful, or go to work. 'The game changes you, you never change the game'. Employee PLAYER A 'The faster the better.' Employer PLAYER B

  8. Perspective: Memcomputing: Leveraging memory and physics to compute efficiently

    Science.gov (United States)

    Di Ventra, Massimiliano; Traversa, Fabio L.

    2018-05-01

    It is well known that physical phenomena may be of great help in computing some difficult problems efficiently. A typical example is prime factorization that may be solved in polynomial time by exploiting quantum entanglement on a quantum computer. There are, however, other types of (non-quantum) physical properties that one may leverage to compute efficiently a wide range of hard problems. In this perspective, we discuss how to employ one such property, memory (time non-locality), in a novel physics-based approach to computation: Memcomputing. In particular, we focus on digital memcomputing machines (DMMs) that are scalable. DMMs can be realized with non-linear dynamical systems with memory. The latter property allows the realization of a new type of Boolean logic, one that is self-organizing. Self-organizing logic gates are "terminal-agnostic," namely, they do not distinguish between the input and output terminals. When appropriately assembled to represent a given combinatorial/optimization problem, the corresponding self-organizing circuit converges to the equilibrium points that express the solutions of the problem at hand. In doing so, DMMs take advantage of the long-range order that develops during the transient dynamics. This collective dynamical behavior, reminiscent of a phase transition, or even the "edge of chaos," is mediated by families of classical trajectories (instantons) that connect critical points of increasing stability in the system's phase space. The topological character of the solution search renders DMMs robust against noise and structural disorder. Since DMMs are non-quantum systems described by ordinary differential equations, not only can they be built in hardware with the available technology, they can also be simulated efficiently on modern classical computers. As an example, we will show the polynomial-time solution of the subset-sum problem for the worst cases, and point to other types of hard problems where simulations of DMMs
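
For context, the subset-sum problem mentioned above asks whether some subset of given integers adds up to a target value. The sketch below is the standard classical pseudo-polynomial dynamic program, shown only to define the problem; it is unrelated to the memcomputing approach itself:

```python
def subset_sum(weights, target):
    """Classical pseudo-polynomial dynamic program for subset-sum.
    Returns a subset of `weights` summing to `target`, or None."""
    reachable = {0: []}  # sum -> one subset achieving it
    for w in weights:
        updates = {}
        for s, subset in reachable.items():
            t = s + w
            if t <= target and t not in reachable:
                updates[t] = subset + [w]
        reachable.update(updates)
    return reachable.get(target)

print(subset_sum([7, 13, 19, 23, 31], 50))  # e.g. [19, 31]
```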

  9. Adiabatic quantum search algorithm for structured problems

    International Nuclear Information System (INIS)

    Roland, Jeremie; Cerf, Nicolas J.

    2003-01-01

    The study of quantum computation has been motivated by the hope of finding efficient quantum algorithms for solving classically hard problems. In this context, quantum algorithms by local adiabatic evolution have been shown to solve an unstructured search problem with a quadratic speedup over a classical search, just as Grover's algorithm. In this paper, we study how the structure of the search problem may be exploited to further improve the efficiency of these quantum adiabatic algorithms. We show that by nesting a partial search over a reduced set of variables into a global search, it is possible to devise quantum adiabatic algorithms with a complexity that, although still exponential, grows with a reduced order in the problem size

  10. Numerical problems with the Pascal triangle in moment computation

    Czech Academy of Sciences Publication Activity Database

    Kautsky, J.; Flusser, Jan

    2016-01-01

    Roč. 306, č. 1 (2016), s. 53-68 ISSN 0377-0427 R&D Projects: GA ČR GA15-16928S Institutional support: RVO:67985556 Keywords : moment computation * Pascal triangle * appropriate polynomial basis * numerical problems Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.357, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/flusser-0459096.pdf

  11. Optimal recombination in genetic algorithms for combinatorial optimization problems: Part II

    Directory of Open Access Journals (Sweden)

    Eremeev Anton V.

    2014-01-01

Full Text Available This paper surveys results on the complexity of the optimal recombination problem (ORP), which consists in finding the best possible offspring as a result of a recombination operator in a genetic algorithm, given two parent solutions. In Part II, we consider the computational complexity of ORPs arising in genetic algorithms for problems on permutations: the Travelling Salesman Problem, the Shortest Hamilton Path Problem, and Makespan Minimization on a Single Machine, as well as some other related problems. The analysis indicates that the corresponding ORPs are NP-hard but solvable by faster algorithms than the problems they are derived from.

  12. Computational nuclear quantum many-body problem: The UNEDF project

    Science.gov (United States)

    Bogner, S.; Bulgac, A.; Carlson, J.; Engel, J.; Fann, G.; Furnstahl, R. J.; Gandolfi, S.; Hagen, G.; Horoi, M.; Johnson, C.; Kortelainen, M.; Lusk, E.; Maris, P.; Nam, H.; Navratil, P.; Nazarewicz, W.; Ng, E.; Nobre, G. P. A.; Ormand, E.; Papenbrock, T.; Pei, J.; Pieper, S. C.; Quaglioni, S.; Roche, K. J.; Sarich, J.; Schunck, N.; Sosonkina, M.; Terasaki, J.; Thompson, I.; Vary, J. P.; Wild, S. M.

    2013-10-01

    The UNEDF project was a large-scale collaborative effort that applied high-performance computing to the nuclear quantum many-body problem. The primary focus of the project was on constructing, validating, and applying an optimized nuclear energy density functional, which entailed a wide range of pioneering developments in microscopic nuclear structure and reactions, algorithms, high-performance computing, and uncertainty quantification. UNEDF demonstrated that close associations among nuclear physicists, mathematicians, and computer scientists can lead to novel physics outcomes built on algorithmic innovations and computational developments. This review showcases a wide range of UNEDF science results to illustrate this interplay.

  13. Runtime analysis of the (1+1) EA on computing unique input output sequences

    DEFF Research Database (Denmark)

    Lehre, Per Kristian; Yao, Xin

    2010-01-01

    Computing unique input output (UIO) sequences is a fundamental and hard problem in conformance testing of finite state machines (FSM). Previous experimental research has shown that evolutionary algorithms (EAs) can be applied successfully to find UIOs for some FSMs. However, before EAs can...... in the theoretical analysis, and the variability of the runtime. The numerical results fit well with the theoretical results, even for small problem instance sizes. Together, these results provide a first theoretical characterisation of the potential and limitations of the (1 + 1) EA on the problem of computing UIOs....
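
For readers unfamiliar with the algorithm analyzed here, the (1+1) EA keeps a single bit string and repeatedly mutates it, accepting any offspring that is at least as fit. A minimal sketch follows, using the toy OneMax fitness as a stand-in for a UIO-based fitness function (the UIO evaluation itself is outside the scope of this sketch):

```python
import random

def one_plus_one_ea(fitness, n, max_iters=100_000):
    """Minimal (1+1) EA: flip each bit independently with probability 1/n,
    accept the offspring if it is at least as fit as the parent."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for _ in range(max_iters):
        y = [b ^ (random.random() < 1.0 / n) for b in x]  # standard bit mutation
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
    return x, fx

# OneMax stands in for the UIO fitness function here.
best, value = one_plus_one_ea(lambda bits: sum(bits), n=40)
print(value)
```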

  14. A genetic algorithm approach to optimization for the radiological worker allocation problem

    International Nuclear Information System (INIS)

    Yan Chen; Masakuni Narita; Masashi Tsuji; Sangduk Sa

    1996-01-01

The worker allocation optimization problem in radiological facilities inevitably involves various types of requirements and constraints relevant to radiological protection and labor management. Some of these goals and constraints are not amenable to a rigorous mathematical formulation. Conventional methods for this problem rely heavily on sophisticated algebraic or numerical algorithms, which cause difficulties in the search for optimal solutions in the search space of worker allocation optimization problems. Genetic algorithms (GAs) are stochastic search algorithms introduced by J. Holland in the 1970s, based on ideas and techniques from genetic and evolutionary theory. The most striking characteristic of GAs is the large flexibility allowed in the formulation of the optimization problem and in the process of searching for the optimal solution. In the formulation, it is not necessary to define the problem in rigorous mathematical terms, as required by conventional methods. Furthermore, by designing a model of evolution for the optimization problem, the optimal solution can be sought efficiently with computationally simple manipulations, without highly complex mathematical algorithms. We reported a GA approach to the worker allocation problem in radiological facilities in a previous study. In that study, two types of hard constraints were employed to reduce the huge search space, and the optimal solution was sought in such a way as to satisfy as many soft constraints as possible. It was demonstrated that the proposed evolutionary method could provide the optimal solution efficiently compared with conventional methods. However, although the employed hard constraints could localize the search space into a very small region, they brought some complexity into the designed genetic operators and demanded additional computational burdens. In this paper, we propose a simplified evolutionary model with less restrictive hard constraints and make comparisons between

  15. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for the quantum programming and execution models, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  16. Distributed data fusion across multiple hard and soft mobile sensor platforms

    Science.gov (United States)

    Sinsley, Gregory

    is a younger field than centralized fusion. The main issues in distributed fusion that are addressed are distributed classification and distributed tracking. There are several well established methods for performing distributed fusion that are first reviewed. The chapter on distributed fusion concludes with a multiple unmanned vehicle collaborative test involving an unmanned aerial vehicle and an unmanned ground vehicle. The third issue this thesis addresses is that of soft sensor only data fusion. Soft-only fusion is a newer field than centralized or distributed hard sensor fusion. Because of the novelty of the field, the chapter on soft only fusion contains less background information and instead focuses on some new results in soft sensor data fusion. Specifically, it discusses a novel fuzzy logic based soft sensor data fusion method. This new method is tested using both simulations and field measurements. The biggest issue addressed in this thesis is that of combined hard and soft fusion. Fusion of hard and soft data is the newest area for research in the data fusion community; therefore, some of the largest theoretical contributions in this thesis are in the chapter on combined hard and soft fusion. This chapter presents a novel combined hard and soft data fusion method based on random set theory, which processes random set data using a particle filter. Furthermore, the particle filter is designed to be distributed across multiple robots and portable computers (used by human observers) so that there is no centralized failure point in the system. After laying out a theoretical groundwork for hard and soft sensor data fusion the thesis presents practical applications for hard and soft sensor data fusion in simulation. Through a series of three progressively more difficult simulations, some important hard and soft sensor data fusion capabilities are demonstrated. The first simulation demonstrates fusing data from a single soft sensor and a single hard sensor in

  17. 6th International Conference on Soft Computing for Problem Solving

    CERN Document Server

    Bansal, Jagdish; Das, Kedar; Lal, Arvind; Garg, Harish; Nagar, Atulya; Pant, Millie

    2017-01-01

    This two-volume book gathers the proceedings of the Sixth International Conference on Soft Computing for Problem Solving (SocProS 2016), offering a collection of research papers presented during the conference at Thapar University, Patiala, India. Providing a veritable treasure trove for scientists and researchers working in the field of soft computing, it highlights the latest developments in the broad area of “Computational Intelligence” and explores both theoretical and practical aspects using fuzzy logic, artificial neural networks, evolutionary algorithms, swarm intelligence, soft computing, computational intelligence, etc.

  18. Explaining the Mind: Problems, Problems

    OpenAIRE

    Harnad, Stevan

    2001-01-01

    The mind/body problem is the feeling/function problem: How and why do feeling systems feel? The problem is not just "hard" but insoluble (unless one is ready to resort to telekinetic dualism). Fortunately, the "easy" problems of cognitive science (such as the how and why of categorization and language) are not insoluble. Five books (by Damasio, Edelman/Tononi...

  19. Shutdown problems in large tokamaks

    International Nuclear Information System (INIS)

    Weldon, D.M.

    1978-01-01

    Some of the problems connected with a normal shutdown at the end of the burn phase (soft shutdown) and with a shutdown caused by disruptive instability (hard shutdown) have been considered. For a soft shutdown a cursory literature search was undertaken and methods for controlling the thermal wall loading were listed. Because shutdown computer codes are not widespread, some of the differences between start-up codes and shutdown codes were discussed along with program changes needed to change a start-up code to a shutdown code. For a hard shutdown, the major problems are large induced voltages in the ohmic-heating and equilibrium-field coils and high first wall erosion. A literature search of plasma-wall interactions was carried out. Phenomena that occur at the plasma-wall interface can be quite complicated. For example, material evaporated from the wall can form a virtual limiter or shield protecting the wall from major damage. Thermal gradients that occur during the interaction can produce currents whose associated magnetic field also helps shield the wall

  20. Hard Real-Time Task Scheduling in Cloud Computing Using an Adaptive Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Amjad Mahmood

    2017-04-01

Full Text Available In the Infrastructure-as-a-Service cloud computing model, virtualized computing resources in the form of virtual machines are provided over the Internet. A user can rent an arbitrary number of computing resources to meet their requirements, making cloud computing an attractive choice for executing real-time tasks. Economical task allocation and scheduling on a set of leased virtual machines is an important problem in the cloud computing environment. This paper proposes a greedy algorithm and a genetic algorithm with adaptive selection of suitable crossover and mutation operations (named AGA) to allocate and schedule real-time tasks with precedence constraints on heterogeneous virtual machines. A comprehensive simulation study has been done to evaluate the performance of the proposed algorithms in terms of their solution quality and efficiency. The simulation results show that AGA outperforms the greedy algorithm and a non-adaptive genetic algorithm in terms of solution quality.
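
The adaptive ingredient of AGA is the run-time choice among several crossover and mutation operators. The record does not give the paper's exact update rule, so the following is only a generic sketch of adaptive operator selection, under the assumption that operators which recently produced improvements should be sampled more often; all names are illustrative:

```python
import random

class AdaptiveOperatorSelector:
    """Roulette-wheel selection over variation operators, weighted by past
    success. A sketch in the spirit of adaptive GAs; the actual rule used
    by the paper may differ."""
    def __init__(self, operators, reward=1.0, decay=0.95):
        self.ops = list(operators)
        self.scores = {op: 1.0 for op in self.ops}  # optimistic initial scores
        self.reward, self.decay = reward, decay

    def pick(self):
        total = sum(self.scores.values())
        r = random.uniform(0, total)
        for op, s in self.scores.items():
            r -= s
            if r <= 0:
                return op
        return self.ops[-1]

    def feedback(self, op, improved):
        # Exponentially forget old evidence; reward operators that improved fitness.
        self.scores[op] = self.scores[op] * self.decay + (self.reward if improved else 0.0)

# Usage inside a GA loop (operator names are placeholders):
selector = AdaptiveOperatorSelector(["one_point", "two_point", "uniform"])
op = selector.pick()
# ... apply the chosen crossover, evaluate the offspring ...
selector.feedback(op, improved=True)
```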

  1. Computational complexity in entanglement transformations

    Science.gov (United States)

    Chitambar, Eric A.

    In physics, systems having three parts are typically much more difficult to analyze than those having just two. Even in classical mechanics, predicting the motion of three interacting celestial bodies remains an insurmountable challenge while the analogous two-body problem has an elementary solution. It is as if just by adding a third party, a fundamental change occurs in the structure of the problem that renders it unsolvable. In this thesis, we demonstrate how such an effect is likewise present in the theory of quantum entanglement. In fact, the complexity differences between two-party and three-party entanglement become quite conspicuous when comparing the difficulty in deciding what state changes are possible for these systems when no additional entanglement is consumed in the transformation process. We examine this entanglement transformation question and its variants in the language of computational complexity theory, a powerful subject that formalizes the concept of problem difficulty. Since deciding feasibility of a specified bipartite transformation is relatively easy, this task belongs to the complexity class P. On the other hand, for tripartite systems, we find the problem to be NP-Hard, meaning that its solution is at least as hard as the solution to some of the most difficult problems humans have encountered. One can then rigorously defend the assertion that a fundamental complexity difference exists between bipartite and tripartite entanglement since unlike the former, the full range of forms realizable by the latter is incalculable (assuming P≠NP). However, similar to the three-body celestial problem, when one examines a special subclass of the problem---invertible transformations on systems having at least one qubit subsystem---we prove that the problem can be solved efficiently. As a hybrid of the two questions, we find that the question of tripartite to bipartite transformations can be solved by an efficient randomized algorithm. Our results are

  2. COMPUTER TOOLS OF DYNAMIC MATHEMATIC SOFTWARE AND METHODICAL PROBLEMS OF THEIR USE

    Directory of Open Access Journals (Sweden)

    Olena V. Semenikhina

    2014-08-01

Full Text Available The article presents the results of an analysis of the standard computer tools of dynamic mathematics software that are used in solving tasks, and of the tools on which a teacher can rely in the teaching of mathematics. The possibility of organizing experimental investigation of mathematical objects on the basis of these tools, the formulation of new tasks on the basis of a limited number of tools, and fast automated checking are discussed. Some methodological comments on the application of computer tools and methodological features of the use of interactive mathematical environments are presented. Problems arising from the use of computer tools are identified, among them the rethinking of forms and methods of teaching, the search for creative problems, the rational choice of environment, the checking of electronic solutions, and common mistakes in the use of computer tools.

  3. An Integer Programming Formulation of the Minimum Common String Partition Problem.

    Directory of Open Access Journals (Sweden)

    S M Ferdous

Full Text Available We consider the problem of finding a minimum common string partition (MCSP) of two strings, which is an NP-hard problem. The MCSP problem is closely related to genome comparison and rearrangement, an important field in Computational Biology. In this paper, we map the MCSP problem onto a graph using a prior technique and, using this graph, we develop an Integer Linear Programming (ILP) formulation for the problem. We implement the ILP formulation and compare the results with the state-of-the-art algorithms from the literature. The experimental results are found to be promising.
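
To make the problem concrete: a common string partition cuts the first string into contiguous blocks that can be reordered to spell the second string, and the objective is to minimize the number of blocks. The exhaustive sketch below (illustrative only, not the paper's ILP) works for tiny instances:

```python
from itertools import combinations
from collections import Counter

def partitions(s, k):
    """All ways to cut string s into k contiguous blocks."""
    n = len(s)
    for cuts in combinations(range(1, n), k - 1):
        bounds = (0,) + cuts + (n,)
        yield [s[bounds[i]:bounds[i + 1]] for i in range(k)]

def can_build(s, blocks):
    """Can string s be segmented into exactly the multiset `blocks`?"""
    if not s:
        return sum(blocks.values()) == 0
    for b in list(blocks):
        if blocks[b] > 0 and s.startswith(b):
            blocks[b] -= 1
            ok = can_build(s[len(b):], blocks)
            blocks[b] += 1
            if ok:
                return True
    return False

def mcsp(a, b):
    """Smallest k such that some k-block partition of a rearranges into b."""
    if Counter(a) != Counter(b):
        return None  # no common partition exists
    for k in range(1, len(a) + 1):
        for blocks in partitions(a, k):
            if can_build(b, Counter(blocks)):
                return k, blocks
    return None

print(mcsp("ababcab", "abcabab"))  # tiny instance; exponential in general
```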

  4. Hard electronics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

Hard material technologies were surveyed to establish a hard electronics technology offering superior characteristics under harsh operating or environmental conditions compared with conventional Si devices. The following technologies were surveyed separately: (1) device and integration technologies for wide-gap hard semiconductors such as SiC, diamond and nitrides; (2) hard semiconductor device technology for vacuum microelectronics; and (3) hard new-material device technology for oxides. The formation technology of oxide thin films made remarkable progress after the discovery of oxide superconductor materials, resulting in the development of an atomic layer growth method and a mist deposition method. This leading research is expected to resolve issues that are difficult to address with current Si technology, such as high-power, high-frequency and low-loss devices in power electronics, high-temperature-proof and radiation-proof devices in ultimate electronics, and high-speed, densely integrated devices in information electronics. 432 refs., 136 figs., 15 tabs.

  5. Hybrid Metaheuristics for Solving a Fuzzy Single Batch-Processing Machine Scheduling Problem

    Directory of Open Access Journals (Sweden)

    S. Molla-Alizadeh-Zavardehi

    2014-01-01

Full Text Available This paper deals with the problem of minimizing the total weighted tardiness of jobs in real-world single batch-processing machine (SBPM) scheduling in the presence of fuzzy due dates. First, a fuzzy mixed integer linear programming model is developed. Then, due to the complexity of the problem, which is NP-hard, we design two hybrid metaheuristics called GA-VNS and VNS-SA, applying the advantages of the genetic algorithm (GA), variable neighborhood search (VNS) and simulated annealing (SA) frameworks. Besides, we propose three fuzzy earliest-due-date heuristics to solve the given problem. Through computational experiments with several random test problems, a robust calibration is applied to the parameters. Finally, computational results on different-scale test problems are presented to compare the proposed algorithms.

  6. Third International Conference on Soft Computing for Problem Solving

    CERN Document Server

    Deep, Kusum; Nagar, Atulya; Bansal, Jagdish

    2014-01-01

    The present book is based on the research papers presented in the 3rd International Conference on Soft Computing for Problem Solving (SocProS 2013), held as a part of the golden jubilee celebrations of the Saharanpur Campus of IIT Roorkee, at the Noida Campus of Indian Institute of Technology Roorkee, India. This book is divided into two volumes and covers a variety of topics including mathematical modelling, image processing, optimization, swarm intelligence, evolutionary algorithms, fuzzy logic, neural networks, forecasting, medical and health care, data mining etc. Particular emphasis is laid on soft computing and its application to diverse fields. The prime objective of the book is to familiarize the reader with the latest scientific developments that are taking place in various fields and the latest sophisticated problem solving tools that are being developed to deal with the complex and intricate problems, which are otherwise difficult to solve by the usual and traditional methods. The book is directed ...

  7. A Parallel Computational Model for Multichannel Phase Unwrapping Problem

    Science.gov (United States)

    Imperatore, Pasquale; Pepe, Antonio; Lanari, Riccardo

    2015-05-01

In this paper, a parallel model for the solution of the computationally intensive multichannel phase unwrapping (MCh-PhU) problem is proposed. Firstly, the Extended Minimum Cost Flow (EMCF) algorithm for solving the MCh-PhU problem is revised within the rigorous mathematical framework of discrete calculus, thus permitting its topological structure to be captured in terms of meaningful discrete differential operators. Secondly, emphasis is placed on those methodological and practical aspects which lead to a parallel reformulation of the EMCF algorithm. Thus, a novel dual-level parallel computational model, in which the parallelism is hierarchically implemented at two different (i.e., process and thread) levels, is presented. The validity of our approach has been demonstrated through a series of experiments that have revealed a significant speedup. Therefore, the attained high-performance prototype is suitable for the solution of large-scale phase unwrapping problems in reasonable time frames, with a significant impact on the systematic exploitation of the existing, and rapidly growing, large archives of SAR data.

  8. Computer Use and Vision‑Related Problems Among University ...

    African Journals Online (AJOL)

    and adjusted OR was calculated using the multiple logistic regression. Results: The ... Nearly 72% of students reported frequent interruption of computer work. Headache ... procedure (non-probability sampling) recruiting 250 .... Table 1: Percentage distribution of visual problems among different genders and ethnic groups.

  9. Searching spectrum points of difference initial-boundary value problems with using GAS

    International Nuclear Information System (INIS)

    Mazepa, N.E.

    1989-01-01

A new algorithm for searching for spectrum points is proposed. Difference schemes which approximate systems of linear differential equations of hyperbolic type with constant coefficients in one space dimension are considered. For an important class of practical problems, this algorithm reduces the hard problem of spectrum calculation to the solution of a polynomial equation. For the complicated analytic manipulations connected with the realization of this algorithm, the computer algebra system REDUCE is used. 28 refs

  10. Quantum computing from the ground up

    CERN Document Server

    Perry, Riley Tipton

    2012-01-01

    Quantum computing - the application of quantum mechanics to information - represents a fundamental break from classical information and promises to dramatically increase a computer's power. Many difficult problems, such as the factorization of large numbers, have so far resisted attack by classical computers yet are easily solved with quantum computers. If they become feasible, quantum computers will end standard practices such as RSA encryption. Most of the books or papers on quantum computing require (or assume) prior knowledge of certain areas such as linear algebra or quantum mechanics. The majority of the currently-available literature is hard to understand for the average computer enthusiast or interested layman. This text attempts to teach quantum computing from the ground up in an easily readable way, providing a comprehensive tutorial that includes all the necessary mathematics, computer science and physics.

  11. Computer problem-solving coaches for introductory physics: Design and usability studies

    Science.gov (United States)

    Ryan, Qing X.; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Mason, Andrew

    2016-06-01

    The combination of modern computing power, the interactivity of web applications, and the flexibility of object-oriented programming may finally be sufficient to create computer coaches that can help students develop metacognitive problem-solving skills, an important competence in our rapidly changing technological society. However, no matter how effective such coaches might be, they will only be useful if they are attractive to students. We describe the design and testing of a set of web-based computer programs that act as personal coaches to students while they practice solving problems from introductory physics. The coaches are designed to supplement regular human instruction, giving students access to effective forms of practice outside class. We present results from large-scale usability tests of the computer coaches and discuss their implications for future versions of the coaches.

  12. Improved Genetic and Simulated Annealing Algorithms to Solve the Traveling Salesman Problem Using Constraint Programming

    Directory of Open Access Journals (Sweden)

    M. Abdul-Niby

    2016-04-01

Full Text Available The Traveling Salesman Problem (TSP) is an integer programming problem that falls into the category of NP-hard problems. As the problem becomes larger, there is no guarantee that optimal tours will be found within reasonable computation time. Heuristic techniques, like genetic algorithms and simulated annealing, can solve TSP instances with different levels of accuracy. Choosing which algorithm to use in order to get the best solution is still a hard choice. This paper suggests domain reduction as a tool to be combined with any meta-heuristic so that the obtained results will be almost the same. The hybrid approach of combining domain reduction with any meta-heuristic addresses the challenge of choosing an algorithm that matches the TSP instance in order to get the best results.
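
As a reference point for the kind of meta-heuristic discussed here, the following is a generic simulated-annealing sketch for the TSP with segment-reversal (2-opt style) moves. It does not include the paper's domain-reduction or constraint-programming components, and the parameter values are arbitrary assumptions:

```python
import math, random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def anneal_tsp(dist, t0=10.0, cooling=0.999, iters=20000):
    """Generic simulated annealing for the TSP; accepts worsening moves with
    probability exp(-delta / temperature)."""
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)
    cur_len = tour_length(tour, dist)
    best, best_len, t = tour[:], cur_len, t0
    for _ in range(iters):
        i, j = sorted(random.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse a segment
        cand_len = tour_length(cand, dist)
        if cand_len < cur_len or random.random() < math.exp((cur_len - cand_len) / t):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        t *= cooling  # geometric cooling schedule
    return best, best_len

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
print(anneal_tsp(dist)[1])  # tour length found on the toy instance
```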

  13. The benefits of computer-generated feedback for mathematics problem solving.

    Science.gov (United States)

    Fyfe, Emily R; Rittle-Johnson, Bethany

    2016-07-01

    The goal of the current research was to better understand when and why feedback has positive effects on learning and to identify features of feedback that may improve its efficacy. In a randomized experiment, second-grade children received instruction on a correct problem-solving strategy and then solved a set of relevant problems. Children were assigned to receive no feedback, immediate feedback, or summative feedback from the computer. On a posttest the following day, feedback resulted in higher scores relative to no feedback for children who started with low prior knowledge. Immediate feedback was particularly effective, facilitating mastery of the material for children with both low and high prior knowledge. Results suggest that minimal computer-generated feedback can be a powerful form of guidance during problem solving. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Methods for computing SN eigenvalues and eigenvectors of slab geometry transport problems

    International Nuclear Information System (INIS)

    Yavuz, Musa

    1998-01-01

    We discuss computational methods for computing the eigenvalues and eigenvectors of single energy-group neutral particle transport (S N ) problems in homogeneous slab geometry, with an arbitrary scattering anisotropy of order L. These eigensolutions are important when exact (or very accurate) solutions are desired for coarse spatial cell problems demanding rapid execution times. Three methods, one of which is 'new', are presented for determining the eigenvalues and eigenvectors of such S N problems. In the first method, separation of variables is directly applied to the S N equations. In the second method, common characteristics of the S N and P N-1 equations are used. In the new method, the eigenvalues and eigenvectors can be computed provided that the cell-interface Green's functions (transmission and reflection factors) are known. Numerical results for S 4 test problems are given to compare the new method with the existing methods

  16. Problems and limitations in diagnosis with computed tomography

    International Nuclear Information System (INIS)

    Ohtomo, Eiichi

    1985-01-01

The development and explosive spread of computed tomography (CT) machines have brought about a revolution in the diagnosis of nervous system diseases, making it very easy to detect cerebrovascular disorders, cerebral atrophy, and ventricular dilation. However, as more information on various brain diseases has become available through CT, the diagnostic problems and limitations of CT have become evident. This paper outlines CT problems and limitations in diagnosing cerebrovascular disorders, cerebral tumors, inflammation, head trauma, and cerebral atrophy, and discusses their relation to adults and elderly people. (Namekawa, K.)

  17. Patch planting of hard spin-glass problems: Getting ready for the next generation of optimization approaches

    Science.gov (United States)

    Wang, Wenlong; Mandrà, Salvatore; Katzgraber, Helmut

We propose a patch planting heuristic that allows us to create arbitrarily large Ising spin-glass instances on any topology and with any type of disorder, for which the exact ground-state energy of the problem is known by construction. By breaking the problem up into patches that can be treated either with exact or heuristic solvers, we can reconstruct the optimum of the original, considerably larger, problem. The scaling of the computational complexity of these instances with various patch numbers and sizes is investigated and compared with random instances using population annealing Monte Carlo and quantum annealing on the D-Wave 2X quantum annealer. The method can be useful for benchmarking novel computing technologies and algorithms. NSF-DMR-1208046 and the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via MIT Lincoln Laboratory Air Force Contract No. FA8721-05-C-0002.
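
The key idea, an instance whose ground-state energy is known by construction, can be illustrated with a much simpler (and, unlike patch planting, unfrustrated and therefore easy) construction: Mattis couplings J_ij = s*_i s*_j make a chosen configuration s* a ground state. The sketch below shows only this simplified stand-in, not the patch-planting procedure of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
s_star = rng.choice([-1, 1], size=n)        # the planted ground state
J = np.outer(s_star, s_star).astype(float)  # Mattis couplings J_ij = s*_i s*_j
np.fill_diagonal(J, 0.0)

def energy(s, J):
    """Ising energy E(s) = -1/2 * sum_ij J_ij s_i s_j."""
    return -0.5 * s @ J @ s

# The planted state attains the minimum possible energy -n(n-1)/2 by
# construction, because every coupling is satisfied: J_ij s*_i s*_j = 1
# for all i != j.
print(energy(s_star, J), -n * (n - 1) / 2)
```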

  18. A Computational Analysis of the Traveling Salesman and Cutting Stock Problems

    Directory of Open Access Journals (Sweden)

    Gracia María D.

    2015-01-01

Full Text Available The aim of this article is to perform a computational study analyzing the impact of formulations and solution strategy on the algorithmic performance of two classical optimization problems: the traveling salesman problem and the cutting stock problem. Three dependent variables were used to assess algorithmic performance on both problems: solution quality, computing time and number of iterations. The results are useful for choosing the solution approach to each specific problem. For the STSP, the results demonstrate that the multistage decision formulation is better than the conventional formulations, solving 90.47% of the instances, compared with MTZ (76.19%) and DFJ (14.28%). The results for the CSP demonstrate that the cutting patterns formulation is better than the standard formulation with symmetry-breaking inequalities when the objective function is to minimize trim loss when cutting the rolls.

  19. Full truckload vehicle routing problem with profits

    Directory of Open Access Journals (Sweden)

    Jian Li

    2014-04-01

Full Text Available A new variant of the full truckload vehicle routing problem is studied. In this problem, there may be several delivery points corresponding to the same pickup point, and one order may be served several times by the same vehicle or by different vehicles. Orders that cannot be assigned because of resource constraints are outsourced to other logistics companies at a certain cost. To maximize its profits, the logistics company decides which orders to transport with its private fleet and which to outsource. A mathematical model is constructed for the problem. Since the problem is NP-hard and it is difficult to solve large-scale instances with an exact algorithm, a hybrid genetic algorithm is proposed. Computational results show the effectiveness of the hybrid genetic algorithm.

  20. Approximability of optimization problems through adiabatic quantum computation

    CERN Document Server

    Cruz-Santos, William

    2014-01-01

    The adiabatic quantum computation (AQC) is based on the adiabatic theorem to approximate solutions of the Schrödinger equation. The design of an AQC algorithm involves the construction of a Hamiltonian that describes the behavior of the quantum system. This Hamiltonian is expressed as a linear interpolation of an initial Hamiltonian whose ground state is easy to compute, and a final Hamiltonian whose ground state corresponds to the solution of a given combinatorial optimization problem. The adiabatic theorem asserts that if the time evolution of a quantum system described by a Hamiltonian is l
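
The interpolation described here can be made concrete on a toy system: with H(s) = (1-s)H0 + sH1, the run time of an AQC algorithm is governed by the minimum spectral gap along the path. The numpy sketch below scans the gap for a hypothetical two-qubit instance; the Hamiltonians are illustrative assumptions, not examples from the book:

```python
import numpy as np

# Pauli matrices and a Kronecker-product helper for a toy 2-qubit instance.
sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)
I2 = np.eye(2)

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Initial Hamiltonian: transverse field, whose ground state is easy to prepare.
H0 = -(kron(sx, I2) + kron(I2, sx))
# Final Hamiltonian: diagonal Ising terms with local fields chosen so that the
# ground state is unique (an arbitrary toy optimization problem).
H1 = kron(sz, sz) + 0.3 * kron(sz, I2) + 0.7 * kron(I2, sz)

def gap(s):
    """Spectral gap of H(s) = (1 - s) H0 + s H1."""
    evals = np.linalg.eigvalsh((1 - s) * H0 + s * H1)
    return evals[1] - evals[0]

ss = np.linspace(0, 1, 101)
gmin = min(gap(s) for s in ss)
print(f"minimum gap along the path: {gmin:.4f}")
```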

  1. A Hybrid Autonomic Computing-Based Approach to Distributed Constraint Satisfaction Problems

    Directory of Open Access Journals (Sweden)

    Abhishek Bhatia

    2015-03-01

Full Text Available Distributed constraint satisfaction problems (DisCSPs) are among the problems widely studied using agent-based simulation. Fernandez et al. formulated the sensor and mobile tracking problem as a DisCSP, known as SensorDCSP. In this paper, we adopt a customized ERE (environment, reactive rules and entities) algorithm for SensorDCSP, which is otherwise known to be computationally intractable. An amalgamation of the autonomy-oriented computing (AOC)-based algorithm (ERE) and a genetic algorithm (GA) provides an early solution of the modeled DisCSP. Incorporation of the GA into ERE facilitates auto-tuning of the simulation parameters, thereby leading to an early solution of constraint satisfaction. This study further contributes a model, built in the NetLogo simulation environment, to infer the efficacy of the proposed approach.

  2. Mental health problems in deaf and severely hard of hearing children and adolescents : findings on prevalence, pathogenesis and clinical complexities, and implications for prevention, diagnosis and intervention

    NARCIS (Netherlands)

    Gent, Tiejo van

    2012-01-01

The aim of this thesis is to expand the knowledge of mental health problems in deaf and severely hard of hearing children and adolescents in the following domains: 1. The prevalence of mental health problems; 2. Specific intra- and interpersonal aspects of pathogenesis; 3. Characteristics of the

  3. Reconfiguration in FPGA-Based Multi-Core Platforms for Hard Real-Time Applications

    DEFF Research Database (Denmark)

    Pezzarossa, Luca; Schoeberl, Martin; Sparsø, Jens

    2016-01-01

In general-purpose computing multi-core platforms, hardware accelerators and reconfiguration are means to improve performance, i.e., the average-case execution time of a software application. In hard real-time systems, such average-case speed-up is not in itself relevant - it is the worst-case execution time of the tasks of an application that determines the system's ability to respond in time. To support this focus, the platform must provide service guarantees for both communication and computation resources. In addition, many hard real-time applications have multiple modes of operation, and each mode has specific requirements. An interesting perspective on reconfigurable computing is to exploit run-time reconfiguration to support mode changes. In this paper we explore approaches to reconfiguration of communication and computation resources in the T-CREST hard real-time multi-core platform

  4. Continuous-Variable Quantum Computation of Oracle Decision Problems

    Science.gov (United States)

    Adcock, Mark R. A.

Quantum information processing is appealing due to its ability to solve certain problems quantitatively faster than classical information processing. Most quantum algorithms have been studied in discretely parameterized systems, but many quantum systems are continuously parameterized. The field of quantum optics in particular has sophisticated techniques for manipulating continuously parameterized quantum states of light, but the lack of a code-state formalism has hindered the study of quantum algorithms in these systems. To address this situation, a code-state formalism for the solution of oracle decision problems in continuously parameterized quantum systems is developed. In the infinite-dimensional case, we study continuous-variable quantum algorithms for the solution of the Deutsch-Jozsa oracle decision problem implemented within a single harmonic oscillator. Orthogonal states are used as the computational bases, and we show that, contrary to a previous claim in the literature, this implementation of quantum information processing has limitations due to a position-momentum trade-off of the Fourier transform. We further demonstrate that orthogonal encoding bases are not unique, and using the coherent states of the harmonic oscillator as the computational bases, our formalism enables quantifying

  5. Soft and hard pomerons

    International Nuclear Information System (INIS)

    Maor, Uri; Tel Aviv Univ.

    1995-09-01

The role of s-channel unitarity screening corrections, calculated in the eikonal approximation, is investigated for soft Pomeron exchange responsible for elastic and diffractive hadron scattering in the high energy limit. We examine the differences between our results and those obtained from the supercritical Pomeron-Regge model with no such corrections. It is shown that screening saturation is attained at different scales for different channels. We then discuss the new HERA data on hard (PQCD) Pomeron diffractive channels, the relationship between the soft and hard Pomerons, and the relevance of our analysis to this problem. (author). 18 refs, 9 figs, 1 tab

  6. Hard Copy Market Overview

    Science.gov (United States)

    Testan, Peter R.

    1987-04-01

    A number of Color Hard Copy (CHC) market drivers are currently indicating strong growth in the use of CHC technologies for the business graphics marketplace. These market drivers relate to product, software, color monitors and color copiers. The use of color in business graphics allows more information to be relayed than is normally the case in a monochrome format. The communicative powers of full-color computer-generated output in the business graphics application area will continue to induce end users to desire and require color in their future applications. A number of color hard copy technologies will be utilized in the presentation graphics arena. Thermal transfer, ink jet, photographic and electrophotographic technologies are all expected to be utilized in the business graphics presentation application area in the future. Since the end of 1984, the availability of color application software packages has grown significantly. Sales revenue generated by business graphics software is expected to grow at a compound annual growth rate of just over 40 percent to 1990. Increased availability of packages to allow the integration of text and graphics is expected. Currently, the latest versions of page description languages such as Postscript, Interpress and DDL all support color output. The use of color monitors will also drive the demand for color hard copy in the business graphics marketplace. The availability of higher-resolution screens is allowing color monitors to be easily used for both text and graphics applications in the office environment. During 1987, the sales of color monitors are expected to surpass the sales of monochrome monitors. Another major color hard copy market driver will be the color copier. In order to take advantage of the communications power of computer-generated color output, multiple copies are required for distribution. Product introductions of a new generation of color copiers are now underway with additional introductions expected

  7. Parallel Optimization of Polynomials for Large-scale Problems in Stability and Control

    Science.gov (United States)

    Kamyar, Reza

    In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade-off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems --- in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) --- whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence - meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. The existing optimization algorithms and software are only designed for desktop computers or small cluster computers --- machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This in fact is the reason we seek parallel algorithms for setting up and solving large SDPs on large cluster- and/or super-computers. We propose parallel algorithms for stability analysis of two classes of systems: 1) Linear systems with a large number of uncertain parameters; 2) Nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment. Numerical tests on a supercomputer demonstrate the ability of the algorithm to

  8. Designing a fuzzy scheduler for hard real-time systems

    Science.gov (United States)

    Yen, John; Lee, Jonathan; Pfluger, Nathan; Natarajan, Swami

    1992-01-01

    In hard real-time systems, tasks have to be performed not only correctly, but also in a timely fashion. If timing constraints are not met, there might be severe consequences. Task scheduling is the most important problem in designing a hard real-time system, because the scheduling algorithm ensures that tasks meet their deadlines. However, the uncertainty inherent in dynamic hard real-time systems compounds the difficulty of scheduling. In an effort to alleviate these problems, we have developed a fuzzy scheduler to facilitate searching for a feasible schedule. A set of fuzzy rules is proposed to guide the search. The situation we are trying to address is the performance of the system when no feasible solution can be found, and therefore certain tasks will not be executed. We wish to limit the number of important tasks that are not scheduled.

  9. Using Volunteer Computing to Study Some Features of Diagonal Latin Squares

    Science.gov (United States)

    Vatutin, Eduard; Zaikin, Oleg; Kochemazov, Stepan; Valyaev, Sergey

    2017-12-01

    In this research, the study concerns several features of diagonal Latin squares (DLSs) of small order. The authors suggest an algorithm for computing the minimal and maximal numbers of transversals of DLSs. According to this algorithm, all DLSs of a particular order are generated, and for each square all its transversals and diagonal transversals are constructed. The algorithm was implemented and applied to DLSs of order at most 7 on a personal computer. The experiment for order 8 was performed in the volunteer computing project Gerasim@home. In addition, the problem of finding pairs of orthogonal DLSs of order 10 was considered and reduced to the Boolean satisfiability problem. The resulting problem turned out to be very hard, so it was decomposed into a family of subproblems. To solve the problem, the volunteer computing project SAT@home was used. As a result, several dozen pairs of the described kind were found.
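
    As an illustration of the combinatorial object involved (a brute-force sketch feasible only for small orders, not the authors' algorithm), the following Python snippet counts the transversals of a Latin square given as a list of rows:

```python
from itertools import permutations

def count_transversals(square):
    """Count transversals of a Latin square (list of rows).

    A transversal picks one cell per row so that all column indices
    and all symbols are distinct. Brute force over column permutations,
    so practical only for small n.
    """
    n = len(square)
    return sum(
        1 for cols in permutations(range(n))
        if len({square[r][cols[r]] for r in range(n)}) == n
    )

# Example: the cyclic Latin square of order 5 (not necessarily diagonal).
square = [[(r + c) % 5 for c in range(5)] for r in range(5)]
print(count_transversals(square))  # 15 for the cyclic square of order 5
```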

  10. Data science in R a case studies approach to computational reasoning and problem solving

    CERN Document Server

    Nolan, Deborah

    2015-01-01

    Effectively Access, Transform, Manipulate, Visualize, and Reason about Data and Computation. Data Science in R: A Case Studies Approach to Computational Reasoning and Problem Solving illustrates the details involved in solving real computational problems encountered in data analysis. It reveals the dynamic and iterative process by which data analysts approach a problem and reason about different ways of implementing solutions. The book's collection of projects, comprehensive sample solutions, and follow-up exercises encompass practical topics pertaining to data processing, including: Non-standar

  11. A canned food scheduling problem with batch due date

    Science.gov (United States)

    Chung, Tsui-Ping; Liao, Ching-Jong; Smith, Milton

    2014-09-01

    This article considers a canned food scheduling problem where jobs are grouped into several batches. Jobs can be sent to the next operation only when all the jobs in the same batch have finished their processing; i.e., jobs in a batch have a common due date. This batch due date problem is quite common in canned food factories, but there is no efficient heuristic to solve it. The problem can be formulated as an identical parallel machine problem with batch due dates to minimize the total tardiness. Since the problem is NP-hard, two heuristics are proposed to find near-optimal solutions. Computational results comparing the effectiveness and efficiency of the two proposed heuristics with an existing heuristic are reported and discussed.
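
    As background (a hypothetical baseline for the underlying parallel-machine model, not one of the two heuristics proposed in the record), a minimal list-scheduling sketch: sort jobs by their batch due date and always assign the next job to the least-loaded machine, accumulating tardiness:

```python
def edd_list_schedule(jobs, m):
    """Greedy baseline: jobs = [(processing_time, batch_due_date)], m machines.

    Sort by due date (EDD), always assign to the least-loaded machine,
    and return the total tardiness. A simple heuristic, not an exact method.
    """
    loads = [0.0] * m
    total_tardiness = 0.0
    for p, d in sorted(jobs, key=lambda job: job[1]):
        i = loads.index(min(loads))   # least-loaded machine
        loads[i] += p                 # the job completes at the new load
        total_tardiness += max(0.0, loads[i] - d)
    return total_tardiness

print(edd_list_schedule([(3, 5), (2, 5), (4, 9), (1, 9)], m=2))  # 0.0
```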

  12. A taxing problem: the complementary use of hard and soft OR in public policy

    OpenAIRE

    Cooper, C; Brown, J; Pidd, M

    2004-01-01

    A review of the UK personal taxation system used a combination of hard and soft OR approaches in a complementary way. The hard OR was based on data mining to increase understanding of individual taxpayers and their changing needs within the personal tax system. The soft OR was based on soft systems methodology, with two aims in mind: first, to guide the review and, second, to provide an auditable approach for collecting the views of key internal and external stakeholders. The soft and hard OR were u...

  13. An exact method for computing the frustration index in signed networks using binary programming

    OpenAIRE

    Aref, Samin; Mason, Andrew J.; Wilson, Mark C.

    2016-01-01

    Computing the frustration index of a signed graph is a key step toward solving problems in many fields including social networks, physics, material science, and biology. The frustration index determines the distance of a network from a state of total structural balance. Although the definition of the frustration index goes back to 1960, its exact algorithmic computation, which is closely related to classic NP-hard graph problems, has only become a focus in recent years. We develop three new b...

  14. Strong Bisimilarity and Regularity of Basic Parallel Processes is PSPACE-Hard

    DEFF Research Database (Denmark)

    Srba, Jirí

    2002-01-01

    We show that the problem of checking whether two processes definable in the syntax of Basic Parallel Processes (BPP) are strongly bisimilar is PSPACE-hard. We also demonstrate that there is a polynomial time reduction from the strong bisimilarity checking problem of regular BPP to the strong regularity (finiteness) checking of BPP. This implies that strong regularity of BPP is also PSPACE-hard.

  15. Some unsolved problems in discrete mathematics and mathematical cybernetics

    Science.gov (United States)

    Korshunov, Aleksei D.

    2009-10-01

    There are many unsolved problems in discrete mathematics and mathematical cybernetics. Writing a comprehensive survey of such problems involves great difficulties. First, such problems are rather numerous and varied. Second, they greatly differ from each other in degree of completeness of their solution. Therefore, even a comprehensive survey should not attempt to cover the whole variety of such problems; only the most important and significant problems should be reviewed. An impersonal choice of problems to include is quite hard. This paper includes 13 unsolved problems related to combinatorial mathematics and computational complexity theory. The problems selected give an indication of the author's studies for 50 years; for this reason, the choice of the problems reviewed here is, to some extent, subjective. At the same time, these problems are very difficult and quite important for discrete mathematics and mathematical cybernetics. Bibliography: 74 items.

  16. Some unsolved problems in discrete mathematics and mathematical cybernetics

    International Nuclear Information System (INIS)

    Korshunov, Aleksei D

    2009-01-01

    There are many unsolved problems in discrete mathematics and mathematical cybernetics. Writing a comprehensive survey of such problems involves great difficulties. First, such problems are rather numerous and varied. Second, they greatly differ from each other in degree of completeness of their solution. Therefore, even a comprehensive survey should not attempt to cover the whole variety of such problems; only the most important and significant problems should be reviewed. An impersonal choice of problems to include is quite hard. This paper includes 13 unsolved problems related to combinatorial mathematics and computational complexity theory. The problems selected give an indication of the author's studies for 50 years; for this reason, the choice of the problems reviewed here is, to some extent, subjective. At the same time, these problems are very difficult and quite important for discrete mathematics and mathematical cybernetics. Bibliography: 74 items.

  17. Elasticity of Hard-Spheres-And-Tether Systems

    International Nuclear Information System (INIS)

    Farago, O.; Kantor, Y.

    1999-01-01

    Physical properties of a large class of systems ranging from noble gases to polymers and rubber are primarily determined by entropy, while the internal energy plays a minor role. Such systems can be conveniently modeled and numerically studied using 'hard' (i.e., 'infinity-or-zero') potentials, such as hard-sphere repulsive interactions, or inextensible ('tether') bonds which limit the distance between the bonded monomers but have zero energy at all permitted distances. The knowledge of elastic constants is very important for understanding the behavior of entropy-dominated systems. Computational methods for determination of the elastic constants in such systems are broadly classified into 'strain' methods and 'fluctuation' methods. In the former, the elastic constants are extracted from stress-strain relations, while in the latter they are determined from measurements of stress fluctuations. The fluctuation technique usually enables more accurate and well-controlled determination of the elastic constants, since in this method the elastic constants are computed directly from simulations of the unstrained system, with no need to deform the simulation cell and perform numerical differentiations. For central-force systems, the original 'fluctuation' formalism can be applied provided the pair potential is twice differentiable. We have extended this formalism to apply to hard-spheres-and-tether models in which this requirement is not fulfilled. We found that for such models the components of the tensor of elastic constants can be related to (two-, three- and four-point) probability densities of contacts between hard spheres and stretched bonds. We have tested our formalism on simple (phantom) networks and three-dimensional hard-sphere systems.

  18. Industrial application of a graphics computer-based training system

    International Nuclear Information System (INIS)

    Klemm, R.W.

    1985-01-01

    Graphics Computer Based Training (GCBT) roles include drilling, tutoring, simulation and problem solving. Of these, Commonwealth Edison uses mainly tutoring, simulation and problem solving. These roles are not separate in any particular program; they are integrated to provide tutoring and part-task simulation, part-task simulation and problem solving, or problem-solving tutoring. Commonwealth's Graphics Computer Based Training program was the result of over a year's worth of research and planning. The keys to the program are its flexibility and control. Flexibility is maintained through stand-alone units capable of program authoring and modification for plant/site-specific users. Yet the system has the capability to support up to 31 terminals with a 40 MB hard disk drive. Control of the GCBT program is accomplished through establishment of development priorities and a central development facility (Commonwealth Edison's Production Training Center).

  19. The Impact of Hard Disk Firmware Steganography on Computer Forensics

    Directory of Open Access Journals (Sweden)

    Iain Sutherland

    2009-06-01

    The hard disk drive is probably the predominant form of storage media and is a primary data source in a forensic investigation. The majority of available software tools and literature relating to the investigation of the structure and content contained within a hard disk drive concern the extraction and analysis of evidence from the various file systems which can reside in the user-accessible area of the disk. It is known that there are other areas of the hard disk drive which could be used to conceal information, such as the Host Protected Area and the Device Configuration Overlay. There are recommended methods for the detection and forensic analysis of these areas using appropriate tools and techniques. However, there are additional areas of a disk that have currently been overlooked. The Service Area or Platter Resident Firmware Area is used to store code and control structures responsible for the functionality of the drive and for logging failing or failed sectors. This paper provides an introduction to initial research into the investigation and identification of issues relating to the analysis of the Platter Resident Firmware Area, in particular the possibility that the Platter Resident Firmware Area could be manipulated and exploited to facilitate a form of steganography, enabling information to be concealed by a user and potentially hidden from a digital forensic investigator.

  20. Parallel Object-Oriented Computation Applied to a Finite Element Problem

    Directory of Open Access Journals (Sweden)

    Jon B. Weissman

    1993-01-01

    The conventional wisdom in the scientific computing community is that the best way to solve large-scale numerically intensive scientific problems on today's parallel MIMD computers is to use Fortran or C programmed in a data-parallel style using low-level message-passing primitives. This approach inevitably leads to nonportable codes and extensive development time, and restricts parallel programming to the domain of the expert programmer. We believe that these problems are not inherent to parallel computing but are the result of the programming tools used. We will show that comparable performance can be achieved with little effort if better tools that present higher level abstractions are used. The vehicle for our demonstration is a 2D electromagnetic finite element scattering code we have implemented in Mentat, an object-oriented parallel processing system. We briefly describe the application, Mentat, and the implementation, and present performance results for both a Mentat and a hand-coded parallel Fortran version.

  1. Quantum Heterogeneous Computing for Satellite Positioning Optimization

    Science.gov (United States)

    Bass, G.; Kumar, V.; Dulny, J., III

    2016-12-01

    Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any

  2. Novel Aspects of Hard Diffraction in QCD

    International Nuclear Information System (INIS)

    Brodsky, Stanley J.

    2005-01-01

    Initial- and final-state interactions from gluon exchange, normally neglected in the parton model, have a profound effect in QCD hard-scattering reactions, leading to leading-twist single-spin asymmetries, diffractive deep inelastic scattering, diffractive hard hadronic reactions, and nuclear shadowing and antishadowing; this is leading-twist physics not incorporated in the light-front wavefunctions of the target computed in isolation. I also discuss the use of diffraction to materialize the Fock states of a hadronic projectile and test QCD color transparency.

  3. Computer Problem-Solving Coaches for Introductory Physics: Design and Usability Studies

    Science.gov (United States)

    Ryan, Qing X.; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Mason, Andrew

    2016-01-01

    The combination of modern computing power, the interactivity of web applications, and the flexibility of object-oriented programming may finally be sufficient to create computer coaches that can help students develop metacognitive problem-solving skills, an important competence in our rapidly changing technological society. However, no matter how…

  4. Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds

    International Nuclear Information System (INIS)

    Kaya, Savaş; Kaya, Cemal; Islam, Nazmul

    2016-01-01

    The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship among the lattice energies of ionic compounds and their electronegativities, chemical hardnesses and electrophilicities. Lattice energy, electronegativity, chemical hardness and electrophilicity values of the ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For 4 simple reactions, the changes of the hardness (Δη), polarizability (Δα) and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all chemical reactions, but the minimum polarizability principle and the minimum electrophilicity principle are not valid for all reactions. We also propose simple methods to compute the percentage of ionic character and the internuclear distances of ionic compounds. Comparative studies with experimental data sets reveal that the proposed methods for computing the percentage of ionic character and the internuclear distances of ionic compounds are valid.
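
    For context (these conventions are standard in conceptual density functional theory, though the record does not state which the authors use, and the factors vary between authors), the quantities above are commonly computed from the ionization energy I and electron affinity A, with a reaction's change in hardness taken as products minus reactants:

```latex
\chi \approx \frac{I + A}{2}, \qquad
\eta \approx \frac{I - A}{2}, \qquad
\omega = \frac{\chi^{2}}{2\eta}, \qquad
\Delta\eta = \sum_{\mathrm{products}} \eta \; - \sum_{\mathrm{reactants}} \eta .
```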

  5. Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds

    Energy Technology Data Exchange (ETDEWEB)

    Kaya, Savaş, E-mail: savaskaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey)]; Kaya, Cemal, E-mail: kaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey)]; Islam, Nazmul, E-mail: nazmul.islam786@gmail.com [Theoretical and Computational Chemistry Research Laboratory, Department of Basic Science and Humanities/Chemistry Techno Global-Balurghat, Balurghat, D. Dinajpur 733103 (India)]

    2016-03-15

    The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship among the lattice energies of ionic compounds and their electronegativities, chemical hardnesses and electrophilicities. Lattice energy, electronegativity, chemical hardness and electrophilicity values of the ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For 4 simple reactions, the changes of the hardness (Δη), polarizability (Δα) and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all chemical reactions, but the minimum polarizability principle and the minimum electrophilicity principle are not valid for all reactions. We also propose simple methods to compute the percentage of ionic character and the internuclear distances of ionic compounds. Comparative studies with experimental data sets reveal that the proposed methods for computing the percentage of ionic character and the internuclear distances of ionic compounds are valid.

  6. Problems and Issues in Using Computer-Based Support Tools to Enhance 'Soft' Systems Methodologies

    Directory of Open Access Journals (Sweden)

    Mark Stansfield

    2001-11-01

    This paper explores the issue of whether computer-based support tools can enhance the use of 'soft' systems methodologies as applied to real-world problem situations. Although work has been carried out by a number of researchers in applying computer-based technology to concepts and methodologies relating to 'soft' systems thinking such as Soft Systems Methodology (SSM), such attempts appear to be still in their infancy and have not been applied widely to real-world problem situations. This paper will highlight some of the problems that may be encountered in attempting to develop computer-based support tools for 'soft' systems methodologies. Particular attention will be paid to an attempt by the author to develop a computer-based support tool for a particular 'soft' systems method of inquiry known as the Appreciative Inquiry Method, which is based upon Vickers' notion of 'appreciation' (Vickers, 1965) and Checkland's SSM (Checkland, 1981). The final part of the paper will explore some of the lessons learnt from developing and applying the computer-based support tool to a real-world problem situation, as well as considering the feasibility of developing computer-based support tools for 'soft' systems methodologies. This paper will put forward the point that a mixture of manual and computer-based tools should be employed to allow a methodology to be used in an unconstrained manner, while the benefits provided by computer-based technology should be utilised in supporting and enhancing the more mundane and structured tasks.

  7. Utilizing of computational tools on the modelling of a simplified problem of neutron shielding

    Energy Technology Data Exchange (ETDEWEB)

    Lessa, Fabio da Silva Rangel; Platt, Gustavo Mendes; Alves Filho, Hermes [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Inst. Politecnico]. E-mails: fsrlessa@gmail.com; gmplatt@iprj.uerj.br; halves@iprj.uerj.br

    2007-07-01

    At the current level of technology, many problems are investigated through computational simulations, whose results are in general satisfactory and much less expensive than conventional forms of investigation (e.g., destructive tests, laboratory measurements, etc.). Almost all modern scientific studies are executed using computational tools, such as computers of superior capacity and their application systems, to perform complex calculations, algorithmic iterations, etc. Besides the considerable economy in time and in space that Computational Modelling provides, there is a financial economy for the scientists. Computational Modelling is a modern methodology of investigation that requires the theoretical study of the phenomena identified in the problem, a coherent mathematical representation of such phenomena, the generation of a numeric algorithmic system comprehensible to the computer, and finally the analysis of the acquired solution, possibly making use of pre-existing systems that facilitate the visualization of the results (editors of Cartesian graphs, for instance). In this work, we used several computational tools, implementing numeric methods and a deterministic model in the study and analysis of a well-known and simplified problem of nuclear engineering (neutron transport), simulating a theoretical problem of neutron shielding with hypothetical physical-material parameters and the neutron flux in each spatial node, programmed in Scilab version 4.0. (author)

  8. Utilizing of computational tools on the modelling of a simplified problem of neutron shielding

    International Nuclear Information System (INIS)

    Lessa, Fabio da Silva Rangel; Platt, Gustavo Mendes; Alves Filho, Hermes

    2007-01-01

    At the current level of technology, many problems are investigated through computational simulations, whose results are in general satisfactory and much less expensive than conventional forms of investigation (e.g., destructive tests, laboratory measurements, etc.). Almost all modern scientific studies are executed using computational tools, such as computers of superior capacity and their application systems, to perform complex calculations, algorithmic iterations, etc. Besides the considerable economy in time and in space that Computational Modelling provides, there is a financial economy for the scientists. Computational Modelling is a modern methodology of investigation that requires the theoretical study of the phenomena identified in the problem, a coherent mathematical representation of such phenomena, the generation of a numeric algorithmic system comprehensible to the computer, and finally the analysis of the acquired solution, possibly making use of pre-existing systems that facilitate the visualization of the results (editors of Cartesian graphs, for instance). In this work, we used several computational tools, implementing numeric methods and a deterministic model in the study and analysis of a well-known and simplified problem of nuclear engineering (neutron transport), simulating a theoretical problem of neutron shielding with hypothetical physical-material parameters and the neutron flux in each spatial node, programmed in Scilab version 4.0. (author)

  9. Internet computer coaches for introductory physics problem solving

    Science.gov (United States)

    Xu Ryan, Qing

    The ability to solve problems in a variety of contexts is becoming increasingly important in our rapidly changing technological society. Problem-solving is a complex process that is important for everyday life and crucial for learning physics. Although there is a great deal of effort to improve student problem-solving skills throughout the educational system, national studies have shown that the majority of students emerge from such courses having made little progress toward developing good problem-solving skills. The Physics Education Research Group at the University of Minnesota has been developing Internet computer coaches to help students become more expert-like problem solvers. During the Fall 2011 and Spring 2013 semesters, the coaches were introduced into large sections (200+ students) of the calculus-based introductory mechanics course at the University of Minnesota. This dissertation addresses the research background of the project, including the pedagogical design of the coaches and the assessment of problem solving. The methodological framework for conducting the experiments is explained. The data collected from the large-scale experimental studies are discussed from the following aspects: the usage and usability of these coaches; the usefulness perceived by students; and the usefulness measured by final exam and problem-solving rubric. The dissertation also addresses the implications drawn from this study, including using the data to direct future coach design, and the difficulties in conducting authentic assessment of problem solving.

  10. Matrix interdiction problem

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Feng [Los Alamos National Laboratory]; Kasiviswanathan, Shiva [Los Alamos National Laboratory]

    2010-01-01

    In the matrix interdiction problem, a real-valued matrix and an integer k are given. The objective is to remove k columns such that the sum over all rows of the maximum entry in each row is minimized. This combinatorial problem is closely related to the bipartite network interdiction problem, which can be applied to prioritize border checkpoints in order to minimize the probability that an adversary can successfully cross the border. After introducing the matrix interdiction problem, we prove that the problem is NP-hard, and even NP-hard to approximate within an additive n^γ factor for a fixed constant γ. We also present an algorithm for this problem that achieves a multiplicative approximation ratio of (n-k).
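
    The objective is simple to state exactly; a brute-force Python sketch of it (exponential in k and for illustration only, assuming k is smaller than the number of columns; not the paper's approximation algorithm) is:

```python
from itertools import combinations

def matrix_interdiction_bruteforce(M, k):
    """Exhaustively find k columns whose removal minimizes the sum over
    all rows of the maximum remaining entry in each row.
    Assumes 0 <= k < number of columns."""
    n_cols = len(M[0])
    best_value, best_removal = float("inf"), None
    for removed in combinations(range(n_cols), k):
        keep = [j for j in range(n_cols) if j not in removed]
        value = sum(max(row[j] for j in keep) for row in M)
        if value < best_value:
            best_value, best_removal = value, removed
    return best_value, best_removal

M = [[3, 1, 4],
     [2, 7, 1],
     [5, 2, 2]]
print(matrix_interdiction_bruteforce(M, k=1))  # (11, (1,)): drop column 1
```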

  11. Statistical mechanics of the vertex-cover problem

    Science.gov (United States)

    Hartmann, Alexander K.; Weigt, Martin

    2003-10-01

    We review recent progress in the study of the vertex-cover problem (VC). The VC belongs to the class of NP-complete graph theoretical problems, which plays a central role in theoretical computer science. On ensembles of random graphs, VC exhibits a coverable-uncoverable phase transition. Very close to this transition, depending on the solution algorithm, easy-hard transitions in the typical running time of the algorithms occur. We explain a statistical mechanics approach, which works by mapping the VC to a hard-core lattice gas, and then applying techniques such as the replica trick or the cavity approach. Using these methods, the phase diagram of the VC could be obtained exactly for connectivities c < e; for larger connectivities, the solution of the VC exhibits full replica symmetry breaking. The statistical mechanics approach can also be used to study analytically the typical running time of simple complete and incomplete algorithms for the VC. Finally, we describe recent results for the VC when studied on other ensembles of finite- and infinite-dimensional graphs.
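
    For a concrete sense of the simple algorithms whose typical running time such analyses address, here is the classic maximal-matching heuristic (an illustration, not an algorithm from the review), which always returns a cover at most twice the optimum:

```python
def vertex_cover_2approx(edges):
    """Greedy maximal-matching heuristic: repeatedly take both endpoints
    of an uncovered edge. The result is a vertex cover of size at most
    twice the minimum, because any cover must hit each matched edge."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Example: the 4-cycle; the heuristic returns 4 vertices, the optimum is 2.
print(sorted(vertex_cover_2approx([(0, 1), (1, 2), (2, 3), (3, 0)])))
```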

  12. Statistical mechanics of the vertex-cover problem

    International Nuclear Information System (INIS)

    Hartmann, Alexander K; Weigt, Martin

    2003-01-01

    We review recent progress in the study of the vertex-cover problem (VC). The VC belongs to the class of NP-complete graph theoretical problems, which plays a central role in theoretical computer science. On ensembles of random graphs, VC exhibits a coverable-uncoverable phase transition. Very close to this transition, depending on the solution algorithm, easy-hard transitions in the typical running time of the algorithms occur. We explain a statistical mechanics approach, which works by mapping the VC to a hard-core lattice gas, and then applying techniques such as the replica trick or the cavity approach. Using these methods, the phase diagram of the VC could be obtained exactly for connectivities c < e; for larger connectivities, the solution of the VC exhibits full replica symmetry breaking. The statistical mechanics approach can also be used to study analytically the typical running time of simple complete and incomplete algorithms for the VC. Finally, we describe recent results for the VC when studied on other ensembles of finite- and infinite-dimensional graphs.

  13. Alcohol, cognitive impairment and the hard to discharge acute hospital inpatients.

    LENUS (Irish Health Repository)

    Popoola, A

    2012-02-03

    AIM: To examine the role of alcohol and alcohol-related cognitive impairment in the clinical presentation of adult in-patients less than 65 years who are 'hard to discharge' in a general hospital. METHOD: Retrospective medical file review of inpatients in CUH referred to the discharge coordinator between March and September 2006. RESULTS: Of 46 patients identified, the case notes of 44 (25 male; age 52.2 ± 7.7 years) were reviewed. The average length of stay in the hospital was 84.0 ± 72.3 days and the mean lost bed days was 15.9 ± 36.6 days. The number of patients documented to have an overt alcohol problem was 15 (34.1%). Patients with alcohol problems were more likely to have cognitive impairment than those without an alcohol problem [12 (80%) vs. 9 (31%), P = 0.004]. Patients with alcohol problems had a shorter length of stay (81.5 vs. 85.3 days; t = 0.161, df = 42, P = 0.87), fewer lost bed days (8.2 vs. 19.2 days; Mann-Whitney U = 179, P = 0.34) and no mortality (0 vs. 6) compared with hard-to-discharge patients without alcohol problems. CONCLUSION: Alcohol problems and alcohol-related cognitive impairment are hugely over-represented among acute hospital in-patients who are hard to discharge. Despite these problems, this group appears to have reduced morbidity, fewer lost bed days and a better outcome than other categories of hard-to-discharge patients. There is a need to resource acute hospitals to address alcohol-related morbidity in general and Wernicke-Korsakoff Syndrome in particular.

  14. Some unsolved problems in discrete mathematics and mathematical cybernetics

    Energy Technology Data Exchange (ETDEWEB)

    Korshunov, Aleksei D [S.L. Sobolev Institute for Mathematics, Siberian Branch of the Russian Academy of Sciences, Novosibirsk (Russian Federation)]

    2009-10-31

    There are many unsolved problems in discrete mathematics and mathematical cybernetics. Writing a comprehensive survey of such problems involves great difficulties. First, such problems are rather numerous and varied. Second, they greatly differ from each other in degree of completeness of their solution. Therefore, even a comprehensive survey should not attempt to cover the whole variety of such problems; only the most important and significant problems should be reviewed. An impersonal choice of problems to include is quite hard. This paper includes 13 unsolved problems related to combinatorial mathematics and computational complexity theory. The problems selected give an indication of the author's studies for 50 years; for this reason, the choice of the problems reviewed here is, to some extent, subjective. At the same time, these problems are very difficult and quite important for discrete mathematics and mathematical cybernetics. Bibliography: 74 items.

  15. A hybrid metaheuristic for the time-dependent vehicle routing problem with hard time windows

    Directory of Open Access Journals (Sweden)

    N. Rincon-Garcia

    2017-01-01

    This paper presents a hybrid metaheuristic algorithm to solve the time-dependent vehicle routing problem with hard time windows. Time-dependent travel times are influenced by the different congestion levels experienced throughout the day. Vehicle scheduling without consideration of congestion might lead to underestimation of travel times and consequently missed deliveries. The algorithm presented in this paper makes use of Large Neighbourhood Search approaches and Variable Neighbourhood Search techniques to guide the search. A first stage is specifically designed to reduce the number of vehicles required in a search space by the reduction of penalties generated by time-window violations with Large Neighbourhood Search procedures. A second stage minimises the travel distance and travel time in an ‘always feasible’ search space. Comparison of results with available test instances shows that the proposed algorithm is capable of obtaining a reduction in the number of vehicles (4.15%), travel distance (10.88%) and travel time (12.00%) compared to previous implementations, in reasonable time.

  16. Tabu search approaches for the multi-level warehouse layout problem with adjacency constraints

    Science.gov (United States)

    Zhang, G. Q.; Lai, K. K.

    2010-08-01

    A new multi-level warehouse layout problem, the multi-level warehouse layout problem with adjacency constraints (MLWLPAC), is investigated. The same item type is required to be located in adjacent cells, and horizontal and vertical unit travel costs are product dependent. An integer programming model is proposed to formulate the problem, which is NP-hard. Along with a heuristic based on the cube-per-order index policy, the standard tabu search (TS), a greedy TS, and a dynamic-neighbourhood-based TS are presented to solve the problem. The computational results show that the proposed approaches can reduce the transportation cost significantly.

  17. Solving Large-Scale Computational Problems Using Insights from Statistical Physics

    Energy Technology Data Exchange (ETDEWEB)

    Selman, Bart [Cornell University]

    2012-02-29

    Many challenging problems in computer science and related fields can be formulated as constraint satisfaction problems. Such problems consist of a set of discrete variables and a set of constraints between those variables, and represent a general class of so-called NP-complete problems. The goal is to find a value assignment to the variables that satisfies all constraints, generally requiring a search through an exponentially large space of variable-value assignments. Models for disordered systems, as studied in statistical physics, can provide important new insights into the nature of constraint satisfaction problems. Recently, work in this area has resulted in the discovery of a new method for solving such problems, called the survey propagation (SP) method. With SP, we can solve problems with millions of variables and constraints, an improvement of two orders of magnitude over previous methods.

  18. Memory allocation and computations for Laplace’s equation of 3-D arbitrary boundary problems

    Directory of Open Access Journals (Sweden)

    Tsay Tswn-Syau

    2017-01-01

    Computation iteration schemes and a memory allocation technique for the finite difference method are presented in this paper. The transformed form of a groundwater flow problem in generalized curvilinear coordinates is taken as the illustrating example, and a 3-dimensional second-order accurate 19-point scheme is presented. Traditional element-by-element methods (e.g., SOR) are preferred since they are simple and memory efficient, but they are time consuming in computation. For efficient memory allocation, an index method is presented to store the sparse non-symmetric matrix of the problem. For computations, conjugate-gradient-like methods are reported to be computationally efficient. Among them, using incomplete Cholesky decomposition as a preconditioner is reported to be a good method for iteration convergence. In general, the index method developed in this paper has the following advantages: (1) adaptable to various governing and boundary conditions, (2) flexible for higher order approximation, (3) independent of problem dimension, (4) efficient for complex problems when the global matrix is not symmetric, (5) convenient for general sparse matrices, (6) computationally efficient in the most time-consuming procedure of matrix multiplication, and (7) applicable to any developed matrix solver.
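
    One widely used index scheme of this kind is compressed sparse row (CSR) storage; the following Python sketch (an illustration of the general idea, not necessarily the paper's exact layout) stores only the nonzeros and performs the matrix-vector product that dominates conjugate-gradient-like iterations:

```python
import numpy as np

def to_csr(dense):
    """Compressed Sparse Row: nonzero values, their column indices,
    and row pointers delimiting each row's slice of the value array."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, a in enumerate(row):
            if a != 0.0:
                values.append(a)
                col_idx.append(j)
        row_ptr.append(len(values))
    return np.array(values), np.array(col_idx), np.array(row_ptr)

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x touching only the stored nonzeros."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        lo, hi = row_ptr[i], row_ptr[i + 1]
        y[i] = values[lo:hi] @ x[col_idx[lo:hi]]
    return y

A = [[4.0, 0.0, -1.0],
     [0.0, 3.0,  0.0],
     [-1.0, 0.0, 2.0]]
vals, cols, ptrs = to_csr(A)
print(csr_matvec(vals, cols, ptrs, np.array([1.0, 2.0, 3.0])))  # [1. 6. 5.]
```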

  19. Design and implementation of reliability evaluation of SAS hard disk based on RAID card

    Science.gov (United States)

    Ren, Shaohua; Han, Sen

    2015-10-01

    Because of the huge advantage of RAID technology in storage, it has been widely used. However, a drawback of this technology is that a hard disk behind the RAID card cannot be queried by the operating system. Therefore, how to read the self-information and log data of the hard disk has been a problem, while this data is necessary for reliability testing of hard disks. Traditionally, this information could be read only for SATA hard disks, not for SAS hard disks. In this paper, we provide a method that uses the LSI RAID card's application program interface, communicating with the RAID card and analyzing the feedback data, to solve this problem and obtain the information necessary to assess the reliability of SAS hard disks.

  20. A NEW HEURISTIC ALGORITHM FOR MULTIPLE TRAVELING SALESMAN PROBLEM

    Directory of Open Access Journals (Sweden)

    F. NURIYEVA

    2017-06-01

    The Multiple Traveling Salesman Problem (mTSP) is a combinatorial optimization problem in the NP-hard class. The mTSP aims to acquire the minimum cost for traveling a given set of cities by assigning each of them to a different salesman in order to create m tours. This paper presents a new heuristic algorithm based on the shortest path algorithm to find a solution for the mTSP. The proposed method has been programmed in the C language and its performance analysis has been carried out on library instances. The computational results show the efficiency of this method.
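
    For comparison (a hypothetical greedy baseline, not the shortest-path-based heuristic of the paper), a round-robin nearest-neighbor construction for the mTSP can be sketched in a few lines of Python:

```python
import math

def greedy_mtsp(depot, cities, m):
    """Round-robin nearest-neighbor construction for the mTSP.

    Each of the m salesmen starts at the depot and, in turn, extends
    his tour to the nearest unvisited city; tours are closed at the depot.
    """
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    unvisited = set(range(len(cities)))
    tours = [[depot] for _ in range(m)]
    while unvisited:
        for tour in tours:
            if not unvisited:
                break
            nxt = min(unvisited, key=lambda i: dist(tour[-1], cities[i]))
            tour.append(cities[nxt])
            unvisited.remove(nxt)
    for tour in tours:
        tour.append(depot)  # close each tour at the depot
    return tours

print(greedy_mtsp((0, 0), [(1, 0), (0, 2), (5, 5), (6, 4)], m=2))
```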

  1. Proceedings of the International Conference on Soft Computing for Problem Solving

    CERN Document Server

    Nagar, Atulya; Pant, Millie; Bansal, Jagdish

    2012-01-01

    The present book is based on the research papers presented in the International Conference on Soft Computing for Problem Solving (SocProS 2011), held at Roorkee, India. This book is divided into two volumes and covers a variety of topics, including mathematical modeling, image processing, optimization, swarm intelligence, evolutionary algorithms, fuzzy logic, neural networks, forecasting, data mining etc. Particular emphasis is laid on Soft Computing and its application to diverse fields. The prime objective of the book is to familiarize the reader with the latest scientific developments that are taking place in various fields and the latest sophisticated problem solving tools that are being developed to deal with the complex and intricate problems that are otherwise difficult to solve by the usual and traditional methods. The book is directed to the researchers and scientists engaged in various fields of Science and Technology.

  2. COYOTE: a finite element computer program for nonlinear heat conduction problems

    International Nuclear Information System (INIS)

    Gartling, D.K.

    1978-06-01

    COYOTE is a finite element computer program designed for the solution of two-dimensional, nonlinear heat conduction problems. The theoretical and mathematical basis used to develop the code is described. Program capabilities and complete user instructions are presented. Several example problems are described in detail to demonstrate the use of the program

  3. Human-computer interfaces applied to numerical solution of the Plateau problem

    Science.gov (United States)

    Elias Fabris, Antonio; Soares Bandeira, Ivana; Ramos Batista, Valério

    2015-09-01

    In this work we present a Matlab code to solve the Plateau problem numerically, including a human-computer interface. The Plateau problem has applications in areas of knowledge such as Computer Graphics. The solution method is the same as that of the Surface Evolver, but the difference is a complete graphical interface with the user. This will enable us to implement other kinds of interfaces, such as ocular mouse, voice, touch, etc. To date, Evolver does not include any graphical interface, which restricts its use by the scientific community. In particular, its use is practically impossible for most physically challenged people.

  4. Solving algebraic computational problems in geodesy and geoinformatics the answer to modern challenges

    CERN Document Server

    Awange, Joseph L

    2004-01-01

    While preparing and teaching 'Introduction to Geodesy I and II' to undergraduate students at Stuttgart University, we noticed a gap which motivated the writing of the present book: almost every topic that we taught required some skills in algebra, and in particular, computer algebra! From positioning to transformation problems inherent in geodesy and geoinformatics, knowledge of algebra and application of computer algebra software were required. In preparing this book, therefore, we have attempted to put together basic concepts of abstract algebra which underpin the techniques for solving algebraic problems. Algebraic computational algorithms useful for solving problems which require exact solutions to nonlinear systems of equations are presented and tested on various problems. Though the present book focuses mainly on the two fields, the concepts and techniques presented herein are nonetheless applicable to other fields where algebraic computational problems might be encountered. In Engineering, for example, network densification and robo...

  5. The multilevel fast multipole algorithm (MLFMA) for solving large-scale computational electromagnetics problems

    CERN Document Server

    Ergul, Ozgur

    2014-01-01

    The Multilevel Fast Multipole Algorithm (MLFMA) for Solving Large-Scale Computational Electromagnetic Problems provides a detailed and instructional overview of implementing MLFMA. The book: presents a comprehensive treatment of the MLFMA algorithm, including basic linear algebra concepts, recent developments on parallel computation, and a number of application examples; covers solutions of electromagnetic problems involving dielectric objects and perfectly-conducting objects; discusses applications including scattering from airborne targets, scattering from red

  6. Use of Software Programs as Zero Fill Help in Overcoming the Problem Around Hard Drive

    OpenAIRE

    Eko Prasetyo Nugroho; Fivtatianti Fivtatianti, Skom, MM

    2003-01-01

    Zero Fill is a software tool designed specifically for Quantum-branded hard disk drives. Its function is to format the hard drive, but at the lowest level: rather than performing an ordinary format, the software operates in what is commonly referred to as a low-level format. The advantage of this software is that it can repair the drive and remove all existing data within the disk, such as files...

  7. Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems.

    Science.gov (United States)

    Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A

    2017-12-01

    The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
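
    Based on the description above, a minimal NumPy sketch of one rank-one (sum-of-outer-products) block coordinate descent sweep; the hard-threshold rule, penalty convention, and atom normalization here are assumptions for illustration, not a faithful reproduction of the paper's SOUP-DIL algorithms:

```python
import numpy as np

def soup_dil_pass(Y, D, C, lam):
    """One block-coordinate-descent sweep over atoms for Y ~ D @ C.T.

    Y: (n, N) training signals, D: (n, J) dictionary with unit-norm columns,
    C: (N, J) sparse coefficients, lam: hard-threshold level (l0-style penalty).
    Each atom's coefficients and direction are updated in closed form
    against the residual that excludes that atom's own contribution.
    """
    for j in range(D.shape[1]):
        C[:, j] = 0.0  # exclude atom j so D @ C.T is the residual's model part
        # Correlation of the residual E = Y - D @ C.T with atom j.
        corr = Y.T @ D[:, j] - C @ (D.T @ D[:, j])
        c = np.where(np.abs(corr) >= lam, corr, 0.0)  # hard threshold
        Ec = Y @ c - D @ (C.T @ c)  # E @ c, computed without forming E
        if np.linalg.norm(Ec) > 0:
            D[:, j] = Ec / np.linalg.norm(Ec)  # unit-norm atom update
        C[:, j] = c
    return D, C

# Toy usage: random data, a few sweeps.
rng = np.random.default_rng(0)
Y = rng.standard_normal((16, 100))
D = rng.standard_normal((16, 8)); D /= np.linalg.norm(D, axis=0)
C = np.zeros((100, 8))
for _ in range(10):
    D, C = soup_dil_pass(Y, D, C, lam=1.0)
```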

  8. Computer Use and Behavior Problems in Twice-Exceptional Students

    Science.gov (United States)

    Alloway, Tracy Packiam; Elsworth, Miquela; Miley, Neal; Seckinger, Sean

    2016-01-01

    This pilot study investigated how engagement with computer games and TV exposure may affect behaviors of gifted students. We also compared behavioral and cognitive profiles of twice-exceptional students and children with Attention Deficit/Hyperactivity Disorder (ADHD). Gifted students were divided into those with behavioral problems and those…

  9. A review on economic emission dispatch problems using quantum computational intelligence

    Science.gov (United States)

    Mahdi, Fahad Parvez; Vasant, Pandian; Kallimani, Vish; Abdullah-Al-Wadud, M.

    2016-11-01

    Economic emission dispatch (EED) problems are among the most crucial problems in power systems. Growing energy demand, limited natural resources and global warming have made this topic a center of discussion and research. This paper reviews the use of Quantum Computational Intelligence (QCI) in solving economic emission dispatch problems. QCI techniques like the Quantum Genetic Algorithm (QGA) and the Quantum Particle Swarm Optimization (QPSO) algorithm are discussed here. This paper will encourage researchers to use more QCI-based algorithms to obtain better optimal results in solving EED problems.

  10. Proceedings of the International Conference on Soft Computing for Problem Solving

    CERN Document Server

    Nagar, Atulya; Pant, Millie; Bansal, Jagdish

    2012-01-01

    The present book is based on the research papers presented in the International Conference on Soft Computing for Problem Solving (SocProS 2011), held at Roorkee, India. This book is divided into two volumes and covers a variety of topics, including mathematical modeling, image processing, optimization, swarm intelligence, evolutionary algorithms, fuzzy logic, neural networks, forecasting, data mining etc. Particular emphasis is laid on Soft Computing and its application to diverse fields. The prime objective of the book is to familiarize the reader with the latest scientific developments that are taking place in various fields and the latest sophisticated problem solving tools that are being developed to deal with the complex and intricate problems that are otherwise difficult to solve by the usual and traditional methods. The book is directed to the researchers and scientists engaged in various fields of Science and Technology.

  11. On a class of O(n²) problems in computational geometry

    NARCIS (Netherlands)

    Gajentaan, A.; Overmars, M.H.

    1993-01-01

    There are many problems in computational geometry for which the best known algorithms take time O(n²) (or more) in the worst case, while only very low lower bounds are known. In this paper we describe a large class of problems for which we prove that they are all at least as difficult as the following

  12. Effect of Computer-Presented Organizational/Memory Aids on Problem Solving Behavior.

    Science.gov (United States)

    Steinberg, Esther R.; And Others

    This research studied the effects of computer-presented organizational/memory aids on problem solving behavior. The aids were either matrix or verbal charts shown on the display screen next to the problem. The 104 college student subjects were randomly assigned to one of the four conditions: type of chart (matrix or verbal chart) and use of charts…

  13. To the problem of reliability standardization in computer-aided manufacturing at NPP units

    International Nuclear Information System (INIS)

    Yastrebenetskij, M.A.; Shvyryaev, Yu.V.; Spektor, L.I.; Nikonenko, I.V.

    1989-01-01

    The problems of reliability standardization in computer-aided manufacturing at NPP units are analyzed, considering the following approaches: computer-aided manufacturing of NPP units as part of an automated technological complex, and computer-aided manufacturing of NPP units as a multi-functional system. The selection of the composition of reliability indices for computer-aided manufacturing of NPP units under each of the considered approaches is substantiated.

  14. A HYBRID HEURISTIC ALGORITHM FOR THE CLUSTERED TRAVELING SALESMAN PROBLEM

    Directory of Open Access Journals (Sweden)

    Mário Mestria

    2016-04-01

    This paper proposes a hybrid heuristic algorithm, based on the metaheuristics Greedy Randomized Adaptive Search Procedure, Iterated Local Search and Variable Neighborhood Descent, to solve the Clustered Traveling Salesman Problem (CTSP). The hybrid heuristic algorithm uses several variable neighborhood structures, combining intensification (using local search operators) and diversification (constructive heuristic and perturbation routine). In the CTSP, the vertices are partitioned into clusters and all vertices of each cluster have to be visited contiguously. The CTSP is NP-hard since it includes the well-known Traveling Salesman Problem (TSP) as a special case. Our hybrid heuristic is compared with three heuristics from the literature and an exact method. Computational experiments are reported for different classes of instances. Experimental results show that the proposed hybrid heuristic obtains competitive results within reasonable computational time.

  15. Computer use problems and accommodation strategies at work and home for people with systemic sclerosis: a needs assessment.

    Science.gov (United States)

    Baker, Nancy A; Aufman, Elyse L; Poole, Janet L

    2012-01-01

    We identified the extent of the need for interventions and assistive technology to prevent computer use problems in people with systemic sclerosis (SSc) and the accommodation strategies they use to alleviate such problems. Respondents were recruited through the Scleroderma Foundation. Twenty-seven people with SSc who used a computer and reported difficulty in working completed the Computer Problems Survey. All but one of the respondents reported at least one problem with at least one equipment type. The highest numbers of respondents reported problems with keyboards (88%) and chairs (85%). More than half reported discomfort in the past month associated with the chair, keyboard, and mouse. Respondents used a variety of accommodation strategies. Many respondents experienced problems and discomfort related to computer use. The characteristic symptoms of SSc may contribute to these problems. Occupational therapy interventions for computer use problems in clients with SSc need to be tested. Copyright © 2012 by the American Occupational Therapy Association, Inc.

  16. An algorithm to compute a rule for division problems with multiple references

    Directory of Open Access Journals (Sweden)

    Sánchez Sánchez, Francisca J.

    2012-01-01

    In this paper we consider an extension of the classic division problem with claims: the division problem with multiple references. Hinojosa et al. (2012) provide a solution for this type of problem. The aim of this work is to extend their results by proposing an algorithm that calculates allocations based on these results. All computational details are provided in the paper.
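
    As background, the simplest rule for classic division problems with claims is the proportional rule sketched below (a textbook rule for the single-reference setting, not the multiple-reference rule of Hinojosa et al.):

```python
def proportional_rule(estate, claims):
    """Divide `estate` among claimants in proportion to their claims."""
    total = sum(claims)
    if total == 0:
        return [0.0] * len(claims)
    return [estate * c / total for c in claims]

print(proportional_rule(100, [60, 90, 150]))  # [20.0, 30.0, 50.0]
```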

  17. Computational approach to large quantum dynamical problems

    International Nuclear Information System (INIS)

    Friesner, R.A.; Brunet, J.P.; Wyatt, R.E.; Leforestier, C.; Binkley, S.

    1987-01-01

    The organizational structure is described for a new program that permits computations on a variety of quantum mechanical problems in chemical dynamics and spectroscopy. Particular attention is devoted to developing and using algorithms that exploit the capabilities of current vector supercomputers. A key component in this procedure is the recursive transformation of the large sparse Hamiltonian matrix into a much smaller tridiagonal matrix. An application to time-dependent laser molecule energy transfer is presented. Rate of energy deposition in the multimode molecule for systematic variations in the molecular intermode coupling parameters is emphasized
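
    The recursive sparse-to-tridiagonal transformation described here is the Lanczos recurrence; below is a minimal dense-matrix sketch for a real symmetric H (illustrative only, omitting the re-orthogonalization a production code would need):

```python
import numpy as np

def lanczos(H, v0, m):
    """Reduce a real symmetric matrix H to an m x m tridiagonal matrix
    using the Lanczos three-term recurrence, starting from vector v0."""
    n = len(v0)
    V = np.zeros((m, n))
    alpha = np.zeros(m)       # diagonal of the tridiagonal matrix
    beta = np.zeros(m - 1)    # off-diagonal entries
    V[0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = H @ V[j]
        alpha[j] = V[j] @ w
        w -= alpha[j] * V[j]
        if j > 0:
            w -= beta[j - 1] * V[j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[j + 1] = w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
```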

  18. Standard Error Computations for Uncertainty Quantification in Inverse Problems: Asymptotic Theory vs. Bootstrapping.

    Science.gov (United States)

    Banks, H T; Holm, Kathleen; Robbins, Danielle

    2010-11-01

    We computationally investigate two approaches for uncertainty quantification in inverse problems for nonlinear parameter dependent dynamical systems. We compare the bootstrapping and asymptotic theory approaches for problems involving data with several noise forms and levels. We consider both constant variance absolute error data and relative error which produces non-constant variance data in our parameter estimation formulations. We compare and contrast parameter estimates, standard errors, confidence intervals, and computational times for both bootstrapping and asymptotic theory methods.
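
    A minimal sketch of the bootstrap side of the comparison (a generic nonparametric bootstrap for a standard error, not the authors' code):

```python
import numpy as np

def bootstrap_se(data, estimator, n_boot=1000, seed=0):
    """Estimate the standard error of `estimator` by resampling
    `data` with replacement `n_boot` times."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    stats = [estimator(rng.choice(data, size=len(data), replace=True))
             for _ in range(n_boot)]
    return np.std(stats, ddof=1)

# Example: standard error of a sample mean; theory gives 2 / sqrt(100) = 0.2
sample = np.random.default_rng(1).normal(loc=5.0, scale=2.0, size=100)
print(bootstrap_se(sample, np.mean))
```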

  19. TRUMP3-JR: a finite difference computer program for nonlinear heat conduction problems

    International Nuclear Information System (INIS)

    Ikushima, Takeshi

    1984-02-01

    Computer program TRUMP3-JR is a revised version of TRUMP3, a finite difference computer program used for the solution of multi-dimensional nonlinear heat conduction problems. Pre- and post-processing for input data generation and graphical representation of TRUMP3 calculation results are available in TRUMP3-JR. The calculation equations, program descriptions and user instructions are presented. A sample problem is described to demonstrate the use of the program. (author)

  20. Finding Solutions to Different Problems Simultaneously in a Multi-molecule Simulated Reactor

    Directory of Open Access Journals (Sweden)

    Jaderick P. Pabico

    2014-12-01

    In recent years, the chemical metaphor has emerged as a computational paradigm based on the observation by different researchers that the chemical systems of living organisms possess inherent computational properties. In this metaphor, artificial molecules are considered as data or solutions, while the interactions among molecules are defined by an algorithm. In recent studies, the chemical metaphor was used as a distributed stochastic algorithm that simulates an abstract reactor to solve the traveling salesperson problem (TSP). Here, the artificial molecules represent Hamiltonian cycles, while the reactor is governed by reactions that can re-order Hamiltonian cycles. In this paper, a multi-molecule reactor (MMR-n) that simulates chemical catalysis is introduced. The MMR-n solves in parallel three NP-hard computational problems, namely: the optimization of the genetic parameters of a plant growth simulation model, the solution of large instances of symmetric and asymmetric TSP, and the static aircraft landing scheduling problem (ALSP). The MMR-n was shown to be a computational metaphor capable of optimizing the cultivar coefficients of the CERES-Rice model and, at the same time, able to find solutions to TSP and ALSP. The MMR-n as a computational paradigm has a better computational wall clock time than when these three problems are solved individually by a single-molecule reactor (MMR-1).

  1. Study and application of Dot 3.5 computer code in radiation shielding problems

    International Nuclear Information System (INIS)

    Otto, A.C.; Mendonca, A.G.; Maiorino, J.R.

    1983-01-01

    The application of the discrete-ordinates (S_N) transport code DOT 3.5 to radiation shielding problems is reviewed. To determine the best available options (convergence scheme, calculation mode) of the DOT 3.5 computer code for radiation shielding problems, a standard model from the Argonne Code Center was selected and several combinations of calculation options were compared for accuracy of results and computational time, in order to select the most efficient option. To illustrate the versatility and efficacy of the code for typical shielding problems, the calculation of neutron streaming along a sodium coolant channel is illustrated. (E.G.) [pt]

  2. Solving Constraint Satisfaction Problems with Networks of Spiking Neurons.

    Science.gov (United States)

    Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang

    2016-01-01

    Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the network out of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, networks of spiking neurons carry out a more efficient stochastic search for good solutions to the Traveling Salesman Problem than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.

  3. Computer-Presented Organizational/Memory Aids as Instruction for Solving Pico-Fomi Problems.

    Science.gov (United States)

    Steinberg, Esther R.; And Others

    1985-01-01

    Describes investigation of effectiveness of computer-presented organizational/memory aids (matrix and verbal charts controlled by computer or learner) as instructional technique for solving Pico-Fomi problems, and the acquisition of deductive inference rules when such aids are present. Results indicate chart use control should be adapted to…

  4. On Wigner's problem, computability theory, and the definition of life

    International Nuclear Information System (INIS)

    Swain, J.

    1998-01-01

    In 1961, Eugene Wigner presented a clever argument that in a world which is adequately described by quantum mechanics, self-reproducing systems in general, and perhaps life in particular, would be incredibly improbable. The problem and some attempts at its solution are examined, and a new solution is presented based on computability theory. In particular, it is shown that computability theory provides limits on what can be known about a system in addition to those which arise from quantum mechanics. (author)

  5. Resource-constrained project scheduling: computing lower bounds by solving minimum cut problems

    NARCIS (Netherlands)

    Möhring, R.H.; Nesetril, J.; Schulz, A.S.; Stork, F.; Uetz, Marc Jochen

    1999-01-01

    We present a novel approach to compute Lagrangian lower bounds on the objective function value of a wide class of resource-constrained project scheduling problems. The basis is a polynomial-time algorithm to solve the following scheduling problem: Given a set of activities with start-time dependent

  6. "Seeing It on the Screen Isn't Really Seeing It": Reading Problems of Writers Using Word Processing.

    Science.gov (United States)

    Haas, Christina

    An observational study examined computer writers' use of hard copy for reading. The study begins with a description, based on interviews, of four kinds of reading problems encountered by writers using word processing: formatting, proofreading, reorganizing, and critical reading ("getting a sense of the text"). Subjects, six freshmen…

  7. Crowd Computing as a Cooperation Problem: An Evolutionary Approach

    Science.gov (United States)

    Christoforou, Evgenia; Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A.; Sánchez, Angel

    2013-05-01

    Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has mostly been considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work in an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain (not very restrictive) conditions, the master can ensure the reliability of the answer resulting from the process. Then, we study the model by numerical simulations, finding that convergence, meaning that the system reaches a point in which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory does not provide information. The discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further.

  8. Artificial immune algorithm for multi-depot vehicle scheduling problems

    Science.gov (United States)

    Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling

    2008-10-01

    In the fast-developing logistics and supply chain management fields, one of the key problems in decision support systems is how to arrange, for many customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help in solving this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be considered as delivery cost or time consumption. The MDVSP is a nondeterministic polynomial-time hard (NP-hard) problem which cannot be solved to optimality within polynomially bounded computational time. Many different approaches have been developed to tackle the MDVSP, such as exact algorithms (EA), one-stage approaches (OSA), two-phase heuristic methods (TPHM), tabu search algorithms (TSA), genetic algorithms (GA) and hierarchical multiplex structures (HIMS). Most of the methods mentioned above are time consuming and run a high risk of getting trapped in local optima. In this paper, a new search algorithm is proposed to solve the MDVSP based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving MDVSP problems.
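
    For orientation, many AIS optimizers follow the clonal-selection loop sketched below (a generic illustration with assumed fitness and mutation helpers, not the authors' algorithm):

```python
import random

def clonal_selection(init_pop, fitness, mutate, generations=100, n_clones=5):
    """Generic clonal-selection loop: clone the better antibodies,
    hypermutate the clones, and keep the best individuals."""
    population = list(init_pop)
    for _ in range(generations):
        population.sort(key=fitness)                      # lower = better
        elites = population[:max(1, len(population) // 2)]
        clones = [mutate(e) for e in elites for _ in range(n_clones)]
        population = sorted(elites + clones, key=fitness)[:len(population)]
    return population[0]

# Toy usage: minimize (x - 3)^2 over real-valued "antibodies"
best = clonal_selection(
    init_pop=[random.uniform(-10, 10) for _ in range(20)],
    fitness=lambda x: (x - 3) ** 2,
    mutate=lambda x: x + random.gauss(0, 0.5),
)
print(round(best, 2))  # close to 3
```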

  9. Hard equality constrained integer knapsacks

    NARCIS (Netherlands)

    Aardal, K.I.; Lenstra, A.K.; Cook, W.J.; Schulz, A.S.

    2002-01-01

    We consider the following integer feasibility problem: "Given positive integers a₀, a₁, …, aₙ, with gcd(a₁, …, aₙ) = 1 and a = (a₁, …, aₙ), does there exist a nonnegative integer vector x satisfying ax = a₀?" Some instances of this type have been found to be extremely hard to solve

  10. Problems of mineral tax computation in the oil and gas sector

    Directory of Open Access Journals (Sweden)

    Н. Г. Привалов

    2017-04-01

    The paper demonstrates the role of the mineral tax in the overall tax revenues of the budget. Problems of tax computation and payment are reviewed; taxpayers and the taxation basis of the amount of extracted minerals are clearly defined. Issues of the rental content of natural resource taxes are reviewed, as well as the problem of correctly defining the rental component when calculating the mineral tax for liquid and gaseous hydrocarbons. One important problem in mineral tax calculation is a conflict between two laws: the Subsoil Law and the Tax Code of the Russian Federation (Chapter 26). There is an ambiguity in the mechanism for calculating the amounts of extracted mineral resources between the positions of the Tax Code and the Subsoil Law. The second problem is the need to amend the mineral tax for oil extraction in the same way as has been done for gas extraction, where the characteristics of each field are taken into account. This would provide a basis for the correct computation of the natural resource rent for liquid and gaseous hydrocarbons. The paper offers recommendations for the Russian authorities on this issue.

  11. Solving multiconstraint assignment problems using learning automata.

    Science.gov (United States)

    Horn, Geir; Oommen, B John

    2010-02-01

    This paper considers the NP-hard problem of object assignment with respect to multiple constraints: assigning a set of elements (or objects) into mutually exclusive classes (or groups), where the elements which are "similar" to each other are hopefully located in the same class. The literature reports solutions in which the similarity constraint consists of a single index that is inappropriate for the type of multiconstraint problems considered here and where the constraints could simultaneously be contradictory. This feature, where we permit possibly contradictory constraints, distinguishes this paper from the state of the art. Indeed, we are aware of no learning automata (or other heuristic) solutions which solve this problem in its most general setting. Such a scenario is illustrated with the static mapping problem, which consists of distributing the processes of a parallel application onto a set of computing nodes. This is a classical and yet very important problem within the areas of parallel computing, grid computing, and cloud computing. We have developed four learning-automata (LA)-based algorithms to solve this problem: First, a fixed-structure stochastic automata algorithm is presented, where the processes try to form pairs to go onto the same node. This algorithm solves the problem, although it requires some centralized coordination. As it is desirable to avoid centralized control, we subsequently present three different variable-structure stochastic automata (VSSA) algorithms, which have superior partitioning properties in certain settings, although they forfeit some of the scalability features of the fixed-structure algorithm. All three VSSA algorithms model the processes as automata having first the hosting nodes as possible actions; second, the processes as possible actions; and, third, attempting to estimate the process communication digraph prior to probabilistically mapping the processes. This paper, which, we believe, comprehensively reports the
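
    For background, variable-structure automata of this kind typically rely on updates like the linear reward-inaction (L_RI) scheme sketched below (a textbook update rule, not the paper's exact algorithms):

```python
import random

def lri_step(probs, reward_fn, lam=0.1):
    """One linear reward-inaction (L_RI) step: sample an action from
    the probability vector; if the environment rewards it, shift
    probability mass toward it; on penalty, leave the vector unchanged."""
    action = random.choices(range(len(probs)), weights=probs)[0]
    if reward_fn(action):
        probs = [p * (1 - lam) for p in probs]
        probs[action] += lam   # the vector still sums to 1
    return action, probs
```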

  12. Machine learning meliorates computing and robustness in discrete combinatorial optimization problems.

    Directory of Open Access Journals (Sweden)

    Fushing Hsieh

    2016-11-01

    Discrete combinatorial optimization problems in the real world are typically defined via an ensemble of potentially high dimensional measurements pertaining to all subjects of a system under study. We point out that such a data ensemble in fact embeds the system's information content, which is not directly used in defining the combinatorial optimization problems. Can machine learning algorithms extract such information content and make combinatorial optimizing tasks more efficient? Would such algorithmic computations bring new perspectives into this classic topic of Applied Mathematics and Theoretical Computer Science? We show that the answers to both questions are positive. One key reason is permutation invariance: the data ensemble of subjects' measurement vectors is permutation invariant when it is represented through a subject-vs-measurement matrix. An unsupervised machine learning algorithm, called Data Mechanics (DM), is applied to find optimal permutations on the row and column axes such that the permuted matrix reveals coupled deterministic and stochastic structures as the system's information content. The deterministic structures are shown to facilitate a geometry-based divide-and-conquer scheme that helps the optimizing task, while the stochastic structures are used to generate an ensemble of mimicries retaining the deterministic structures, and then reveal the robustness pertaining to the original version of the optimal solution. Two simulated systems, the Assignment Problem and the Traveling Salesman Problem, are considered. Beyond demonstrating computational advantages and intrinsic robustness in the two systems, we propose brand new robust optimal solutions. We believe such robust versions of optimal solutions are potentially more realistic and practical in real world settings.

  13. Solving Vertex Cover Problem Using DNA Tile Assembly Model

    Directory of Open Access Journals (Sweden)

    Zhihua Chen

    2013-01-01

    DNA tile assembly models are a class of mathematically distributed and parallel biocomputing models based on DNA tiles. In previous works, tile assembly models have been proved to be Turing-universal; that is, such systems can do what a Turing machine can do. In this paper, we use tile systems to solve a computationally hard problem. Mathematically, we construct three tile subsystems, which can be combined together to solve the vertex cover problem. As a result, each of the proposed tile subsystems consists of Θ(1) types of tiles, and the assembly process is executed in a parallel way (like DNA's biological function in cells); thus the systems can generate the solution of the problem in linear time with respect to the size of the graph.

  14. Correction of facial and mandibular asymmetry using a computer aided design/computer aided manufacturing prefabricated titanium implant.

    Science.gov (United States)

    Watson, Jason; Hatamleh, Muhanad; Alwahadni, Ahed; Srinivasan, Dilip

    2014-05-01

    Patients with significant craniofacial asymmetry may have functional problems associated with their occlusion and aesthetic concerns related to the imbalance in soft and hard tissue profiles. This report details a case of facial asymmetry secondary to a left mandible angle deficiency caused by previous radiotherapy. We describe the correction of the bony deformity using a computer aided design/computer aided manufacturing custom-made titanium onlay produced by novel direct metal laser sintering. The direct metal laser sintering onlay proved a very accurate operative fit and gave a good aesthetic correction of the bony defect with no reported postoperative complications. It is a useful low-morbidity technique, with no resorption or associated donor-site complications.

  15. A Theoretical Analysis: Physical Unclonable Functions and The Software Protection Problem

    Energy Technology Data Exchange (ETDEWEB)

    Nithyanand, Rishab [Stony Brook Univ., NY (United States); Solis, John H. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2011-09-01

    Physical Unclonable Functions (PUFs) or Physical One Way Functions (P-OWFs) are physical systems whose responses to input stimuli (i.e., challenges) are easy to measure (within reasonable error bounds) but hard to clone. This property of unclonability is due to the accepted hardness of replicating the multitude of uncontrollable manufacturing characteristics and makes PUFs useful in solving problems such as device authentication, software protection, licensing, and certified execution. In this paper, we focus on the effectiveness of PUFs for software protection and show that traditional non-computational (black-box) PUFs cannot solve the problem against real world adversaries in offline settings. Our contributions are the following: We provide two real world adversary models (weak and strong variants) and present definitions for security against the adversaries. We continue by proposing schemes secure against the weak adversary and show that no scheme is secure against a strong adversary without the use of trusted hardware. Finally, we present a protection scheme secure against strong adversaries based on trusted hardware.

  16. Cobalt allergy in hard metal workers

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, T; Rystedt, I

    1983-03-01

    Hard metal contains about 10% cobalt. 853 hard metal workers were examined and patch tested with substances from their environment. Initial patch tests with 1% cobalt chloride showed 62 positive reactions. By means of secondary serial dilution tests, allergic reactions to cobalt were reproduced in 9 men and 30 women. Weak reactions could not normally be reproduced. A history of hand eczema was found in 36 of the 39 individuals with reproducible positive test reactions to cobalt, while 21 of 23 with a positive initial patch test but negative serial dilution test had never had any skin problems. Hand etching and hand grinding, mainly female activities and traumatic to the hands, were found to involve the greatest risk of cobalt sensitization. 24 individuals had an isolated cobalt allergy. They had probably been sensitized by hard metal work, while the individuals, all women, who had simultaneous nickel allergy had probably been sensitized to nickel before their employment and then became sensitized to cobalt by hard metal work. A traumatic occupation, which causes irritant contact dermatitis and/or a previous contact allergy or atopy is probably a prerequisite for the development of cobalt allergy.

  17. Vectorization on the star computer of several numerical methods for a fluid flow problem

    Science.gov (United States)

    Lambiotte, J. J., Jr.; Howser, L. M.

    1974-01-01

    A reexamination of some numerical methods is considered in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes and a comparison is made of the methods for serial computation.

  18. Deaf and hard of hearing students' perspectives on bullying and school climate.

    Science.gov (United States)

    Weiner, Mary T; Day, Stefanie J; Galvan, Dennis

    2013-01-01

    Student perspectives reflect school climate. The study examined perspectives among deaf and hard of hearing students in residential and large day schools regarding bullying, and compared these perspectives with those of a national database of hearing students. The participants were 812 deaf and hard of hearing students in 11 U.S. schools. Data were derived from the Olweus Bullying Questionnaire (Olweus, 2007b), a standardized self-reported survey with multiple-choice questions focusing on different aspects of bullying problems. Significant bullying problems were found in deaf school programs. It appears that deaf and hard of hearing students experience bullying at rates 2-3 times higher than those reported by hearing students. Deaf and hard of hearing students reported that school personnel intervened less often when bullying occurred than was reported in the hearing sample. Results indicate the need for school climate improvement for all students, regardless of hearing status.

  19. COGNITIVE COMPUTER GRAPHICS AS A MEANS OF "SOFT" MODELING IN PROBLEMS OF RESTORATION OF FUNCTIONS OF TWO VARIABLES

    Directory of Open Access Journals (Sweden)

    A.N. Khomchenko

    2016-08-01

    The paper considers the problem of bi-cubic interpolation on finite elements of the serendipity family. Using cognitive-graphical analysis, the rigid model of Ergatoudis, Irons and Zenkevich (1968) is compared with alternative models obtained by three methods: direct geometric design, weighted averaging of the basis polynomials, and systematic generation of bases (an advanced Taylor procedure). Emphasis is placed on the phenomenon of "gravitational repulsion" (the Zenkevich paradox). The causes of physically inadequate spectra of nodal loads on serendipity elements of higher orders are investigated. Soft modeling allows one to build many serendipity elements of bicubic interpolation, without even needing to know the exact form of the rigid model. Different interpretations of the integral characteristics of the basis polynomials are offered: geometric, physical, and probabilistic. In the theory of interpolation of functions of two variables, a soft model means a model amenable to change through the choice of basis. Such changes are excluded in the family of Lagrangian finite elements of higher orders (hard modeling). The standard models of the serendipity family (Zenkevich) were also rigid. It was found that the "responsibility" for the rigidity of a serendipity model rests with ruled surfaces of zero Gaussian curvature (conoids) that predominate in the basis set. Cognitive portraits of the zero lines of standard serendipity surfaces suggest that, in order to "soften" a serendipity model, the conoids should be replaced by surfaces of alternating Gaussian curvature. The article presents alternative (soft) bases for serendipity models. The work is devoted to solving scientific and technological problems aimed at the creation, dissemination and use of cognitive computer graphics in teaching and learning. The results are of interest to students of the specialties "Computer Science and Information Technologies", "System Analysis", and "Software Engineering", as well as…

  20. Perceived problems with computer gaming and internet use among adolescents: measurement tool for non-clinical survey studies

    Science.gov (United States)

    2014-01-01

    Background Existing instruments for measuring problematic computer and console gaming and internet use are often lengthy and often based on a pathological perspective. The objective was to develop and present a new and short non-clinical measurement tool for perceived problems related to computer use and gaming among adolescents and to study the association between screen time and perceived problems. Methods Cross-sectional school survey of 11-, 13-, and 15-year-old students in thirteen schools in the City of Aarhus, Denmark; participation rate 89%, n = 2100. The main exposure was time spent on weekdays on computer and console gaming and on internet use for communication and surfing. The outcome measures were three indexes of perceived problems related to computer and console gaming and internet use. Results The three new indexes showed high face validity and acceptable internal consistency. Most schoolchildren with high screen time did not experience problems related to computer use. Still, there was a strong and graded association between time use and perceived problems related to computer gaming, console gaming (only boys) and internet use, with odds ratios ranging from 6.90 to 10.23. Conclusion The three new measures of perceived problems related to computer and console gaming and internet use among adolescents are appropriate, reliable and valid for use in non-clinical surveys about young people's everyday life and behaviour. These new measures do not assess Internet Gaming Disorder as it is listed in the DSM and therefore have no parity with DSM criteria. We found an increasing risk of perceived problems with increasing time spent on gaming and internet use. Nevertheless, most schoolchildren who spent much time on gaming and internet use did not experience problems. PMID:24731270

  1. Revisiting the definition of local hardness and hardness kernel.

    Science.gov (United States)

    Polanco-Ramírez, Carlos A; Franco-Pérez, Marco; Carmona-Espíndola, Javier; Gázquez, José L; Ayers, Paul W

    2017-05-17

    An analysis of the hardness kernel and local hardness is performed to propose new definitions for these quantities that follow a similar pattern to the one that characterizes the quantities associated with softness, that is, we have derived new definitions for which the integral of the hardness kernel over the whole space of one of the variables leads to local hardness, and the integral of local hardness over the whole space leads to global hardness. A basic aspect of the present approach is that global hardness keeps its identity as the second derivative of energy with respect to the number of electrons. Local hardness thus obtained depends on the first and second derivatives of energy and electron density with respect to the number of electrons. When these derivatives are approximated by a smooth quadratic interpolation of energy, the expression for local hardness reduces to the one intuitively proposed by Meneses, Tiznado, Contreras and Fuentealba. However, when one combines the first directional derivatives with smooth second derivatives one finds additional terms that allow one to differentiate local hardness for electrophilic attack from the one for nucleophilic attack. Numerical results related to electrophilic attacks on substituted pyridines, substituted benzenes and substituted ethenes are presented to show the overall performance of the new definition.

  2. Computing quantum discord is NP-complete

    International Nuclear Information System (INIS)

    Huang, Yichen

    2014-01-01

    We study the computational complexity of quantum discord (a measure of quantum correlation beyond entanglement), and prove that computing quantum discord is NP-complete. Therefore, quantum discord is computationally intractable: the running time of any algorithm for computing quantum discord is believed to grow exponentially with the dimension of the Hilbert space so that computing quantum discord in a quantum system of moderate size is not possible in practice. As by-products, some entanglement measures (namely entanglement cost, entanglement of formation, relative entropy of entanglement, squashed entanglement, classical squashed entanglement, conditional entanglement of mutual information, and broadcast regularization of mutual information) and constrained Holevo capacity are NP-hard/NP-complete to compute. These complexity-theoretic results are directly applicable in common randomness distillation, quantum state merging, entanglement distillation, superdense coding, and quantum teleportation; they may offer significant insights into quantum information processing. Moreover, we prove the NP-completeness of two typical problems: linear optimization over classical states and detecting classical states in a convex set, providing evidence that working with classical states is generically computationally intractable. (paper)

  3. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    Science.gov (United States)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.

  4. Desktop Grid Computing with BOINC and its Use for Solving the RND telecommunication Problem

    International Nuclear Information System (INIS)

    Vega-Rodriguez, M. A.; Vega-Perez, D.; Gomez-Pulido, J. A.; Sanchez-Perez, J. M.

    2007-01-01

    An important problem in mobile/cellular technology is trying to cover a certain geographical area using the smallest number of radio antennas while achieving the largest coverage rate. This is the well-known telecommunication problem identified as Radio Network Design (RND). This optimization problem can be solved by bio-inspired algorithms, among other options. In this work we use the PBIL (Population-Based Incremental Learning) algorithm, which has been little studied in this field but with which we have obtained very good results. PBIL is based on genetic algorithms and competitive learning (typical in neural networks), and is a population evolution model based on probabilistic models. Due to the high number of configuration parameters of PBIL, and because we wanted to test the RND problem with numerous variants, we used grid computing with BOINC (Berkeley Open Infrastructure for Network Computing). In this way, we were able to execute thousands of experiments in a few days using around 100 computers at the same time. In this paper we present the most interesting results from our work. (Author)
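
    A minimal sketch of the PBIL update at the core of this approach (standard PBIL over bit strings; the population size, learning rate, and toy fitness are assumptions):

```python
import random

def pbil(fitness, n_bits, pop_size=50, lr=0.1, generations=200):
    """Population-Based Incremental Learning: instead of evolving an
    explicit population, maintain a probability vector and nudge each
    bit probability toward the best individual sampled from it."""
    p = [0.5] * n_bits
    for _ in range(generations):
        pop = [[int(random.random() < pi) for pi in p] for _ in range(pop_size)]
        best = max(pop, key=fitness)
        p = [pi + lr * (bi - pi) for pi, bi in zip(p, best)]
    return p

# Toy usage: maximize the number of ones (OneMax)
probs = pbil(fitness=sum, n_bits=20)
print([round(x, 2) for x in probs])  # entries drift toward 1.0
```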

  5. Pathgroups, a dynamic data structure for genome reconstruction problems.

    Science.gov (United States)

    Zheng, Chunfang

    2010-07-01

    Ancestral gene order reconstruction problems, including the median problem, quartet construction, small phylogeny, guided genome halving and genome aliquoting, are NP-hard. Available heuristics dedicated to each of these problems are computationally costly for even small instances. We present a data structure enabling rapid heuristic solution of all these ancestral genome reconstruction problems. A generic greedy algorithm with look-ahead, based on an automatically generated priority system, suffices for all the problems using this data structure. The efficiency of the algorithm is due to fast updating of the structure at run time and to the simplicity of the priority scheme. We illustrate with the first rapid algorithm for quartet construction and apply this to a set of yeast genomes to corroborate a recent gene sequence-based phylogeny. http://albuquerque.bioinformatics.uottawa.ca/pathgroup/Quartet.html chunfang313@gmail.com Supplementary data are available at Bioinformatics online.

  6. Solving optimization problems by the public goods game

    Science.gov (United States)

    Javarone, Marco Alberto

    2017-09-01

    We introduce a method based on the Public Goods Game for solving optimization tasks. In particular, we focus on the Traveling Salesman Problem, an NP-hard problem whose search space grows exponentially with the number of cities. The proposed method considers a population whose agents are each provided with a random solution to the given problem. Agents interact by playing the Public Goods Game using the fitness of their solutions as the currency of the game. Notably, agents with better solutions provide higher contributions, while those with worse ones tend to imitate the solutions of richer agents to increase their fitness. Numerical simulations show that the proposed method computes exact solutions, and suboptimal ones, in the considered search spaces. As a result, beyond proposing a new heuristic for combinatorial optimization problems, our work aims to highlight the potential of evolutionary game theory beyond its current horizons.
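
    A minimal sketch of the payoff-driven imitation step underlying such a dynamic (a generic scheme; the fitness scale and perturbation are assumptions, not the paper's exact rules):

```python
import random

def imitation_step(agents, fitness, perturb):
    """Each agent compares itself with a random partner and, with a
    probability that grows with the payoff gap, adopts a perturbed
    copy of the partner's better solution."""
    for i, a in enumerate(agents):
        partner = random.choice(agents)
        gap = fitness(partner) - fitness(a)
        if gap > 0 and random.random() < gap / (1.0 + gap):
            agents[i] = perturb(partner)
    return agents
```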

  7. Stochastic interactions of two Brownian hard spheres in the presence of depletants

    International Nuclear Information System (INIS)

    Karzar-Jeddi, Mehdi; Fan, Tai-Hsi; Tuinier, Remco; Taniguchi, Takashi

    2014-01-01

    A quantitative analysis is presented for the stochastic interactions of a pair of Brownian hard spheres in non-adsorbing polymer solutions. The hard spheres are hypothetically trapped by optical tweezers and allowed for random motion near the trapped positions. The investigation focuses on the long-time correlated Brownian motion. The mobility tensor altered by the polymer depletion effect is computed by the boundary integral method, and the corresponding random displacement is determined by the fluctuation-dissipation theorem. From our computations it follows that the presence of depletion layers around the hard spheres has a significant effect on the hydrodynamic interactions and particle dynamics as compared to pure solvent and uniform polymer solution cases. The probability distribution functions of random walks of the two interacting hard spheres that are trapped clearly shift due to the polymer depletion effect. The results show that the reduction of the viscosity in the depletion layers around the spheres and the entropic force due to the overlapping of depletion zones have a significant influence on the correlated Brownian interactions

  8. FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems

    Science.gov (United States)

    Vourc'h, Eric; Rodet, Thomas

    2015-11-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 29, 2015. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011, and secondly at the initiative of Institut Farman, in May 2012, May 2013 and May 2014. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods

  9. Early identification: Language skills and social functioning in deaf and hard of hearing preschool children.

    Science.gov (United States)

    Netten, Anouk P; Rieffe, Carolien; Theunissen, Stephanie C P M; Soede, Wim; Dirks, Evelien; Korver, Anna M H; Konings, Saskia; Oudesluys-Murphy, Anne Marie; Dekker, Friedo W; Frijns, Johan H M

    2015-12-01

    Permanent childhood hearing impairment often results in speech and language problems that are already apparent in early childhood. Past studies show a clear link between language skills and the child's social-emotional functioning. The aim of this study was to examine the level of language and communication skills after the introduction of early identification services and their relation with social functioning and behavioral problems in deaf and hard of hearing children. Nationwide cross-sectional observation of a cohort of 85 early identified deaf and hard of hearing preschool children (aged 30-66 months). Parents reported on their child's communicative abilities (MacArthur-Bates Communicative Development Inventory III), social functioning and appearance of behavioral problems (Strengths and Difficulties Questionnaire). Receptive and expressive language skills were measured using the Reynell Developmental Language Scale and the Schlichting Expressive Language Test, derived from the child's medical records. Language and communicative abilities of early identified deaf and hard of hearing children are not on a par with hearing peers. Compared to normative scores from hearing children, parents of deaf and hard of hearing children reported lower social functioning and more behavioral problems. Higher communicative abilities were related to better social functioning and less behavioral problems. No relation was found between the degree of hearing loss, age at amplification, uni- or bilateral amplification, mode of communication and social functioning and behavioral problems. These results suggest that improving the communicative abilities of deaf and hard of hearing children could improve their social-emotional functioning. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. Computational methods in calculating superconducting current problems

    Science.gov (United States)

    Brown, David John, II

    Various computational problems in treating superconducting currents are examined. First, field inversion in spatial Fourier transform space is reviewed to obtain both one-dimensional transport currents flowing down a long thin tape, and a localized two-dimensional current. The problems associated with spatial high-frequency noise, created by finite resolution and experimental equipment, are presented, and resolved with a smooth Gaussian cutoff in spatial frequency space. Convergence of the Green's functions for the one-dimensional transport current densities is discussed, and particular attention is devoted to the negative effects of performing discrete Fourier transforms alone on fields asymptotically dropping like 1/r. Results of imaging simulated current densities are favorably compared to the original distributions after the resulting magnetic fields undergo the imaging procedure. The behavior of high-frequency spatial noise, and the behavior of the fields with a 1/r asymptote in the imaging procedure in our simulations is analyzed, and compared to the treatment of these phenomena in the published literature. Next, we examine calculation of Mathieu and spheroidal wave functions, solutions to the wave equation in elliptical cylindrical and oblate and prolate spheroidal coordinates, respectively. These functions are also solutions to Schrodinger's equations with certain potential wells, and are useful in solving time-varying superconducting problems. The Mathieu functions are Fourier expanded, and the spheroidal functions expanded in associated Legendre polynomials to convert the defining differential equations to recursion relations. The infinite number of linear recursion equations is converted to an infinite matrix, multiplied by a vector of expansion coefficients, thus becoming an eigenvalue problem. The eigenvalue problem is solved with root solvers, and the eigenvector problem is solved using a Jacobi-type iteration method, after preconditioning the
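
    A minimal sketch of the smooth Gaussian cutoff in spatial frequency space described above (illustrative numpy code for a 1-D field; the cutoff width is an assumption):

```python
import numpy as np

def gaussian_lowpass(field, sigma_k=0.2):
    """Damp high spatial frequencies with a smooth Gaussian cutoff in
    Fourier space; unlike a sharp cutoff, this avoids ringing."""
    fk = np.fft.fft(field)
    k = np.fft.fftfreq(len(field))
    return np.real(np.fft.ifft(fk * np.exp(-0.5 * (k / sigma_k) ** 2)))

rng = np.random.default_rng(0)
noisy = np.sin(np.linspace(0, 4 * np.pi, 256)) + 0.3 * rng.standard_normal(256)
smooth = gaussian_lowpass(noisy)
```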

  11. A short proof that phylogenetic tree reconstruction by maximum likelihood is hard.

    Science.gov (United States)

    Roch, Sebastien

    2006-01-01

    Maximum likelihood is one of the most widely used techniques to infer evolutionary histories. Although it is thought to be intractable, a proof of its hardness has been lacking. Here, we give a short proof that computing the maximum likelihood tree is NP-hard by exploiting a connection between likelihood and parsimony observed by Tuffley and Steel.

  12. A Short Proof that Phylogenetic Tree Reconstruction by Maximum Likelihood is Hard

    OpenAIRE

    Roch, S.

    2005-01-01

    Maximum likelihood is one of the most widely used techniques to infer evolutionary histories. Although it is thought to be intractable, a proof of its hardness has been lacking. Here, we give a short proof that computing the maximum likelihood tree is NP-hard by exploiting a connection between likelihood and parsimony observed by Tuffley and Steel.

  13. Personalized Computer-Assisted Mathematics Problem-Solving Program and Its Impact on Taiwanese Students

    Science.gov (United States)

    Chen, Chiu-Jung; Liu, Pei-Lin

    2007-01-01

    This study evaluated the effects of a personalized computer-assisted mathematics problem-solving program on the performance and attitude of Taiwanese fourth grade students. The purpose of this study was to determine whether the personalized computer-assisted program improved student performance and attitude over the nonpersonalized program…

  14. Scheduling of Fault-Tolerant Embedded Systems with Soft and Hard Timing Constraints

    DEFF Research Database (Denmark)

    Izosimov, Viacheslav; Pop, Paul; Eles, Petru

    2008-01-01

    In this paper we present an approach to the synthesis of fault-tolerant schedules for embedded applications with soft and hard real-time constraints. We are interested in guaranteeing the deadlines for the hard processes even in the case of faults, while maximizing the overall utility. We use time/utility functions to capture the utility of soft processes. Process re-execution is employed to recover from multiple faults. A single static schedule computed off-line is not fault tolerant and is pessimistic in terms of utility, while a purely online approach, which computes a new schedule every time a process…

  15. A DNA Computing Model for the Graph Vertex Coloring Problem Based on a Probe Graph

    Directory of Open Access Journals (Sweden)

    Jin Xu

    2018-02-01

    The biggest bottleneck in DNA computing is exponential explosion, in which the number of DNA molecules used as data in information processing grows exponentially with an increase of problem size. To overcome this bottleneck and improve the processing speed, we propose a DNA computing model to solve the graph vertex coloring problem. The main points of the model are as follows: ① The exponential explosion problem is solved by dividing subgraphs, reducing the vertex colors without losing the solutions, and ordering the vertices in subgraphs; and ② the bio-operation times are reduced considerably by a designed parallel polymerase chain reaction (PCR) technology that dramatically improves the processing speed. In this article, a 3-colorable graph with 61 vertices is used to illustrate the capability of the DNA computing model. The experiment showed that not only are all the solutions of the graph found, but also more than 99% of false solutions are deleted when the initial solution space is constructed. The powerful computational capability of the model was based on specific reactions among the large number of nanoscale oligonucleotide strands. All these tiny strands are operated by DNA self-assembly and parallel PCR. After thousands of accurate PCR operations, the solutions were found by recognizing, splicing, and assembling. We also prove that the searching capability of this model is up to O(3⁵⁹). By means of an exhaustive search, it would take more than 896 000 years for an electronic computer (5 × 10¹⁴ s⁻¹) to achieve this enormous task. This searching capability is the largest among both the electronic and non-electronic computers that have been developed since the DNA computing model was proposed by Adleman's research group in 2002 (with a searching capability of O(2²⁰)). Keywords: DNA computing, Graph vertex coloring problem, Polymerase chain reaction
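
    For a sense of scale, the classical exhaustive search that the molecular model parallelizes looks like the sketch below; at n = 61 it would have to enumerate 3⁶¹ ≈ 1.3 × 10²⁹ assignments:

```python
from itertools import product

def is_3_colorable(n, edges):
    """Brute-force 3-coloring check over all 3**n assignments;
    feasible only for very small n."""
    for coloring in product(range(3), repeat=n):
        if all(coloring[u] != coloring[v] for u, v in edges):
            return True
    return False

print(is_3_colorable(3, [(0, 1), (1, 2), (0, 2)]))  # True: a triangle is 3-colorable
```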

  16. Relating hard QCD processes through universality of mass singularities

    International Nuclear Information System (INIS)

    Amati, D.; Petronzio, R.; Veneziano, G.

    1978-01-01

    Hard QCD processes involving final jets are studied and compared by means of a simple approach to mass singularities. This is based on the Lee-Nauenberg-Kinoshita theorem and on a rather subtle use of gauge invariance in hard collinear gluon bremsstrahlung. One-loop results are easily derived for processes involving any number of initial quarks and/or currents. The method greatly simplifies the computation of higher-order loops at the leading log level, and the preliminary results allow one to conclude that the crucial features encountered at the one-loop level will persist. The authors are thus able to relate different hard processes and to show that suitable ratios of cross sections, being free from mass singularities, can be computed perturbatively, as usually assumed in QCD-inspired parton models. It is also possible to relate the universal leading mass singularities to leading scaling violations and thereby to extend the results of the operator product expansion method to processes outside the range of light-cone analysis. Some delicate points caused by confinement-related singularities (e.g. narrow resonance poles) are also discussed. (Auth.)

  17. Phase and vacancy behaviour of hard "slanted" cubes

    NARCIS (Netherlands)

    van Damme, R.; van der Meer, B.; van den Broeke, J. J.; Smallenburg, F.; Filion, L.

    2017-01-01

    We use computer simulations to study the phase behaviour of hard, right rhombic prisms as a function of the angle of their rhombic face (the "slant" angle). More specifically, using a combination of event-driven molecular dynamics simulations, Monte Carlo simulations, and free-energy calculations,

  18. Fuzzy logic, neural networks, and soft computing

    Science.gov (United States)

    Zadeh, Lofti A.

    1994-01-01

    The past few years have witnessed a rapid growth of interest in a cluster of modes of modeling and computation which may be described collectively as soft computing. The distinguishing characteristic of soft computing is that its primary aims are to achieve tractability, robustness, low cost, and high MIQ (machine intelligence quotient) through an exploitation of the tolerance for imprecision and uncertainty. Thus, in soft computing what is usually sought is an approximate solution to a precisely formulated problem or, more typically, an approximate solution to an imprecisely formulated problem. A simple case in point is the problem of parking a car. Generally, humans can park a car rather easily because the final position of the car is not specified exactly. If it were specified to within, say, a few millimeters and a fraction of a degree, it would take hours or days of maneuvering and precise measurements of distance and angular position to solve the problem. What this simple example points to is the fact that, in general, high precision carries a high cost. The challenge, then, is to exploit the tolerance for imprecision by devising methods of computation which lead to an acceptable solution at low cost. By its nature, soft computing is much closer to human reasoning than the traditional modes of computation. At this juncture, the major components of soft computing are fuzzy logic (FL), neural network theory (NN), and probabilistic reasoning techniques (PR), including genetic algorithms, chaos theory, and part of learning theory. Increasingly, these techniques are used in combination to achieve significant improvement in performance and adaptability. Among the important application areas for soft computing are control systems, expert systems, data compression techniques, image processing, and decision support systems. It may be argued that it is soft computing, rather than the traditional hard computing, that should be viewed as the foundation for artificial

  19. The complexity of computing the MCD-estimator

    DEFF Research Database (Denmark)

    Bernholt, T.; Fischer, Paul

    2004-01-01

    In modern statistics the robust estimation of parameters is a central problem, i.e., an estimation that is not or only slightly affected by outliers in the data. The minimum covariance determinant (MCD) estimator (J. Amer. Statist. Assoc. 79 (1984) 871) is probably one of the most important robust estimators of location and scatter. The complexity of computing the MCD, however, was unknown and generally thought to be exponential even if the dimensionality of the data is fixed. Here we present a polynomial time algorithm for MCD for fixed dimension of the data. In contrast, we show that computing the MCD-estimator is NP-hard if the dimension varies. (C) 2004 Elsevier B.V. All rights reserved.

  20. Programming and Tuning a Quantum Annealing Device to Solve Real World Problems

    Science.gov (United States)

    Perdomo-Ortiz, Alejandro; O'Gorman, Bryan; Fluegemann, Joseph; Smelyanskiy, Vadim

    2015-03-01

    Solving real-world applications with quantum algorithms requires overcoming several challenges, ranging from translating the computational problem at hand to the quantum-machine language to tuning parameters of the quantum algorithm that have a significant impact on the performance of the device. In this talk, we discuss these challenges, strategies developed to enhance performance, and also a more efficient implementation of several applications. Although we will focus on applications of interest to NASA's Quantum Artificial Intelligence Laboratory, the methods and concepts presented here apply to a broader family of hard discrete optimization problems, including those that occur in many machine-learning algorithms.

  1. An Evolutionary Algorithm for Feature Subset Selection in Hard Disk Drive Failure Prediction

    Science.gov (United States)

    Bhasin, Harpreet

    2011-01-01

    Hard disk drives are used in everyday life to store critical data. Although they are reliable, failure of a hard disk drive can be catastrophic, especially in applications like medicine, banking, air traffic control systems, missile guidance systems, computer numerical controlled machines, and more. The use of Self-Monitoring, Analysis and…

  2. Cepstrum analysis and applications to computational fluid dynamic solutions

    Science.gov (United States)

    Meadows, Kristine R.

    1990-04-01

    A novel approach to the problem of spurious reflections introduced by artificial boundary conditions in computational fluid dynamic (CFD) solutions is proposed. Instead of attempting to derive non-reflecting boundary conditions, the approach is to accept the fact that spurious reflections occur, but to remove these reflections with cepstrum analysis, a signal processing technique which has been successfully used to remove echoes from experimental data. First, the theory of the cepstrum method is presented. This includes presentation of two types of cepstra: the Power Cepstrum and the Complex Cepstrum. The definitions of the cepstrum methods are applied theoretically and numerically to the analytical solution of sinusoidal plane wave propagation in a duct. 1-D and 3-D time-dependent solutions to the Euler equations are computed, and hard-wall conditions are prescribed at the numerical boundaries. The cepstrum method is applied, and the reflections from the boundaries are removed from the solutions. 1-D and 3-D solutions are computed with so-called nonreflecting boundary conditions, and these solutions are compared to those obtained by prescribing hard-wall conditions and processing with the cepstrum.
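
    As background for readers new to the technique, a minimal numpy sketch of the power cepstrum follows; the random test signal, the echo parameters and the particular cepstrum convention are illustrative assumptions rather than details from the paper.

        import numpy as np

        def power_cepstrum(x):
            # Power cepstrum: squared magnitude of the inverse FFT of the
            # log power spectrum (one common convention among several).
            log_power = np.log(np.abs(np.fft.fft(x)) ** 2)
            return np.abs(np.fft.ifft(log_power)) ** 2

        rng = np.random.default_rng(0)
        x = rng.standard_normal(1024)
        echoed = x + 0.5 * np.roll(x, 100)   # circular echo delayed by 100 samples

        c = power_cepstrum(echoed)
        print(np.argmax(c[1:512]) + 1)       # ~100: the echo shows up at its delay

    Removing a reflection then amounts to suppressing ("liftering") the cepstral peaks associated with the echo and transforming back, which is the operation the paper applies to boundary reflections in CFD solutions.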

  3. A linear programming approach to max-sum problem: a review.

    Science.gov (United States)

    Werner, Tomás

    2007-07-01

    The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.
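
    Concretely, for a graph (V, E) with discrete labels x_v, unary functions \theta_v and pairwise functions \theta_{uv}, the max-sum criterion reviewed here can be written in LaTeX as

        \max_{x}\; \sum_{v \in V} \theta_v(x_v) \;+\; \sum_{\{u,v\} \in E} \theta_{uv}(x_u, x_v)

    Schlesinger et al.'s upper bound is obtained from reparametrizations ("equivalent transformations") that leave this objective unchanged, and minimizing the bound is dual to the LP relaxation discussed in the review.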

  4. Blackboard architecture and qualitative model in a computer aided assistant designed to define computers for HEP computing

    International Nuclear Information System (INIS)

    Nodarse, F.F.; Ivanov, V.G.

    1991-01-01

    Using BLACKBOARD architecture and a qualitative model, an expert system was developed to assist the user in defining the computers for High Energy Physics computing. The COMEX system requires an IBM AT personal computer or compatible with more than 640 Kb RAM and a hard disk. 5 refs.; 9 figs

  5. Hard processes in hadronic interactions

    International Nuclear Information System (INIS)

    Satz, H.; Wang, X.N.

    1995-01-01

    Quantum chromodynamics is today accepted as the fundamental theory of strong interactions, even though most hadronic collisions lead to final states for which quantitative QCD predictions are still lacking. It therefore seems worthwhile to take stock of where we stand today and to what extent the presently available data on hard processes in hadronic collisions can be accounted for in terms of QCD. This is one reason for this work. The second reason - and in fact its original trigger - is the search for the quark-gluon plasma in high energy nuclear collisions. The hard processes to be considered here are the production of prompt photons, Drell-Yan dileptons, open charm, quarkonium states, and hard jets. For each of these, we discuss the present theoretical understanding, compare the resulting predictions to available data, and then show what behaviour it leads to at RHIC and LHC energies. All of these processes have the structure mentioned above: they contain a hard partonic interaction, calculable perturbatively, but also the non-perturbative parton distribution within a hadron. These parton distributions, however, can be studied theoretically in terms of counting rule arguments, and they can be checked independently by measurements of the parton structure functions in deep inelastic lepton-hadron scattering. The present volume is the work of the Hard Probe Collaboration, a group of theorists who are interested in the problem and were willing to dedicate a considerable amount of their time and work to it. The necessary preparation, planning and coordination of the project were carried out in two workshops of two weeks' duration each, in February 1994 at CERN in Geneva and in July 1994 at LBL in Berkeley

  6. Computer codes for problems of isotope and radiation research

    International Nuclear Information System (INIS)

    Remer, M.

    1986-12-01

    A survey is given of computer codes for problems in isotope and radiation research. Altogether 44 codes are described as titles with abstracts. 17 of them are in the INIS scope and are processed individually. The subjects are indicated in the chapter headings: 1) analysis of tracer experiments, 2) spectrum calculations, 3) calculations of ion and electron trajectories, 4) evaluation of gamma irradiation plants, and 5) general software

  7. THE CLOUD COMPUTING INTRODUCTION IN EDUCATION: PROBLEMS AND PERSPECTIVES

    OpenAIRE

    Y. Dyulicheva

    2013-01-01

    The problems and perspectives of the cloud computing usage in education are investigated in the paper. The examples of the most popular cloud platforms such as Google Apps Education Edition and Microsoft Live@edu used in education are considered. The schema of an interaction between teachers and students in cloud is proposed. The abilities of the cloud storage such as Microsoft SkyDrive and Apple iCloud are considered.

  8. Modeling Students' Problem Solving Performance in the Computer-Based Mathematics Learning Environment

    Science.gov (United States)

    Lee, Young-Jin

    2017-01-01

    Purpose: The purpose of this paper is to develop a quantitative model of problem solving performance of students in the computer-based mathematics learning environment. Design/methodology/approach: Regularized logistic regression was used to create a quantitative model of problem solving performance of students that predicts whether students can…

  9. Systematic hardness studies on lithium niobate crystals

    Indian Academy of Sciences (India)

    Unknown

    crystals with different growth origins, and a Fe-doped sample. The problem of load ... The true hardness of LiNbO3 is found to be 630 ± 30 kg/mm2. ...

  10. A review of metaheuristic scheduling techniques in cloud computing

    Directory of Open Access Journals (Sweden)

    Mala Kalra

    2015-11-01

    Full Text Available Cloud computing has become a buzzword in the area of high performance distributed computing as it provides on-demand access to a shared pool of resources over the Internet in a self-service, dynamically scalable and metered manner. Cloud computing is still in its infancy, so to reap its full benefits, much research is required across a broad array of topics. One of the important research issues which needs attention for efficient performance is scheduling. The goal of scheduling is to map tasks to appropriate resources in a way that optimizes one or more objectives. Scheduling in cloud computing belongs to the category of problems known as NP-hard, due to the large solution space, and thus it takes a long time to find an optimal solution. There are no algorithms which can produce an optimal solution within polynomial time for such problems. In a cloud environment, it is preferable to find a suboptimal solution in a short period of time. Metaheuristic-based techniques have been proven to achieve near-optimal solutions within reasonable time for such problems. In this paper, we provide an extensive survey and comparative analysis of various scheduling algorithms for cloud and grid environments based on three popular metaheuristic techniques: Ant Colony Optimization (ACO), Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), and two novel techniques: League Championship Algorithm (LCA) and the BAT algorithm.
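
    To make the optimization target concrete, the sketch below computes the makespan of a task-to-VM mapping and improves it with a plain mutate-and-keep-if-better loop; it is a generic illustration of the search problem these metaheuristics address, not an implementation of any surveyed algorithm, and the task and VM data are invented.

        import random

        def makespan(mapping, task_len, vm_speed):
            # Completion time of the most loaded VM under a given task-to-VM mapping.
            load = [0.0] * len(vm_speed)
            for task, vm in enumerate(mapping):
                load[vm] += task_len[task] / vm_speed[vm]
            return max(load)

        random.seed(1)
        task_len = [random.uniform(1, 10) for _ in range(50)]
        vm_speed = [1.0, 1.5, 2.0, 3.0]

        best = [random.randrange(len(vm_speed)) for _ in task_len]
        best_cost = makespan(best, task_len, vm_speed)
        for _ in range(5000):                   # mutate-and-keep-if-better loop
            cand = best[:]
            cand[random.randrange(len(cand))] = random.randrange(len(vm_speed))
            cost = makespan(cand, task_len, vm_speed)
            if cost < best_cost:
                best, best_cost = cand, cost
        print(round(best_cost, 2))

    ACO, GA and PSO differ from this naive loop in how they propose new mappings, but they all evaluate candidates against an objective of this kind.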

  11. A restricted Steiner tree problem is solved by Geometric Method II

    Science.gov (United States)

    Lin, Dazhi; Zhang, Youlin; Lu, Xiaoxu

    2013-03-01

    The minimum Steiner tree problem has a wide application background, in areas such as transportation systems, communication networks, pipeline design and VLSI. Unfortunately, the computational complexity of the problem is NP-hard, so it is common to consider restricted special cases. In this paper, we first put forward a restricted Steiner tree problem in which the fixed vertices lie on the same side of a line L and we seek a vertex on L such that the length of the tree is minimal. From the definition and the complexity of the Steiner tree problem, we know that the complexity of this problem is also NP-complete. In part one, we considered the restricted Steiner tree problem with two fixed vertices. Here we naturally consider the restricted Steiner tree problem with three fixed vertices, and we again use the geometric method to solve the problem.

  12. EDDYMULT: a computing system for solving eddy current problems in a multi-torus system

    International Nuclear Information System (INIS)

    Nakamura, Yukiharu; Ozeki, Takahisa

    1989-03-01

    A new computing system EDDYMULT based on the finite element circuit method has been developed to solve actual eddy current problems in a multi-torus system, which consists of many torus-conductors and various kinds of axisymmetric poloidal field coils. The EDDYMULT computing system can deal three-dimensionally with the modal decomposition of eddy current in a multi-torus system, the transient phenomena of eddy current distributions and the resultant magnetic field. Therefore, users can apply the computing system to the solution of eddy current problems in a tokamak fusion device, such as the design of poloidal field coil power supplies, the mechanical stress design of the intensive electromagnetic loading on device components and the control analysis of plasma position. The present report gives a detailed description of the EDDYMULT system as a user's manual: 1) theory, 2) structure of the code system, 3) input description, 4) problem restrictions, 5) description of the subroutines, etc. (author)

  13. English Computer Discourse: Some Characteristic Features

    Directory of Open Access Journals (Sweden)

    Tatjana Rusko

    2013-12-01

    Full Text Available The problem of virtual discourse is coming into the focus of linguistic research. This interest results from the rapid spread of information technology, the emergence of a modern Internet culture as a symbol of the information revolution, and the new opportunities and threats that accompany computer civilization. The emergence of this communicative environment as a particular sphere of language actualization necessitates new language means of communication, or the transformation and reframing of already existing ones. Obviously, it is time to talk about the formation of a new discourse in the new communicative space – computer (electronic, virtual) discourse – which may subsequently affect the speech behavior of society considerably. The present article attempts to identify some linguistic and communicative features of virtual discourse. Computer discourse, being a sub-language of hybrid character, combines elements of oral and written discourse with its own specific features. It should be noted that in the context of information culture the problem of communication interaction is among the most topical issues in science and education. There is hardly any doubt that the study and advancement of virtual communication culture is one of the distinctive mission components of higher education.

  14. ONTOLOGY OF COMPUTATIONAL EXPERIMENT ORGANIZATION IN PROBLEMS OF SEARCHING AND SORTING

    Directory of Open Access Journals (Sweden)

    A. Spivakovsky

    2011-05-01

    Full Text Available Ontologies are a key technology for the semantic processing of knowledge. We examine a methodology for using ontologies to organize computational experiments on searching and sorting problems in the course "Basics of algorithms and programming".

  15. Elementary EFL Teachers' Computer Phobia and Computer Self-Efficacy in Taiwan

    Science.gov (United States)

    Chen, Kate Tzuching

    2012-01-01

    The advent and application of computer and information technology has increased the overall success of EFL teaching; however, such success is hard to assess, and teachers prone to computer avoidance face negative consequences. Two major obstacles are high computer phobia and low computer self-efficacy. However, little research has been carried out…

  16. A Hybrid Genetic-Simulated Annealing Algorithm for the Location-Inventory-Routing Problem Considering Returns under E-Supply Chain Environment

    Science.gov (United States)

    Guo, Hao; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment. PMID:24489489

  17. A hybrid genetic-simulated annealing algorithm for the location-inventory-routing problem considering returns under e-supply chain environment.

    Science.gov (United States)

    Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment.
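
    The hybridization at the heart of such methods can be illustrated by the simulated-annealing acceptance rule applied to offspring produced by genetic operators. The sketch below shows only that acceptance step, with invented costs and temperature; it is a generic GA-SA ingredient, not the authors' HGSAA.

        import math
        import random

        def sa_accept(old_cost, new_cost, temperature):
            # Always accept improvements; accept worse offspring with a
            # Boltzmann probability that shrinks as the temperature cools.
            if new_cost <= old_cost:
                return True
            return random.random() < math.exp((old_cost - new_cost) / temperature)

        random.seed(0)
        # A worse offspring (cost 105 vs 100) may still be kept early on (T = 50).
        print(sa_accept(100.0, 105.0, temperature=50.0))

    Cooling the temperature over the generations makes such a search explore broadly early on and behave like a greedy local search late in the run.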

  18. A Hybrid Genetic-Simulated Annealing Algorithm for the Location-Inventory-Routing Problem Considering Returns under E-Supply Chain Environment

    Directory of Open Access Journals (Sweden)

    Yanhui Li

    2013-01-01

    Full Text Available Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment.

  19. Introduction to elementary computational modeling essential concepts, principles, and problem solving

    CERN Document Server

    Garrido, Jose

    2011-01-01

    … offers a solid first step into scientific and technical computing for those just getting started. … Through simple examples that are both easy to conceptualize and straightforward to express mathematically (something that isn't trivial to achieve), Garrido methodically guides readers from problem statement and abstraction through algorithm design and basic programming. His approach offers those beginning in a scientific or technical discipline something unique; a simultaneous introduction to programming and computational thinking that is very relevant to the practical application of computin

  20. Constraint-Based Local Search for Constrained Optimum Paths Problems

    Science.gov (United States)

    Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal

    Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have been traditionally approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees to find high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.

  1. Calculation of the density shift and broadening of the transition lines in pionic helium: Computational problems

    Energy Technology Data Exchange (ETDEWEB)

    Bakalov, Dimitar, E-mail: dbakalov@inrne.bas.bg [Bulgarian Academy of Sciences, INRNE (Bulgaria)

    2015-08-15

    The potential energy surface and the computational codes, developed for the evaluation of the density shift and broadening of the spectral lines of laser-induced transitions from metastable states of antiprotonic helium, fail to produce convergent results in the case of pionic helium. We briefly analyze the encountered computational problems and outline possible solutions of the problems.

  2. A Study of the Correlation between Computer Games and Adolescent Behavioral Problems

    OpenAIRE

    Shokouhi-Moqhaddam, Solmaz; Khezri-Moghadam, Noshiravan; Javanmard, Zeinab; Sarmadi-Ansar, Hassan; Aminaee, Mehran; Shokouhi-Moqhaddam, Majid; Zivari-Rahman, Mahmoud

    2013-01-01

    Background: Today, due to developing communication technologies, computer games and other audio-visual media, as social phenomena, are very attractive and have a great effect on children and adolescents. The increasing popularity of these games among children and adolescents has resulted in public uncertainty about the plausible harmful effects of these games. This study aimed to investigate the correlation between computer games and behavioral problems among male guidance-school students. Methods: Th...

  3. A soft computing-based approach to optimise queuing-inventory control problem

    Science.gov (United States)

    Alaghebandha, Mohammad; Hajipour, Vahid

    2015-04-01

    In this paper, a multi-product continuous review inventory control problem within a batch arrival queuing approach (MQr/M/1) is developed to find the optimal quantities of maximum inventory. The objective function is to minimise the sum of ordering, holding and shortage costs under warehouse space, service level, and expected lost-sales shortage cost constraints from the retailer and warehouse viewpoints. Since the proposed model is NP-hard, an efficient imperialist competitive algorithm (ICA) is proposed to solve it. To benchmark the proposed ICA, both a genetic algorithm and a simulated annealing algorithm are utilised. In order to determine the values of the algorithm parameters that result in better solutions, a fine-tuning procedure is executed. Finally, the performance of the proposed ICA is analysed using some numerical illustrations.

  4. Verification of thermal-hydraulic computer codes against standard problems for WWER reflooding

    International Nuclear Information System (INIS)

    Alexander D Efanov; Vladimir N Vinogradov; Victor V Sergeev; Oleg A Sudnitsyn

    2005-01-01

    Full text of publication follows: The computational assessment of reactor core component behaviour under accident conditions is impossible without knowledge of the thermal-hydraulic processes involved. The adequacy of the results obtained using computer codes to the real processes is verified by carrying out a number of standard problems. In 2000-2003, three Russian standard problems on WWER core reflooding were carried out using experiments on full-height electrically heated WWER 37-rod bundle model cooldown in regimes of bottom (SP-1), top (SP-2) and combined (SP-3) reflooding. Representatives from eight MINATOM organizations took part in this work, in the course of which 'blind' and post-test calculations were performed using various versions of the RELAP5, ATHLET, CATHARE, COBRA-TF, TRAP and KORSAR computer codes. The paper presents a brief description of the test facility, test section, test scenarios and conditions, as well as the basic results of the computational analysis of the experiments. The analysis of the test data revealed a significantly non-one-dimensional nature of the cooldown and rewetting of heater rods heated to a high temperature in a model bundle. This was most pronounced for top and combined reflooding. The verification of the reflooding computer codes showed that most of them fairly predict the peak rod temperature and the time of bundle cooldown; the exception is provided by the results of calculations with the ATHLET and CATHARE codes. The nature and rate of rewetting front advance in the lower half of the bundle are fairly predicted by practically all the computer codes. The disagreement between the calculations and the experimental results for the upper half of the bundle is caused by the difficulty of simulating multidimensional effects with 1-D computer codes. In this regard, the quasi-two-dimensional computer code COBRA-TF offers certain advantages. Overall, the closest

  5. THE CLOUD COMPUTING INTRODUCTION IN EDUCATION: PROBLEMS AND PERSPECTIVES

    Directory of Open Access Journals (Sweden)

    Y. Dyulicheva

    2013-03-01

    Full Text Available The problems and perspectives of the cloud computing usage in education are investigated in the paper. The examples of the most popular cloud platforms such as Google Apps Education Edition and Microsoft Live@edu used in education are considered. The schema of an interaction between teachers and students in cloud is proposed. The abilities of the cloud storage such as Microsoft SkyDrive and Apple iCloud are considered.

  6. 5th International Conference on Soft Computing for Problem Solving

    CERN Document Server

    Deep, Kusum; Bansal, Jagdish; Nagar, Atulya; Das, Kedar

    2016-01-01

    This two volume book is based on the research papers presented at the 5th International Conference on Soft Computing for Problem Solving (SocProS 2015) and covers a variety of topics, including mathematical modelling, image processing, optimization methods, swarm intelligence, evolutionary algorithms, fuzzy logic, neural networks, forecasting, medical and health care, data mining, etc. Mainly the emphasis is on Soft Computing and its applications in diverse areas. The prime objective of this book is to familiarize the reader with the latest scientific developments in various fields of Science, Engineering and Technology and is directed to the researchers and scientists engaged in various real-world applications of ‘Soft Computing’.

  7. 4th International Conference on Soft Computing for Problem Solving

    CERN Document Server

    Deep, Kusum; Pant, Millie; Bansal, Jagdish; Nagar, Atulya

    2015-01-01

    This two volume book is based on the research papers presented at the 4th International Conference on Soft Computing for Problem Solving (SocProS 2014) and covers a variety of topics, including mathematical modelling, image processing, optimization methods, swarm intelligence, evolutionary algorithms, fuzzy logic, neural networks, forecasting, medical and healthcare, data mining, etc. Mainly the emphasis is on Soft Computing and its applications in diverse areas. The prime objective of this book is to familiarize the reader with the latest scientific developments in various fields of Science, Engineering and Technology and is directed to the researchers and scientists engaged in various real-world applications of ‘Soft Computing’.

  8. Progress in 1988-1990 with computer applications in the "hard-rock" arena: Geochemistry, mineralogy, petrology, and volcanology

    Science.gov (United States)

    Rock, Nicholas M. S.

    This review covers rock, mineral and isotope geochemistry, mineralogy, igneous and metamorphic petrology, and volcanology. Crystallography, exploration geochemistry, and mineral exploration are excluded. Fairly extended comments on software availability, and on computerization of the publication process and of specimen collection indexes, may interest a wider audience. A proliferation of both published and commercial software in the past 3 years indicates increasing interest in what traditionally has been a rather reluctant sphere of geoscience computer activity. However, much of this software duplicates the same old functions (Harker and triangular plots, mineral recalculations, etc.). It usually is more efficient nowadays to use someone else's program, or to employ the command language in one of many general-purpose spreadsheet or statistical packages available, than to program a specialist operation from scratch in, say, FORTRAN. Greatest activity has been in mineralogy, where several journals specifically encourage publication of computer-related activities, and IMA and MSA Working Groups on microcomputers have been convened. In petrology and geochemistry, large national databases of rock and mineral analyses continue to multiply, whereas the international database IGBA grows slowly; some form of integration is necessary to make these disparate systems of lasting value to the global "hard-rock" community. Total merging or separate addressing via an intelligent "front-end" are both possibilities. In volcanology, the BBC's videodisk Volcanoes and the Smithsonian Institution's Global Volcanism Project use the most up-to-date computer technology in an exciting and innovative way, to promote public education.

  9. A heuristics-based solution to the continuous berth allocation and crane assignment problem

    Directory of Open Access Journals (Sweden)

    Mohammad Hamdy Elwany

    2013-12-01

    Full Text Available Effective utilization plans for various resources at a container terminal are essential to reducing the turnaround time of cargo vessels. Among the scarcest resources are the berth and its associated cranes. Thus, two important optimization problems arise, which are the berth allocation and quay crane assignment problems. The berth allocation problem deals with the generation of a berth plan, which determines where and when a ship has to berth alongside the quay. The quay crane assignment problem addresses the problem of determining how many and which quay crane(s) will serve each vessel. In this paper, an integrated heuristics-based solution methodology is proposed that tackles both problems simultaneously. The preliminary experimental results show that the proposed approach yields high quality solutions to such an NP-hard problem in a reasonable computational time suggesting its suitability for practical use.

  10. Basic technological aspects and optimization problems in X-ray computed tomography (C.T.)

    International Nuclear Information System (INIS)

    Allemand, R.

    1987-01-01

    The current status and future prospects of the physical performance of X-ray computed tomography are analysed and the associated optimization problems are discussed. It is concluded that, as long as clinical interest in computed tomography continues, technical advances can be expected in the near future to improve the density resolution, the spatial resolution and the X-ray exposure time. (Auth.)

  11. Computing nilpotent quotients in finitely presented Lie rings

    Directory of Open Access Journals (Sweden)

    Csaba Schneider

    1997-12-01

    Full Text Available A nilpotent quotient algorithm for finitely presented Lie rings over Z (and Q) is described. The paper studies the graded and non-graded cases separately. The algorithm computes the so-called nilpotent presentation for a finitely presented, nilpotent Lie ring. A nilpotent presentation consists of generators for the abelian group and the products expressed as linear combinations for pairs formed by generators. Using that presentation the word problem is decidable in L. Provided that the Lie ring L is graded, it is possible to determine the canonical presentation for a lower central factor of L. Complexity is studied and it is shown that optimising the presentation is NP-hard. Computational details are provided with examples, timing and some structure theorems obtained from computations. Implementation in C and GAP interface are available.

  12. Computationally Secure Pattern Matching in the Presence of Malicious Adversaries

    DEFF Research Database (Denmark)

    Hazay, Carmit; Toft, Tomas

    2014-01-01

    We propose a protocol for the problem of secure two-party pattern matching, where Alice holds a text t∈{0,1}∗ of length n, while Bob has a pattern p∈{0,1}∗ of length m. The goal is for Bob to (only) learn where his pattern occurs in Alice’s text, while Alice learns nothing. Private pattern matching...... is an important problem that has many applications in the area of DNA search, computational biology and more. Our construction guarantees full simulation in the presence of malicious, polynomial-time adversaries (assuming the hardness of the DDH assumption) and exhibits computation and communication costs of O...... for important variations of the secure pattern matching problem that are significantly more efficient than the current state-of-the-art solutions: First, we deal with secure pattern matching with wildcards. In this variant the pattern may contain wildcards that match both 0 and 1. Our protocol requires O......

  13. Hardness and Approximation for Network Flow Interdiction

    OpenAIRE

    Chestnut, Stephen R.; Zenklusen, Rico

    2015-01-01

    In the Network Flow Interdiction problem an adversary attacks a network in order to minimize the maximum s-t-flow. Very little is known about the approximability of this problem despite decades of interest in it. We present the first approximation hardness, showing that Network Flow Interdiction and several of its variants cannot be much easier to approximate than Densest k-Subgraph. In particular, any $n^{o(1)}$-approximation algorithm for Network Flow Interdiction would imply an $n^{o(1)}...

  14. The computer-aided design of a servo system as a multiple-criteria decision problem

    NARCIS (Netherlands)

    Udink ten Cate, A.J.

    1986-01-01

    This paper treats the selection of controller gains of a servo system as a multiple-criteria decision problem. In contrast to the usual optimization-based approaches to computer-aided design, inequality constraints are included in the problem as unconstrained objectives. This considerably simplifies

  15. An Assembly Line Balancing Problem in Automotive Cable Manufacturing

    Directory of Open Access Journals (Sweden)

    Triki Hager

    2015-02-01

    Full Text Available In this paper, an Assembly Line Balancing Problem (ALBP) is presented for a real-world automotive cable manufacturing company. This company found it necessary to balance its line, since it needs to increase the production rate. In this ALBP, the number of stations is known and the objective is to minimize the cycle time while both precedence and zoning constraints are satisfied. The problem is formulated as a binary linear program (BLP). Since this problem is NP-hard, an innovative Genetic Algorithm (GA) is implemented. A full factorial design is used to obtain the best combination of GA parameters, and a simple convergence study of the stopping criteria is performed to reduce computational time. Comparison of the proposed GA results with the CPLEX software shows that, in a reasonable time, the GA generates consistent solutions that are very close to the optimal ones. Therefore, the proposed GA approach is very effective and competitive.

  16. Bin-packing problems with load balancing and stability constraints

    DEFF Research Database (Denmark)

    Trivella, Alessio; Pisinger, David

    appear in a wide range of disciplines, including transportation and logistics, computer science, engineering, economics and manufacturing. The problem is well-known to be NP-hard and difficult to solve in practice, especially when dealing with the multi-dimensional cases. Closely connected to the BPP...... realistic constraints related to e.g. load balancing, cargo stability and weight limits, in the multi-dimensional BPP. The BPP poses additional challenges compared to the CLP due to the supplementary objective of minimizing the number of bins. In particular, in section 2 we discuss how to integrate bin......-packing and load balancing of items. The problem has only been considered in the literature in simplified versions, e.g. balancing a single bin or introducing a feasible region for the barycenter. In section 3 we generalize the problem to handle cargo stability and weight constraints....

  17. Hybrid setup for micro- and nano-computed tomography in the hard X-ray range

    Science.gov (United States)

    Fella, Christian; Balles, Andreas; Hanke, Randolf; Last, Arndt; Zabler, Simon

    2017-12-01

    With increasing miniaturization in industry and medical technology, non-destructive testing techniques are an area of ever-increasing importance. In this framework, X-ray microscopy offers an efficient tool for the analysis, understanding, and quality assurance of microscopic samples, in particular as it allows reconstructing three-dimensional data sets of the whole sample's volume via computed tomography (CT). The following article describes a compact X-ray microscope in the hard X-ray regime around 9 keV, based on a highly brilliant liquid-metal-jet source. In comparison to commercially available instruments, it is a hybrid that works in two different modes. The first one is a micro-CT mode without optics, which uses a high-resolution detector to allow scans of samples in the millimeter range with a resolution of 1 μm. The second mode is a microscope, which contains an X-ray optical element to magnify the sample and allows resolving 150 nm features. Changing between the modes is possible without moving the sample. Thus, the instrument represents an important step towards establishing high-resolution laboratory-based multi-mode X-ray microscopy as a standard investigation method.

  18. Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.

    Science.gov (United States)

    Ćwik, Michał; Józefczyk, Jerzy

    2018-01-01

    An uncertain version of the permutation flow shop with unlimited buffers and the makespan as a criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty, and consequently the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as such a calculation is an NP-hard problem in itself. Moreover, the lower bound is used instead of solving the internal deterministic flow shop. A constructive heuristic algorithm is applied to the relaxed optimization problem. The algorithm is compared with other previously developed heuristic algorithms based on the evolutionary and middle-interval approaches. The computational experiments showed the advantage of the constructive heuristic algorithm with respect to both the criterion value and the computation time. The Wilcoxon paired-rank statistical test confirmed this conclusion.
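
    As background, the makespan of a permutation flow shop obeys the classical completion-time recurrence C(j,k) = max(C(j-1,k), C(j,k-1)) + p(j,k). The sketch below evaluates it; the endpoint evaluation at the end merely illustrates how interval scenarios can be probed, the data are invented, and this is not the paper's exact procedure.

        def makespan(perm, p):
            # p[j][k]: processing time of job j on machine k; perm: job order.
            machines = len(p[0])
            c = [0.0] * machines          # running completion times per machine
            for j in perm:
                c[0] += p[j][0]
                for k in range(1, machines):
                    c[k] = max(c[k], c[k - 1]) + p[j][k]
            return c[-1]

        # Interval processing times: evaluate a schedule under the two extreme scenarios.
        lo = [[2, 3], [4, 1], [3, 3]]
        hi = [[4, 5], [6, 2], [5, 6]]
        perm = [0, 2, 1]
        print(makespan(perm, lo), makespan(perm, hi))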

  19. Perceived problems with computer gaming and Internet use are associated with poorer social relations in adolescence.

    Science.gov (United States)

    Rasmussen, Mette; Meilstrup, Charlotte Riebeling; Bendtsen, Pernille; Pedersen, Trine Pagh; Nielsen, Line; Madsen, Katrine Rich; Holstein, Bjørn E

    2015-02-01

    Young people's engagement in electronic gaming and Internet communication has caused concerns about potential harmful effects on their social relations, but the literature is inconclusive. The aim of this paper was to examine whether perceived problems with computer gaming and Internet communication are associated with young people's social relations. Cross-sectional questionnaire survey in 13 schools in the city of Aarhus, Denmark, in 2009. Response rate 89%, n = 2,100 students in grades 5, 7, and 9. Independent variables were perceived problems related to computer gaming and Internet use, respectively. Outcomes were measures of structural (number of days/week with friends, number of friends) and functional (confidence in others, being bullied, bullying others) dimensions of students' social relations. Perception of problems related to computer gaming was associated with almost all aspects of poor social relations among boys. Among girls, an association was only seen for bullying. For both boys and girls, perceived problems related to Internet use were associated with bullying only. Although the study is cross-sectional, the findings suggest that computer gaming and Internet use may be harmful to young people's social relations.

  20. Comparative metallurgical study of thick hard coatings without cobalt

    International Nuclear Information System (INIS)

    Clemendot, F.; Van Duysen, J.C.; Champredonde, J.

    1992-07-01

    Wear and corrosion of stellite-type hard coatings for valves of the PWR primary system raise important problems of contamination. Substituting cobalt-free hard coatings (Colmonoy 4 and 4.26, Cenium 36) for these alloys should make it possible to reduce this contamination. A comparative study (chemical, mechanical, thermal, metallurgical), as well as a corrosion study of these coatings, were carried out. The results of this characterization show that none of the studied products has overall characteristics as good as those of the grade 6 Stellite currently in service

  1. Computer use and vision-related problems among university students in Ajman, United Arab Emirates.

    Science.gov (United States)

    Shantakumari, N; Eldeeb, R; Sreedharan, J; Gopal, K

    2014-03-01

    The extensive use of computers as a medium of teaching and learning in universities necessitates introspection into the extent of computer-related health disorders among the student population. This study was undertaken to assess the pattern of computer usage and related visual problems among university students in Ajman, United Arab Emirates. A total of 500 students studying in Gulf Medical University, Ajman and Ajman University of Science and Technology were recruited into this study. Demographic characteristics, pattern of usage of computers and associated visual symptoms were recorded in a validated self-administered questionnaire. The chi-square test was used to determine the significance of the observed differences between the variables. The level of statistical significance was set at P < 0.05. The most common visual problems reported among computer users were headache - 53.3% (251/471), burning sensation in the eyes - 54.8% (258/471) and tired eyes - 48% (226/471). Female students were found to be at a higher risk. Nearly 72% of students reported frequent interruption of computer work. Headache caused interruption of work in 43.85% (110/168) of the students while tired eyes caused interruption of work in 43.5% (98/168) of the students. When the screen was viewed at a distance of more than 50 cm, the prevalence of headaches decreased by 38% (50-100 cm - OR: 0.62, 95% confidence interval [CI]: 0.42-0.92). The prevalence of tired eyes increased by 89% when screen filters were not used (OR: 1.894, 95% CI: 1.065-3.368). A high prevalence of vision-related problems was noted among university students. Sustained periods of close screen work without screen filters were found to be associated with the occurrence of the symptoms and increased interruptions of the students' work. There is a need to increase ergonomic awareness among students, and corrective measures need to be implemented to reduce the impact of computer-related vision problems.

  2. Analysis of problem solving on project based learning with resource based learning approach computer-aided program

    Science.gov (United States)

    Kuncoro, K. S.; Junaedi, I.; Dwijanto

    2018-03-01

    This study aimed to reveal the effectiveness of Project Based Learning with a Resource Based Learning approach in a computer-aided program, and analyzed problem-solving abilities in terms of problem-solving steps based on Polya's stages. The research method used was a mixed method with a sequential explanatory design. The subjects of this research were fourth-semester mathematics students. The results showed that the S-TPS (Strong Top Problem Solving) and W-TPS (Weak Top Problem Solving) subjects had good problem-solving abilities on each problem-solving indicator. The problem-solving ability of the S-MPS (Strong Middle Problem Solving) and W-MPS (Weak Middle Problem Solving) subjects on each indicator was also good. The S-BPS (Strong Bottom Problem Solving) subject had difficulty solving the problem with the computer program, was less precise in writing the final conclusion, and could not reflect on the problem-solving process using Polya's steps. The W-BPS (Weak Bottom Problem Solving) subject failed to meet almost all of the problem-solving indicators and could not precisely construct the initial completion table, so that the completion phase following Polya's steps was constrained.

  3. A constructive heuristic for time-dependent multi-depot vehicle routing problem with time-windows and heterogeneous fleet

    Directory of Open Access Journals (Sweden)

    Behrouz Afshar-Nadjafi

    2017-01-01

    Full Text Available In this paper, we consider the time-dependent multi-depot vehicle routing problem. The objective is to minimize the total heterogeneous fleet cost, assuming that the travel time between locations depends on the departure time. In addition, hard time-window constraints for the customers and a limit on the maximum number of vehicles in each depot must be satisfied. The problem is formulated as a mixed integer programming model, and a constructive heuristic procedure is proposed for it. The efficiency of the proposed algorithm is evaluated on 180 test problems. The computational results indicate that the procedure is capable of obtaining satisfactory solutions.

  4. Minimizing total weighted tardiness for the single machine scheduling problem with dependent setup time and precedence constraints

    Directory of Open Access Journals (Sweden)

    Hamidreza Haddad

    2012-04-01

    Full Text Available This paper tackles the single machine scheduling problem with sequence-dependent setup times and precedence constraints. The primary objective is the minimization of total weighted tardiness. Since the resulting problem is NP-hard, we use a metaheuristic to solve the model. The proposed approach uses a genetic algorithm to solve the problem in a reasonable amount of time. Because of the high sensitivity of the GA to its initial parameter values, a Taguchi approach is presented to calibrate its parameters. Computational experiments validate the effectiveness and capability of the proposed method.
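
    The objective being minimized can be stated compactly in code: total weighted tardiness under sequence-dependent setup times. The sketch below evaluates one job sequence; the data are invented and the genetic algorithm itself is omitted.

        def total_weighted_tardiness(seq, proc, due, weight, setup):
            # setup[i][j]: setup time incurred when job j follows job i.
            t, prev, cost = 0.0, None, 0.0
            for j in seq:
                t += (setup[prev][j] if prev is not None else 0.0) + proc[j]
                cost += weight[j] * max(0.0, t - due[j])
                prev = j
            return cost

        proc   = [4.0, 2.0, 6.0]
        due    = [5.0, 6.0, 9.0]
        weight = [1.0, 2.0, 1.0]
        setup  = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
        print(total_weighted_tardiness([0, 1, 2], proc, due, weight, setup))  # 7.0

    A GA would search over job permutations that respect the precedence constraints, using a function like this as its fitness.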

  5. Strong Bisimilarity and Regularity of Basic Parallel Processes is PSPACE-Hard

    DEFF Research Database (Denmark)

    Srba, Jirí

    2002-01-01

    We show that the problem of checking whether two processes definable in the syntax of Basic Parallel Processes (BPP) are strongly bisimilar is PSPACE-hard. We also demonstrate that there is a polynomial time reduction from the strong bisimilarity checking problem of regular BPP to the strong...

  6. Computer utilization for the solution of gas supply problems

    Energy Technology Data Exchange (ETDEWEB)

    Raleigh, J T; Brady, J R

    1968-01-01

    The computer programs in this paper have proven to be useful tools in the solution of gas supply problems. Some of the management-type applications are: (1) long range planning projects; (2) comparison of various proposed gas purchase contracts; (3) to assist with budget and operational planning; (4) to assist in making cost-of-service and rate predictions; (5) to investigate the feasibility of processing plants at any point on the system; and (6) to assist dispatching in its daily operation for cost and quality control. Competition, not only from the gas industry, but also from other forms of energy, makes it imperative that quantitative and economic information with regard to that marketable resource be available under a variety of assumptions and alternatives. This information can best be made available in a timely manner by the use of the computer.

  7. A security model for saas in cloud computing

    International Nuclear Information System (INIS)

    Abbas, R.; Farooq, A.

    2016-01-01

    Cloud computing is a type of computing that relies on sharing computing resources rather than having local servers or personal devices handle applications. It has several service models, such as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS). In the SaaS model, service providers install and activate applications in the cloud and cloud customers access the software from the cloud, so the user does not need to purchase and install particular software on his or her machine. When using the SaaS model, users face multiple security issues and problems, such as data security, data breaches, network security, authentication and authorization, data integrity, availability, web application security and backup. A large amount of work has been done to resolve these problems, but many issues persist and need to be overcome. In this research work, we have developed a security model that improves the security of data according to the needs of the end-user. The proposed model, offering different data security options, can help increase data security in a way that lets the trade-off between functionalities be optimized for private and public data. (author)

  8. An Enhanced Genetic Algorithm for the Generalized Traveling Salesman Problem

    Directory of Open Access Journals (Sweden)

    H. Jafarzadeh

    2017-12-01

    Full Text Available The generalized traveling salesman problem (GTSP) deals with finding the minimum-cost tour in a clustered set of cities. In this problem, the traveler is interested in finding the best path that goes through all clusters. As this problem is NP-hard, implementing a metaheuristic algorithm to solve large-scale instances is inevitable. The performance of these algorithms can be greatly improved by other heuristic algorithms. In this study, a search method is developed that improves the quality of the solutions and the computation time considerably in comparison with the genetic algorithm alone. In the proposed algorithm, the genetic algorithm is combined with the Nearest Neighbor Search (NNS) and a heuristic mutation operator is applied. According to the experimental results on a set of standard test problems with symmetric distances, the proposed algorithm finds the best solutions in most cases with the least computational time. The proposed algorithm is highly competitive with previously published algorithms in both solution quality and running time.
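
    To illustrate the nearest-neighbour construction used to seed such searches, the sketch below builds a GTSP tour by repeatedly jumping to the closest city in a still-unvisited cluster; the instance is invented and the GA refinement described in the paper is not shown.

        import math

        def nn_gtsp_tour(cities, clusters, start=0):
            # cities: list of (x, y); clusters: list of lists of city indices.
            cluster_of = {c: i for i, cl in enumerate(clusters) for c in cl}
            tour, current, done = [start], start, {cluster_of[start]}
            while len(done) < len(clusters):
                candidates = [c for c in cluster_of if cluster_of[c] not in done]
                nxt = min(candidates, key=lambda c: math.dist(cities[current], cities[c]))
                tour.append(nxt)
                done.add(cluster_of[nxt])
                current = nxt
            return tour

        cities = [(0, 0), (1, 0), (5, 5), (6, 5), (0, 4), (1, 5)]
        clusters = [[0, 1], [2, 3], [4, 5]]
        print(nn_gtsp_tour(cities, clusters))   # visits exactly one city per cluster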

  9. Perceived problems with computer gaming and Internet use are associated with poorer social relations in adolescence

    DEFF Research Database (Denmark)

    Rasmussen, Mette; Meilstrup, Charlotte Riebeling; Bendtsen, Pernille

    2015-01-01

    OBJECTIVES: Young people's engagement in electronic gaming and Internet communication has caused concerns about potential harmful effects on their social relations, but the literature is inconclusive. The aim of this paper was to examine whether perceived problems with computer gaming and Internet...... communication are associated with young people's social relations. METHODS: Cross-sectional questionnaire survey in 13 schools in the city of Aarhus, Denmark, in 2009. Response rate 89 %, n = 2,100 students in grades 5, 7, and 9. Independent variables were perceived problems related to computer gaming...... and Internet use, respectively. Outcomes were measures of structural (number of days/week with friends, number of friends) and functional (confidence in others, being bullied, bullying others) dimensions of students' social relations. RESULTS: Perception of problems related to computer gaming was associated......

  10. Flexible Job Shop Scheduling Problem Using an Improved Ant Colony Optimization

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2017-01-01

    Full Text Available As an extension of the classical job shop scheduling problem, the flexible job shop scheduling problem (FJSP) plays an important role in real production systems. In FJSP, an operation is allowed to be processed on more than one alternative machine. The problem has been proven to be strongly NP-hard. Ant colony optimization (ACO) has been shown to be an efficient approach for dealing with FJSP. However, the basic ACO has two main disadvantages: low computational efficiency and a tendency to get trapped in local optima. In order to overcome these two disadvantages, an improved ant colony optimization (IACO) is proposed to optimize the makespan for FJSP. The following aspects of the basic ACO are modified in our improved algorithm: the machine selection rule, a uniformly distributed initialization mechanism for ants, the pheromone guiding mechanism, the node selection method, and the pheromone update mechanism. An actual production instance and two sets of well-known benchmark instances are tested, and comparisons with some other approaches verify the effectiveness of the proposed IACO. The results reveal that our proposed IACO can provide better solutions in a reasonable computational time.
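
    For readers less familiar with ACO, the textbook transition rule that such algorithms modify is sketched below: a candidate is drawn with probability proportional to pheromone^alpha times heuristic^beta. This is the standard rule, not the paper's modified IACO mechanisms, and the numbers are invented.

        import random

        def ant_choose(candidates, tau, eta, alpha=1.0, beta=2.0):
            # P(candidate c) is proportional to tau[c]**alpha * eta[c]**beta.
            weights = [tau[c] ** alpha * eta[c] ** beta for c in candidates]
            r, acc = random.random() * sum(weights), 0.0
            for c, w in zip(candidates, weights):
                acc += w
                if acc >= r:
                    return c
            return candidates[-1]

        random.seed(2)
        tau = {0: 1.0, 1: 2.0, 2: 0.5}   # pheromone levels on candidate moves
        eta = {0: 0.5, 1: 1.0, 2: 2.0}   # heuristic desirability (e.g. 1/processing time)
        print(ant_choose([0, 1, 2], tau, eta))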

  11. Equihash: Asymmetric Proof-of-Work Based on the Generalized Birthday Problem

    Directory of Open Access Journals (Sweden)

    Alex Biryukov

    2017-04-01

    Full Text Available Proof-of-work is a central concept in modern cryptocurrencies and denial-of-service protection tools, but the requirement for fast verification has so far made it easy prey for GPU-, ASIC-, and botnet-equipped users. Attempts to rely on memory-intensive computations in order to remedy the disparity between architectures have resulted in slow or broken schemes. In this paper we solve this open problem and show how to construct an asymmetric proof-of-work (PoW) based on a computationally hard problem, which requires a great deal of memory to generate a proof (a "memory-hardness" feature) but is instant to verify. Our primary proposal, Equihash, is a PoW based on the generalized birthday problem and an enhanced version of Wagner's algorithm for it. We introduce the new technique of algorithm binding to prevent cost amortization and demonstrate that possible parallel implementations are constrained by memory bandwidth. Our scheme has tunable and steep time-space tradeoffs, which impose large computational penalties if less memory is used. Our solution is practical and ready to deploy: a reference implementation of a proof-of-work requiring 700 MB of RAM runs in 15 seconds on a 2.1 GHz CPU, increases the computations by a factor of 1000 if memory is halved, and produces a proof just 120 bytes long.
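
    The underlying combinatorial task can be shown at its simplest level: find two hash outputs that agree on n bits, which already forces the prover to keep a large table in memory. The sketch below is a plain birthday search with invented parameters (a collision is likely, not guaranteed, at these sizes); Equihash itself uses a multi-round Wagner algorithm over XOR-sums of many hashes.

        import hashlib
        from collections import defaultdict

        def birthday_pair(seed, n_bits=20, count=1 << 12):
            # Bucket hashes by their low n_bits; a bucket reaching two entries
            # is a collision. Storing the table is what costs memory.
            mask = (1 << n_bits) - 1
            buckets = defaultdict(list)
            for i in range(count):
                h = hashlib.blake2b(f"{seed}:{i}".encode()).digest()
                key = int.from_bytes(h, "big") & mask
                buckets[key].append(i)
                if len(buckets[key]) == 2:
                    return buckets[key]   # indices whose hashes agree on n_bits
            return None

        print(birthday_pair("example-seed"))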

  12. Honey bee-inspired algorithms for SNP haplotype reconstruction problem

    Science.gov (United States)

    PourkamaliAnaraki, Maryam; Sadeghi, Mehdi

    2016-03-01

    Reconstructing haplotypes from SNP fragments is an important problem in computational biology. There has been a lot of interest in this field because haplotypes have been shown to contain promising data for disease association research. It has been proved that haplotype reconstruction under the Minimum Error Correction model is an NP-hard problem. Therefore, several methods such as clustering techniques, evolutionary algorithms, neural networks and swarm intelligence approaches have been proposed in order to solve this problem in reasonable time. In this paper, we focus on various evolutionary clustering techniques and try to find an efficient technique for solving the haplotype reconstruction problem. It can be inferred from our experiments that clustering methods relying on the behaviour of honey bee colonies in nature, specifically the bees algorithm and artificial bee colony methods, are expected to result in more efficient solutions. An application program implementing the methods is available at the following link. http://www.bioinf.cs.ipm.ir/software/haprs/

  13. Solving constraint satisfaction problems with networks of spiking neurons

    Directory of Open Access Journals (Sweden)

    Zeno eJonke

    2016-03-01

    Full Text Available Networks of neurons in the brain apply – unlike processors in our current generation of computer hardware – an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However it turned out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes (rather than rates of spikes). We present here a new method for designing networks of spiking neurons via an energy function. Furthermore we show how the energy function of a network of stochastically firing neurons can be shaped in a quite transparent manner by composing the networks of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless the timing of spikes (rather than just spike rates) plays an essential role in their computations. Furthermore, networks of spiking neurons carry out for the Traveling Salesman Problem a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.

  14. Class and Homework Problems: The Break-Even Radius of Insulation Computed Using Excel Solver and WolframAlpha

    Science.gov (United States)

    Foley, Greg

    2014-01-01

    A problem that illustrates two ways of computing the break-even radius of insulation is outlined. The problem is suitable for students who are taking an introductory module in heat transfer or transport phenomena and who have some previous knowledge of the numerical solution of non-linear algebraic equations. The potential for computer algebra,…

  15. Simplified computational methods for elastic and elastic-plastic fracture problems

    Science.gov (United States)

    Atluri, Satya N.

    1992-01-01

    An overview is given of some of the recent (1984-1991) developments in computational/analytical methods in the mechanics of fractures. Topics covered include analytical solutions for elliptical or circular cracks embedded in isotropic or transversely isotropic solids, with crack faces being subjected to arbitrary tractions; finite element or boundary element alternating methods for two or three dimensional crack problems; a 'direct stiffness' method for stiffened panels with flexible fasteners and with multiple cracks; multiple site damage near a row of fastener holes; an analysis of cracks with bonded repair patches; methods for the generation of weight functions for two and three dimensional crack problems; and domain-integral methods for elastic-plastic or inelastic crack mechanics.

  16. A new mathematical model for single machine batch scheduling problem for minimizing maximum lateness with deteriorating jobs

    Directory of Open Access Journals (Sweden)

    Ahmad Zeraatkar Moghaddam

    2012-01-01

    Full Text Available This paper presents a mathematical model for the problem of minimizing the maximum lateness on a single machine when deteriorating jobs are delivered to each customer in batches of various sizes. In reality, this issue may arise within a supply chain in which delivering goods to customers entails cost, so that keeping completed jobs and delivering them in batches may reduce delivery costs. In the batch scheduling literature, minimizing the maximum lateness is known to be NP-hard; the present problem, which additionally minimizes delivery costs, therefore remains NP-hard. In order to solve the proposed model, a simulated annealing meta-heuristic is used, whose parameters are calibrated by the Taguchi approach and whose results are compared to the global optimal values generated by the Lingo 10 software. Furthermore, in order to check the efficiency of the proposed method on larger problem instances, a lower bound is generated. The results are also analyzed based on the effective factors of the problem. A computational study validates the efficiency and the accuracy of the presented model.
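
    A bare-bones sketch of the simulated annealing loop used for scheduling problems of this kind follows; it is our own simplification (one machine, swap moves, and no batching, deterioration or delivery costs), not the authors' calibrated implementation.

        import math, random

        def max_lateness(seq, p, d):
            # L_max of a sequence on one machine: completion time minus due date
            t, worst = 0, float("-inf")
            for j in seq:
                t += p[j]
                worst = max(worst, t - d[j])
            return worst

        def anneal(p, d, temp=50.0, cooling=0.995, steps=20000, seed=1):
            rng = random.Random(seed)
            seq = list(range(len(p)))
            cur = best = max_lateness(seq, p, d)
            best_seq = seq[:]
            for _ in range(steps):
                i, j = rng.sample(range(len(seq)), 2)
                seq[i], seq[j] = seq[j], seq[i]        # swap-neighbourhood move
                cand = max_lateness(seq, p, d)
                if cand <= cur or rng.random() < math.exp((cur - cand) / temp):
                    cur = cand
                    if cur < best:
                        best, best_seq = cur, seq[:]
                else:
                    seq[i], seq[j] = seq[j], seq[i]    # undo rejected move
                temp *= cooling                        # geometric cooling schedule
            return best, best_seq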

  17. Computer Use and Vision-Related Problems Among University Students in Ajman, United Arab Emirates

    OpenAIRE

    Shantakumari, N; Eldeeb, R; Sreedharan, J; Gopal, K

    2014-01-01

    Background: The extensive use of computers as a medium of teaching and learning in universities necessitates introspection into the extent of computer-related health disorders among the student population. Aim: This study was undertaken to assess the pattern of computer usage and related visual problems among university students in Ajman, United Arab Emirates. Materials and Methods: A total of 500 students studying in Gulf Medical University, Ajman and Ajman University of Science and Technology we...

  18. A Column Generation for the Heterogeneous Fixed Fleet Open Vehicle Routing Problem

    Directory of Open Access Journals (Sweden)

    Majid Yousefikhoshbakht

    2017-07-01

    Full Text Available This paper addresses the heterogeneous fixed fleet open vehicle routing problem (HFFOVRP), in which the vehicles are not required to return to the depot after completing a service. In this new problem, the demands of customers are fulfilled by a heterogeneous fixed fleet of vehicles with various capacities, fixed costs and variable costs. This problem is an important variant of the open vehicle routing problem (OVRP) and covers more practical situations in transportation and logistics. Since this problem is NP-hard, an approach based on column generation (CG) is applied to solve the HFFOVRP. A tight integer programming model is presented, the linear programming relaxation of which is solved by the CG technique. Since there were no existing benchmarks, this study generated 19 test problems, and the results of the proposed CG algorithm are compared to those of an exact algorithm. Computational experience confirms that the proposed algorithm can provide better solutions within a comparatively shorter period of time.

  19. Computational science and re-discovery: open-source implementation of ellipsoidal harmonics for problems in potential theory

    International Nuclear Information System (INIS)

    Bardhan, Jaydeep P; Knepley, Matthew G

    2012-01-01

    We present two open-source (BSD) implementations of ellipsoidal harmonic expansions for solving problems of potential theory using separation of variables. Ellipsoidal harmonics are used surprisingly infrequently, considering their substantial value for problems ranging in scale from molecules to the entire solar system. In this paper, we suggest two possible reasons for the paucity relative to spherical harmonics. The first is essentially historical: ellipsoidal harmonics were developed during the late 19th and early 20th centuries, when it was found that only the lowest-order harmonics are expressible in closed form. Each higher-order term requires the solution of an eigenvalue problem, and tedious manual computation seems to have discouraged applications and theoretical studies. The second explanation is practical: even with modern computers and accurate eigenvalue algorithms, expansions in ellipsoidal harmonics are significantly more challenging to compute than those in Cartesian or spherical coordinates. The present implementations reduce the 'barrier to entry' by providing an easy and free way for the community to begin using ellipsoidal harmonics in actual research. We demonstrate our implementation using the specific and physiologically crucial problem of how charged proteins interact with their environment, and ask: what other analytical tools await re-discovery in an era of inexpensive computation?

  20. The hard-core model on random graphs revisited

    International Nuclear Information System (INIS)

    Barbier, Jean; Krzakala, Florent; Zhang, Pan; Zdeborová, Lenka

    2013-01-01

    We revisit the classical hard-core model, also known as the independent set problem and dual to the vertex cover problem, where one puts particles with a first-neighbor hard-core repulsion on the vertices of a random graph. Although the cases of random graphs with small and with very large average degrees are quite well understood, they yield qualitatively different results, and our aim here is to reconcile these two cases. We revisit results that can be obtained using the (heuristic) cavity method and show that it provides a closed-form conjecture for the exact density of the densest packing on random regular graphs with degree K ≥ 20, and that for K > 16 the nature of the phase transition is the same as for large K. This also shows that the hard-core model is the simplest mean-field lattice model for structural glasses and jamming.

  1. The neural correlates of problem states: testing FMRI predictions of a computational model of multitasking.

    Directory of Open Access Journals (Sweden)

    Jelmer P Borst

    Full Text Available BACKGROUND: It has been shown that people can only maintain one problem state, or intermediate mental representation, at a time. When more than one problem state is required, for example in multitasking, performance decreases considerably. This effect has been explained in terms of a problem state bottleneck. METHODOLOGY: In the current study we use the complementary methodologies of computational cognitive modeling and neuroimaging to investigate the neural correlates of this problem state bottleneck. In particular, an existing computational cognitive model was used to generate a priori fMRI predictions for a multitasking experiment in which the problem state bottleneck plays a major role. Hemodynamic responses were predicted for five brain regions, corresponding to five cognitive resources in the model. Most importantly, we predicted the intraparietal sulcus to show a strong effect of the problem state manipulations. CONCLUSIONS: Some of the predictions were confirmed by a subsequent fMRI experiment, while others were not matched by the data. The experiment supported the hypothesis that the problem state bottleneck is a plausible cause of the interference in the experiment and that it could be located in the intraparietal sulcus.

  2. CLASSIFICATION AND COMPUTER SIMULATION OF CONSTRUCTIVE PROBLEM IN THE PLANE GEOMETRY: METHOD OF CIRCLES

    Directory of Open Access Journals (Sweden)

    Ivan H. Lenchuk

    2014-02-01

    Full Text Available The article concerns construction problems in plane geometry. It addresses the problem of forming in students efficient, time-economical stereotypes for the visual representation of problem-solving algorithms on modern computer screens. The author's universal method of fragmented typification of problems solvable by the method of circles is used: a core problem type is singled out and subsequently filled in with ingredients. Previously developed educational software (in part, GeoGebra) ensures optimal realization of the constructions; its dynamic characteristics and constructive capabilities support high-quality, visually shaped stages of 'proof' and 'research'.

  3. Particular application of methods of AdaBoost and LBP to the problems of computer vision

    OpenAIRE

    Волошин, Микола Володимирович

    2012-01-01

    The application of the AdaBoost method and the local binary pattern (LBP) method to different spheres of computer vision, such as person identification and computer iridology, is considered in the article. The goal of the research is to develop error-correcting methods and systems for computer vision applications, and for computer iridology in particular. The article also considers the choice of colour spaces, which are used as a filter and for pre-processing of images. Method of AdaB...

  4. Ranked retrieval of Computational Biology models.

    Science.gov (United States)

    Henkel, Ron; Endler, Lukas; Peters, Andre; Le Novère, Nicolas; Waltemath, Dagmar

    2010-08-11

    The study of biological systems demands computational support. If targeting a biological problem, the reuse of existing computational models can save time and effort. Deciding on potentially suitable models, however, becomes more challenging with the increasing number of computational models available, and even more so when considering the models' growing complexity. Firstly, among a set of potential model candidates it is difficult to decide on the model that best suits one's needs. Secondly, it is hard to grasp the nature of an unknown model listed in a search result set, and to judge how well it fits the particular problem one has in mind. Here we present an improved search approach for computational models of biological processes. It is based on existing retrieval and ranking methods from Information Retrieval. The approach incorporates annotations suggested by MIRIAM, and additional meta-information. It is now part of the search engine of BioModels Database, a standard repository for computational models. The introduced concept and implementation are, to our knowledge, the first application of Information Retrieval techniques to model search in Computational Systems Biology. Using the example of BioModels Database, it was shown that the approach is feasible and extends the current possibilities to search for relevant models. The advantages of our system over existing solutions are that we incorporate a rich set of meta-information, and that we provide the user with a relevance ranking of the models found for a query. Better search capabilities in model databases are expected to have a positive effect on the reuse of existing models.
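
    A minimal sketch of the kind of IR-style ranking involved is given below (generic TF-IDF with cosine similarity; this is an illustration under our own assumptions, not the BioModels Database implementation, which additionally weights MIRIAM annotations and other meta-information).

        import math
        from collections import Counter

        def tfidf_rank(query, docs):
            # Rank documents against a query by TF-IDF weighted cosine similarity
            toks = [d.lower().split() for d in docs]
            df = Counter(t for d in toks for t in set(d))
            idf = {t: math.log(len(docs) / df[t]) for t in df}
            def vec(words):
                tf = Counter(w for w in words if w in idf)
                return {t: c * idf[t] for t, c in tf.items()}
            def cos(a, b):
                num = sum(w * b.get(t, 0.0) for t, w in a.items())
                den = (math.sqrt(sum(w * w for w in a.values()))
                       * math.sqrt(sum(w * w for w in b.values()))) or 1.0
                return num / den
            q = vec(query.lower().split())
            return sorted(((cos(q, vec(d)), i) for i, d in enumerate(toks)),
                          reverse=True)

        models = ["calcium oscillation model", "glycolysis kinetic model",
                  "calcium signalling pathway"]
        print(tfidf_rank("calcium oscillation", models))  # ranks the first model highest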

  5. Computer graphics application in the engineering design integration system

    Science.gov (United States)

    Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.

    1975-01-01

    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems are discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of preliminary designs of aerospace vehicles: offline graphics systems using vellum-inking or photographic processes, online graphics systems characterized by directly coupled low-cost storage tube terminals with limited interactive capabilities, and a minicomputer-based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 BAUD), poor hard copy, and early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer-aided design.

  6. Microstructure and macroscopic properties of polydisperse systems of hard spheres

    NARCIS (Netherlands)

    Ogarko, V.

    2014-01-01

    This dissertation describes an investigation of systems of polydisperse smooth hard spheres. This includes the development of a fast contact detection algorithm for computer modelling, the development of macroscopic constitutive laws that are based on microscopic features such as the moments of the

  7. Hard Diffraction - from Blois 1985 to 2005

    Energy Technology Data Exchange (ETDEWEB)

    Ingelman, Gunnar [Uppsala Univ., High Energy Physics (Sweden)]

    2005-07-01

    The idea of diffractive processes with a hard scale involved, to resolve the underlying parton dynamics, was presented at the first Blois conference in 1985 and experimentally verified a few years later. Today hard diffraction is an attractive research field with high-quality data and new theoretical models. The trend from Regge-based pomeron models to QCD-based parton level models has given insights into QCD dynamics involving perturbative gluon exchange mechanisms. In the new QCD-based models, the pomeron is not part of the proton wave function; rather, diffraction is an effect of the scattering process. Models based on interactions with a colour background field provide an interesting approach which avoids conceptual problems of pomeron-based models, such as the pomeron flux, and provide a basis for a common theoretical framework for all final states, diffractive gap events as well as non-diffractive events. Finally, the new process of gaps between jets provides strong evidence for the BFKL dynamics long predicted by QCD, but so far hard to establish experimentally.

  8. Job shop scheduling problem with late work criterion

    Science.gov (United States)

    Piroozfard, Hamed; Wong, Kuan Yew

    2015-05-01

    Scheduling is considered a key task in many industries, covering project-based scheduling, crew scheduling, flight scheduling, machine scheduling, etc. In the machine scheduling area, job shop scheduling problems are considered important and highly complex, and are characterized as NP-hard. Job shop scheduling problems with the late work criterion and non-preemptive jobs are addressed in this paper. The late work criterion is a fairly new objective function; it is a qualitative measure concerned with the late parts of the jobs, unlike the classical objective functions, which are quantitative measures. In this work, simulated annealing is presented to solve the scheduling problem. In addition, an operation-based representation is used to encode the solution, and a neighbourhood search structure is employed to search for new solutions. The case studies are Lawrence instances taken from the Operations Research Library. Computational results of this probabilistic meta-heuristic algorithm were compared with those of a conventional genetic algorithm, and conclusions were drawn for the algorithm and problem.
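
    For clarity, the late work criterion can be stated in a few lines of Python. This is a single-machine simplification of the job shop setting, with our own names, intended only to pin down the objective.

        def total_late_work(schedule, jobs):
            # Late work of job j is min(p_j, max(0, C_j - d_j)): only the part
            # of the job completed after its due date d_j counts, capped at p_j
            t = late = 0
            for j in schedule:
                p, d = jobs[j]
                t += p
                late += min(p, max(0, t - d))
            return late

        # Three jobs as (processing time, due date) pairs
        print(total_late_work([0, 1, 2], {0: (3, 3), 1: (2, 4), 2: (4, 6)}))  # 4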

  9. Correlation-based decimation in constraint satisfaction problems

    International Nuclear Information System (INIS)

    Higuchi, Saburo; Mezard, Marc

    2010-01-01

    We study hard constraint satisfaction problems using decimation algorithms based on mean-field approximations. The message-passing approach is used to estimate, besides the usual one-variable marginals, the pair correlation functions. The identification of strongly correlated pairs allows the use of a new decimation procedure, in which the relative orientation of a pair of variables is fixed. We apply this novel decimation to locked occupation problems, a class of hard constraint satisfaction problems where the usual belief-propagation guided decimation performs poorly. The pair-decimation approach provides a significant improvement.

  10. Application of computational fluid mechanics to atmospheric pollution problems

    Science.gov (United States)

    Hung, R. J.; Liaw, G. S.; Smith, R. E.

    1986-01-01

    One of the most noticeable effects of air pollution on the properties of the atmosphere is the reduction in visibility. This paper reports the results of investigations of the fluid dynamical and microphysical processes involved in the formation of advection fog on aerosols from combustion-related pollutants acting as condensation nuclei. The effects of a polydisperse aerosol distribution on the condensation/nucleation processes which cause the reduction in visibility are studied. This study demonstrates how computational fluid mechanics and heat transfer modeling can be applied to simulate the life cycle of atmospheric pollution problems.

  11. Genetic Algorithm for Traveling Salesman Problem with Modified Cycle Crossover Operator

    Directory of Open Access Journals (Sweden)

    Abid Hussain

    2017-01-01

    Full Text Available Genetic algorithms are evolutionary techniques used for optimization purposes according to the survival-of-the-fittest idea. These methods do not ensure optimal solutions; however, they usually give good approximations in reasonable time. Genetic algorithms are useful for NP-hard problems, especially the traveling salesman problem. A genetic algorithm depends on its selection criteria, crossover, and mutation operators. To tackle the traveling salesman problem with genetic algorithms, there are various representations such as binary, path, adjacency, ordinal, and matrix representations. In this article, we propose a new crossover operator for the traveling salesman problem to minimize the total distance. The approach is linked with path representation, which is the most natural way to represent a legal tour. Computational results are reported for some traditional path representation methods, such as partially mapped and order crossovers, along with the new cycle crossover operator on benchmark TSPLIB instances, and improvements were found.
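
    For reference, here is a sketch of the classic cycle crossover that the article modifies (the authors' modified operator differs; this shows only the baseline idea that every city inherits its position from one of the two parents, so the child is always a valid tour).

        def cycle_crossover(p1, p2):
            # Trace alternating cycles; positions in one cycle are copied from
            # the same parent, guaranteeing a permutation as output
            n = len(p1)
            child = [None] * n
            take_from_p1 = True
            while None in child:
                start = i = child.index(None)   # first unfilled position
                while True:
                    child[i] = (p1 if take_from_p1 else p2)[i]
                    i = p1.index(p2[i])         # follow the cycle
                    if i == start:
                        break
                take_from_p1 = not take_from_p1 # alternate parents per cycle
            return child

        # Classic textbook example; the child mixes whole cycles of the parents
        print(cycle_crossover([1, 2, 3, 4, 5, 6, 7, 8],
                              [8, 5, 2, 1, 3, 6, 4, 7]))  # [1, 5, 2, 4, 3, 6, 7, 8]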

  12. Mental health problems among survivors in hard-hit areas of the 5.12 Wenchuan and 4.20 Lushan earthquakes.

    Science.gov (United States)

    Xie, Zongtang; Xu, Jiuping; Wu, Zhibin

    2017-02-01

    Earthquake exposure has often been associated with psychological distress. However, little is known about the cumulative effect of exposure to two earthquakes on psychological distress and in particular, the effect on the development of post-traumatic stress disorder (PTSD), anxiety and depression disorders. This study explored the effect of exposure on mental health outcomes after a first earthquake and again after a second earthquake. A population-based mental health survey using self-report questionnaires was conducted on 278 people in the hard-hit areas of Lushan and Baoxing Counties 13-16 months after the Wenchuan earthquake (Sample 1). 191 of these respondents were evaluated again 8-9 months after the Lushan earthquake (Sample 2), which struck almost 5 years after the Wenchuan earthquake. In Sample 1, the prevalence rates for PTSD, anxiety and depression disorders were 44.53, 54.25 and 51.82%, respectively, and in Sample 2 the corresponding rates were 27.27, 38.63 and 36.93%. Females, the middle-aged, those of Tibetan nationality, and people who reported fear during the earthquake were at an increased risk of experiencing post-traumatic symptoms. Although the incidence of PTSD, anxiety and depression disorders decreased from Sample 1 to Sample 2, the cumulative effect of exposure to two earthquakes on mental health problems was serious in the hard-hit areas. Therefore, it is important that psychological counseling be provided for earthquake victims, and especially those exposed to multiple earthquakes.

  13. Experiences with explicit finite-difference schemes for complex fluid dynamics problems on STAR-100 and CYBER-203 computers

    Science.gov (United States)

    Kumar, A.; Rudy, D. H.; Drummond, J. P.; Harris, J. E.

    1982-01-01

    Several two- and three-dimensional external and internal flow problems solved on the STAR-100 and CYBER-203 vector processing computers are described. The flow field was described by the full Navier-Stokes equations which were then solved by explicit finite-difference algorithms. Problem results and computer system requirements are presented. Program organization and data base structure for three-dimensional computer codes which will eliminate or improve on page faulting, are discussed. Storage requirements for three-dimensional codes are reduced by calculating transformation metric data in each step. As a result, in-core grid points were increased in number by 50% to 150,000, with a 10% execution time increase. An assessment of current and future machine requirements shows that even on the CYBER-205 computer only a few problems can be solved realistically. Estimates reveal that the present situation is more storage limited than compute rate limited, but advancements in both storage and speed are essential to realistically calculate three-dimensional flow.

  14. NATO Advanced Study Institute on Advances in the Computer Simulations of Liquid Crystals

    CERN Document Server

    Zannoni, Claudio

    2000-01-01

    Computer simulations provide an essential set of tools for understanding the macroscopic properties of liquid crystals and of their phase transitions in terms of molecular models. While simulations of liquid crystals are based on the same general Monte Carlo and molecular dynamics techniques as are used for other fluids, they present a number of specific problems and peculiarities connected to the intrinsic properties of these mesophases. The field of computer simulations of anisotropic fluids is interdisciplinary and is evolving very rapidly. The present volume covers a variety of techniques and model systems, from lattices to hard particle and Gay-Berne to atomistic, for thermotropics, lyotropics, and some biologically interesting liquid crystals. Contributions are written by an excellent panel of international lecturers and provides a timely account of the techniques and problems in the field.

  15. Efficient computation of spaced seeds

    Directory of Open Access Journals (Sweden)

    Ilie Silvana

    2012-02-01

    Full Text Available Background: The most frequently used tools in bioinformatics are those searching for similarities, or local alignments, between biological sequences. Since the exact dynamic programming algorithm is quadratic, linear-time heuristics such as BLAST are used. Spaced seeds are much more sensitive than the consecutive seed of BLAST, and using several seeds represents the current state of the art in approximate search for biological sequences. The most important aspect is computing highly sensitive seeds. Since the problem seems hard, heuristic algorithms are used. The leading software in the common Bernoulli model is the SpEED program. Findings: SpEED uses a hill climbing method based on the overlap complexity heuristic. We propose a new algorithm for this heuristic that improves its speed by over one order of magnitude. We use the new implementation to compute improved seeds for several software programs. We also compute multiple seeds of the same weight as MegaBLAST that greatly improve its sensitivity. Conclusion: Multiple spaced seeds are being successfully used in bioinformatics software programs. Enabling researchers to compute high-quality seeds very fast will help expand the range of their applications.
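
    The effect that makes spaced seeds attractive is easy to demonstrate with a toy Python check of where a seed detects similarity (our own illustration; SpEED's actual task, optimizing the sensitivity of whole seed sets, is the hard part).

        def seed_hits(seed, s1, s2):
            # A spaced seed such as '101' requires matches only at '1' positions;
            # returns the offsets at which the seed detects local similarity
            care = [i for i, c in enumerate(seed) if c == '1']
            n = min(len(s1), len(s2))
            return [off for off in range(n - len(seed) + 1)
                    if all(s1[off + i] == s2[off + i] for i in care)]

        # The sequences differ at one position; the spaced seed of the same
        # length fires at an extra offset that the consecutive seed misses
        print(seed_hits('111', 'ACGTACGT', 'ACCTACGT'))   # [3, 4, 5]
        print(seed_hits('101', 'ACGTACGT', 'ACCTACGT'))   # [1, 3, 4, 5]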

  16. An efficient heuristic versus a robust hybrid meta-heuristic for general framework of serial-parallel redundancy problem

    International Nuclear Information System (INIS)

    Sadjadi, Seyed Jafar; Soltani, R.

    2009-01-01

    We present a heuristic approach to solve a general framework of serial-parallel redundancy problem where the reliability of the system is maximized subject to some general linear constraints. The complexity of the redundancy problem is generally considered to be NP-Hard and the optimal solution is not normally available. Therefore, to evaluate the performance of the proposed method, a hybrid genetic algorithm is also implemented whose parameters are calibrated via Taguchi's robust design method. Then, various test problems are solved and the computational results indicate that the proposed heuristic approach could provide us some promising reliabilities, which are fairly close to optimal solutions in a reasonable amount of time.

  17. Environmental aspects of hard coal mines closure in Poland

    International Nuclear Information System (INIS)

    Chaber, M.; Krogulski, K.; Gawlik, L.

    1998-01-01

    The environmental problems that arise during the closure processes of hard coal mines in Poland are undertaken in the paper. The problems of changes in water balance in rock mass are described with a stress put on underground water management. Regulation concerning ground reclamation and utilisation and removal of existing heat and power plants which after the mines closure will continue to supply surrounding consumers are stressed and the possible solutions are shown. 13 refs

  18. Heuristic methods using grasp, path relinking and variable neighborhood search for the clustered traveling salesman problem

    Directory of Open Access Journals (Sweden)

    Mário Mestria

    2013-08-01

    Full Text Available The Clustered Traveling Salesman Problem (CTSP) is a generalization of the Traveling Salesman Problem (TSP) in which the set of vertices is partitioned into disjoint clusters and the objective is to find a minimum cost Hamiltonian cycle such that the vertices of each cluster are visited contiguously. The CTSP is NP-hard and, in this context, we propose heuristic methods for the CTSP using GRASP, Path Relinking and Variable Neighborhood Descent (VND). The heuristic methods were tested using Euclidean instances with up to 2000 vertices and clusters containing between 4 and 150 vertices. Computational tests were performed to compare the performance of the heuristic methods with an exact algorithm using the Parallel CPLEX software. The computational results showed that the hybrid heuristic method using VND outperforms the other heuristic methods.

  19. FOREWORD: 4th International Workshop on New Computational Methods for Inverse Problems (NCMIP2014)

    Science.gov (United States)

    2014-10-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 4th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2014 (http://www.farman.ens-cachan.fr/NCMIP_2014.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 23, 2014. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 and May 2013, (http://www.farman.ens-cachan.fr/NCMIP_2012.html), (http://www.farman.ens-cachan.fr/NCMIP_2013.html). The New Computational Methods for Inverse Problems (NCMIP) Workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the

  20. Technology, attributions, and emotions in post-secondary education: An application of Weiner's attribution theory to academic computing problems.

    Science.gov (United States)

    Maymon, Rebecca; Hall, Nathan C; Goetz, Thomas; Chiarella, Andrew; Rahimi, Sonia

    2018-01-01

    As technology becomes increasingly integrated with education, research on the relationships between students' computing-related emotions and motivation following technological difficulties is critical to improving learning experiences. Following from Weiner's (2010) attribution theory of achievement motivation, the present research examined relationships between causal attributions and emotions concerning academic computing difficulties in two studies. Study samples consisted of North American university students enrolled in both traditional and online universities (total N = 559) who responded to either hypothetical scenarios or experimental manipulations involving technological challenges experienced in academic settings. Findings from Study 1 showed stable and external attributions to be emotionally maladaptive (more helplessness, boredom, guilt), particularly in response to unexpected computing problems. Additionally, Study 2 found stable attributions for unexpected problems to predict more anxiety for traditional students, with both external and personally controllable attributions for minor problems proving emotionally beneficial for students in online degree programs (more hope, less anxiety). Overall, hypothesized negative effects of stable attributions were observed across both studies, with mixed results for personally controllable attributions and unanticipated emotional benefits of external attributions for academic computing problems warranting further study.

  1. Improved teaching-learning-based and JAYA optimization algorithms for solving flexible flow shop scheduling problems

    Science.gov (United States)

    Buddala, Raviteja; Mahapatra, Siba Sankar

    2017-11-01

    The flexible flow shop (or hybrid flow shop) scheduling problem is an extension of the classical flow shop scheduling problem. In a simple flow shop configuration, a job having `g' operations is performed on `g' operation centres (stages), with each stage having only one machine. If any stage contains more than one machine, providing an alternate processing facility, then the problem becomes a flexible flow shop problem (FFSP). FFSP, which contains all the complexities involved in simple flow shop and parallel machine scheduling problems, is a well-known NP-hard (non-deterministic polynomial time) problem. Owing to the high computational complexity involved in solving these problems, it is not always possible to obtain an optimal solution in a reasonable computation time. To obtain near-optimal solutions in a reasonable computation time, a large variety of meta-heuristics have been proposed in the past. However, tuning algorithm-specific parameters for solving FFSP is rather tricky and time consuming. To address this limitation, teaching-learning-based optimization (TLBO) and the JAYA algorithm are chosen for the study, because these are not only recent meta-heuristics but also require no tuning of algorithm-specific parameters. Although these algorithms seem to be elegant, they lose solution diversity after a few iterations and get trapped at local optima. To alleviate this drawback, a new local search procedure is proposed in this paper to improve the solution quality. Further, a mutation strategy (inspired by genetic algorithms) is incorporated in the basic algorithm to maintain solution diversity in the population. Computational experiments have been conducted on standard benchmark problems to calculate makespan and computational time. It is found that the rate of convergence of TLBO is superior to that of JAYA. From the results, it is found that TLBO and JAYA outperform many algorithms reported in the literature and can be treated as efficient methods for solving the FFSP.
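
    The parameter-free character of JAYA is visible in its single update rule, sketched below for a continuous search space. This is our own illustrative sketch; scheduling applications such as the FFSP additionally need a mapping from real vectors to job permutations, which is omitted here.

        import random

        def jaya_step(pop, fitness, lo, hi, rng=random):
            # One JAYA iteration: move every solution toward the current best
            # and away from the current worst; no algorithm-specific parameters
            scores = [fitness(x) for x in pop]
            best = pop[scores.index(min(scores))]
            worst = pop[scores.index(max(scores))]
            out = []
            for x in pop:
                cand = [min(max(xi + rng.random() * (bi - abs(xi))
                                   - rng.random() * (wi - abs(xi)), lo), hi)
                        for xi, bi, wi in zip(x, best, worst)]
                out.append(cand if fitness(cand) < fitness(x) else x)  # greedy accept
            return out

        # Example: minimize the sphere function over [-5, 5]^2
        pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
        for _ in range(200):
            pop = jaya_step(pop, lambda x: sum(v * v for v in x), -5.0, 5.0)
        print(min(sum(v * v for v in x) for x in pop))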

  2. Computer science approach to quantum control

    International Nuclear Information System (INIS)

    Janzing, D.

    2006-01-01

    Whereas it is obvious that every computation process is a physical process, it has hardly been recognized that many complex physical processes bear similarities to computation processes. This is in particular true for the control of physical systems on the nanoscopic level: usually the system can only be accessed via a rather limited set of elementary control operations, and for many purposes only a concatenation of a large number of these basic operations will implement the desired process. This concatenation is in many cases quite similar to building complex programs from elementary steps, and principles for designing algorithms may thus be a paradigm for designing control processes. For instance, one can decrease the temperature of one part of a molecule by transferring its heat to the remaining part, where it is then dissipated to the environment. But the implementation of such a process involves a complex sequence of electromagnetic pulses. This work considers several hypothetical control processes on the nanoscopic level and shows their analogy to computation processes. We show that measuring certain types of quantum observables is such a complex task that every instrument able to perform it would necessarily be an extremely powerful computer. Likewise, the implementation of a heat engine on the nanoscale requires processing the heat in a way that is similar to information processing, and it can be shown that heat engines with maximal efficiency would be powerful computers, too. In the same way as problems in computer science can be classified by complexity classes, we can also classify control problems according to their complexity. Moreover, we directly relate these complexity classes for control problems to the classes in computer science. Unifying notions of complexity in computer science and physics therefore has two aspects: on the one hand, computer science methods help to analyze the complexity of physical processes. On the other hand, reasonable

  3. Hard X-Ray PHA System on the HT-7 Tokamak

    International Nuclear Information System (INIS)

    Lin Shiyao; Shi Yuejiang; Wan Baonian; Chen Zhongyong; Hu Liqun

    2006-01-01

    A new hard X-ray pulse-height analysis (PHA) system has been established on the HT-7 tokamak for long pulse steady-state operation. This PHA system consists of hard X-ray diagnostics and multi-channel analysers (MCA). The hard X-ray diagnostics consist of a vertical X-ray detector array (CdTe) and a horizontal X-ray detector array (NaI), and can provide the profile of power deposition and the distribution function of fast electrons during radio frequency (RF) current drive. The MCA system is the electronic part of the PHA system; it has been modularized and linked to a PC through a LAN. Each MCA module can connect to 8 X-ray detectors. The embedded Ethernet adapter in the MCA module makes data communication between the PC and the MCA very convenient: a computer can control several MCA modules through suitable software and a hub. The RAM in each MCA can store 1024 or more spectra per detector, and therefore the PHA system can be applied in long pulse discharges of several minutes.

  4. Regularization and computational methods for precise solution of perturbed orbit transfer problems

    Science.gov (United States)

    Woollands, Robyn Michele

    The author has developed a suite of algorithms for solving the perturbed Lambert's problem in celestial mechanics. These algorithms have been implemented as a parallel computation tool that has broad applicability. This tool is composed of four component algorithms and each provides unique benefits for solving a particular type of orbit transfer problem. The first one utilizes a Keplerian solver (a-iteration) for solving the unperturbed Lambert's problem. This algorithm not only provides a "warm start" for solving the perturbed problem but is also used to identify which of several perturbed solvers is best suited for the job. The second algorithm solves the perturbed Lambert's problem using a variant of the modified Chebyshev-Picard iteration initial value solver that solves two-point boundary value problems. This method converges over about one third of an orbit and does not require a Newton-type shooting method and thus no state transition matrix needs to be computed. The third algorithm makes use of regularization of the differential equations through the Kustaanheimo-Stiefel transformation and extends the domain of convergence over which the modified Chebyshev-Picard iteration two-point boundary value solver will converge, from about one third of an orbit to almost a full orbit. This algorithm also does not require a Newton-type shooting method. The fourth algorithm uses the method of particular solutions and the modified Chebyshev-Picard iteration initial value solver to solve the perturbed two-impulse Lambert problem over multiple revolutions. The method of particular solutions is a shooting method but differs from the Newton-type shooting methods in that it does not require integration of the state transition matrix. The mathematical developments that underlie these four algorithms are derived in the chapters of this dissertation. For each of the algorithms, some orbit transfer test cases are included to provide insight on accuracy and efficiency of these

  5. Reduction of community alcohol problems: computer simulation experiments in three counties.

    Science.gov (United States)

    Holder, H D; Blose, J O

    1987-03-01

    A series of alcohol abuse prevention strategies was evaluated using computer simulation for three counties in the United States: Wake County, North Carolina, Washington County, Vermont and Alameda County, California. A system dynamics model composed of a network of interacting variables was developed for the pattern of alcoholic beverage consumption in a community. The relationship of community drinking patterns to various stimulus factors was specified in the model based on available empirical research. Stimulus factors included disposable income, alcoholic beverage prices, advertising exposure, minimum drinking age and changes in cultural norms. After a generic model was developed and validated on the national level, a computer-based system dynamics model was developed for each county, and a series of experiments was conducted to project the potential impact of specific prevention strategies. The project concluded that prevention efforts can both lower current levels of alcohol abuse and reduce projected increases in alcohol-related problems. Without such efforts, already high levels of alcohol-related family disruptions in the three counties could be expected to rise an additional 6% and drinking-related work problems 1-5%, over the next 10 years after controlling for population growth. Of the strategies tested, indexing the price of alcoholic beverages to the consumer price index in conjunction with the implementation of a community educational program with well-defined target audiences has the best potential for significant problem reduction in all three counties.

  6. Quark-number susceptibility, thermodynamic sum rule, and the hard thermal loop approximation

    International Nuclear Information System (INIS)

    Chakraborty, Purnendu; Mustafa, Munshi G.; Thoma, Markus H.

    2003-01-01

    The quark number susceptibility, associated with the conserved quark number density, is closely related to the baryon and charge fluctuations in the quark-gluon plasma, which might serve as a signature for quark-gluon plasma formation in ultrarelativistic heavy-ion collisions. In addition to QCD lattice simulations, the quark number susceptibility has been calculated recently using a resummed perturbation theory (hard thermal loop resummation). In the present work we show, based on general arguments, that the computation of this quantity neglecting hard thermal loop vertices contradicts the Ward identity and violates the thermodynamic sum rule following from quark number conservation. We further show that the hard thermal loop perturbation theory is consistent with the thermodynamic sum rule.

  7. Grand canonical simulations of hard-disk systems by simulated tempering

    DEFF Research Database (Denmark)

    Döge, G.; Mecke, K.; Møller, Jesper

    2004-01-01

    The melting transition of hard disks in two dimensions is still an unsolved problem, and improved simulation algorithms may be helpful for its investigation. We suggest the application of simulated tempering to grand canonical hard-disk systems as an efficient alternative to the commonly-used Monte Carlo algorithms for canonical systems. This approach allows the direct study of the packing fraction as a function of the chemical potential, even in the vicinity of the melting transition. Furthermore, estimates of several spatial characteristics, including the pair correlation function, are studied.

  8. Cellular Neural Networks for NP-Hard Optimization

    Directory of Open Access Journals (Sweden)

    Mária Ercsey-Ravasz

    2009-02-01

    Full Text Available A cellular neural/nonlinear network (CNN) is used for NP-hard optimization. We prove that a CNN in which the parameters of all cells can be separately controlled is the analog correspondent of a two-dimensional Ising-type (Edwards-Anderson) spin-glass system. Using the properties of CNN, we show that one single operation (template) always yields a local minimum of the spin-glass energy function. This way, a very fast optimization method, similar to simulated annealing, can be built. Estimates of the simulation time needed on CNN-based computers, compared with the time needed on conventional digital computers running the simulated annealing algorithm, are astonishing: CNN computers could be faster than digital computers already at 10×10 lattice sizes. The local control of the template parameters has already been partially realized in some hardware implementations, and we think this study could further motivate their development in this direction.
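
    A digital sketch of what one such 'template' operation computes follows: a greedy descent to a local minimum of an Edwards-Anderson-type energy (our own serial toy; the analog CNN performs this relaxation in parallel, which is the source of the claimed speed-up).

        import random

        def ising_energy(spins, J):
            # E = -sum_{i<j} J[i][j] * s_i * s_j  (J symmetric)
            n = len(spins)
            return -sum(J[i][j] * spins[i] * spins[j]
                        for i in range(n) for j in range(i + 1, n))

        def template_descent(J, n, seed=0):
            # Flip any spin that lowers the energy until a local minimum is
            # reached -- the digital analogue of one CNN template operation
            rng = random.Random(seed)
            s = [rng.choice((-1, 1)) for _ in range(n)]
            improved = True
            while improved:
                improved = False
                for i in range(n):
                    field = sum(J[i][j] * s[j] for j in range(n) if j != i)
                    if s[i] * field < 0:   # flipping spin i lowers E by 2|field|
                        s[i] = -s[i]
                        improved = True
            return s, ising_energy(s, J)

        # Example: 4 spins with random +/-1 couplings
        rng = random.Random(42)
        J = [[0] * 4 for _ in range(4)]
        for i in range(4):
            for j in range(i + 1, 4):
                J[i][j] = J[j][i] = rng.choice((-1, 1))
        print(template_descent(J, 4))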

  9. The Cognitive Correlates of Third-Grade Skill in Arithmetic, Algorithmic Computation, and Arithmetic Word Problems

    Science.gov (United States)

    Fuchs, Lynn S.; Fuchs, Douglas; Compton, Donald L.; Powell, Sarah R.; Seethaler, Pamela M.; Capizzi, Andrea M.; Schatschneider, Christopher; Fletcher, Jack M.

    2006-01-01

    The purpose of this study was to examine the cognitive correlates of third-grade skill in arithmetic, algorithmic computation, and arithmetic word problems. Third graders (N = 312) were measured on language, nonverbal problem solving, concept formation, processing speed, long-term memory, working memory, phonological decoding, and sight word…

  10. Guaranteed Discrete Energy Optimization on Large Protein Design Problems.

    Science.gov (United States)

    Simoncini, David; Allouche, David; de Givry, Simon; Delmas, Céline; Barbe, Sophie; Schiex, Thomas

    2015-12-08

    In Computational Protein Design (CPD), assuming a rigid backbone and an amino-acid rotamer library, the problem of finding a sequence with an optimal conformation is NP-hard. In this paper, using Dunbrack's rotamer library and the Talaris2014 decomposable energy function, we use an exact deterministic method combining branch and bound, arc consistency, and tree decomposition to provably identify the global minimum energy sequence-conformation on full-redesign problems, defining search spaces of size up to 10^234. This is achieved on a single core of a standard computing server, requiring a maximum of 66 GB of RAM. A variant of the algorithm is able to exhaustively enumerate all sequence-conformations within an energy threshold of the optimum. These proven optimal solutions are then used to evaluate the frequencies and amplitudes, in energy and sequence, at which an existing CPD-dedicated simulated annealing implementation may miss the optimum on these full redesign problems. The probability of finding an optimum drops close to 0 very quickly. In the worst case, despite 1,000 repeats, the annealing algorithm remained more than 1 Rosetta unit away from the optimum, leading to designed sequences that could differ from the optimal sequence by more than 30% of their amino acids.

  11. Recent advances in computational-analytical integral transforms for convection-diffusion problems

    Science.gov (United States)

    Cotta, R. M.; Naveira-Cotta, C. P.; Knupp, D. C.; Zotin, J. L. Z.; Pontes, P. C.; Almeida, A. P.

    2017-10-01

    A unifying overview of the Generalized Integral Transform Technique (GITT) as a computational-analytical approach for solving convection-diffusion problems is presented. This work is aimed at bringing together some of the most recent developments on both accuracy and convergence improvements of this well-established hybrid numerical-analytical methodology for partial differential equations. Special emphasis is given to novel algorithm implementations, all directly connected to enhancing the eigenfunction expansion basis, such as a single-domain reformulation strategy for handling complex geometries, an integral balance scheme for dealing with multiscale problems, the adoption of convective eigenvalue problems in formulations with significant convection effects, and the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Then, selected examples are presented that illustrate the improvement achieved in each class of extension, in terms of convergence acceleration and accuracy gain, related to conjugated heat transfer in complex or multiscale microchannel-substrate geometries, the multidimensional Burgers equation model, and diffusive metal extraction through polymeric hollow fiber membranes. Numerical results are reported for each application and, where appropriate, critically compared against the traditional GITT scheme without convergence enhancement and against commercial or dedicated purely numerical approaches.

  12. Hard wall - soft wall - vorticity scattering in shear flow

    NARCIS (Netherlands)

    Rienstra, S.W.; Singh, D.K.

    2014-01-01

    An analytically exact solution for the problem of low Mach number incident vorticity scattering at a hard-soft wall transition is obtained in the form of Fourier integrals by using the Wiener-Hopf method. Harmonic vortical perturbations of inviscid linear shear flow are scattered at the wall.

  13. Solving black box computation problems using expert knowledge theory and methods

    International Nuclear Information System (INIS)

    Booker, Jane M.; McNamara, Laura A.

    2004-01-01

    The challenge problems for the Epistemic Uncertainty Workshop at Sandia National Laboratories provide common ground for comparing different mathematical theories of uncertainty, referred to as General Information Theories (GITs). These problems also present the opportunity to discuss the use of expert knowledge as an important constituent of uncertainty quantification. More specifically, how do the principles and methods of eliciting and analyzing expert knowledge apply to these problems and similar ones encountered in complex technical problem solving and decision making? We will address this question, demonstrating how the elicitation issues and the knowledge that experts provide can be used to assess the uncertainty in outputs that emerge from a black box model or computational code represented by the challenge problems. In our experience, the rich collection of GITs provides an opportunity to capture the experts' knowledge and associated uncertainties consistent with their thinking, problem solving, and problem representation. The elicitation process is rightly treated as part of an overall analytical approach, and the information elicited is not simply a source of data. In this paper, we detail how the elicitation process itself impacts the analyst's ability to represent, aggregate, and propagate uncertainty, as well as how to interpret uncertainties in outputs. While this approach does not advocate a specific GIT, answers under uncertainty do result from the elicitation

  14. A Finite-Volume computational mechanics framework for multi-physics coupled fluid-stress problems

    International Nuclear Information System (INIS)

    Bailey, C; Cross, M.; Pericleous, K.

    1998-01-01

    Where there is a strong interaction between fluid flow, heat transfer and stress-induced deformation, it may not be sufficient to solve each problem separately (i.e. fluid vs. stress, using different techniques or even different computer codes). This may be acceptable where the interaction is static, but less so if it is dynamic. It is desirable for this reason to develop software that can accommodate both requirements (i.e. that of fluid flow and that of solid mechanics) in a seamless environment. This is accomplished in the University of Greenwich code PHYSICA, which solves both the fluid flow problem and the stress-strain equations in a unified Finite-Volume environment, using an unstructured computational mesh that can deform dynamically. Example applications are given from the group's work on the metals casting process (where thermal stresses cause elasto-visco-plastic distortion)

  15. Computer-supported planning on graphic terminals in the staff divisions of hard coal mines. Rechnergestuetzte Planung an grafischen Arbeitsplaetzen in den Stabsstellen von Steinkohlenbergwerken

    Energy Technology Data Exchange (ETDEWEB)

    Seeliger, A [Technische Hochschule Aachen (Germany)

    1990-01-01

    Analyses of the planning activity in the planning departments of German hard coal mines have shown that in some branches of the planning process the productivity and creativity of the involved experts can be increased, potential for rationalization can be opened up, and the cooperation between different engineering disciplines can be improved by using computer network systems in combination with graphic systems. This paper reports on the computer-supported planning system 'Grube', developed at the RWTH (technical university) Aachen, and its applications in mine surveying, electro-technical and mechanical planning, as well as in the planning of ventilation systems and detailed mine planning. The software module GRUBE-W, which will in future be the core of the mine ventilation planning workstation at Ruhrkohle AG, is discussed in detail. (orig.).

  16. A GPU Implementation of Local Search Operators for Symmetric Travelling Salesman Problem

    Directory of Open Access Journals (Sweden)

    Juraj Fosin

    2013-06-01

    Full Text Available The Travelling Salesman Problem (TSP) is one of the most studied combinatorial optimization problems and is significant in many practical transportation applications. The TSP is NP-hard and requires large computational power to be solved by exact algorithms. In the past few years, the fast development of general-purpose Graphics Processing Units (GPUs) has brought huge improvements in decreasing applications' execution times. In this paper, we implement 2-opt and 3-opt local search operators for solving the TSP on the GPU using CUDA. The novelty presented in this paper is a new parallel iterated local search approach with 2-opt and 3-opt operators for the symmetric TSP, optimized for execution on GPUs. With our implementation, large TSP problems (up to 85,900 cities) can be solved using the GPU. We show that our GPU implementation can be up to 20x faster, without loss of quality, on all TSPLIB problems as well as on our CRO TSP problem.
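
    The 2-opt move that the paper parallelizes is simple to state sequentially. Below is a sketch with our own naming; the GPU version evaluates all O(n^2) candidate (i, j) moves concurrently instead of in nested loops.

        def two_opt(tour, dist):
            # Reverse a tour segment whenever doing so shortens the tour;
            # stop when no improving move remains (a 2-opt local minimum)
            n = len(tour)
            improved = True
            while improved:
                improved = False
                for i in range(n - 1):
                    for j in range(i + 2, n - (i == 0)):
                        a, b = tour[i], tour[i + 1]
                        c, d = tour[j], tour[(j + 1) % n]
                        if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                            tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                            improved = True
            return tour

        # Four cities on a unit square; the crossing tour [0, 2, 1, 3] gets uncrossed
        pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
        dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                 for bx, by in pts] for ax, ay in pts]
        print(two_opt([0, 2, 1, 3], dist))   # [0, 2, 3, 1]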

  17. Custom Hardware Processor to Compute a Figure of Merit for the Fit of X-Ray Diffraction

    International Nuclear Information System (INIS)

    Gomez-Pulido, P.J.A.; Vega-Rodriguez, M.A.; Sanchez-Perez, J.M.; Sanchez-Bajo, F.; Santos, S.P.D.

    2008-01-01

    A custom processor based on reconfigurable hardware technology is proposed in order to compute the figure of merit used to measure the quality of the fit of X-ray diffraction peaks. As experimental X-ray profiles can present many severely overlapping peaks, it is necessary to select the best model among a large set of reasonably good solutions. Determining the best solution is computationally intensive, because this is a hard combinatorial optimization problem. The proposed processors, working in parallel, increase the performance relative to a software implementation.

  18. Developing a multimodal biometric authentication system using soft computing methods.

    Science.gov (United States)

    Malcangi, Mario

    2015-01-01

    Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision.

  19. Magnetic hyperthermia with hard-magnetic nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Kashevsky, Bronislav E., E-mail: bekas@itmo.by [A.V Luikov Heat and Mass Transfer Institute, Belarus Academy of Sciences, P. Brovka str. 15, Minsk 220072 (Belarus); Kashevsky, Sergey B.; Korenkov, Victor S. [A.V Luikov Heat and Mass Transfer Institute, Belarus Academy of Sciences, P. Brovka str. 15, Minsk 220072 (Belarus); Istomin, Yuri P. [N. N. Alexandrov National Cancer Center of Belarus, Lesnoy-2, Minsk 223040 (Belarus); Terpinskaya, Tatyana I.; Ulashchik, Vladimir S. [Institute of Physiology, Belarus Academy of Sciences, Akademicheskaya str. 28, Minsk 220072 (Belarus)

    2015-04-15

    Recent clinical trials of magnetic hyperthermia have confirmed, and even tightened, the Atkinson-Brezovich restriction on the magnetic field conditions applicable to any site of the human body. Subject to this restriction, which is harshly violated in numerous laboratory and small-animal studies, magnetic hyperthermia can rely only on a rather moderate heat source, so that optimization of the whole hyperthermia system remains, after all, the basic problem determining its clinical prospects. We present a short account of our combined (theoretical, laboratory and small-animal) studies to demonstrate that such prospects lie with hyperthermia systems based on hard-magnetic (Stoner–Wohlfarth type) nanoparticles and strong low-frequency fields, rather than with superparamagnetic (Brownian or Neél) nanoparticles and weak high-frequency fields. This conclusion is backed by an analytical evaluation of the maximum absorption rates possible under the field restriction in the ideal hard-magnetic (Stoner–Wohlfarth) and the ideal superparamagnetic (single relaxation time) systems, by theoretical and experimental studies of the dynamic magnetic hysteresis in suspensions of movable hard-magnetic particles, by producing nanoparticles with adjusted coercivity and suspensions of such particles capable of effective energy absorption and intratumoral penetration, and finally, by the successful treatment of a mouse model tumor under field conditions acceptable for the whole human body. - Highlights: • Hard-magnetic nanoparticles are shown superior to superparamagnetic ones for hyperthermia. • Optimal system parameters are found from a magnetic reversal model for movable particles. • A penetrating suspension of HM particles with aggregation-independent SAR is developed. • For the first time, mice with tumors are healed in an AC field acceptable for the human body.

  1. The structure and formation of functional hard coatings: a short review

    Directory of Open Access Journals (Sweden)

    Diciuc Vlad

    2017-01-01

    Full Text Available Turning tools come in different shapes, sizes, geometries, base materials and coatings, according to their intended use. They are widely used both for producing parts and for machinability tests. In this paper, a short review of high-speed steel (HSS) turning tools and their coatings is presented. Hard coatings formed on the tool material should be functional with respect to the tool's final application. Requirements for hard coatings and technological problems of layer formation on real cutting tools are discussed.

  2. A hybrid guided neighborhood search for the disjunctively constrained knapsack problem

    Directory of Open Access Journals (Sweden)

    Mhand Hifi

    2015-12-01

    Full Text Available In this paper, we investigate the use of a hybrid guided neighborhood search for solving the disjunctively constrained knapsack problem. The studied problem may be viewed as a combination of two NP-hard combinatorial optimization problems: the weighted independent set and the classical binary knapsack. The proposed algorithm is a hybrid approach that combines deterministic and random local searches. The deterministic local search is based on a descent method, where building and exploring procedures are alternately used to improve the solution at hand. To escape from local optima, a random local search strategy based on a modified ant colony optimization system is introduced. During the search process, the ant colony optimization system tries to diversify and enhance the solutions using information collected from the previous iterations. Finally, the proposed algorithm is computationally analyzed on a set of benchmark instances available in the literature. The results are compared to those obtained by both the Cplex solver and a recent algorithm from the literature. The computational part shows that the obtained results improve on most existing solution values.
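
    A minimal randomized add/drop search for the DCKP is sketched below to illustrate the neighborhood moves involved; the instance data are illustrative, and the sketch does not reproduce the paper's hybrid descent/ant-colony algorithm:

        import random

        def dckp_random_search(values, weights, capacity, conflicts, iters=2000, seed=0):
            # Randomized add/drop search: flip items in and out while respecting
            # the capacity and the pairwise (disjunctive) conflict constraints.
            rng = random.Random(seed)
            n = len(values)
            sol = [False] * n
            best = 0
            for _ in range(iters):
                i = rng.randrange(n)
                if sol[i]:
                    sol[i] = False        # drop move (may enable better adds later)
                else:
                    load = sum(w for j, w in enumerate(weights) if sol[j])
                    ok = (load + weights[i] <= capacity and
                          all(not sol[j] for j in conflicts.get(i, ())))
                    if ok:
                        sol[i] = True     # add move
                best = max(best, sum(v for j, v in enumerate(values) if sol[j]))
            return best

        # Illustrative instance: items 0 and 1 are in conflict
        values, weights = [10, 8, 6, 4], [5, 4, 3, 2]
        print(dckp_random_search(values, weights, capacity=9,
                                 conflicts={0: [1], 1: [0]}))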

  3. From transistor to trapped-ion computers for quantum chemistry.

    Science.gov (United States)

    Yung, M-H; Casanova, J; Mezzacapo, A; McClean, J; Lamata, L; Aspuru-Guzik, A; Solano, E

    2014-01-07

    Over the last few decades, quantum chemistry has progressed through the development of computational methods based on modern digital computers. However, these methods can hardly fulfill the exponentially growing resource requirements when applied to large quantum systems. As pointed out by Feynman, this restriction is intrinsic to all computational models based on classical physics. Recently, the rapid advancement of trapped-ion technologies has opened new possibilities for quantum control and quantum simulations. Here, we present an efficient toolkit that exploits both the internal and motional degrees of freedom of trapped ions for solving problems in quantum chemistry, including molecular electronic structure, molecular dynamics, and vibronic coupling. We focus on applications that go beyond the capacity of classical computers but may be realizable on state-of-the-art trapped-ion systems. These results allow us to envision a new paradigm of quantum chemistry that shifts from the current transistor-based technology to a near-future trapped-ion-based technology.

  4. 2nd International Workshop on Eigenvalue Problems : Algorithms, Software and Applications in Petascale Computing

    CERN Document Server

    Zhang, Shao-Liang; Imamura, Toshiyuki; Yamamoto, Yusaku; Kuramashi, Yoshinobu; Hoshi, Takeo

    2017-01-01

    This book provides state-of-the-art and interdisciplinary topics on solving matrix eigenvalue problems, particularly by using recent petascale and upcoming post-petascale supercomputers. It gathers selected topics presented at the International Workshops on Eigenvalue Problems: Algorithms, Software and Applications in Petascale Computing (EPASA2014 and EPASA2015), which brought together leading researchers working on the numerical solution of matrix eigenvalue problems to discuss and exchange ideas, and in so doing helped to create a community for researchers in eigenvalue problems. The topics presented in the book, including novel numerical algorithms, high-performance implementation techniques, software developments and sample applications, will contribute to various fields that involve solving large-scale eigenvalue problems.

  5. Problem specific heuristics for group scheduling problems in cellular manufacturing

    OpenAIRE

    Neufeld, Janis Sebastian

    2016-01-01

    The group scheduling problem commonly arises in cellular manufacturing systems, where parts are grouped into part families. It is characterized by a sequencing task on two levels: on the one hand, a sequence of jobs within each part family has to be identified while, on the other hand, a family sequence has to be determined. To solve this NP-hard problem, heuristic solution approaches are usually used. In this thesis different aspects of group scheduling are discussed and problem spec...

  6. Modeling biological problems in computer science: a case study in genome assembly.

    Science.gov (United States)

    Medvedev, Paul

    2018-01-30

    As computer scientists working in bioinformatics/computational biology, we often face the challenge of coming up with an algorithm to answer a biological question. This occurs in many areas, such as variant calling, alignment and assembly. In this tutorial, we use the example of the genome assembly problem to demonstrate how to go from a question in the biological realm to a solution in the computer science realm. We show the modeling process step-by-step, including all the intermediate failed attempts. Please note this is not an introduction to how genome assembly algorithms work and, if treated as such, would be incomplete and unnecessarily long-winded. © The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. Evaluation of the computer code system RADHEAT-V4 by analysing benchmark problems on radiation shielding

    International Nuclear Information System (INIS)

    Sakamoto, Yukio; Naito, Yoshitaka

    1990-11-01

    A computer code system, RADHEAT-V4, has been developed for the safety evaluation of radiation shielding at nuclear fuel facilities. To evaluate the performance of the code system, 18 benchmark problems were selected and analysed. The radiations evaluated are neutrons and gamma-rays, and the benchmark problems cover penetration, streaming and skyshine. The computed results are more accurate than those of the Sn codes ANISN and DOT3.5 or the Monte Carlo code MORSE. However, RADHEAT-V4 requires a large core memory and substantial I/O. (author)

  8. Digital dissection - using contrast-enhanced computed tomography scanning to elucidate hard- and soft-tissue anatomy in the Common Buzzard Buteo buteo.

    Science.gov (United States)

    Lautenschlager, Stephan; Bright, Jen A; Rayfield, Emily J

    2014-04-01

    Gross dissection has a long history as a tool for the study of human and animal soft- and hard-tissue anatomy. However, apart from being a time-consuming and invasive method, dissection is often unsuitable for very small specimens and often cannot capture the spatial relationships of individual soft-tissue structures. The handful of comprehensive studies on avian anatomy using traditional dissection techniques focuses nearly exclusively on domestic birds, whereas raptorial birds, and in particular their cranial soft tissues, are essentially absent from the literature. Here, we digitally dissect, identify, and document the soft-tissue anatomy of the Common Buzzard (Buteo buteo) in detail, using the new approach of contrast-enhanced computed tomography with Lugol's iodine. The architecture of different muscle systems (adductor, depressor, ocular, hyoid, neck musculature), neurovascular, and other soft-tissue structures is three-dimensionally visualised and described in unprecedented detail. The three-dimensional model is further presented as an interactive PDF to facilitate the dissemination and accessibility of anatomical data. Due to the digital nature of the data derived from the computed tomography scanning and segmentation processes, these methods hold the potential for further computational analyses beyond descriptive and illustrative purposes. © 2013 The Authors. Journal of Anatomy published by John Wiley & Sons Ltd on behalf of Anatomical Society.

  9. Multimode Preemptive Resource Investment Problem Subject to Due Dates for Activities: Formulation and Solution Procedure

    Directory of Open Access Journals (Sweden)

    Behrouz Afshar-Nadjafi

    2014-01-01

    Full Text Available The preemptive multimode resource investment problem is investigated. The objective is to minimize the total renewable/nonrenewable resource costs and earliness-tardiness costs, given a project deadline and due dates for activities. In this problem setting, preemption is allowed with no setup cost or time. The project contains activities interrelated by finish-start type precedence relations with a time lag of zero, which require a set of renewable and nonrenewable resources. The problem formed in this way is NP-hard. A mixed integer programming formulation is proposed for the problem, and a parameter-tuned genetic algorithm (GA) is proposed to solve it. To evaluate the performance of the proposed algorithm, 120 test problems are used. Comparative statistical results reveal that the proposed GA is efficient and effective in terms of the objective function and computational time.

  10. Development of ITER diagnostics: Neutronic analysis and radiation hardness

    Energy Technology Data Exchange (ETDEWEB)

    Vukolov, Konstantin, E-mail: vukolov_KY@nrcki.ru; Borisov, Andrey; Deryabina, Natalya; Orlovskiy, Ilya

    2015-10-15

    Highlights: • Problems of ITER diagnostics caused by neutron radiation from hot DT plasma are considered. • Careful neutronic analysis is necessary for ITER diagnostics development. • Effective nuclear shielding for ITER diagnostics in the 11th equatorial port plug is proposed. • Requirements for the study of radiation hardness of diagnostic elements are defined. • Results of optical glass irradiation tests in a fission reactor are given. - Abstract: The paper is dedicated to the problems of ITER diagnostics caused by effects of radiation from the hot DT plasma. Effective nuclear shielding must be arranged in diagnostic port plugs to meet the nuclear safety requirements and to provide reliable operation of the diagnostics. This task can be solved with the help of neutronic analysis of the diagnostics environment within the port plugs at the design stage. Problems of neutronic calculations are demonstrated for the 11th equatorial port plug. The numerical simulation includes the calculation of neutron fluxes in the port plug and in the interspace. Options for nuclear shielding, such as a tungsten collimator, boron carbide and water moderators, and stainless steel and lead screens, are considered. Data on neutron fluxes along diagnostic labyrinths make it possible to define radiation hardness requirements for the diagnostic components and to specify their materials. Options for window and lens materials for optical diagnostics are described. The results of irradiation of flint and silica glasses in a nuclear reactor have shown that silica KU-1 and KS-4V retain transparency in the visible range after a neutron fluence of 10^17 cm^-2. Flints required for achromatic objectives have a much lower radiation hardness, about 5 × 10^14 n/cm^2.

  11. Classification of soft and hard impacts-Application to aircraft crash

    International Nuclear Information System (INIS)

    Koechlin, Pierre; Potapov, Serguei

    2009-01-01

    Before modeling an aircraft crash on the shield building of a nuclear power plant, it is very important to understand the physical phenomena and the structural behavior associated with this kind of impact. In the scientific literature, an aircraft crash is classified as a soft impact, or as an impact of a deformable missile. Nevertheless, the existing classifications are not precise enough to predict 'a priori' the structural response mode. The aim of this paper is to characterize very precisely what soft and hard impacts are in the context of an aircraft crash on a nuclear power plant. First, the existing qualitative definitions of soft and hard impact are quickly reviewed in order to introduce a new criterion that makes a quantitative distinction between soft and hard impact. Then the experimental tests carried out during the last thirty years in the research field of aircraft crash are examined in the light of the new classification. The authors show that this characterization of soft and hard impacts has a real physical interest because it is linked to the failure mode for perforation: for soft impacts, perforation is the consequence of a shear plug breaking away, while for hard impacts it comes from local failure and projectile penetration. Moreover, the boundary between soft and hard impact is the limit for the use of an impact force in an uncoupled computation of the impact.

  12. Hard Decision Fusion based Cooperative Spectrum Sensing in Cognitive Radio System

    Directory of Open Access Journals (Sweden)

    N. Armi, N.M. Saad

    2013-09-01

    Full Text Available Cooperative spectrum sensing was proposed to combat fading, noise uncertainty, shadowing, and even the hidden node problem caused by primary user (PU) activity that is not spatially localized. It also improves the probability of detection by collaborating to detect the PU signal in a cognitive radio (CR) system. This paper studies cooperative spectrum sensing and signal detection in a CR system by implementing hard decision combining at the data fusion centre. Through computer simulation, we evaluate the performance of cooperative spectrum sensing and signal detection employing the OR and AND rules for decision combining. An energy detector is used to observe the presence of the PU signal. The results are compared to non-cooperative signal detection and show that the cooperative technique performs better than the non-cooperative one. Moreover, a signal-to-noise ratio (SNR) of at least 10 dB with 15 collaborating users in the CR system yields the optimal probability of detection.
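
    Under the standard assumption of independent per-user sensing decisions, the fused probabilities of detection for the OR and AND rules follow directly from the per-user probabilities; a minimal sketch (the per-user Pd value is illustrative):

        def fused_detection(pd_list, rule="OR"):
            # Fused probability of detection for independent CR users under
            # hard-decision combining at the fusion centre.
            prod_miss = 1.0    # probability that every user misses the PU
            prod_det = 1.0     # probability that every user detects the PU
            for pd in pd_list:
                prod_miss *= (1.0 - pd)
                prod_det *= pd
            if rule == "OR":   # declare the PU present if ANY user reports it
                return 1.0 - prod_miss
            if rule == "AND":  # declare the PU present only if ALL users agree
                return prod_det
            raise ValueError("rule must be 'OR' or 'AND'")

        pds = [0.6] * 15       # 15 collaborating users, each with Pd = 0.6
        print(f"OR rule : {fused_detection(pds, 'OR'):.6f}")
        print(f"AND rule: {fused_detection(pds, 'AND'):.6f}")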

  13. Bond-orientational analysis of hard-disk and hard-sphere structures.

    Science.gov (United States)

    Senthil Kumar, V; Kumaran, V

    2006-05-28

    We report the bond-orientational analysis results for the thermodynamic, random, and homogeneously sheared inelastic structures of hard-disks and hard-spheres. The thermodynamic structures show a sharp rise in the order across the freezing transition. The random structures show the absence of crystallization. The homogeneously sheared structures get ordered at a packing fraction higher than the thermodynamic freezing packing fraction, due to the suppression of crystal nucleation. On shear ordering, strings of close-packed hard-disks in two dimensions and close-packed layers of hard-spheres in three dimensions, oriented along the velocity direction, slide past each other. Such a flow creates a considerable amount of fourfold order in two dimensions and body-centered-tetragonal (bct) structure in three dimensions. These transitions are the flow analogs of the martensitic transformations occurring in metals due to the stresses induced by a rapid quench. In hard-disk structures, using the bond-orientational analysis we show the presence of fourfold order. In sheared inelastic hard-sphere structures, even though the global bond-orientational analysis shows that the system is highly ordered, a third-order rotational invariant analysis shows that only about 40% of the spheres have face-centered-cubic (fcc) order, even in the dense and near-elastic limits, clearly indicating the coexistence of multiple crystalline orders. When layers of close-packed spheres slide past each other, in addition to the bct structure, the hexagonal-close-packed (hcp) structure is formed due to the random stacking faults. Using the Honeycutt-Andersen pair analysis and an analysis based on the 14-faceted polyhedra having six quadrilateral and eight hexagonal faces, we show the presence of bct and hcp signatures in shear ordered inelastic hard-spheres. Thus, our analysis shows that the dense sheared inelastic hard-spheres have a mixture of fcc, bct, and hcp structures.
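
    The local bond-orientational order parameter for hard disks is psi_6(j) = (1/n_j) * sum_k exp(i*6*theta_jk), where the sum runs over the n_j neighbours of disk j. A minimal sketch using a fixed-cutoff neighbour definition (a simplification of the Voronoi-based constructions typically used) follows:

        import numpy as np

        def psi6(points, r_cut):
            # Global bond-orientational order |<psi_6>| for 2D disk centres.
            pts = np.asarray(points, dtype=float)
            n = len(pts)
            local = np.zeros(n, dtype=complex)
            for j in range(n):
                d = pts - pts[j]
                r = np.hypot(d[:, 0], d[:, 1])
                nbrs = np.where((r > 0) & (r < r_cut))[0]
                if len(nbrs) == 0:
                    continue
                theta = np.arctan2(d[nbrs, 1], d[nbrs, 0])
                local[j] = np.mean(np.exp(6j * theta))
            return abs(local.mean())

        # A perfect triangular lattice gives |psi6| close to 1
        a = 1.0
        lattice = [(i * a + 0.5 * a * (j % 2), j * a * np.sqrt(3) / 2)
                   for i in range(10) for j in range(10)]
        print(f"|psi6| = {psi6(lattice, r_cut=1.1 * a):.3f}")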

  14. Theory of the interface between a classical plasma and a hard wall

    International Nuclear Information System (INIS)

    Ballone, P.; Pastore, G.; Tosi, M.P.; Trieste Univ.

    1983-09-01

    The interfacial density profile of a classical one-component plasma confined by a hard wall is studied in planar and spherical geometries. The approach adapts to interfacial problems a modified hypernetted-chain approximation developed by Lado and by Rosenfeld and Ashcroft for the bulk structure of simple liquids. The specific new aim is to embody self-consistently into the theory a 'contact theorem', fixing the plasma density at the wall through an equilibrium condition which involves the electrical potential drop across the interface and the bulk pressure. The theory is brought into fully quantitative contact with computer simulation data for a plasma confined in a spherical cavity of large but finite radius. It is also shown that the interfacial potential at the point of zero charge is accurately reproduced by suitably combining the contact theorem with relevant bulk properties in a simple, approximate representation of the interfacial charge density profile. (author)

  15. Hard-tip, soft-spring lithography.

    Science.gov (United States)

    Shim, Wooyoung; Braunschweig, Adam B; Liao, Xing; Chai, Jinan; Lim, Jong Kuk; Zheng, Gengfeng; Mirkin, Chad A

    2011-01-27

    Nanofabrication strategies are becoming increasingly expensive and equipment-intensive, and consequently less accessible to researchers. As an alternative, scanning probe lithography has become a popular means of preparing nanoscale structures, in part owing to its relatively low cost and high resolution, and a registration accuracy that exceeds most existing technologies. However, increasing the throughput of cantilever-based scanning probe systems while maintaining their resolution and registration advantages has from the outset been a significant challenge. Even with impressive recent advances in cantilever array design, such arrays tend to be highly specialized for a given application, expensive, and often difficult to implement. It is therefore difficult to imagine commercially viable production methods based on scanning probe systems that rely on conventional cantilevers. Here we describe a low-cost and scalable cantilever-free tip-based nanopatterning method that uses an array of hard silicon tips mounted onto an elastomeric backing. This method, which we term hard-tip, soft-spring lithography, overcomes the throughput problems of cantilever-based scanning probe systems and the resolution limits imposed by the use of elastomeric stamps and tips: it is capable of delivering materials or energy to a surface to create arbitrary patterns of features with sub-50-nm resolution over centimetre-scale areas. We argue that hard-tip, soft-spring lithography is a versatile nanolithography strategy that should be widely adopted by academic and industrial researchers for rapid prototyping applications.

  16. Solving Multi-Pollutant Emission Dispatch Problem Using Computational Intelligence Technique

    Directory of Open Access Journals (Sweden)

    Nur Azzammudin Rahmat

    2016-06-01

    Full Text Available Economic dispatch is a crucial process conducted by utilities to determine the appropriate amount of power to be generated and distributed to consumers. During the process, the utilities also consider pollutant emission as a consequence of fossil-fuel consumption. Fossil fuels include petroleum, coal, and natural gas; each has its unique chemical composition of pollutants, i.e. sulphur oxides (SOX), nitrogen oxides (NOX) and carbon oxides (COX). This paper presents the multi-pollutant emission dispatch problem solved using a computational intelligence technique. In this study, a novel emission dispatch technique is formulated to determine the pollutant levels. It utilizes a pre-developed optimization technique termed differential evolution immunized ant colony optimization (DEIANT) for the emission dispatch problem. The optimization results indicated a high COX level, regardless of the type of fossil fuel consumed.
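
    Pollutant levels in such formulations are usually modelled per pollutant by a quadratic emission function of the generator outputs; the sketch below evaluates that objective for a hypothetical three-unit system (the coefficients and dispatch are illustrative, and DEIANT itself is not reproduced):

        def emission(p_gen, coeffs):
            # Total emission for one pollutant with the usual quadratic model:
            # E = sum_i (a_i + b_i * P_i + c_i * P_i**2)
            return sum(a + b * p + c * p * p
                       for p, (a, b, c) in zip(p_gen, coeffs))

        # Hypothetical 3-unit system (coefficients are illustrative only)
        sox = [(10.0, 0.30, 0.0020), (12.0, 0.25, 0.0025), (8.0, 0.35, 0.0015)]
        nox = [(6.0, 0.20, 0.0018), (7.0, 0.22, 0.0020), (5.0, 0.28, 0.0012)]
        dispatch = [120.0, 150.0, 90.0]   # MW per unit, chosen to meet demand
        print(f"SOx: {emission(dispatch, sox):.1f}  NOx: {emission(dispatch, nox):.1f}")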

  17. EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.

    Science.gov (United States)

    Jarvis, John J.; And Others

    Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…

  18. Photon technology. Hard photon technology; Photon technology. Hard photon gijutsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-03-01

    Research results on hard photon technology are summarized as part of a novel technology development effort highly utilizing the quantum nature of the photon. Hard photon technology refers to photon beam technologies which use photons in the 0.1 to 200 nm wavelength region. Hard photons have not been used in industry due to the lack of suitable photon sources and optical devices. However, hard photons in this wavelength region are expected to bring about innovations in areas such as ultrafine processing and material synthesis owing to their atom-selective reactions, inner-shell excitation reactions, and high spatial resolution. Technological themes and possibilities have therefore been surveyed. Although principle proposals and verifications exist for the individual technologies of hard photon generation, control and utilization, they are still far from practical application. For the photon source technology, the laser diode pumped driver laser technology, laser plasma photon source technology, synchrotron radiation photon source technology, and vacuum ultraviolet photon source technology are presented. For the optical device technology, the multi-layer film technology for beam mirrors and the non-spherical lens processing technology are introduced. The reduction lithography technology, hard photon excitation processes, and methods of analysis and measurement are also described. 430 refs., 165 figs., 23 tabs.

  19. A result-driven minimum blocking method for PageRank parallel computing

    Science.gov (United States)

    Tao, Wan; Liu, Tao; Yu, Wei; Huang, Gan

    2017-01-01

    Matrix blocking is a common method for improving the computational efficiency of PageRank, but the blocking rules are hard to determine, and the subsequent calculation is complicated. To tackle these problems, we propose a result-driven minimum blocking method for a parallel implementation of the PageRank algorithm. The minimum blocking stores only the elements necessary for the result matrix. In return, the subsequent calculation becomes simple and I/O transmission is reduced. We run experiments on several matrices of different sizes and sparsity degrees. The results show that the proposed method has better computational efficiency than traditional blocking methods.
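
    For reference, a plain (unblocked) sparse power-iteration PageRank is sketched below; the blocking scheme itself is the paper's contribution and is not reproduced here, and the toy graph is illustrative:

        def pagerank(links, d=0.85, tol=1e-10, max_iter=100):
            # Sparse power iteration on an adjacency dict {node: out-neighbours};
            # every node must appear as a key.
            nodes = list(links)
            n = len(nodes)
            rank = {u: 1.0 / n for u in nodes}
            for _ in range(max_iter):
                # Mass from dangling nodes is spread uniformly
                dangle = sum(rank[u] for u in nodes if not links[u])
                new = {u: (1.0 - d) / n + d * dangle / n for u in nodes}
                for u in nodes:
                    out = links[u]
                    if out:
                        share = d * rank[u] / len(out)
                        for v in out:
                            new[v] += share
                if sum(abs(new[u] - rank[u]) for u in nodes) < tol:
                    return new
                rank = new
            return rank

        web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
        for node, r in sorted(pagerank(web).items()):
            print(node, round(r, 4))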

  20. Efficient bounding schemes for the two-center hybrid flow shop scheduling problem with removal times.

    Science.gov (United States)

    Hidri, Lotfi; Gharbi, Anis; Louly, Mohamed Aly

    2014-01-01

    We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. The removal time of a job is the duration required to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases: the first is a constructive phase that provides an initial feasible solution, while the second is an improvement phase. Intensive computational experiments have been conducted and confirm the good performance of the proposed procedures.
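
    A minimal list-scheduling evaluation for identical parallel machines with removal times is sketched below; it assumes job completion is the end of processing, with removal only blocking the machine for the next job, which may differ from the paper's exact convention, and the instance is illustrative:

        import heapq

        def makespan_with_removal(jobs, n_machines):
            # Assign each job (processing, removal) to the earliest-available
            # machine. The machine stays busy for processing + removal, but the
            # job completes when processing ends; the makespan is the latest
            # completion over all jobs.
            free = [0.0] * n_machines        # machine-available times
            heapq.heapify(free)
            cmax = 0.0
            for proc, removal in jobs:
                start = heapq.heappop(free)
                finish = start + proc        # job completion drives the makespan
                cmax = max(cmax, finish)
                heapq.heappush(free, finish + removal)  # removal blocks the machine
            return cmax

        jobs = [(5, 1), (3, 2), (4, 1), (6, 2), (2, 1)]   # (processing, removal)
        # Longest-processing-time order is a common list-scheduling choice
        print(makespan_with_removal(sorted(jobs, reverse=True), n_machines=2))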

  1. A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems

    Science.gov (United States)

    Thammano, Arit; Teekeng, Wannaporn

    2015-05-01

    The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. The proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) a mutation operation with a tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms from the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving these combinatorial optimization problems. It outperforms all the compared state-of-the-art algorithms on all benchmark problems in terms of both the ability to achieve the optimal solution and computational time.
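
    For reference, the classic (crisp) roulette wheel selection that the proposed fuzzy variant modifies is sketched below; the population and fitness values are illustrative:

        import random

        def roulette_select(population, fitnesses, rng=random):
            # Classic fitness-proportionate (roulette wheel) selection: each
            # individual is chosen with probability fitness / total_fitness.
            total = sum(fitnesses)
            pick = rng.uniform(0.0, total)
            acc = 0.0
            for individual, fit in zip(population, fitnesses):
                acc += fit
                if acc >= pick:
                    return individual
            return population[-1]   # numerical safety for rounding at the edge

        pop = ["s1", "s2", "s3", "s4"]
        fits = [1.0, 4.0, 3.0, 2.0]             # s2 should be selected most often
        counts = {s: 0 for s in pop}
        for _ in range(10000):
            counts[roulette_select(pop, fits)] += 1
        print(counts)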

  2. Finding small OBDDs for incompletely specified truth tables is hard

    DEFF Research Database (Denmark)

    Miltersen, Peter Bro; Kristensen, Jesper Torp

    2006-01-01

    We present an efficient reduction mapping undirected graphs G with n = 2^k vertices for integers k to tables of partially specified Boolean functions g: {0,1}^(4k+1) -> {0,1,*} so that for any integer m, G has a vertex colouring using m colours if and only if g has a consistent ordered binary decision diagram with at most (2m + 2)n^2 + 4n decision nodes. From this it follows that the problem of finding a minimum-sized consistent OBDD for an incompletely specified truth table is NP-hard and also hard to approximate.

  3. The Sizing and Optimization Language, (SOL): Computer language for design problems

    Science.gov (United States)

    Lucas, Stephen H.; Scotti, Stephen J.

    1988-01-01

    The Sizing and Optimization Language (SOL), a new high-level, special-purpose computer language, was developed to expedite the application of numerical optimization to design problems and to make the process less error-prone. SOL utilizes the ADS optimization software and provides a clear, concise syntax for describing an optimization problem, the OPTIMIZE description, which closely parallels the mathematical description of the problem. SOL offers language statements which can be used to model a design mathematically, with subroutines or code logic, and with existing FORTRAN routines. In addition, SOL provides error checking and clear output of the optimization results. Because of these language features, SOL is best suited to modeling and optimizing a design concept when the model consists of mathematical expressions written in SOL. For such cases, SOL's unique syntax and error checking can be fully utilized. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler, runtime library routines, and a SOL reference manual.

  4. Large scale inverse problems computational methods and applications in the earth sciences

    CERN Document Server

    Scheichl, Robert; Freitag, Melina A; Kindermann, Stefan

    2013-01-01

    This book is the second volume of a three-volume series recording the "Radon Special Semester 2011 on Multiscale Simulation & Analysis in Energy and the Environment" that took place in Linz, Austria, October 3-7, 2011. The volume addresses the common ground in the mathematical and computational procedures required for large-scale inverse problems and data assimilation in forefront applications.

  5. SUPPORT OF NEW COMPUTER HARDWARE AT LUCH'S MC and A SYSTEM: PROBLEMS AND A SOLUTION

    International Nuclear Information System (INIS)

    Fedoseev, Victor; Shanin, Oleg

    2009-01-01

    The Microsoft Windows NT 4.0 operating system is the only software product certified in Russia for use in MC and A systems. In this paper, a solution allowing the installation of this outdated operating system on new computers is discussed. The solution has been successfully tested and has been in use on Luch's network since March 2008; it is also recommended to other Russian enterprises for the same purpose. Typically, the software part of a nuclear material control and accounting (MC and A) system consists of an operating system (OS), a database management system (DBMS), the accounting program itself, and the database of nuclear materials. Russian regulations require that the operating system and database for MC and A be certified for information security, and the whole system must pass an accreditation. Historically, the only certified operating system for MC and A continues to be Microsoft Windows NT 4.0 Server/Workstation; attempts to certify newer versions of Windows have failed. Luch, like most other Russian sites, uses Microsoft Windows NT 4.0 and SQL Server 6.5. Luch's specialists have developed an application (LuchMAS) for accounting purposes. Starting around 2004, problems appeared in Luch's accounting system related to the difficulty of installing Windows NT 4.0 on new computers. At first, it was possible to solve the problem by choosing computer equipment compatible with Windows NT 4.0 or by selecting certain operating system settings. Over time the problem worsened, and it is now almost impossible to install Windows NT 4.0 on new computers; the reason is the lack of hardware drivers in the outdated operating system. The problem was serious enough that it could have affected the long-term sustainability of Luch's MC and A system if adequate alternative measures had not been developed.

  6. Retrieval and organizational strategies in conceptual memory a computer model

    CERN Document Server

    Kolodner, Janet L

    2014-01-01

    'Someday we expect that computers will be able to keep us informed about the news. People have imagined being able to ask their home computers questions such as "What's going on in the world?"…'. Originally published in 1984, this book is a fascinating look at the world of memory and computers before the internet became the mainstream phenomenon it is today. It looks at the early development of a computer system that could keep us informed in a way that we now take for granted. Presenting a theory of remembering, based on human information processing, it begins to address many of the hard problems implicated in the quest to make computers remember. The book had two purposes in presenting this theory of remembering. First, to be used in implementing intelligent computer systems, including fact retrieval systems and intelligent systems in general. Any intelligent program needs to store and use a great deal of knowledge. The strategies and structures in the book were designed to be used for that purpos...

  7. Generic Cospark of a Matrix Can Be Computed in Polynomial Time

    OpenAIRE

    Zhong, Sichen; Zhao, Yue

    2017-01-01

    The cospark of a matrix is the cardinality of the sparsest vector in the column space of the matrix. Computing the cospark of a matrix is well known to be an NP-hard problem. Given the sparsity pattern (i.e., the locations of the non-zero entries) of a matrix, if the non-zero entries are drawn from independently distributed continuous probability distributions, we prove that the cospark of the matrix equals, with probability one, a particular number termed the generic cospark of the matrix...

  8. Comprehensive hard materials

    CERN Document Server

    2014-01-01

    Comprehensive Hard Materials deals with the production, uses and properties of the carbides, nitrides and borides of these metals and those of titanium, as well as tools of ceramics, the superhard boron nitrides and diamond and related compounds. Articles include the technologies of powder production (including their precursor materials), milling, granulation, cold and hot compaction, sintering, hot isostatic pressing, hot-pressing, injection moulding, as well as on the coating technologies for refractory metals, hard metals and hard materials. The characterization, testing, quality assurance and applications are also covered. Comprehensive Hard Materials provides meaningful insights on materials at the leading edge of technology. It aids continued research and development of these materials and as such it is a critical information resource to academics and industry professionals facing the technological challenges of the future. Hard materials operate at the leading edge of technology, and continued res...

  9. Computer codes for the analysis of flask impact problems

    International Nuclear Information System (INIS)

    Neilson, A.J.

    1984-09-01

    This review identifies typical features of the design of transportation flasks and considers some of the analytical tools required for the analysis of impact events. Because of the complexity of the physical problem, it is unlikely that a single code will adequately deal with all aspects of the impact incident. Candidate codes are identified on the basis of current understanding of their strengths and limitations. It is concluded that the HONDO-II, DYNA3D and ABAQUS codes, which are already mounted on UKAEA computers, will be suitable tools for use in the analysis of experiments conducted in the proposed AEEW programme and of general flask impact problems. Initial attention should be directed at the DYNA3D and ABAQUS codes, with HONDO-II being reserved for situations where the three-dimensional elements of DYNA3D may provide uneconomic simulations in planar or axisymmetric geometries. Attention is drawn to the importance of access to suitable mesh generators to create the nodal coordinate and element topology data required by these structural analysis codes. (author)

  10. Solving the Mystery of the Short-Hard Gamma-Ray Bursts

    Science.gov (United States)

    Fox, Derek

    2005-07-01

    Eight years after the afterglow detections that revolutionized studies of the long-soft gamma-ray bursts, not even one afterglow of a short-hard GRB has been seen, and the nature of these events has become one of the most important problems in GRB research. The Swift satellite, expected to be in full operation throughout Cycle 14, will report few-arcsecond localizations for short-hard bursts in minutes, enabling prompt, deep optical afterglow searches for the first time. Discovery and observation of the first short-hard optical afterglows will answer most of the critical questions about these events: What are their distances and energies? Do they occur in distant galaxies, and if so, in which regions of those galaxies? Are they the result of collimated or quasi-spherical explosions? In combination with an extensive rapid-response ground-based campaign, we propose to make the critical high-sensitivity HST TOO observations that will allow us to answer these questions. If theorists are correct in attributing the short-hard bursts to binary neutron star coalescence events, then they will serve as signposts to the primary targeted source population for ground-based gravitational-wave detectors, and short-hard burst studies will have a vital role to play in guiding those observations.

  11. Fundamental challenging problems for developing new nuclear safety standard computer codes

    International Nuclear Information System (INIS)

    Wong, P.K.; Wong, A.E.; Wong, A.

    2005-01-01

    Based on the claims of US patents 5,084,232, 5,848,377 and 6,430,516 (retrievable by entering the patent numbers at the Web site http://164.195.100.11/netahtml/srchnum.htm) and on the associated technical papers presented and published at international conferences over the last three years, all of which were sent to the US-NRC by e-mail on March 26, 2003 at 2:46 PM, three fundamental challenging problems for developing new nuclear safety standard computer codes were presented at US-NRC RIC2003 Session W4 (2:15-3:15 PM) in the Presidential Ballroom of the Washington D.C. Capital Hilton Hotel on April 16, 2003, before more than 800 nuclear professionals from many countries worldwide. The objective and scope of this paper is to invite all nuclear professionals to examine and evaluate the computer codes currently used in their own countries by comparing numerical data for these three openly challenging fundamental problems, in order to establish a global safety standard for all nuclear power plants in the world. (authors)

  12. Modeling of the flow stress for AISI H13 Tool Steel during Hard Machining Processes

    Science.gov (United States)

    Umbrello, Domenico; Rizzuti, Stefania; Outeiro, José C.; Shivpuri, Rajiv

    2007-04-01

    In general, the flow stress models used in computer simulation of machining processes are a function of the effective strain, effective strain rate and temperature developed during the cutting process. However, these models do not adequately describe the material behavior in hard machining, where material hardness in the range of 45 to 60 HRC is typical. Thus, depending on the specific material hardness, different material models must be used in modeling the cutting process. This paper describes the development of hardness-based flow stress and fracture models for AISI H13 tool steel, applicable over the range of material hardness mentioned above. These models were implemented in a non-isothermal viscoplastic numerical model to simulate the machining of AISI H13 with various hardness values and different cutting regime parameters. Predicted results are validated by comparing them with experimental results found in the literature. They are found to predict reasonably well the cutting forces, as well as the change in chip morphology from continuous to segmented chip, as the material hardness changes.

  13. Modeling of the flow stress for AISI H13 Tool Steel during Hard Machining Processes

    International Nuclear Information System (INIS)

    Umbrello, Domenico; Rizzuti, Stefania; Outeiro, Jose C.; Shivpuri, Rajiv

    2007-01-01

    In general, the flow stress models used in computer simulation of machining processes are a function of the effective strain, effective strain rate and temperature developed during the cutting process. However, these models do not adequately describe the material behavior in hard machining, where material hardness in the range of 45 to 60 HRC is typical. Thus, depending on the specific material hardness, different material models must be used in modeling the cutting process. This paper describes the development of hardness-based flow stress and fracture models for AISI H13 tool steel, applicable over the range of material hardness mentioned above. These models were implemented in a non-isothermal viscoplastic numerical model to simulate the machining of AISI H13 with various hardness values and different cutting regime parameters. Predicted results are validated by comparing them with experimental results found in the literature. They are found to predict reasonably well the cutting forces, as well as the change in chip morphology from continuous to segmented chip, as the material hardness changes.

  14. Computational Psychometrics for the Measurement of Collaborative Problem Solving Skills

    Science.gov (United States)

    Polyak, Stephen T.; von Davier, Alina A.; Peterschmidt, Kurt

    2017-01-01

    This paper describes a psychometrically-based approach to the measurement of collaborative problem solving skills, by mining and classifying behavioral data both in real-time and in post-game analyses. The data were collected from a sample of middle school children who interacted with a game-like, online simulation of collaborative problem solving tasks. In this simulation, a user is required to collaborate with a virtual agent to solve a series of tasks within a first-person maze environment. The tasks were developed following the psychometric principles of Evidence Centered Design (ECD) and are aligned with the Holistic Framework developed by ACT. The analyses presented in this paper are an application of an emerging discipline called computational psychometrics which is growing out of traditional psychometrics and incorporates techniques from educational data mining, machine learning and other computer/cognitive science fields. In the real-time analysis, our aim was to start with limited knowledge of skill mastery, and then demonstrate a form of continuous Bayesian evidence tracing that updates sub-skill level probabilities as new conversation flow event evidence is presented. This is performed using Bayes' rule and conversation item conditional probability tables. The items are polytomous and each response option has been tagged with a skill at a performance level. In our post-game analysis, our goal was to discover unique gameplay profiles by performing a cluster analysis of user's sub-skill performance scores based on their patterns of selected dialog responses. PMID:29238314

  15. Computational Psychometrics for the Measurement of Collaborative Problem Solving Skills

    Directory of Open Access Journals (Sweden)

    Stephen T. Polyak

    2017-11-01

    Full Text Available This paper describes a psychometrically-based approach to the measurement of collaborative problem solving skills, by mining and classifying behavioral data both in real-time and in post-game analyses. The data were collected from a sample of middle school children who interacted with a game-like, online simulation of collaborative problem solving tasks. In this simulation, a user is required to collaborate with a virtual agent to solve a series of tasks within a first-person maze environment. The tasks were developed following the psychometric principles of Evidence Centered Design (ECD) and are aligned with the Holistic Framework developed by ACT. The analyses presented in this paper are an application of an emerging discipline called computational psychometrics which is growing out of traditional psychometrics and incorporates techniques from educational data mining, machine learning and other computer/cognitive science fields. In the real-time analysis, our aim was to start with limited knowledge of skill mastery, and then demonstrate a form of continuous Bayesian evidence tracing that updates sub-skill level probabilities as new conversation flow event evidence is presented. This is performed using Bayes' rule and conversation item conditional probability tables. The items are polytomous and each response option has been tagged with a skill at a performance level. In our post-game analysis, our goal was to discover unique gameplay profiles by performing a cluster analysis of user's sub-skill performance scores based on their patterns of selected dialog responses.
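
    A minimal sketch of the kind of Bayes' rule update described above is given below; the conditional probabilities stand in for the conversation-item tables and are hypothetical:

        def update_mastery(prior, p_response_given_mastered, p_response_given_not):
            # Posterior P(mastered | response) via Bayes' rule.
            num = p_response_given_mastered * prior
            den = num + p_response_given_not * (1.0 - prior)
            return num / den

        # Hypothetical conditional probabilities for one dialog response option
        p = 0.5                       # limited initial knowledge of skill mastery
        for _ in range(3):            # three consecutive high-level responses
            p = update_mastery(p, p_response_given_mastered=0.8,
                               p_response_given_not=0.3)
            print(round(p, 3))        # mastery probability rises with evidence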

  17. Calculation of the D-COM blind problem with computer codes PIN and RELA

    International Nuclear Information System (INIS)

    Pazdera, F.; Barta, O.; Smid, J.

    1985-01-01

    The results of the blind and post-experimental calculations of the 'D-COM Blind Problem on Fission Gas Release', performed within the framework of the IAEA coordinated research programme on 'The Development of Computer Models for Fuel Element Behaviour in Water Reactors', are presented and compared with experimental data. A sensitivity study suggests a possible explanation for some discrepancies between calculated and experimental results during the bump test performed after base irradiation. The calculations were performed with the computer codes PIN and RELA. Some submodels used in the calculations are also described. (author)

  18. Remotely Telling Humans and Computers Apart: An Unsolved Problem

    Science.gov (United States)

    Hernandez-Castro, Carlos Javier; Ribagorda, Arturo

    The ability to tell humans and computers apart is imperative to protect many services from misuse and abuse. For this purpose, tests called CAPTCHAs or HIPs have been designed and put into production. Recent history shows that most (if not all) can be broken given enough time and commercial interest: CAPTCHA design seems to be a much more difficult problem than previously thought. The assumption that difficult AI problems can be easily converted into valid CAPTCHAs is misleading. There are also some extrinsic problems that do not help, especially the large number of in-house designs that are put into production without any prior public critique. In this paper we present a state-of-the-art survey of current HIPs, including proposals that are now in production. We classify them according to their basic design ideas. We discuss current attacks as well as future attack paths, and we also present common design errors and show how implementation flaws can transform a not necessarily bad idea into a weak CAPTCHA. We present examples of these flaws, using specific well-known CAPTCHAs. In a more theoretical way, we discuss the threat model: the risks confronted and the countermeasures. Finally, we introduce and discuss some desirable properties that new HIPs should have, concluding with some proposals for future work, including methodologies for design, implementation and security assessment.

  19. Magnetic hard disks for audio-visual use; AV yo jiki disk baitai

    Energy Technology Data Exchange (ETDEWEB)

    Tei, Y.; Sakaguchi, S.; Uwazumi, H. [Fuji Electric Co. Ltd., Tokyo (Japan)

    1999-11-10

    Computers, consumer electronics, and communications are converging and fusing. The key device in homes in the near future will be the audiovisual hard disk drive (AV-HDD), because no other AV cache memory offers the HDD's combination of high capacity, high speed, and low price. Fuji Electric started developing an AV magnetic hard disk, a core functional element of the AV-HDD, at an early stage in order to take the initiative in the market. This paper describes the state of development of the plastic medium, which is regarded as a next-generation strategic commodity. (author)

  20. Characteristic thermal-hydraulic problems in NHRs: Overview of experimental investigations and computer codes

    Energy Technology Data Exchange (ETDEWEB)

    Falikov, A A; Vakhrushev, V V; Kuul, V S; Samoilov, O B; Tarasov, G I [OKBM, Nizhny Novgorod (Russian Federation)

    1997-09-01

    The paper briefly reviews the specific thermal-hydraulic problems of AST-type NHRs, the experimental investigations that have been carried out in the RF, and the design procedures and computer codes used to validate the thermal-hydraulic characteristics and safety of the AST-500. (author). 13 refs, 10 figs, 1 tab.

  1. A New Optimization Model for Computer-Aided Molecular Design Problems

    DEFF Research Database (Denmark)

    Zhang, Lei; Cignitti, Stefano; Gani, Rafiqul

    Computer-Aided Molecular Design (CAMD) is a method to design molecules with desired properties. That is, through CAMD, it is possible to generate molecules that match a specified set of target properties. CAMD has attracted much attention in recent years due to its ability to design novel as well... with structure information considered, due to the increased size of the mathematical problem and the number of alternatives. Thus, a decomposition-based approach is proposed to solve the problem. In this approach, only first-order groups are considered in the first step to obtain the building block of the designed molecule; then the property model is refined with second-order groups based on the results of the first step. However, this may result in the possibility of an optimal solution being excluded. Samudra and Sahinidis [4] used a property relaxation method in the first step to avoid this situation...

  2. Strategies for MCMC computation in quantitative genetics

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Ibánēz-Escriche, Noelia; Sorensen, Daniel

    another extension of the linear mixed model introducing genetic random effects influencing the log residual variances of the observations thereby producing a genetically structured variance heterogeneity. Considerable computational problems arise when abandoning the standard linear mixed model. Maximum...... the various algorithms in the context of the heterogeneous variance model. Apart from being a model of great interest in its own right, this model has proven to be a hard test for MCMC methods. We compare the performances of the different algorithms when applied to three real datasets which differ markedly...... results of applying two MCMC schemes to data sets with pig litter sizes, rabbit litter sizes, and snail weights. Some concluding remarks are given in Section 5....

  3. 6th International Workshop on New Computational Methods for Inverse Problems

    International Nuclear Information System (INIS)

    2016-01-01

    Foreword This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 6th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2016 (http://complement.farman.ens-cachan.fr/NCMIP 2016.html). This workshop took place at Ecole Normale Supérieure de Cachan on May 20, 2016. The prior editions of NCMIP also took place in Cachan, France, first within the scope of the ValueTools Conference in May 2011, and then at the initiative of Institut Farman in May 2012, May 2013, May 2014 and May 2015. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists in estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high-rate and high-volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel

  4. Identification and simulation of the power quality problems using computer models

    International Nuclear Information System (INIS)

    Abro, M.R.; Memon, A.P.; Memon, Z.A.

    2005-01-01

    Power quality has become a major factor in modern life. If the power delivered over the electrical power network is polluted, serious problems could arise within the modern social structure and its conveniences. The nonlinear characteristics of various office and industrial equipment connected to the power grid can cause electrical disturbances and poor power quality. In many cases the electric power consumed is first converted to a different form, and this conversion process introduces harmonic pollution into the grid. These electrical disturbances could destroy certain sensitive equipment connected to the grid or in some cases cause it to malfunction. In a large power network, identifying the source of such disturbances without interrupting the supply is a major problem. This paper attempts to study the power quality problems caused by typical loads using computer models, paving the way to identifying the source of the problem. The Power System Blockset (PSB) toolbox of MATLAB, which is designed to provide a modern tool for rapidly and easily building models and simulating power systems, is used in this paper. The blockset uses the Simulink environment, allowing a model to be built using simple click and drag procedures. (author)

  5. SOCIAL NETWORK OPTIMIZATION A NEW METHAHEURISTIC FOR GENERAL OPTIMIZATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    Hassan Sherafat

    2017-12-01

    Full Text Available In recent years, metaheuristics have been studied and developed as powerful techniques for hard optimization problems. Some well-known techniques in this field are Genetic Algorithms, Tabu Search, Simulated Annealing, Ant Colony Optimization, and Swarm Intelligence, which have been applied successfully to many complex optimization problems. In this paper, we introduce a new metaheuristic for solving such problems based on the concept of social networks, named Social Network Optimization (SNO). We show that a wide range of NP-hard optimization problems may be solved by SNO.

  6. Technology skills assessment for deaf and hard of hearing students in secondary school.

    Science.gov (United States)

    Luft, Pamela; Bonello, Mary; Zirzow, Nichole K

    2009-01-01

    To be competitive in the workplace, deaf and hard of hearing students must not only possess basic computer literacy but also know how to use and care for personal assistive and listening technology. An instrument was developed and pilot-tested on 45 middle school and high school deaf and hard of hearing students in 5 public school programs, 4 urban and 1 suburban, to assess these students' current technology skills and to prepare them for post-high school expectations. The researchers found that the students' computer skills depended on their access to technology, which was not always present in the schools. Many students also did not know basic care practices or troubleshooting techniques for their own personal hearing aids (if worn), or how to access or use personal assistive technology.

  7. The situation of computer utilization in radiation therapy in Japan and other countries and problems

    International Nuclear Information System (INIS)

    Onai, Yoshio

    1981-01-01

    The uses of computers in radiation therapy are various, including radiation dose calculation, clinical history management, automation of radiotherapy instruments, and biological modelling. To grasp the situation in this field, a survey by questionnaire was carried out internationally at the time of the 7th International Conference on the Use of Computers in Radiation Therapy, held in Kawasaki and Tokyo in September 1980. The surveyed nations totaled 21 including Japan; the number of facilities that responded was 203 in Japan and 111 in other countries, and the period concerned is December 1979 to September 1980. The results of the survey are described under the following headings: areas of use of computers in hospitals, computer utilization in radiation departments, computer uses in radiation therapy, and evaluation of radiotherapy computer uses and problems. (J.P.N.)

  8. Phase transition and computational complexity in a stochastic prime number generator

    Energy Technology Data Exchange (ETDEWEB)

    Lacasa, L; Luque, B [Departamento de Matematica Aplicada y EstadIstica, ETSI Aeronauticos, Universidad Politecnica de Madrid, Plaza Cardenal Cisneros 3, Madrid 28040 (Spain); Miramontes, O [Departamento de Sistemas Complejos, Instituto de FIsica, Universidad Nacional Autonoma de Mexico, Mexico 01415 DF (Mexico)], E-mail: lucas@dmae.upm.es

    2008-02-15

    We introduce a prime number generator in the form of a stochastic algorithm. The character of this algorithm gives rise to a continuous phase transition which distinguishes a phase where the algorithm is able to reduce the whole system of numbers into primes and a phase where the system reaches a frozen state with low prime density. In this paper, we first present a broader characterization of this phase transition, both in analytical and numerical terms. Critical exponents are calculated, and data collapse is provided. Further on, we redefine the model as a search problem, placing it within the framework of computational complexity theory. We suggest that the system belongs to the class NP. The computational cost is maximal around the threshold, as is common in many algorithmic phase transitions, revealing the presence of an easy-hard-easy pattern. We finally relate the nature of the phase transition to an average-case classification of the problem.
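
    The abstract does not spell out the algorithm's rules, so the following Python sketch is only a toy variant in the same spirit, not the authors' model: a pool of integers is stochastically "sieved" by division, primes are fixed points, and the surviving prime density can be tracked as the parameters vary.

```python
import random

def is_prime(n):
    """Plain trial division; adequate for the small numbers in this toy."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def prime_density(pool_size, n_max, steps, rng):
    # Pick an ordered pair (a, b); if b properly divides a, replace a by a // b.
    # Primes cannot be reduced further, so the pool drifts toward primes.
    pool = [rng.randint(2, n_max) for _ in range(pool_size)]
    for _ in range(steps):
        i, j = rng.sample(range(pool_size), 2)
        a, b = pool[i], pool[j]
        if a != b and a % b == 0:
            pool[i] = a // b
    return sum(map(is_prime, pool)) / pool_size

rng = random.Random(0)
for n_max in (100, 1_000, 10_000):
    print(n_max, round(prime_density(200, n_max, 50_000, rng), 2))
```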

  9. Integrated network design and scheduling problems :

    Energy Technology Data Exchange (ETDEWEB)

    Nurre, Sarah G.; Carlson, Jeffrey J.

    2014-01-01

    We consider the class of integrated network design and scheduling (INDS) problems. These problems focus on selecting and scheduling operations that will change the characteristics of a network, while being specifically concerned with the performance of the network over time. Motivating applications of INDS problems include infrastructure restoration after extreme events and building humanitarian distribution supply chains. While similar models have been proposed, no one has performed an extensive review of INDS problems covering their complexity, network and scheduling characteristics, information, and solution methods. We examine INDS problems under a parallel identical machine scheduling environment where the performance of the network is evaluated by solving classic network optimization problems. We show that all considered INDS problems are NP-hard and propose a novel heuristic dispatching rule algorithm that selects and schedules sets of arcs based on their interactions in the network. We present computational analysis based on realistic data sets representing the infrastructures of coastal New Hanover County, North Carolina; lower Manhattan, New York; and a realistic artificial community, CLARC County. These tests demonstrate the importance of a dispatching rule to arrive at near-optimal solutions during real-time decision making activities. We extend INDS problems to incorporate release dates, which represent the earliest an operation can be performed, and flexible release dates through the introduction of specialized machine(s) that can perform work to move the release date earlier in time. An online optimization setting is explored where the release date of a component is not known.

  10. PROFESSIONAL COMPETENCE OF INSTRUCTORS IN STUDENTS' ACHIEVEMENT OF “HARD SKILLS”

    Directory of Open Access Journals (Sweden)

    Veti Kurnia

    2017-03-01

    Full Text Available This study aimed to describe the professional competence of instructors in achieving students' "hard skills", along with the supporting and inhibiting factors. The study used a qualitative approach. The research subjects were six people: three computer assembly and programming instructors and three computer assembly and programming students, with three informants consisting of the Head of UPT LLK Dinsosnakertrans and two other instructors. Data were collected through interviews, observation, and documentation. Data validity was established using triangulation of sources, methods and theories. The data analysis techniques were data collection, data reduction, data presentation and drawing conclusions. The study concluded that the instructors have professional competence in achieving students' "hard skills", in accordance with the eight indicators of professional competence. Supporting factors were the instructors' educational background and a supportive environment, while inhibiting factors were the instructors' low motivation to innovate and be creative, inadequate facilities, and the short duration of training.

  11. The Shortlist Method for fast computation of the Earth Mover's Distance and finding optimal solutions to transportation problems.

    Science.gov (United States)

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. The Earth Mover's Distance in particular is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution, which we coin the Modified Russell's Method.
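
    The Shortlist Method itself is a specialized revised-simplex variant and is not reproduced here, but the underlying transportation problem is easy to state and solve for small instances. A minimal sketch in Python using scipy.optimize.linprog (our choice of solver; all instance data are made up):

```python
import numpy as np
from scipy.optimize import linprog

# Small balanced transportation instance: 3 sources, 4 sinks, unit costs.
supply = np.array([20.0, 30.0, 25.0])
demand = np.array([10.0, 25.0, 15.0, 25.0])
cost = np.array([[4.0, 6.0, 8.0, 9.0],
                 [5.0, 3.0, 7.0, 6.0],
                 [6.0, 5.0, 4.0, 5.0]])

m, n = cost.shape
# Equality constraints: each row of the plan sums to its supply,
# each column sums to its demand.
A_eq, b_eq = [], []
for i in range(m):                       # supply rows
    row = np.zeros(m * n)
    row[i * n:(i + 1) * n] = 1.0
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                       # demand columns
    col = np.zeros(m * n)
    col[j::n] = 1.0
    A_eq.append(col); b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=(0, None), method="highs")
plan = res.x.reshape(m, n)
print(res.fun)        # minimal total transport cost
print(plan.round(2))  # optimal shipment plan
```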

  12. Upgrade of RMS computers for Y2K problems in RX and related building of HANARO

    International Nuclear Information System (INIS)

    Kim, Jung Taek; Kim, J. T.; Ham, C. S.; Kim, C. H.; Lee, Bong Jae; Jae, Yoo Kyung

    2000-08-01

    The objectives of this project are as follows : - To resolve the Y2K problems and the operation and maintenance of RMS computers in the RX and related buildings of HANARO - To upgrade 486 PCs to Pentium II PCs - To provide a Windows NT-based platform for users - To build an information structure for radiation monitoring using wireless and network devices. The contents of the project are as follows : - To build a Windows NT-based platform for the Radiation Monitoring System - To provide a software platform and environment for developing the application program - To design and implement the database structure - To implement an RS232C communication program between local indicators and scanning computers - To implement an IEEE 802.3 Ethernet communication program between scanning computers and RMTs - To implement a user interface for radiation monitoring - To test and inspect for Y2K problems

  14. Nonnegative Matrix Factorization with Rank Regularization and Hard Constraint.

    Science.gov (United States)

    Shang, Ronghua; Liu, Chiyang; Meng, Yang; Jiao, Licheng; Stolkin, Rustam

    2017-09-01

    Nonnegative matrix factorization (NMF) is well known to be an effective tool for dimensionality reduction in problems involving big data. For this reason, it frequently appears in many areas of scientific and engineering literature. This letter proposes a novel semisupervised NMF algorithm for overcoming a variety of problems associated with NMF algorithms, including poor use of prior information, negative impact on manifold structure of the sparse constraint, and inaccurate graph construction. Our proposed algorithm, nonnegative matrix factorization with rank regularization and hard constraint (NMFRC), incorporates label information into data representation as a hard constraint, which makes full use of prior information. NMFRC also measures pairwise similarity according to geodesic distance rather than Euclidean distance. This results in more accurate measurement of pairwise relationships, resulting in more effective manifold information. Furthermore, NMFRC adopts rank constraint instead of norm constraints for regularization to balance the sparseness and smoothness of data. In this way, the new data representation is more representative and has better interpretability. Experiments on real data sets suggest that NMFRC outperforms four other state-of-the-art algorithms in terms of clustering accuracy.
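
    NMFRC is not available in standard libraries, but a plain NMF baseline of the kind it extends can be run in a few lines with scikit-learn. A minimal sketch (data and parameters are made up, and none of the paper's rank regularization, hard label constraints or geodesic similarity is included):

```python
import numpy as np
from sklearn.decomposition import NMF

# Baseline plain NMF for comparison with semisupervised variants like NMFRC.
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(100, 40)))      # nonnegative data matrix

model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)                  # (100, 5) low-rank representation
H = model.components_                       # (5, 40) nonnegative basis
print("reconstruction error:", model.reconstruction_err_)
```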

  15. CO2 laser free-form processing of hard tissue

    Science.gov (United States)

    Werner, Martin; Klasing, Manfred; Ivanenko, Mikhail; Harbecke, Daniela; Steigerwald, Hendrik; Hering, Peter

    2007-07-01

    Drilling and surface processing of bone and tooth tissue belong to standard medical procedures (bores and embeddings for implants, trepanation, etc.). Small circular bores can generally be produced quickly with mechanical drills. However, problems arise with angled drilling, with the need to execute drilling procedures without damaging sensitive soft tissue structures underneath the bone, or with the attempt to mill small non-circular cavities in hard tissue with high precision. We present investigations on laser hard tissue "milling", which can be advantageous for solving these problems. The processing of bone is done with a CO2 laser (10.6 μm) with pulse durations of 50 - 100 μs, combined with a PC-controlled fast galvanic laser beam scanner and a fine water spray, which helps keep the ablation process effective and free of thermal side-effects. Laser "milling" of non-circular cavities of 1 - 4 mm width and about 10 mm depth can be especially interesting for dental implantology. In ex-vivo investigations we found conditions for fast laser processing of these cavities without thermal damage and with minimised tapering. This included the exploration of different filling patterns (concentric rings, crosshatch, parallel lines, etc.), definition of the maximal pulse duration, repetition rate and laser power, and the optimal water spray position. The optimised results give evidence for the applicability of pulsed CO2 lasers for biologically tolerable and effective processing of deep cavities in hard tissue.

  16. Enhancing the Hardness of Sintered SS 17-4PH Using Nitriding Process for Bracket Orthodontic Application

    Science.gov (United States)

    Suharno, B.; Supriadi, S.; Ayuningtyas, S. T.; Widjaya, T.; Baek, E. R.

    2018-01-01

    Orthodontic brackets create teeth movement by applying force from the wire to the bracket, which is then transferred to the teeth. However, the emergence of friction between brackets and wires reduces the load available for teeth movement towards the desired area. In order to overcome this problem, a surface treatment such as nitriding was chosen as a process which could increase the efficiency of the transferred force by improving material hardness, since hard materials have low friction levels. This work investigated nitriding treatment to form a nitride layer affecting the hardness of sintered SS 17-4PH. The nitride layers were produced by a nitriding process at various temperatures, i.e. 470°C, 500°C and 530°C, with 8 h holding time under a 50% NH3 atmosphere. Optical metallography was conducted to compare the microstructure of the base metal and the surface, while the increase in surface hardness was observed using a Vickers microhardness tester. A hardened surface layer was obtained after the gaseous nitriding process because of the nitride layer formed, which contains Fe4N, CrN and Fe-αN. The hardness of the layers reached values of 1051 HV, associated with thicknesses varying from 53 to 119 μm. The presence of a precipitation process occurring in conjunction with the nitriding process can lead to a decrease in hardness due to the nitrogen content diminishing in the solid solution phase. This problem causes weakening of the nitrogen expansion in the martensite lattice.

  17. Multi-objective problem of the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints

    Science.gov (United States)

    Amallynda, I.; Santosa, B.

    2017-11-01

    This paper proposes a new generalization of the distributed parallel machine and assembly scheduling problem (DPMASP) with eligibility constraints, referred to as the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints. Within this generalization, we assume that there is a set of non-identical factories or production lines, each one with a set of unrelated parallel machines with different speeds, feeding a single assembly machine in series. A set of different products is manufactured through an assembly program from a set of components (jobs) according to the requested demand, and each product requires several kinds of jobs with different sizes. Besides that, we also consider the multi-objective problem (MOP) of minimizing mean flow time and the number of tardy products simultaneously. The problem is known to be NP-hard and is important in practice, as these criteria reflect the customer's demand and the manufacturer's perspective. As this is a realistic and complex problem with a wide range of possible solutions, we propose four simple heuristics and two metaheuristics to solve it. The various parameters of the proposed metaheuristic algorithms are discussed and calibrated by means of the Taguchi technique. All proposed algorithms are tested in Matlab. Our computational experiments indicate that the proposed problem and the four proposed algorithms can be implemented and used to solve moderately sized instances, giving efficient solutions which are close to the optimum in most cases.
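
    The paper's heuristics are not specified in the abstract; as a flavor of the kind of building block involved, the following Python sketch greedily dispatches jobs to unrelated parallel machines under eligibility constraints. It covers only a small sub-problem of the MDPMASP: the assembly stage and both objectives are omitted, and all data are assumed.

```python
# Minimal greedy dispatching heuristic for unrelated parallel machines
# with eligibility constraints (a sketch, not the paper's heuristics).
def greedy_schedule(proc_time, eligible):
    """proc_time[j][m]: processing time of job j on machine m;
    eligible[j]: set of machines allowed for job j (eligibility constraint)."""
    n_machines = len(proc_time[0])
    load = [0.0] * n_machines
    assignment = {}
    # Longest-job-first (by best eligible time), then earliest-completion dispatch.
    order = sorted(range(len(proc_time)),
                   key=lambda j: min(proc_time[j][m] for m in eligible[j]),
                   reverse=True)
    for j in order:
        m = min(eligible[j], key=lambda k: load[k] + proc_time[j][k])
        assignment[j] = m
        load[m] += proc_time[j][m]
    return assignment, max(load)   # job-to-machine map and makespan

proc = [[3, 5, 4], [2, 9, 7], [6, 2, 3], [4, 4, 4]]   # assumed instance
elig = [{0, 1, 2}, {0, 2}, {1, 2}, {0, 1}]
print(greedy_schedule(proc, elig))
```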

  18. Technology, attributions, and emotions in post-secondary education: An application of Weiner’s attribution theory to academic computing problems

    Science.gov (United States)

    Hall, Nathan C.; Goetz, Thomas; Chiarella, Andrew; Rahimi, Sonia

    2018-01-01

    As technology becomes increasingly integrated with education, research on the relationships between students’ computing-related emotions and motivation following technological difficulties is critical to improving learning experiences. Following from Weiner’s (2010) attribution theory of achievement motivation, the present research examined relationships between causal attributions and emotions concerning academic computing difficulties in two studies. Study samples consisted of North American university students enrolled in both traditional and online universities (total N = 559) who responded to either hypothetical scenarios or experimental manipulations involving technological challenges experienced in academic settings. Findings from Study 1 showed stable and external attributions to be emotionally maladaptive (more helplessness, boredom, guilt), particularly in response to unexpected computing problems. Additionally, Study 2 found stable attributions for unexpected problems to predict more anxiety for traditional students, with both external and personally controllable attributions for minor problems proving emotionally beneficial for students in online degree programs (more hope, less anxiety). Overall, hypothesized negative effects of stable attributions were observed across both studies, with mixed results for personally controllable attributions and unanticipated emotional benefits of external attributions for academic computing problems warranting further study. PMID:29529039

  19. The method of diagnosis and classification of the gingival line defects of the teeth hard tissues

    Directory of Open Access Journals (Sweden)

    Olena Bulbuk

    2017-06-01

    Full Text Available In solving the problem of diagnosis and treatment of hard tissue defects, a significant role belongs to the choice of tactics for the dental treatment of hard tissue defects located in the gingival line of any tooth. This work aims to study the problems of diagnosis and classification of gingival line defects of the teeth hard tissues, which will contribute to the objectification of differentiated diagnostic and therapeutic approaches in the dental treatment of various clinical variants of these defects' localization. The objective of the study is to develop an anatomical-functional classification for the differentiated estimation of hard tissue defects in the gingival part, as the basis for the application of differential diagnostic-therapeutic approaches to the dental treatment of hard tissue defects disposed in the gingival part of any tooth. Materials and methods: An examination of 48 patients with hard tissue defects located in the gingival part of a tooth was conducted. To assess the magnitude of gingival line destruction, a periodontal probe and X-ray examination were used. Results. As a result of the performed research, a classification of the gingival line defects of the hard tissues based on a numerical index was proposed. The value of this index is an integer equal to the distance, in millimeters, from the epithelial attachment to the bottom of the defect cavity. Conclusions. The proposed classification fills an obvious gap in academic representations of hard tissue defects located in the gingival part of any tooth. It also offers the prospect of consensus on differentiated diagnostic-therapeutic approaches in different clinical variants of location. This classification builds a methodological "bridge of continuity" between therapeutic and prosthetic dentistry in the field of treatment of the gingival line defects of dental hard tissues.

  20. Structured pigeonhole principle, search problems and hard tautologies

    Czech Academy of Sciences Publication Activity Database

    Krajíček, Jan

    2005-01-01

    Roč. 70, č. 2 (2005), s. 619-630 ISSN 0022-4812 R&D Projects: GA AV ČR(CZ) IAA1019401; GA MŠk(CZ) LN00A056 Institutional research plan: CEZ:AV0Z10190503 Keywords: proof complexity * pigeonhole principle * search problems Subject RIV: BA - General Mathematics Impact factor: 0.470, year: 2005

  1. The hardness test: a real mechanical test

    International Nuclear Information System (INIS)

    Rezakhanlou, R.

    1993-02-01

    During their service life, the mechanical properties of PWR components change. It is necessary to determine this evolution precisely, but it is not always possible to draw a sample of adequate size for the characterization. For this latter case we intend to calculate the stress-strain curve of a material from hardness test results, because the hardness test is appropriate for testing on site and does not require any particular sample shape. This paper is the first, bibliographical part of a larger study on the relation between the values measured during a hardness test (applied load, indentation diameter) and the mechanical properties of a solid obtained by a tensile test. We have treated the problem within the general setting of two solids in contact. Thus, we present general elastic, elasto-plastic and plastic models describing the indentation of a solid by a rigid indenter.

  2. On the Computation of the Efficient Frontier of the Portfolio Selection Problem

    Directory of Open Access Journals (Sweden)

    Clara Calvo

    2012-01-01

    Full Text Available An easy-to-use procedure is presented for improving the ε-constraint method for computing the efficient frontier of the portfolio selection problem endowed with additional cardinality and semicontinuous variable constraints. The proposed method provides not only a numerical plotting of the frontier but also an analytical description of it, including the explicit equations of the arcs of parabola it comprises and the change points between them. This information is useful for performing a sensitivity analysis as well as for providing additional criteria to the investor in order to select an efficient portfolio. Computational results are provided to test the efficiency of the algorithm and to illustrate its applications. The procedure has been implemented in Mathematica.
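
    The analytical construction of the frontier is the paper's contribution and is not reproduced here; the plain numerical ε-constraint sweep it improves on can, however, be sketched in a few lines. In Python with SciPy (a stand-in for the paper's Mathematica implementation; instance data are assumed, and the cardinality/semicontinuous constraints are omitted):

```python
import numpy as np
from scipy.optimize import minimize

# Numerical epsilon-constraint sweep for a plain mean-variance model.
mu = np.array([0.08, 0.12, 0.10, 0.15])          # expected returns (assumed)
Sigma = np.array([[0.10, 0.02, 0.01, 0.00],
                  [0.02, 0.12, 0.03, 0.01],
                  [0.01, 0.03, 0.11, 0.02],
                  [0.00, 0.01, 0.02, 0.20]])     # covariance matrix (assumed)

def frontier_point(eps):
    """Minimize portfolio variance subject to expected return >= eps."""
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "ineq", "fun": lambda w: w @ mu - eps}]
    res = minimize(lambda w: w @ Sigma @ w, x0=np.full(4, 0.25),
                   bounds=[(0, 1)] * 4, constraints=cons, method="SLSQP")
    return res.x @ mu, np.sqrt(res.fun)          # (return, risk)

for eps in np.linspace(0.08, 0.15, 8):
    r, s = frontier_point(eps)
    print(f"target {eps:.3f}: return {r:.3f}, risk {s:.3f}")
```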

  3. ALE-PSO: An Adaptive Swarm Algorithm to Solve Design Problems of Laminates

    Directory of Open Access Journals (Sweden)

    Paolo Vannucci

    2009-04-01

    Full Text Available This paper presents an adaptive PSO algorithm whose numerical parameters can be updated following a scheduled protocol respecting some known criteria of convergence, in order to enhance the chances of reaching the global optimum of a hard combinatorial optimization problem, such as those encountered in global optimization problems of composite laminates. Some examples concerning hard design problems are provided, showing the effectiveness of the approach.

  4. Remember Hard but Think Softly: Metaphorical Effects of Hardness/Softness on Cognitive Functions

    Directory of Open Access Journals (Sweden)

    Jiushu Xie

    2016-09-01

    Full Text Available Previous studies have found that bodily stimulation, such as hardness, biases social judgment and evaluation via metaphorical association; however, it remains unclear whether bodily stimulation also affects cognitive functions, such as memory and creativity. The current study used the metaphorical associations between hard and rigid and between soft and flexible in Chinese to investigate whether the experience of hardness affects cognitive functions requiring either rigidity (memory) or flexibility (creativity). In Experiment 1, we found that Chinese-speaking participants performed better at recalling previously memorized words while sitting on a hard-surface stool (the hard condition) than on a cushioned one (the soft condition). In Experiment 2, participants sitting on a cushioned stool outperformed those sitting on a hard-surface stool on a Chinese riddle task, which required creative/flexible thinking, but not on an analogical reasoning task, which required both rigid and flexible thinking. The results suggest the hardness experience affects cognitive functions that are metaphorically associated with rigidity and flexibility. They support the embodiment proposition that cognitive functions and representations can be grounded via metaphorical association in bodily states.

  5. ANALYSIS AND PERFORMANCE MEASUREMENT OF EXISTING SOLUTION METHODS OF QUADRATIC ASSIGNMENT PROBLEM

    Directory of Open Access Journals (Sweden)

    Morteza KARAMI

    2014-01-01

    Full Text Available Quadratic Assignment Problem (QAP is known as one of the most difficult combinatorial optimization problems that is classified in the category of NP-hard problems. Quadratic Assignment Problem Library (QAPLIB is a full database of QAPs which contains several problems from different authors and different sizes. Many exact and meta-heuristic solution methods have been introduced to solve QAP. In this study we focus on previously introduced solution methods of QAP e.g. Branch and Bound (B&B, Simulated Annealing (SA Algorithm, Greedy Randomized Adaptive Search Procedure (GRASP for dense and sparse QAPs. The codes of FORTRAN for these methods were downloaded from QAPLIB. All problems of QAPLIB were solved by the abovementioned methods. Several results were obtained from the computational experiments part. The Results show that the Branch and Bound method is able to introduce a feasible solution for all problems while Simulated Annealing Algorithm and GRASP methods are not able to find any solution for some problems. On the other hand, Simulated Annealing and GRASP methods have shorter run time comparing to the Branch and Bound method. In addition, the performance of the methods on the objective function value is discussed.

  6. Asymmetric continuous-time neural networks without local traps for solving constraint satisfaction problems.

    Directory of Open Access Journals (Sweden)

    Botond Molnár

    Full Text Available There has been a long history of using neural networks for combinatorial optimization and constraint satisfaction problems. Symmetric Hopfield networks and similar approaches use steepest descent dynamics, and they always converge to the closest local minimum of the energy landscape. For finding global minima, additional parameter-sensitive techniques are used, such as classical simulated annealing or the so-called chaotic simulated annealing, which induces chaotic dynamics by the addition of extra terms to the energy landscape. Here we show that asymmetric continuous-time neural networks can solve constraint satisfaction problems without getting trapped in non-solution attractors. We concentrate on a model solving Boolean satisfiability (k-SAT), which is a quintessential NP-complete problem. There is a one-to-one correspondence between the stable fixed points of the neural network and the k-SAT solutions, and we present numerical evidence that limit cycles may also be avoided by appropriately choosing the parameters of the model. This optimal parameter region is fairly independent of the size and hardness of instances, so parameters can be chosen independently of the properties of problems and no tuning is required during the dynamical process. The model is similar to the cellular neural networks already used in CNN computers. On an analog device, solving a SAT problem would take a single operation: the connection weights are determined by the k-SAT instance, and starting from any initial condition the system searches until it finds a solution. In this new approach, transient chaotic behavior appears as a natural consequence of optimization hardness and not as an externally induced effect.

  7. Radiation hardness qualification of PbWO4 scintillation crystals for the CMS Electromagnetic Calorimeter

    CERN Document Server

    Adzic, P.; Andelin, D.; Anicin, I.; Antunovic, Z.; Arcidiacono, R.; Arenton, M.W.; Auffray, E.; Argiro, S.; Askew, A.; Baccaro, S.; Baffioni, S.; Balazs, M.; Bandurin, D.; Barney, D.; Barone, L.M.; Bartoloni, A.; Baty, C.; Beauceron, S.; Bell, K.W.; Bernet, C.; Besancon, M.; Betev, B.; Beuselinck, R.; Biino, C.; Blaha, J.; Bloch, P.; Borisevitch, A.; Bornheim, A.; Bourotte, J.; Brown, R.M.; Buehler, M.; Busson, P.; Camanzi, B.; Camporesi, T.; Cartiglia, N.; Cavallari, F.; Cecilia, A.; Chang, P.; Chang, Y.H.; Charlot, C.; Chen, E.A.; Chen, W.T.; Chen, Z.; Chipaux, R.; Choudhary, B.C.; Choudhury, R.K.; Cockerill, D.J.A.; Conetti, S.; Cooper, S.I.; Cossutti, F.; Cox, B.; Cussans, D.G.; Dafinei, I.; Da Silva Di Calafiori, D.R.; Daskalakis, G.; David, A.; Deiters, K.; Dejardin, M.; De Benedetti, A.; Della Ricca, G.; Del Re, D.; Denegri, D.; Depasse, P.; Descamps, J.; Diemoz, M.; Di Marco, E.; Dissertori, G.; Dittmar, M.; Djambazov, L.; Djordjevic, M.; Dobrzynski, L.; Dolgopolov, A.; Drndarevic, S.; Drobychev, G.; Dutta, D.; Dzelalija, M.; Elliott-Peisert, A.; El Mamouni, H.; Evangelou, I.; Fabbro, B.; Faure, J.L.; Fay, J.; Fedorov, A.; Ferri, F.; Franci, D.; Franzoni, G.; Freudenreich, K.; Funk, W.; Ganjour, S.; Gascon, S.; Gataullin, M.; Gentit, F.X.; Ghezzi, A.; Givernaud, A.; Gninenko, S.; Go, A.; Gobbo, B.; Godinovic, N.; Golubev, N.; Govoni, P.; Grant, N.; Gras, P.; Haguenauer, M.; Hamel de Monchenault, G.; Hansen, M.; Haupt, J.; Heath, H.F.; Heltsley, B.; Cornell U., LNS.; Hintz, W.; Hirosky, R.; Hobson, P.R.; Honma, A.; Hou, G.W.S.; Hsiung, Y.; Huhtinen, M.; Ille, B.; Ingram, Q.; Inyakin, A.; Jarry, P.; Jessop, C.; Jovanovic, D.; Kaadze, K.; Kachanov, V.; Kailas, S.; Kataria, S.K.; Kennedy, B.W.; Kokkas, P.; Kolberg, T.; Korjik, M.; Krasnikov, N.; Krpic, D.; Kubota, Y.; Kuo, C.M.; Kyberd, P.; Kyriakis, A.; Lebeau, M.; Lecomte, P.; Lecoq, P.; Ledovskoy, A.; Lethuillier, M.; Lin, S.W.; Lin, W.; Litvine, V.; Locci, E.; Longo, E.; Loukas, D.; Luckey, P.D.; Lustermann, W.; Ma, Y.; Malberti, M.; Malcles, J.; Maletic, D.; Manthos, N.; Maravin, Y.; Marchica, C.; Marinelli, N.; Markou, A.; Markou, C.; Marone, M.; Matveev, V.; Mavrommatis, C.; Meridiani, P.; Milenovic, P.; Mine, P.; Missevitch, O.; Mohanty, A.K.; Moortgat, F.; Musella, P.; Musienko, Y.; Nardulli, A.; Nash, J.; Nedelec, P.; Negri, P.; Newman, H.B.; Nikitenko, A.; Nessi-Tedaldi, F.; Obertino, M.M.; Organtini, G.; Orimoto, T.; Paganoni, M.; Paganini, P.; Palma, A.; Pant, L.; Papadakis, A.; Papadakis, I.; Papadopoulos, I.; Paramatti, R.; Parracho, P.; Pastrone, N.; Patterson, J.R.; Pauss, F.; Peigneux, J.P.; Petrakou, E.; Phillips, D.G.; Piroue, P.; Ptochos, F.; Puljak, I.; Pullia, A.; Punz, T.; Puzovic, J.; Ragazzi, S.; Rahatlou, S.; Rander, J.; Razis, P.A.; Redaelli, N.; Renker, D.; Reucroft, S.; Ribeiro, P.; Rogan, C.; Ronquest, M.; Rosowsky, A.; Rovelli, C.; Rumerio, P.; Rusack, R.; Rusakov, S.V.; Ryan, M.J.; Sala, L.; Salerno, R.; Schneegans, M.; Seez, C.; Sharp, P.; Shepherd-Themistocleous, C.H.; Shiu, J.G.; Shivpuri, R.K.; Shukla, P.; Siamitros, C.; Sillou, D.; Silva, J.; Silva, P.; Singovsky, A.; Sirois, Y.; Sirunyan, A.; Smith, V.J.; Stockli, F.; Swain, J.; Tabarelli de Fatis, T.; Takahashi, M.; Tancini, V.; Teller, O.; Theofilatos, K.; Thiebaux, C.; Timciuc, V.; Timlin, C.; Titov, Maxim P.; Topkar, A.; Triantis, F.A.; Troshin, S.; Tyurin, N.; Ueno, K.; Uzunian, A.; Varela, J.; Verrecchia, P.; Veverka, J.; Virdee, T.; Wang, M.; Wardrope, D.; Weber, M.; Weng, J.; Williams, J.H.; Yang, Y.; Yaselli, I.; Yohay, R.; Zabi, A.; 
Zelepoukine, S.; Zhang, J.; Zhang, L.Y.; Zhu, K.; Zhu, R.Y.

    2010-01-01

    Ensuring the radiation hardness of PbWO4 crystals was one of the main priorities during the construction of the electromagnetic calorimeter of the CMS experiment at CERN. The production on an industrial scale of radiation hard crystals and their certification over a period of several years represented a difficult challenge both for CMS and for the crystal suppliers. The present article reviews the related scientific and technological problems encountered.

  8. Quantum control with noisy fields: computational complexity versus sensitivity to noise

    International Nuclear Information System (INIS)

    Kallush, S; Khasin, M; Kosloff, R

    2014-01-01

    A closed quantum system is defined as completely controllable if an arbitrary unitary transformation can be executed using the available controls. In practice, control fields are a source of unavoidable noise, which has to be suppressed to retain controllability. Can one design control fields such that the effect of noise is negligible on the time-scale of the transformation? This question is intimately related to the fundamental problem of a connection between the computational complexity of the control problem and the sensitivity of the controlled system to noise. The present study considers a paradigm of control where the Lie-algebraic structure of the control Hamiltonian is fixed, while the size of the system increases with the dimension of the Hilbert space representation of the algebra. We find two types of control tasks, easy and hard. Easy tasks are characterized by a small variance of the evolving state with respect to the control operators. They are relatively immune to noise and the control field is easy to find. Hard tasks have a large variance, are sensitive to noise and the control field is hard to find. The influence of noise increases with the size of the system, which is measured by the scaling factor N of the largest weight of the representation. For fixed time and control field, the ability to control degrades as O(N) for easy tasks and as O(N^2) for hard tasks. As a consequence, even in the most favorable estimate, for large quantum systems generic noise in the controls dominates for a typical class of target transformations, i.e. complete controllability is destroyed by noise. (paper)

  9. Improved Fractal Space Filling Curves Hybrid Optimization Algorithm for Vehicle Routing Problem.

    Science.gov (United States)

    Yue, Yi-xiang; Zhang, Tong; Yue, Qun-xing

    2015-01-01

    Vehicle Routing Problem (VRP) is one of the key issues in the optimization of modern logistics systems. In this paper, a modified VRP model with hard time windows is established and a Hybrid Optimization Algorithm (HOA) based on the Fractal Space Filling Curves (SFC) method and a Genetic Algorithm (GA) is introduced. In the proposed algorithm, the SFC method finds an initial and feasible solution very fast, and the GA is used to improve this initial solution. Thereafter, experimental software was developed and a large number of experimental computations on Solomon's benchmark instances were studied. The experimental results demonstrate the feasibility and effectiveness of the HOA.
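
    The abstract does not give the SFC construction; a common choice, assumed here, is the Hilbert curve. The Python sketch below maps customer coordinates to Hilbert indices and sorts them, which yields the kind of fast, feasible initial tour the HOA starts from (the GA improvement step and the time windows are omitted):

```python
def hilbert_index(order, x, y):
    """Position of grid cell (x, y) along a Hilbert curve covering a
    2**order x 2**order grid (standard bit-twiddling construction)."""
    d = 0
    s = 2 ** (order - 1)
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                       # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Sorting customers by curve position gives a quick giant tour that can be
# split into vehicle routes and then improved. Coordinates are assumed to be
# quantized onto the 16 x 16 grid of an order-4 curve.
customers = [(3, 7), (12, 4), (8, 8), (1, 1), (14, 13), (6, 2)]
tour = sorted(customers, key=lambda p: hilbert_index(4, p[0], p[1]))
print(tour)
```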

  10. Software Defined Resource Orchestration System for Multitask Application in Heterogeneous Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Qi Qi

    2016-01-01

    Full Text Available The mobile cloud computing (MCC that combines mobile computing and cloud concept takes wireless access network as the transmission medium and uses mobile devices as the client. When offloading the complicated multitask application to the MCC environment, each task executes individually in terms of its own computation, storage, and bandwidth requirement. Due to user’s mobility, the provided resources contain different performance metrics that may affect the destination choice. Nevertheless, these heterogeneous MCC resources lack integrated management and can hardly cooperate with each other. Thus, how to choose the appropriate offload destination and orchestrate the resources for multitask is a challenge problem. This paper realizes a programming resource provision for heterogeneous energy-constrained computing environments, where a software defined controller is responsible for resource orchestration, offload, and migration. The resource orchestration is formulated as multiobjective optimal problem that contains the metrics of energy consumption, cost, and availability. Finally, a particle swarm algorithm is used to obtain the approximate optimal solutions. Simulation results show that the solutions for all of our studied cases almost can hit Pareto optimum and surpass the comparative algorithm in approximation, coverage, and execution time.

  11. Structure problems in the analog computation

    International Nuclear Information System (INIS)

    Braffort, P.L.

    1957-01-01

    Recent mathematical developments have shown the importance of the elementary structures (algebraic, topological, etc.) underlying the great domains of classical analysis. Such structures in analog computation are put in evidence, and possible developments of applied mathematics are discussed. The topological structures of the standard representation of analog schemes, such as adder triangles, integrators, phase inverters and function generators, are also studied. The analog method gives as the result of its computations only functions of the variable time. But the course of computation, for systems including reactive circuits, introduces order structures which are called 'chronological'. Finally, it is shown that the approximation methods of ordinary numerical and digital computation present the same structure as analog computation. The structure analysis permits fruitful comparisons between the several domains of applied mathematics and suggests new important domains of application for the analog method. (M.P.)

  12. Computer-assisted acoustic emission analysis in alternating current magnetization and hardness testing of reactor pressure vessel steels

    International Nuclear Information System (INIS)

    Blochwitz, M.; Kretzschmar, F.; Rattke, R.

    1985-01-01

    Non-destructive determination of material characteristics such as nilductility transition temperature is of high importance in component monitoring during long-term operation. An attempt has been made to obtain characteristics correlating with mechanico-technological material characteristics by both acoustic resonance through magnetization (ARDM) and acoustic emission analysis in Vickers hardness tests. Taking into account the excitation mechanism of acoustic emission generation, which has a quasistationary stochastic character in a.c. magnetization and a transient nature in hardness testing, a microcomputerized device has been constructed for frequency analysis of the body sound level in ARDM evaluation and for measuring the pulse sum and/or pulse rate during indentation of the test specimen in hardness evaluation. Prerequisite for evaluating the measured values is the knowledge of the frequency dependence of the sensors and the instrument system. The results obtained are presented. (author)

  13. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle–Pock algorithm

    DEFF Research Database (Denmark)

    Sidky, Emil Y.; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2012-01-01

    The primal–dual optimization algorithm developed in Chambolle and Pock (CP) (2011 J. Math. Imag. Vis. 40 1–26) is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal–dual algorithm is briefly summarized in this paper, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application...
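
    The CP template itself is compact enough to sketch away from CT. The toy below (our construction, with simulated data) instantiates it for min_x 0.5‖Ax − b‖² + λ‖x‖₁, for which both proximal maps are closed-form, using the usual step-size rule στ‖A‖² < 1 and extrapolation θ = 1.

```python
import numpy as np

# Chambolle-Pock for min_x 0.5*||A x - b||^2 + lam*||x||_1 (sparse recovery),
# a toy stand-in for the CT instances derived in the paper.
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 8, replace=False)] = 1.0
b = A @ x_true
lam = 0.1

L = np.linalg.norm(A, 2)              # spectral norm of A
sigma = tau = 0.99 / L                # step sizes: sigma * tau * L**2 < 1
x = np.zeros(100); x_bar = x.copy(); y = np.zeros(60)

for _ in range(2000):
    # Dual step: prox of sigma*F* for F(z) = 0.5*||z - b||^2.
    y = (y + sigma * (A @ x_bar - b)) / (1.0 + sigma)
    # Primal step: prox of tau*G for G(x) = lam*||x||_1 (soft-thresholding).
    x_new = x - tau * (A.T @ y)
    x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - tau * lam, 0.0)
    x_bar = 2 * x_new - x             # extrapolation with theta = 1
    x = x_new

print("residual norm:", np.linalg.norm(A @ x - b))
print("indices with |x| > 0.1:", np.nonzero(np.abs(x) > 0.1)[0])
```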

  14. Peer Support for the Hardly Reached: A Systematic Review.

    Science.gov (United States)

    Sokol, Rebeccah; Fisher, Edwin

    2016-07-01

    Health disparities are aggravated when prevention and care initiatives fail to reach those they are intended to help. Groups can be classified as hardly reached according to a variety of circumstances that fall into 3 domains: individual (e.g., psychological factors), demographic (e.g., socioeconomic status), and cultural-environmental (e.g., social network). Several reports have indicated that peer support is an effective means of reaching hardly reached individuals. However, no review has explored peer support effectiveness in relation to the circumstances associated with being hardly reached or across diverse health problems. To conduct a systematic review assessing the reach and effectiveness of peer support among hardly reached individuals, as well as peer support strategies used. Three systematic searches conducted in PubMed identified studies that evaluated peer support programs among hardly reached individuals. In aggregate, the searches covered articles published from 2000 to 2015. Eligible interventions provided ongoing support for complex health behaviors, including prioritization of hardly reached populations, assistance in applying behavior change plans, and social-emotional support directed toward disease management or quality of life. Studies were excluded if they addressed temporally isolated behaviors, were limited to protocol group classes, included peer support as the dependent variable, did not include statistical tests of significance, or incorporated comparison conditions that provided appreciable social support. We abstracted data regarding the primary health topic, categorizations of hardly reached groups, program reach, outcomes, and strategies employed. We conducted a 2-sample t test to determine whether reported strategies were related to reach. Forty-seven studies met our inclusion criteria, and these studies represented each of the 3 domains of circumstances assessed (individual, demographic, and cultural-environmental). Interventions

  15. On Computing Breakpoint Distances for Genomes with Duplicate Genes.

    Science.gov (United States)

    Shao, Mingfu; Moret, Bernard M E

    2017-06-01

    A fundamental problem in comparative genomics is to compute the distance between two genomes in terms of its higher level organization (given by genes or syntenic blocks). For two genomes without duplicate genes, we can easily define (and almost always efficiently compute) a variety of distance measures, but the problem is NP-hard under most models when genomes contain duplicate genes. To tackle duplicate genes, three formulations (exemplar, maximum matching, and any matching) have been proposed, all of which aim to build a matching between homologous genes so as to minimize some distance measure. Of the many distance measures, the breakpoint distance (the number of nonconserved adjacencies) was the first one to be studied and remains of significant interest because of its simplicity and model-free property. The three breakpoint distance problems corresponding to the three formulations have been widely studied. Although we provided last year a solution for the exemplar problem that runs very fast on full genomes, computing optimal solutions for the other two problems has remained challenging. In this article, we describe very fast, exact algorithms for these two problems. Our algorithms rely on a compact integer-linear program that we further simplify by developing an algorithm to remove variables, based on new results on the structure of adjacencies and matchings. Through extensive experiments using both simulations and biological data sets, we show that our algorithms run very fast (in seconds) on mammalian genomes and scale well beyond. We also apply these algorithms (as well as the classic orthology tool MSOAR) to create orthology assignment, then compare their quality in terms of both accuracy and coverage. We find that our algorithm for the "any matching" formulation significantly outperforms other methods in terms of accuracy while achieving nearly maximum coverage.
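
    For genomes without duplicate genes, the breakpoint distance itself is straightforward; the hardness enters only when a matching between duplicates must be chosen. A minimal Python sketch of the duplicate-free base case on signed gene orders (the sign convention below is an assumption):

```python
def breakpoint_distance(g1, g2):
    """Breakpoint distance between two genomes given as signed gene orders
    *without duplicates*. With duplicate genes, choosing the gene matching
    that minimizes this count is the NP-hard problem discussed above."""
    def adjacencies(genome):
        adj = set()
        for a, b in zip(genome, genome[1:]):
            adj.add((a, b))
            adj.add((-b, -a))    # same adjacency read on the other strand
        return adj
    a2 = adjacencies(g2)
    n_conserved = sum(1 for a, b in zip(g1, g1[1:]) if (a, b) in a2)
    return (len(g1) - 1) - n_conserved

# Inverting the segment (2, 3) creates two breakpoints.
print(breakpoint_distance([1, 2, 3, 4, 5], [1, -3, -2, 4, 5]))   # -> 2
```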

  16. Thick and large area PIN diodes for hard X-ray astronomy

    CERN Document Server

    Ota, N; Sugizaki, M; Kaneda, M; Tamura, T; Ozawa, H; Kamae, T; Makishima, K; Takahashi, T; Tashiro, M; Fukazawa, Y; Kataoka, J; Yamaoka, K; Kubo, S; Tanihata, C; Uchiyama, Y; Matsuzaki, K; Iyomoto, N; Kokubun, M; Nakazawa, T; Kubota, A; Mizuno, T; Matsumoto, Y; Isobe, N; Terada, Y; Sugiho, M; Onishi, T; Kubo, H; Ikeda, H; Nomachi, M; Ohsugi, T; Muramatsu, M; Akahori, H

    1999-01-01

    Thick and large area PIN diodes for the hard X-ray astronomy in the 10-60 keV range are developed. To cover this energy range in a room temperature and in a low background environment, Si PIN junction diodes of 2 mm in thickness with 2.5 cm sup 2 in effective area were developed, and will be used in the bottom of the Phoswich Hard X-ray Detector (HXD), on-board the ASTRO-E satellite. Problems related to a high purity Si and a thick depletion layer during our development and performance of the PIN diodes are presented in detail.

  17. Initialization and Restart in Stochastic Local Search: Computing a Most Probable Explanation in Bayesian Networks

    Science.gov (United States)

    Mengshoel, Ole J.; Wilkins, David C.; Roth, Dan

    2010-01-01

    For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform at random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.
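
    The two levers studied here (initialization and restart) sit naturally in a generic stochastic local search skeleton. The Python sketch below abstracts the Stochastic Greedy Search loop; the paper's Viterbi-based initialization for Bayesian networks is hidden behind the init callback, and the noise/greedy mix is a standard, assumed scheme rather than the authors' exact one.

```python
import random

def stochastic_greedy_search(neighbors, score, init, n_restarts=20,
                             max_flips=1000, noise=0.1, seed=0):
    """Generic stochastic local search with restarts (maximizes `score`)."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_restarts):                 # restart loop
        state = init(rng)                       # (smart) initialization
        for _ in range(max_flips):
            cands = neighbors(state)
            if rng.random() < noise:            # noise step: random neighbor
                state = rng.choice(cands)
            else:                               # greedy step: best neighbor
                state = max(cands, key=score)
            if score(state) > best_score:
                best, best_score = state, score(state)
    return best, best_score
```

    Any concrete use supplies a neighborhood function, a scoring function (for MPE computation, the log-probability of an explanation) and an initializer; the restart budget and noise probability are exactly the knobs whose tuning the paper studies.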

  18. An efficient genetic algorithm for a hybrid flow shop scheduling problem with time lags and sequence-dependent setup time

    Directory of Open Access Journals (Sweden)

    Farahmand-Mehr Mohammad

    2014-01-01

    Full Text Available In this paper, a hybrid flow shop scheduling problem is presented with a new approach considering time lags and sequence-dependent setup times in realistic situations. Since few works have been carried out in this field, the necessity of finding better solutions is a motivation to extend heuristic or meta-heuristic algorithms. This type of production system is found in industries such as food processing, chemical, textile, metallurgical, printed circuit board, and automobile manufacturing. A mixed integer linear programming (MILP) model is proposed to minimize the makespan. Since this problem is known to be NP-hard, a meta-heuristic algorithm, named Genetic Algorithm (GA), and three heuristic algorithms (Johnson, SPTCH and Palmer) are proposed. Numerical experiments of different sizes are implemented to evaluate the performance of the presented mathematical programming model and the designed GA in comparison to the heuristic algorithms and a benchmark algorithm. Computational results indicate that the designed GA can produce near optimal solutions in a short computational time for problems of different sizes.
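
    A permutation GA of the kind designed in the paper can be skeletonized as follows. This is a sketch only: the job order is evolved with order crossover and swap mutation, while the paper's chromosome additionally covers machine assignment, time lags and setups; the makespan is any user-supplied evaluator.

```python
import random

def order_crossover(p1, p2, rng):
    """OX: copy a slice from parent 1, fill the rest in parent-2 order,
    so the offspring remains a valid permutation of jobs."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in child[i:j]]
    k = 0
    for pos in list(range(0, i)) + list(range(j, n)):
        child[pos] = fill[k]; k += 1
    return child

def genetic_algorithm(makespan, n_jobs, pop=40, gens=200, pmut=0.2, seed=0):
    """Minimize makespan(perm) over permutations via elitist survival."""
    rng = random.Random(seed)
    population = [rng.sample(range(n_jobs), n_jobs) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=makespan)
        survivors = population[:pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            c = order_crossover(a, b, rng)
            if rng.random() < pmut:              # swap mutation
                i, j = rng.sample(range(n_jobs), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        population = survivors + children
    return min(population, key=makespan)

# Toy usage: weighted-completion cost as a stand-in for a real evaluator.
times = [4, 2, 7, 3, 5, 1]
cost = lambda perm: sum((i + 1) * times[j] for i, j in enumerate(perm))
print(genetic_algorithm(cost, n_jobs=6))
```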

  19. Heuristic Method for Decision-Making in Common Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Edyta Kucharska

    2017-10-01

    Full Text Available The aim of the paper is to present a heuristic method for decision-making regarding an NP-hard scheduling problem with limitations related to tasks and resources that depend on the current state of the process. The presented approach is based on the algebraic-logical meta-model (ALMM), which enables making collective decisions in successive process stages, not separately for individual objects or executors. Moreover, taking into account the limitations of the problem, it involves constructing only an acceptable solution and significantly reduces the amount of calculation. A general algorithm based on the presented method is composed of the following elements: preliminary analysis of the problem, techniques for the choice of decision at a given state, pruning of non-promising trajectories, a technique for selecting the initial state of the final part of the trajectory, and modification of the trajectory generation parameters. The paper includes applications of the presented approach to scheduling problems on unrelated parallel machines with a deadline and machine setup times dependent on the process state, where the relationship between tasks is defined by a graph. The article also presents the results of computational experiments.

  20. A minimum resource neural network framework for solving multiconstraint shortest path problems.

    Science.gov (United States)

    Zhang, Junying; Zhao, Xiaoxue; He, Xiaotao

    2014-08-01

    Characterized by using minimal hard (structural) and soft (computational) resources, a novel parameter-free minimal resource neural network (MRNN) framework is proposed for solving a wide range of single-source shortest path (SP) problems for various graph types. The problems are the k-shortest time path problems with any combination of three constraints: time, hop, and label constraints, and the graphs can be directed, undirected, or bidirected with symmetric and/or asymmetric traversal times, which can be real-valued and time dependent. Isomorphic to the graph where the SP is to be sought, the network is activated by generating an autowave at the source neuron, and the autowave travels automatically along the paths at a speed of one hop per iteration. Properties of the network are studied, algorithms are presented, and the computational complexity is analyzed. The framework guarantees globally optimal solutions of a series of problems during the iteration process of the network, which provides insight even when the SP is still too long to be satisfactory. The network facilitates very large scale integrated circuit implementation and adapts to very large scale problems due to its massively parallel processing and minimal resource utilization. When implemented on a sequentially processing computer, experiments on synthetic graphs, road maps of cities of the USA, and vehicle routing with time windows indicate that the MRNN is especially efficient for large scale sparse graphs and even dense graphs with some constraints; e.g., the CPU time taken and the number of iterations used for the road maps of cities of the USA are even less than ∼2% and 0.5%, respectively, of those of Dijkstra's algorithm.
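
    On a sequential computer, the hop-constrained core of the autowave dynamics degenerates to breadth-first search: the iteration at which a neuron first fires is its hop distance from the source. A minimal Python sketch of that base case (the weighted, time-dependent and multi-constraint machinery of the MRNN is not reproduced):

```python
from collections import deque

def autowave_shortest_hops(adj, source):
    """One autowave sweep: the wave advances one hop per iteration, so the
    first-firing iteration of each neuron equals its hop distance. On a
    sequential machine this is exactly breadth-first search."""
    dist = {source: 0}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in dist:          # neuron v fires for the first time
                dist[v] = dist[u] + 1
                frontier.append(v)
    return dist

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(autowave_shortest_hops(adj, 0))   # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```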

  1. Performance of clustering techniques for solving multi depot vehicle routing problem

    Directory of Open Access Journals (Sweden)

    Eliana M. Toro-Ocampo

    2016-01-01

    Full Text Available The vehicle routing problem considering multiple depots (MDVRP) is classified as NP-hard. The MDVRP determines simultaneously the routes of a set of vehicles and aims to meet a set of clients with known demand. The objective function of the problem is to minimize the total distance traveled by the routes, given that all customers must be served while considering capacity constraints on depots and vehicles. This paper presents a hybrid methodology that combines agglomerative clustering techniques, to generate initial solutions, with an iterated local search (ILS) algorithm to solve the problem. Although in previous studies clustering methods have been proposed as strategies to generate initial solutions, in this work the search is intensified on the information generated after applying the clustering technique. In addition, an extensive analysis of the performance of the techniques and their effect on the final solution is performed. The operation of the proposed methodology is feasible and effective for solving the problem with regard to the quality of the answers and the computational times obtained on instances from the literature.
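
    The cluster-first idea can be sketched compactly: assign customers to depots, route each cluster, then improve. The toy below uses nearest-depot assignment and a nearest-neighbor route; the paper uses agglomerative clustering and an ILS improvement phase, neither of which is reproduced, and coordinates are made up while capacities are ignored.

```python
import math

def cluster_by_nearest_depot(depots, customers):
    """Initial-solution sketch: each customer joins its nearest depot."""
    clusters = {i: [] for i in range(len(depots))}
    for c in customers:
        i = min(range(len(depots)), key=lambda k: math.dist(depots[k], c))
        clusters[i].append(c)
    return clusters

def nearest_neighbor_route(depot, cluster):
    """Greedy route inside one cluster, starting and ending at the depot."""
    route, left, cur = [], cluster[:], depot
    while left:
        nxt = min(left, key=lambda c: math.dist(cur, c))
        route.append(nxt); left.remove(nxt); cur = nxt
    return [depot] + route + [depot]

depots = [(0.0, 0.0), (10.0, 10.0)]
customers = [(1, 2), (2, 1), (9, 8), (8, 9), (5, 5)]
for i, cl in cluster_by_nearest_depot(depots, customers).items():
    print(nearest_neighbor_route(depots[i], cl))
```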

  2. Conference on "State of the Art in Global Optimization : Computational Methods and Applications"

    CERN Document Server

    Pardalos, P

    1996-01-01

    Optimization problems abound in most fields of science, engineering, and technology. In many of these problems it is necessary to compute the global optimum (or a good approximation) of a multivariable function. The variables that define the function to be optimized can be continuous and/or discrete and, in addition, many times satisfy certain constraints. Global optimization problems belong to the complexity class of NP-hard problems. Such problems are very difficult to solve. Traditional descent optimization algorithms based on local information are not adequate for solving these problems. In most cases of practical interest the number of local optima increases, on the average, exponentially with the size of the problem (number of variables). Furthermore, most of the traditional approaches fail to escape from a local optimum in order to continue the search for the global solution. Global optimization has received a lot of attention in the past ten years, due to the success of new algorithms for solvin...

  3. Enriching Elementary Quantum Mechanics with the Computer: Self-Consistent Field Problems in One Dimension

    Science.gov (United States)

    Bolemon, Jay S.; Etzold, David J.

    1974-01-01

    Discusses the use of a small computer to solve self-consistent field problems of one-dimensional systems of two or more interacting particles in an elementary quantum mechanics course. Indicates that the calculation can serve as a useful introduction to the iterative technique. (CC)
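
    The iterative idea described here can be reproduced on a modern machine in a few lines. The Python sketch below (our construction, not the article's program) treats two identical particles in a 1-D harmonic well with an assumed contact repulsion in mean field: each SCF cycle solves a single-particle Schrödinger equation in the potential created by the other particle's density, until the density stops changing.

```python
import numpy as np

n, L = 200, 10.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]
V_ext = 0.5 * x ** 2                 # harmonic trap (atomic-like units)
g = 2.0                              # interaction strength (assumed)

# Kinetic energy -0.5 d^2/dx^2 by three-point finite differences.
T = (np.diag(np.full(n, 1.0)) - 0.5 * np.diag(np.ones(n - 1), 1)
     - 0.5 * np.diag(np.ones(n - 1), -1)) / h ** 2

density = np.exp(-x ** 2)
density /= density.sum() * h         # initial density guess, normalized
for it in range(50):
    H = T + np.diag(V_ext + g * density)     # mean-field Hamiltonian
    eigvals, eigvecs = np.linalg.eigh(H)
    psi = eigvecs[:, 0] / np.sqrt(h)         # normalized ground state
    new_density = psi ** 2
    if np.max(np.abs(new_density - density)) < 1e-8:
        break                                # self-consistency reached
    density = 0.5 * density + 0.5 * new_density   # damped update
print("ground-state orbital energy:", eigvals[0])
```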

  4. Hard coal; Steinkohle

    Energy Technology Data Exchange (ETDEWEB)

    Loo, Kai van de; Sitte, Andreas-Peter [Gesamtverband Steinkohle e.V., Herne (Germany)

    2013-04-01

    The year 2012 saw growth in the consumption of hard coal at the national as well as the international level. Worldwide, hard coal is still the number one energy source for power generation, which leads to an increasing demand for power plant coal. The conversion of hard coal into electricity also increased in this year. In contrast, the demand for coking coal and for coke from the steel industry continued to decline owing to market conditions. The enhanced utilization of coal for domestic power generation is due to the reduction of nuclear power, a relatively bad year for wind power, as well as reduced import prices and low CO2 prices; together these gave coal a significant price advantage over the utilization of natural gas in power plants. This was mainly due to the price erosion of inexpensive US coal, which was partly displaced on its domestic market by the expansion of shale gas and consequently sought an outlet for sales in Europe. Domestic hard coal has continued the process of adaptation and phase-out as scheduled: two further hard coal mines were decommissioned in 2012. RAG Aktiengesellschaft (Herne, Federal Republic of Germany), which runs hard coal mining in this country, is beginning preparations for its activities after the end of mining.

  5. 30 CFR 75.1720-1 - Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced...

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Distinctively colored hard hats, or hard caps... STANDARDS-UNDERGROUND COAL MINES Miscellaneous § 75.1720-1 Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced miners. Hard hats or hard caps distinctively different in color...

  6. Optimizing Green Computing Awareness for Environmental Sustainability and Economic Security as a Stochastic Optimization Problem

    Directory of Open Access Journals (Sweden)

    Emmanuel Okewu

    2017-10-01

    Full Text Available The role of automation in sustainable development is not in doubt. Computerization in particular has permeated every facet of human endeavour, enhancing the provision of information for decision-making that reduces the cost of operations and promotes productivity, socioeconomic prosperity and cohesion. Hence, a new field called information and communication technology for development (ICT4D) has emerged. Nonetheless, the need to ensure environmentally friendly computing has motivated this research study, with particular focus on green computing in Africa. This is against the backdrop that the continent is feared to suffer most from the vulnerability of climate change and the impact of environmental risk. Using Nigeria as a test case, this paper gauges the green computing awareness level of Africans via a sample survey. It also attempts to institutionalize a green computing maturity model with a view to optimizing the level of citizens' awareness amid inherent uncertainties like low bandwidth, poor networks and erratic power in an emerging African market. Consequently, we classified the problem as a stochastic optimization problem and applied a metaheuristic search algorithm to determine the best sensitization strategy. Although there are alternative ways of promoting green computing education, the metaheuristic search we conducted indicated that an online real-time solution that not only drives but preserves timely conversations on electronic waste (e-waste) management and energy-saving techniques among the citizenry is cutting edge. The authors therefore reviewed the literature, gathered requirements, modelled the proposed solution using the Unified Modelling Language (UML) and developed a prototype. The proposed solution is a web-based multi-tier e-Green computing system that educates computer users on innovative techniques of managing computers and accessories in an environmentally friendly way. We found out that such a real-time web-based interactive forum does not
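
    To make the stochastic-optimization framing concrete, the sketch below runs simulated annealing over a handful of hypothetical sensitization strategies whose payoffs can only be observed with noise. The strategy names, payoffs and noise model are all invented for illustration; the paper's actual metaheuristic and data are not reproduced.

```python
import math
import random

rng = random.Random(42)
# Hypothetical strategies and their (unknown to the optimizer) payoffs.
true_payoff = {"sms": 0.55, "radio": 0.60, "web-forum": 0.80,
               "workshops": 0.70, "posters": 0.40}
strategies = list(true_payoff)

def sample_awareness(s):
    # Noisy observation of a strategy's effect (e.g., survey uplift).
    return true_payoff[s] + rng.gauss(0, 0.05)

def anneal(iters=500, t0=0.2):
    cur = rng.choice(strategies)
    cur_val = sample_awareness(cur)
    best, best_val = cur, cur_val
    for k in range(iters):
        t = t0 * (1 - k / iters) + 1e-6          # cooling schedule
        cand = rng.choice(strategies)
        cand_val = sample_awareness(cand)
        # Metropolis rule: always take improvements, sometimes accept worse.
        if cand_val > cur_val or rng.random() < math.exp((cand_val - cur_val) / t):
            cur, cur_val = cand, cand_val
        if cur_val > best_val:
            best, best_val = cur, cur_val
    return best

print("selected strategy:", anneal())
```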

  7. 30 CFR 77.1710-1 - Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced...

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Distinctively colored hard hats or hard caps... Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced miners. Hard hats or hard caps distinctively different in color from those worn by experienced miners shall be worn at...

  8. Integrated Berth Allocation and Quay Crane Assignment Problem: Set partitioning models and computational results

    DEFF Research Database (Denmark)

    Iris, Cagatay; Pacino, Dario; Røpke, Stefan

    2015-01-01

    Most of the operational problems in container terminals are strongly interconnected. In this paper, we study the integrated Berth Allocation and Quay Crane Assignment Problem in seaport container terminals. We will extend the current state-of-the-art by proposing novel set partitioning models. To improve the performance of the set partitioning formulations, a number of variable reduction techniques are proposed. Furthermore, we analyze the effects of different discretization schemes and the impact of using a time-variant/invariant quay crane allocation policy. Computational experiments show...
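
    The flavor of a set partitioning formulation can be shown on a toy instance: every column is a feasible service pattern with a cost, and the model must cover each vessel exactly once at minimum total cost. The columns below are invented; a real model would enumerate berth/crane/time-feasible patterns and solve the resulting ILP with a MIP solver rather than by brute force.

```python
from itertools import combinations

vessels = {0, 1, 2, 3}
# Each column: (set of vessels the pattern serves, cost of the pattern).
columns = [({0}, 5), ({1}, 4), ({2}, 6), ({3}, 3),
           ({0, 1}, 7), ({2, 3}, 8), ({1, 2}, 9)]

best_cost, best_sel = float("inf"), None
for k in range(1, len(columns) + 1):
    for sel in combinations(range(len(columns)), k):
        covered = [v for i in sel for v in columns[i][0]]
        # Partitioning constraint: every vessel covered exactly once.
        if sorted(covered) == sorted(vessels):
            cost = sum(columns[i][1] for i in sel)
            if cost < best_cost:
                best_cost, best_sel = cost, sel
print("optimal cost:", best_cost, "columns:", best_sel)  # 15, columns 4 and 5
```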

  9. Computational problems in Arctic Research

    International Nuclear Information System (INIS)

    Petrov, I

    2016-01-01

    This article surveys the main computational problems in the area of Arctic shelf seismic prospecting and the exploitation of the Northern Sea Route: simulation of the interaction of different ice formations (icebergs, hummocks, and drifting ice floes) with fixed ice-resistant platforms; simulation of the interaction of icebreakers and ice-class vessels with ice formations; modeling of the impact of ice formations on underground pipelines; neutralization of damage to fixed and mobile offshore industrial structures from ice formations; calculation of the strength of ground pipelines; transportation of hydrocarbons by pipeline; the problem of migration of large ice formations; modeling of the formation of ice hummocks on an ice-resistant stationary platform; calculation of the stability of fixed platforms; calculation of dynamic processes in the water and air of the Arctic, with processing of the data and its use to predict the dynamics of ice conditions; simulation of the formation of large icebergs, hummocks, and large ice platforms; calculation of ridging in the dynamics of sea ice; direct and inverse problems of seismic prospecting in the Arctic; direct and inverse problems of electromagnetic prospecting of the Arctic. All these problems can be solved by up-to-date numerical methods, for example, using the grid-characteristic method. (paper)

  10. Improved iterative image reconstruction algorithm for the exterior problem of computed tomography

    International Nuclear Information System (INIS)

    Guo, Yumeng; Zeng, Li

    2017-01-01

    In industrial applications that are limited by the angle of a fan-beam and the length of a detector, the exterior problem of computed tomography (CT) uses only the projection data that correspond to the external annulus of the objects to reconstruct an image. Because the reconstructions are not affected by the projection data that correspond to the interior of the objects, the exterior problem is widely applied to detect cracks in the outer wall of large-sized objects, such as in-service pipelines. However, image reconstruction in the exterior problem is still a challenging problem due to truncated projection data and beam-hardening, both of which can lead to distortions and artifacts. Thus, developing an effective algorithm and adopting a scanning trajectory suited for the exterior problem may be valuable. In this study, an improved iterative algorithm that combines total variation minimization (TVM) with a region scalable fitting (RSF) model was developed for a unilateral off-centered scanning trajectory and can be utilized to inspect large-sized objects for defects. Experiments involving simulated phantoms and real projection data were conducted to validate the practicality of our algorithm. Furthermore, comparative experiments show that our algorithm outperforms others in suppressing the artifacts caused by truncated projection data and beam-hardening.
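
    A hedged sketch of the total variation ingredient named above: alternate a Landweber data-consistency step with a TV subgradient step on a tiny 1D piecewise-constant phantom. The random system matrix stands in for real exterior-scan geometry, and the RSF coupling from the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
truth = np.zeros(n)
truth[20:28], truth[40:44] = 1.0, 0.5          # piecewise-constant phantom
A = rng.normal(size=(32, n)) / np.sqrt(n)      # underdetermined "ray" matrix
b = A @ truth                                  # noiseless measurements

x = np.zeros(n)
step, tv_weight = 0.5, 0.002
for _ in range(500):
    x += step * A.T @ (b - A @ x)              # Landweber data-consistency step
    grad = np.diff(x)                          # TV(x) = sum_i |x[i+1] - x[i]|
    tv_sub = np.zeros(n)                       # subgradient of TV
    tv_sub[:-1] -= np.sign(grad)
    tv_sub[1:] += np.sign(grad)
    x -= tv_weight * tv_sub                    # TV smoothing step
print("relative error:", np.linalg.norm(x - truth) / np.linalg.norm(truth))
```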

  11. Improved iterative image reconstruction algorithm for the exterior problem of computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yumeng [Chongqing University, College of Mathematics and Statistics, Chongqing 401331 (China); Chongqing University, ICT Research Center, Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing 400044 (China); Zeng, Li, E-mail: drlizeng@cqu.edu.cn [Chongqing University, College of Mathematics and Statistics, Chongqing 401331 (China); Chongqing University, ICT Research Center, Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing 400044 (China)

    2017-01-11

    In industrial applications that are limited by the angle of a fan-beam and the length of a detector, the exterior problem of computed tomography (CT) uses only the projection data that correspond to the external annulus of the objects to reconstruct an image. Because the reconstructions are not affected by the projection data that correspond to the interior of the objects, the exterior problem is widely applied to detect cracks in the outer wall of large-sized objects, such as in-service pipelines. However, image reconstruction in the exterior problem is still a challenging problem due to truncated projection data and beam-hardening, both of which can lead to distortions and artifacts. Thus, developing an effective algorithm and adopting a scanning trajectory suited for the exterior problem may be valuable. In this study, an improved iterative algorithm that combines total variation minimization (TVM) with a region scalable fitting (RSF) model was developed for a unilateral off-centered scanning trajectory and can be utilized to inspect large-sized objects for defects. Experiments involving simulated phantoms and real projection data were conducted to validate the practicality of our algorithm. Furthermore, comparative experiments show that our algorithm outperforms others in suppressing the artifacts caused by truncated projection data and beam-hardening.

  12. Modelling human hard palate shape with Bézier curves.

    Directory of Open Access Journals (Sweden)

    Rick Janssen

    Full Text Available People vary at most levels, from the molecular to the cognitive, and the shape of the hard palate (the bony roof of the mouth) is no exception. The patterns of variation in the hard palate are important for the forensic sciences and (palaeo)anthropology, and might also play a role in speech production, both in pathological cases and in normal variation. Here we describe a method based on Bézier curves, whose main aim is to generate possible shapes of the hard palate in humans for use in computer simulations of speech production and language evolution. Moreover, our method can also capture existing patterns of variation using few and easy-to-interpret parameters, and fits actual data obtained from MRI traces very well with as little as two or three free parameters. When compared to the widely-used Principal Component Analysis (PCA), our method fits actual data slightly worse for the same number of degrees of freedom. However, it is much better at generating new shapes without requiring a calibration sample, its parameters have clearer interpretations, and their ranges are grounded in geometrical considerations.
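
    A cubic Bézier curve with a couple of free parameters is enough to convey the idea. In the sketch below, the two interior control points encode an assumed "doming height" and "anterior slope"; this parametrization is an illustrative stand-in for the paper's, not a reproduction of it.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, num=100):
    # Bernstein form: B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3.
    t = np.linspace(0.0, 1.0, num)[:, None]
    return ((1 - t)**3 * p0 + 3 * (1 - t)**2 * t * p1
            + 3 * (1 - t) * t**2 * p2 + t**3 * p3)

def palate_profile(dome=1.0, slope=0.5, num=100):
    # Endpoints fixed (alveolar ridge to posterior palate); the two
    # interior control points carry the free parameters.
    p0 = np.array([0.0, 0.0])
    p1 = np.array([slope, dome])
    p2 = np.array([1.0 - slope, dome])
    p3 = np.array([1.0, 0.0])
    return cubic_bezier(p0, p1, p2, p3, num)

curve = palate_profile(dome=1.2, slope=0.35)
print(curve[:3])            # first few (x, y) points along the roof
```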

  13. Experimental realization of a one-way quantum computer algorithm solving Simon's problem.

    Science.gov (United States)

    Tame, M S; Bell, B A; Di Franco, C; Wadsworth, W J; Rarity, J G

    2014-11-14

    We report an experimental demonstration of a one-way implementation of a quantum algorithm solving Simon's problem, a black-box period-finding problem that has an exponential gap between the classical and quantum runtime. Using an all-optical setup and modifying the bases of single-qubit measurements on a five-qubit cluster state, key representative functions of the logical two-qubit version's black box can be queried and solved. To the best of our knowledge, this work represents the first experimental realization of the quantum algorithm solving Simon's problem. The experimental results are in excellent agreement with the theoretical model, demonstrating the successful performance of the algorithm. With a view to scaling up to larger numbers of qubits, we analyze the resource requirements for an n-qubit version. This work helps highlight how one-way quantum computing provides a practical route to experimentally investigating the quantum-classical gap in the query complexity model.
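
    For intuition about the gap exploited here, the sketch below builds a Simon oracle, a 2-to-1 function satisfying f(x) = f(x XOR s), and recovers the hidden string classically by brute-force collision search, which is exponential in n. The one-way quantum protocol of the paper is not simulated.

```python
import random

def make_simon_oracle(n, s, rng):
    # Assign each pair {x, x^s} a common random label, so f(x) = f(x^s).
    f, labels = {}, list(range(2**n))
    rng.shuffle(labels)
    for x in range(2**n):
        if x not in f:
            lab = labels.pop()
            f[x] = f[x ^ s] = lab
    return f

def find_hidden_string(f, n):
    seen = {}
    for x in range(2**n):            # exponential classical search
        if f[x] in seen:
            return x ^ seen[f[x]]    # a collision reveals s
        seen[f[x]] = x
    return 0                         # s = 0: f was injective

rng = random.Random(7)
n, s = 5, 0b10110
f = make_simon_oracle(n, s, rng)
print(bin(find_hidden_string(f, n)))  # -> 0b10110
```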

  14. Ising formulations of many NP problems

    OpenAIRE

    Lucas, Andrew

    2013-01-01

    We provide Ising formulations for many NP-complete and NP-hard problems, including all of Karp's 21 NP-complete problems. This collects and extends mappings to the Ising model from partitioning, covering and satisfiability. In each case, the required number of spins is at most cubic in the size of the problem. This work may be useful in designing adiabatic quantum optimization algorithms.
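
    One of the classic mappings collected in this paper, number partitioning, fits in a few lines: with spins s_i in {-1, +1}, the Ising energy H = (sum_i a_i s_i)^2 vanishes exactly on perfect partitions. The brute-force check below just verifies the mapping on a toy instance.

```python
from itertools import product

a = [4, 5, 6, 7, 8]                 # numbers to split into two equal halves

def ising_energy(spins):
    # H(s) = (sum_i a_i * s_i)^2; ground states with H = 0 are perfect partitions.
    return sum(ai * si for ai, si in zip(a, spins)) ** 2

best = min(product((-1, 1), repeat=len(a)), key=ising_energy)
left = [ai for ai, si in zip(a, best) if si == 1]
right = [ai for ai, si in zip(a, best) if si == -1]
print(best, "->", left, "vs", right, "energy", ising_energy(best))
```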

  15. On the complexity of the balanced vertex ordering problem

    Directory of Open Access Journals (Sweden)

    Jan Kara

    2007-01-01

    Full Text Available We consider the problem of finding a balanced ordering of the vertices of a graph. More precisely, we want to minimise the sum, taken over all vertices v, of the difference between the number of neighbours to the left and right of v. This problem, which has applications in graph drawing, was recently introduced by Biedl et al. [Discrete Applied Math. 148:27--48, 2005]. They proved that the problem is solvable in polynomial time for graphs with maximum degree three, but NP-hard for graphs with maximum degree six. One of our main results closes this gap, by proving NP-hardness for graphs with maximum degree four. Furthermore, we prove that the problem remains NP-hard for planar graphs with maximum degree four and for 5-regular graphs. On the other hand, we introduce a polynomial time algorithm that determines whether there is a vertex ordering with total imbalance smaller than a fixed constant, and a polynomial time algorithm that determines whether a given multigraph with even degrees has an `almost balanced' ordering.
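
    The objective is easy to state in code: for an ordering, each vertex contributes the absolute difference between its left and right neighbour counts. The brute force below is only viable for toy graphs, consistent with the NP-hardness results of this record.

```python
from itertools import permutations

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
nbrs = {v: set() for v in range(5)}
for u, v in edges:
    nbrs[u].add(v)
    nbrs[v].add(u)

def imbalance(order):
    # Sum over vertices of |#neighbours to the left - #neighbours to the right|.
    pos = {v: i for i, v in enumerate(order)}
    return sum(abs(sum(1 for w in nbrs[v] if pos[w] < pos[v])
                   - sum(1 for w in nbrs[v] if pos[w] > pos[v]))
               for v in order)

best = min(permutations(range(5)), key=imbalance)
print(best, "total imbalance:", imbalance(best))
```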

  16. Elastic energy loss and longitudinal straggling of a hard jet

    International Nuclear Information System (INIS)

    Majumder, A.

    2009-01-01

    The elastic energy loss encountered by jets produced in deep-inelastic scattering (DIS) off a large nucleus is studied in the collinear limit. In close analogy to the case of (nonradiative) transverse momentum broadening, which is dependent on the medium transport coefficient q̂, a class of medium-enhanced higher twist operators which contribute to the nonradiative loss of the forward light-cone momentum of the jet (q⁻) are identified and the leading correction in the limit of asymptotically high q⁻ is isolated. Based on these operator products, a new transport coefficient ê is motivated which quantifies the energy loss per unit length encountered by the hard jet. These operator products are then computed, explicitly, in the case of a similar hard jet traversing a deconfined quark-gluon plasma (QGP) in the hard-thermal-loop (HTL) approximation. This is followed by an evaluation of subleading contributions which are suppressed by the inverse light-cone momentum 1/q⁻, which yields the longitudinal 'straggling', i.e., a slight change in light-cone momentum due to Brownian propagation through a medium with a fluctuating color field.

  17. Applying Graph Theory to Problems in Air Traffic Management

    Science.gov (United States)

    Farrahi, Amir H.; Goldberg, Alan T.; Bagasol, Leonard N.; Jung, Jaewoo

    2017-01-01

    Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem, are NP-hard, in general, under several simple workload models. Second, using a polynomial time reduction from maximum independent set in graphs, it is shown that for any fixed ε > 0, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within a factor n^(1-ε) of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.
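
    The reachability formulation of the arrival-scheduling result can be illustrated with a made-up miniature: states are (aircraft scheduled so far, last slot used), edges are legal continuations, and a feasible schedule exists iff a goal state is reachable by BFS. The separation rule and slot horizon below are invented, not the paper's model.

```python
from collections import deque

def reachable(start, goal_test, successors):
    # Standard BFS over the implicit state graph.
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if goal_test(state):
            return True
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# State: (aircraft scheduled so far, last landing slot used); each
# successive landing must come at least 2 slots later, horizon 10.
def successors(state):
    i, t = state
    if i == 3:
        return []                              # all three aircraft landed
    return [(i + 1, u) for u in range(t + 2, 10)]

print(reachable((0, -2), lambda s: s[0] == 3, successors))  # True: feasible
```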

  18. Online Reading Practices of Students Who Are Deaf/Hard of Hearing

    Science.gov (United States)

    Donne, Vicki; Rugg, Natalie

    2015-01-01

    This study sought to investigate reading perceptions, computer use perceptions, and online reading comprehension strategy use of 26 students who are deaf/hard of hearing in grades 4 through 8 attending public school districts in a tri-state area of the U.S. Students completed an online questionnaire and descriptive analysis indicated that students…

  19. Temporal, spatial and substrate-dependent variations of Danish hard-bottom macrofauna

    DEFF Research Database (Denmark)

    Dahl, L.; Dahl, K.

    2002-01-01

    Detailed knowledge of the Danish hard-bottom fauna is at present limited because of sampling problems. In this study, two different sampling units were used to yield quantitative results of the fauna on two stone reefs in Kattegat: natural holdfasts of Laminaria digitata and plastic pan-scourers. ... on the Shannon-Wiener diversity index, and it showed a high degree of spatial and temporal variation. ANOSIM analyses showed a significant difference in species composition between sampling locations, times and substrate types. The plastic pan-scourers proved to be a valuable substrate for quantitative investigations of the fauna. In contrast, the Laminaria holdfasts were too small and variable to be suitable for such studies...

  20. Target costing as an element of the hard coal extraction cost planning process

    Directory of Open Access Journals (Sweden)

    Katarzyna Segeth-Boniecka

    2017-09-01

    Full Text Available Striving for efficiency is of great significance in the management of hard coal extractive enterprises, which are constantly subjected to restructuring. Effective cost management is an important condition for increasing the efficiency of these business entities' activity. One of the tools whose basic objective is to consciously influence cost levels is target costing. The aim of this article is to analyse the conditions for implementing target costing in the planning of hard coal extraction costs in hard coal mines in Poland. The subject area addresses a topical and important problem concerning the scope of cost analysis in Polish hard coal mines, which has not yet been thoroughly researched. To achieve the abovementioned aim, the theoretical literature of the field is reviewed. The mine management process is difficult and requires the application of the best-suited and most modern tools, including those used in the planning of hard coal extraction costs, in order to support the economic efficiency of mining operations. The use of the target costing concept in the planning of hard coal mine operations aims to support the decision-making process, so as to achieve a specified level of economic efficiency of the operations carried out in a territorially designated site of hard coal extraction.